Can “effective altruism” maximise the bang for each charitable buck? (economist.com)
208 points by edward on June 1, 2018 | 214 comments



I think the Effective Altruism movement really belies its own values and cause with the fact that one of its own funds is for supporting:

> organizations that work on improving long-term outcomes for humanity. Grants will likely go to organizations that seek to reduce global catastrophic risks, especially those relating to advanced artificial intelligence. [1]

Yes, an argument can be made that it's important to fund prevention of global catastrophes, as while they're unlikely compared to the immediate threat of malaria, they'll cause much greater damage, thus increasing risk. However, to consider artificial intelligence to be a potential global catastrophe at all, let alone the single one requiring extra funding, is mostly unfounded. We currently can barely even define what the actual risk is, let alone how to mitigate it.

It's one thing to walk past homeless people in my city and not give them money, because I know that money could much more easily and effectively save a life in malaria-ridden parts of the world. I think it's absolutely morally repugnant to walk past them and not give them money, because instead you're paying people to sit around thinking about AI.

[1] https://app.effectivealtruism.org/funds/far-future


I used to feel similarly until I realized that the EA movement isn't a hierarchical organization: it's just a bunch of totally separate orgs who have a common philosophy about how to do good in the world.

Sure, OpenPhil funds AI research. But GiveWell (who, last I checked, share the same office) lists only charities that work in poor countries on its Top Charities list:

https://www.givewell.org/charities/top-charities



Thank you; this has opened my eyes


Agreed. I've loosely held an EA-like philosophy for about a decade and I think that OpenPhil's orientation towards AI is pretty disappointing.

I account for my time in terms of things like number of people saved from blindness or death due to malaria, and I definitely do not count future simulated persons as worthy of any of the same concern as actual humans who exist today.


I watched a debate involving William Macaskill last summer and he poses the hypothetical question:

"You are outside a burning building and are told that inside one room is a child and inside another is a painting by Picasso. You can save one of them. To do the most good, which do you choose?"

The point he's trying to illustrate is that, if you knew for certain that you could turn around and sell the Picasso for millions and use that money to purchase malaria bed nets, the expected number of lives saved by using the Picasso could be hundreds, and so there's a moral dilemma present.

Like many hypothetical questions, this one feels a bit "off" or "unrealistic", but if you don't get hung up on the oddities, I think one can sense the essence of his question, and it reminded me of the point you're making here as well as a responder's question asking you why you think the way you do.

I do think these questions are hard for us to wrap our heads around -- how to weigh high-probability, immediate good against somewhat uncertain, non-proximal/non-immediate things that might be "much higher value". Part of my human brain goes splat when I try to weigh these things.

In terms of the moral dilemma with the painting, I do have quite a bit of sympathy for the argument that one should do what they feel will produce the most good, which might be to save the painting and purchase malaria nets. My father on the other hand seemed to believe that to be absolutely morally wrong, which seems to be siding with your sentiments. Practically speaking, I think I'd almost certainly save the child's life, because one's human impulses would be so strong that they would override any high-and-lofty-rationality, and one wouldn't have time anyway to do deep analysis. But the question in a hypothetical sense does seem quite valid and hard.


> But the question in a hypothetical sense does seem quite valid and hard.

I appreciate this sentiment, and tend to think likewise. However, the situation is a hypothetical. In the end, perhaps what matters most are the practical decisions we make, which are almost never situations like the ones you describe above, but more like "what cause should I donate to?" In that sense, it might be hard to discern between different causes in the GiveWell top lists, but picking either of those at random is probably a good heuristic that already beats the fairly widespread heuristic of just giving to something like Make-A-Wish, if you're starting from the point of donating €x to a charity.


Out of curiosity: why?


Because they don't exist. It's like predicating your actions on the possible future existence of the easter bunny.

Why are these people not attempting to research the ability to contact or create deities, or perpetual motion machines? Because they don't exist.


> However, to consider artificial intelligence to be a potential global catastrophe at all, let alone the single one requiring extra funding, is mostly unfounded. We currently can barely even define what the actual risk is, let alone how to mitigate it.

Although I happen to agree with you personally, I don't think we should commit the fallacy of assuming that because their position seems absurd to us, it comes from a place of bias or ignorance.

To the contrary, when I listen to Holden and other EA leaders talk, it's clear they've spent way more time thinking about this stuff than I have. He is and they are thoughtful and humble about exactly the questions you pose: How much should we weight "known good" good done today vs "potential good" done in the future, how confident should we be in our ability to predict the future, etc.

As one example, Open Phil has hired historians to conduct research on how well people have (in the past) been able to predict the future, precisely to inform their thinking in this regard.

Holden is also open about how his thoughts on the importance of AI alignment have changed over time.

Again, we can disagree with him (as I do). But we absolutely should not claim that because we disagree, his position must be out of a place of thoughtlessness or bias.


It's perfectly reasonable for them to hold that view. The unfortunate thing is to insist that it's the most rational and/or correct view. To say that they're biased isn't an insult. We all have biases, and it's useful to recognize and admit them.


Could you link/cite where they insist that it's the Right Thing?


It's called "Effective Altruism"


That's an aspirational statement, not a claim to have attained perfection.


>I don't think we should commit the fallacy of assuming that because their position seems absurd to us that it comes from a place of bias or ignorance.

I heard a really smart person talk about AI and how to deal with it. His conclusion was a gigantic letdown. He was out of his element (he's an economist), but his conclusion was:

Either AI is going to be peaceful, or it's going to kill us and there is nothing we can do to stop it.

Maybe most civilizations end like this, but why not look for third options?


> Either AI is going to be peaceful, or it's going to kill us and there is nothing we can do to stop it.

Have you heard of the AI Box experiments?

http://yudkowsky.net/singularity/aibox/

The problem of containing a hostile AI does not seem to be particularly tractable to me.


My comment is going to be unpopular, and I'll admit my bias upfront: I think Yudkowsky is a crank, and is neither a psychology nor an AI expert (he's a self-proclaimed expert, but he's not actually engaged in academic research on AI, because of reasons).

His "experiment" is hard to control or reproduce, its goals are ill-defined, its results are hidden (really, which sort of experiment hides its results and merely asks us to have faith the result was positive?). He makes a lot of unwarranted assumptions, like "a transhuman mind will likely be able to convince a human mind" (why? where is the scientific or psychological evidence that a superior mind must necessarily be able to convince inferior minds of arbitrary things? This is a huge, unwarranted assumption right there).

This kind of psychological experiment -- because that is what it really is, rather than being about AI -- is really hard to conduct properly, its results hard to interpret and difficult to reproduce even for subject matter experts, which Yudkowsky isn't. This one looks like it was designed by an amateur who happens to be a fan of sci-fi.


I am aware of one reproduction of the experiment, the goals seem pretty darn explicit, and its results are public. He has stated the rules of engagement, and has said that he did it "the hard way". If nothing else, one should at least be confident that Yudkowsky is honest.

His claim that "a transhuman mind will likely be able to convince a human mind" is what his experiment demonstrates, not what it assumes, and frankly it is absurd to make it sound like he has not repeatedly given justifications for the statement.

What actual misinterpretations or other issues are you worried about?


- What reproduction? What would you consider a successful reproduction, for that matter? If I told you I reenacted the experiment at home with a friend, would you consider this a reproduction? Someone saying they reproduced it on the internet would convince you? What are your standards of quality?

- What is the goal of the experiment? Is the goal "show that a transhuman AI can convince a human gatekeeper to set it free"? Or is it actually "show that a human can talk another human into performing a task", or even "an internet (semi)celebrity can convince a like-minded person into saying they would perform a task of very low real-world stakes"? How would you tell each of these goals apart?

- The results are most definitely not public. What is public is what Yudkowsky claims the results were, but since the transcripts are secret and there are no witnesses, how do we know they are true (or, even without assuming dishonesty or advanced crankiness, how can we tell whether they are flawed?). Would you believe me if I told you I have a raygun that miniaturizes people, that I have tested it at home and it works, and that I have a (very small) group of people who will tell you what I say is true? No, I cannot show you the raygun or the miniaturized people, but I can tell you it was a success!

- "A transhuman mind will likely be able to convince a human mind" is what is stated as truth in the fictional conversation at the top of the AI-box experiment web page. Yudkowsky has repeatedly provided "justifications", but these are unscientific and unreasonable.

Yudkowsky claims that because a person can convince another person to claim they would perform a task (setting a hypothetical AI free), a "transhuman" mind is then likely to convince a human gatekeeper. The logical disconnect is huge. First, that people can convince other people of things is no big revelation. Unfortunately, it doesn't follow that because some people can convince other people of some things in certain scenarios, people can universally convince other people of arbitrary things in every context. Worse, we don't even know what a "transhuman" mind would be like; assuming it means "faster thoughts" (a random assumption), why would more thoughts per minute translate into higher convincing capacity? Is it true, for that matter, that higher intelligence translates into a higher ability to convince others of stuff?

----

Another example of methodological flaws: in both runs of the experiment, the participants seem to be selected from a pool of people fascinated by these kinds of questions and open to the suggestion that a "transhuman" mind can convince them of stuff. Let's look at them:

First participant: Nathan Russell. Introduces himself as

> "I'm a sophomore CS major, with a strong interest in transhumanism, and just found this list."

Then shows interest in a similar experiment and considers how it could be designed. Note that the list itself, SL4, is for people interested in the "Singularity". Enough said.

Second participant: David McFadzean. Correctly claims the first experiment is not proof of anything, and is willing to take part in a second experiment. Later Yudkowsky describes him like this:

> "David McFadzean has been an Extropian for considerably longer than I have - he maintains extropy.org's server, in fact - and currently works on Peter Voss's A2I2 project."

The mentioned website still exists and it has something to do with a Transhumanist Institute. I start to see a pattern here.


The only experiment I know of and would consider a serious attempt at reproduction would be Tuxedage's series,

https://www.lesswrong.com/posts/FmxhoWxvBqSxhFeJn/i-attempte...

https://www.lesswrong.com/posts/dop3rLwFhW5gtpEgz/i-attempte...

https://www.lesswrong.com/posts/oexwJBd3zAjw9Cru8/i-played-t...

His total is 3 for 3. I do not know how to explain these results without either taking them to be honest attempts at a fair experiment or by assuming those involved colluded. I find the latter absurd, given my priors about the honesty of members of LessWrong (Yudkowsky in particular, though he wasn't involved in the reproduction).

> If I told you I reenacted the experiment at home with a friend, would you consider this a reproduction? Someone saying they reproduced it on the internet would convince you?

It is not so simple. I would want evidence that you and your friend were smart and had a decent understanding of the domain, and that your friend was in a similar state of unbelief about the plausibility of being convinced. I would want a statement that it was a serious attempt at doing things "the hard way" and true to the experiment, on both sides, lest you get [1]. Of course I would want the standard rules, or a reasonable modification publicly stated, in addition.

[1] https://pastebin.com/Jee2P6BD

> What is the goal of the experiment?

To show that "I can't imagine anything that even a transhuman could say to me which would change [my mind]" is not evidence, and should not be treated as such. To provide evidence that "humans are not secure systems".

You say "very low stakes", but Yudkowsky convinced someone who had offered a $5000 handicap. That hardly seems like a trivial quantity.

> [maybe it's all a lie]

You have to be very cynical to take this worldview.

> we don't even know what a "transhuman" mind would be like

The experiment is under the assumption of a true singularity, ergo nigh-unlimited intelligence. I can discuss what outcomes I think are likely for AI development, or which are merely plausible, but the experiment is about one particular hypothetical, so that would be a different conversation.

> the participants seem to be selected from a pool of people fascinated by this kind of questions and who would be open to suggestion that a "transhuman" mind can convince them of stuff

I am unconvinced that this experiment would work if the gatekeepers did not have an understanding of the topic; they are meant to play a gatekeeper, after all. A person who considers the singularity plausible but thinks an AI box is effective seems like the perfect control should the singularity happen and people want to figure out whether to AI box it.


But that's just it: I simply don't think Yudkowsky or any of the sort of people who would be enthusiastic about the sci-fi theories on SL4, or host extropy.org, or believe in Roko's Basilisk, or read Harry Potter fanfic and find it philosophically insightful, have a decent understanding of the AI domain. Everything about him and his followers smacks of fringe cultists completely outside mainstream research.

I don't think the chosen participants have a particularly deep understanding of the domain, they just think they do (because that's what defines Singularity believers, LessWrong readers, and people who believe they are hyper "rational" and that this is some kind of superpower). I think they understand AI no more than a Star Wars fan understands space travel.


Sure, I don't particularly care if you or anyone else wants to disengage from LessWrong-esque ideas because they sound weird. I only entered this discussion because it sounded like you might have had an actual argument.


This could be really cool, but the conversation was impossible to read. The links/website are badly designed.

How'd he let the AI out both times?


There have been other instances of the AI box experiment where the dialog is public. Like this one https://www.lesswrong.com/posts/fbekxBfgvfc7pmnzB/how-to-win... .

Yudkowsky's original intention in not releasing the dialog was to prevent people from saying "I wouldn't have been swayed by that, therefore AI escaping is impossible!" Even if we grant the first part of the sentence, it doesn't follow that an AI escaping is impossible. It's very much possible, and strong evidence for that is that merely human-level intelligences have escaped the same situation.


> and strong evidence for that is that merely human-level intelligences have escaped the same situation.

I don't know if this is the same situation, but laboratory beagles, farmed mink, etc. show that non-humans can persuade humans to release them from cages.


> Howd he let the AI out both times?

No one knows. That's the point.

An AI will eventually be much smarter than a human, so it doesn't matter how the human succeeds - it's enough to know that a human has succeeded even once.


Hmm, would like a different human to try it.

I read some of the dialog, and the user playing the gatekeeper talks openly about an inability to socialize that caused him to receive special social treatment until 10th grade.

What if there were 2 gatekeepers, north korean style?


It seems pretty tractable to not build such a thing as a hostile AI.


You have to find every group of humans trying to build an AI, and either convince or force them not to do so.

This seems hard, given:

- (a) The cost of starting an AI venture is minimal (perfectly within reach for a small group or a smart individual with $10 million or less for salaries plus cloud computing expenses). It's a lot harder to keep tabs on small and non-obvious activity like this than, say, the enormous centrifuge facilities needed to refine uranium for nuclear weapons.

- (b) The personal and societal economic, and other, benefits of advancing the state of the art in AI are potentially enormous. People will be motivated to try, whether their goal is to line their pockets, to better understand what intelligence is by advancing the state of the art of trying to put it into machines, or to make the world a better place.

- (c) The problem domain is not well understood, so it's possible to stumble into a dangerous design by accident.

It seems like only some extreme dystopian scenario could halt the progress of technology to the point where an AI-related disaster becomes impossible.

For example, a global war or disaster that destroys civilization's logistical and technological bases, kills much of the world population and forces the survivors to focus mainly on survival. Or an ideological revolution that sees all the nerds of the world lined up against a wall and shot, and those who remain alive become culturally permanently uninterested in advancing technology to avoid the same fate. Or a world-dominating tyrannical government that keeps a close watch on every expert programmer and computer system in the world, monitoring to make sure unauthorized AI experiments don't take place.

Even that might not be enough, because if any of the human population survives, even the effects of the biggest global catastrophe, cultural revolution or tyrannical world empire probably won't last more than a few thousand years.


I'm not so sure. Even prosaic, current AI can be very unpredictable. See this blog post http://aiweirdness.com/post/172894792687/when-algorithms-sur... , which is a somewhat more casual summary of the paper "The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities" https://arxiv.org/abs/1803.03453v2

The authors of the paper recount a bunch of anecdotes in which they wanted the AI to do one thing, but the measure they told it to optimise for wasn't actually what they intended, and so unexpected things happened.
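To make the failure mode concrete, here is a toy sketch in Python of a mis-specified objective (my own illustration, not an example taken from the paper): the intended task is "sort the list", but the proxy being scored is only "count of adjacent pairs out of order", so a degenerate candidate satisfies the proxy perfectly without doing anything useful.

    import random

    def inversions(xs):
        # Proxy measure: adjacent pairs that are out of order (lower is "better").
        return sum(1 for a, b in zip(xs, xs[1:]) if a > b)

    # Hypothetical candidate "programs" an optimiser might stumble upon.
    candidates = {
        "real sort":    lambda xs: sorted(xs),
        "return empty": lambda xs: [],                  # degenerate, but proxy-perfect
        "drop half":    lambda xs: xs[: len(xs) // 2],
    }

    data = [random.sample(range(100), 10) for _ in range(20)]
    for name, program in candidates.items():
        score = sum(inversions(program(xs)) for xs in data)
        print(f"{name:>12}: total proxy loss = {score}")

    # "return empty" scores a perfect 0, exactly like "real sort", so an optimiser
    # judged only by the proxy has no reason to prefer the behaviour we intended.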


That's an amusing read, and sure, AI can do things we don't expect. I think we're more on the order of a bridge failure and less on the order of Skynet here, though. To end up in that kind of nightmare scenario we have to entrust some AI with capabilities we could just elect not to.



Note that GiveWell is sticking to its original mission of evaluating charities based on scientific studies and isn't recommending AI funding.

However I'll push back on this. If it's okay for the government to fund basic scientific research where the benefits are long-term and unpredictable, why not private individuals? Or are you saying research shouldn't be funded at all?


I think it's fine for private individuals to do it (though I disagree with their choice), but not to claim that it's evidence-based Effective Altruism while doing so - it is an emotionally-led value judgment, just like donating to my local dog shelter would be (which I understand others may disagree with). That said, I do recognise now that effectivealtruism.org does not represent the entirety of the movement; I find no fault with what GiveWell are doing.


Of course, there'll never be evidence about the effectiveness of research into existential risks until it's too late. So there's no way to objectively allocate funding between, say, AI safety and disease pandemics.

While I think it's important to fund such research, it's a good point that funding orgs should be one or the other: evidence-based or not. Funding some things based on evidence and other things based on hunches might be an unstable combination. Once hunches are allowed, they could easily drift into funding things where the evidence says they're ineffective, but the evidence gets overruled.


You could probably calculate this.

Since AI could kill 7,000,000,000 people, you could probably put a non-zero probability on it. Any math people know how to model this?

Even with a bunch of different variations for the probability, you could understand why AI is potentially so dangerous.

There are probably lots of non-zero-probability events, like getting hit by a rogue quasar. The difference is that we cannot stop the quasar, whereas AI requires humans to build it.

I don't give money to other charities; I spend money on mine. We don't have any employees, just volunteers. I spent a total of $50 on the LLC paperwork and $8/mo for a website host. Everything else has gone directly to the kids/families we are teaching.


The Centre for the Study of Existential Risk does solid work on this. https://www.cser.ac.uk/

For estimating the likelihood of AGI, you can start with estimates from people in the field. Some say 0%, others say an 80% chance in 50 years. Any way of averaging these produces a risk high enough to justify serious funding for safety research.
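As a back-of-envelope sketch of that averaging (the individual survey answers below are invented; only the 0%-80% range comes from the comment above):

    # Hypothetical expert estimates of "catastrophic AGI within 50 years".
    expert_estimates = [0.0, 0.05, 0.20, 0.80]
    p_catastrophe = sum(expert_estimates) / len(expert_estimates)    # ~0.26

    world_population = 7_000_000_000
    expected_lives_lost = p_catastrophe * world_population           # ~1.8 billion

    print(f"averaged probability: {p_catastrophe:.0%}")
    print(f"expected lives lost:  {expected_lives_lost:,.0f}")

Even a rough average multiplied by 7 billion people gives an expected loss in the billions of lives, which is the argument for putting at least some money into safety research.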


Your assumption seems to be that it is either a pure emotional value judgement (like donating to a dog shelter because you felt like it) or a perfectly objective judgement (i.e. meta-reviews of randomised controlled trials). Perhaps that's a false dichotomy, and there's a difference between supporting a cause that is now backed by a significant body of philosophical and technical literature and just donating to whatever you feel like? There's nothing wrong with the latter, just that you are kidding yourself if you think it is likely to be effective.


Sure, in some cases "speculative altruism" might actually be a better name.


Effective Altruism is not a single monolithic organization and I think you really go too far to assume all/most advocates think AI research is more important than feeding the homeless.

Personally, I wholeheartedly subscribe to effective altruism and also don't think AI research is a good use of my funds. All of my donations go to malaria relief.

Effective altruism is about principles, not particular causes.


Isn’t this just a statement that you disagree either with the relative probability of an AI-related disaster or the relative degree of harm it could cause?

It seems harsh to say it’s “morally repugnant” when in the end you’re just saying the way you assign probabilities and degrees of negative outcomes would lead you to invest in a different portfolio of charities or efforts than what this other group would do.

(I’m not arguing for or against the correctness of a belief that AI poses attention- and funding-worthy threats. Only that these other people motivated to allocate resources to it have studied the problem in great detail, and their sincere belief after looking into it is that it is worth being part of the overall portfolio of charity investment, and that this really would (in their sincere judgment) mitigate big-scale harm, no different than fundamental research into global warming or drug-resistant bacteria, regardless of whether lay people can more easily envision the types of harm from those other threats.)


> Isn’t this just a statement that you disagree either with the relative probability of an AI-related disaster or the relative degree of harm it could cause?

Yes, but the point of this movement is that they're supposed to be evidence-based and maximising ROI. There is evidence for the existence and threat of global warming and drug-resistant bacteria. When you move away from charitable efforts whose effectiveness we can directly measure, you're not doing EA any more, just regular emotional/value-based charity.

I do have different values and feelings about such non-evidence-based charity, and I think theirs are morally repugnant, especially when they claim to be doing EA.


That's a fair point about moving into areas in which the effectiveness is hard to measure. However, even apart from AI threats, I'd argue that there are many areas like this that pose potentially great harm. For example, I think that we ought to spend more effort to prevent regulatory capture in corporate-friendly legislation. But whether we can find a metric we'd all agree on to target this from an empirical point of view is a very difficult question.

It reminds me of Gilb's Law, "Anything can be measured in a way that is superior to not measuring it at all."

So if your sincere beliefs were such that the threat posed is high enough, and total harm would be astronomical, then you might believe that accepting a present-day metric that has a lot of variance and which is hotly debated might still be ok. Empirical progress along that rough metric might be more valuable, still in an expected value sense, than progress in other possibly less-harmful areas even if they have more clearly defined metrics.

Again, not arguing for or against, just trying to represent why someone might be sincere about investing in AI safety and why, under their particular beliefs and preferences, it could still be rational from an EA point of view, despite more ambiguity.

There are always risks that a certain model's loss function does not correctly correspond to the goals they wish to optimize towards, or that there is a large amount of idiosyncratic noise in the observation of the loss criteria. But you can still account for the risks of these sources of error in an overall method that is still empirical.


It's like calling someone morally repugnant because they agreed to give you money equal to the sum of two and two and then gave you three dollars, insisting that the math actually does come out that way. If they honestly do believe that, they are simply mistaken, not morally repugnant.


The founder of Open Philanthropy claims to have sought out the most effective uses of charity money in the entire world, and it happens to include paying his roommate and brother-in-law to think about AI.

The big problem with this decision isn't that it's mistaken, it's that it's corrupt. He should have excluded things that were in his self-interest from consideration, even if he honestly, mistakenly believes they are the best thing anyone can do with their money.


This would be an argument about corruption or disingenuous claims by a specific person involved with the charity movement, and could definitely heighten skepticism about other involved parties.

But it would not be a criticism of the general idea of risk-return optimization applied to charity, and would not exclude a rational participant from belief that allocating some capital towards AI research was part of an optimized approach.

The original comment made it seem like the generic idea of choosing to invest in AI safety as part of the EA framework was, in spirit, intrinsically "morally repugnant."

The specific repugnant actions of one party would not necessarily support that, any more than, say, a pro football player's racist comments would imply that all of football is inherently morally repugnant.

It would seem valuable to separate and distinguish vitriol directed at the specific suspicious actions of one person from generic and wide-sweeping criticism of an entire framework that person happens to be associated with, especially when the framework itself attempts to be value-neutral conditioned on one's beliefs after looking at some evidence.


If you're starting to talk about relative values and how one person values different things than another, who could disagree? But then the argument for effective altruism starts to crumble.


Huh? How does that affect the argument for effective altruism (which is essentially just mean-variance optimization applied to charity, and says nothing about what expected value you ought to believe about any given charity, nor what personal tolerance for risk you should have, apart from summaries of what certain other groups have come to believe about those topics)?

The original comment was the one that brought up "morally repugnant" choices in investing. That's what brought in relative value judgments. I was trying to ask how it differs from simply disagreeing with someone else's assessment of the evidence.

If Person A evaluates the evidence and sincerely believes you should give $3 to X and $2 to Y, based on empirical outcome optimization, they are using the EA framework.

Person B might look at the same evidence and believe you should give $5 to X and $0 to Y, but that hardly makes Person A "morally repugnant" (which might be Person B's unrelated moral judgment) nor does it make their choices fall outside the scope of the EA framework of decision-making.

As far as people disagreeing about the posterior distribution over the goal-maximizing choices after seeing the same evidence, this would ideally fall under something like Aumann's Agreement Theorem (of course with the exception that people are not fully rational, Bayesian agents).

Given that, there is some expectation that if both parties are really rational and have the same value of goals, then they ought to come to the same conclusions given the same evidence.

But if they don't have the same goals (e.g. maybe you just happen to care more about animal welfare than me), then the EA framework doesn't say anything about you and I having the same investment priorities.

Really, popular EA press is mostly just saying, "Look, we take value judgment X on issues A, B, and C. If you agree and you start from the same value judgment we do, then based on the following evidence, we believe it's optimal to invest in foo, etc."

It seems like you're saying that it's not possible for two different people, *both making decisions in the effective altruism framework*, to come to different beliefs about what to invest money into.

But there's no part of EA that requires that for two agents with different value judgments at the start.


Corruption, nepotism, and hypocrisy all in one --- beautiful, modern American values.


It's not being "simply mistaken", it's a calculated belief. Someone might think it's a moral act to fund, say, ecoterrorists to bomb the headquarters of a major oil company, believing that the loss of life is worth it for helping the world by destablising the company. We might disagree, and consider that highly immoral.

That's an extreme example of course, and in this case the worst that can happen is some loss of money, rather than loss of lives, but I hope it illustrates my point: them choosing to pay people to think about AI, rather than giving that money to other causes, is a moral choice with which I disagree.


IMO, the problem with AI risk funding isn't that it is 'wrong', per se. Instead, it's that it is very speculative and not grounded in real-world data, or in measurable benefits that can be achieved TODAY.

The whole point of effective altruism is to be very grounded in evidence-based causes. I.e., if I spend $10K on this cause, then it will definitely save X lives by next year.

AI risks can't be reduced to these kinds of numbers just yet.

It could still be true; we just don't have any evidence to prove it yet.


I would say they can be reduced to numbers like that today, it's just that the implied error bars around the numbers would be huge.

Then in the optimization problem of determining what to invest donations into, the amount of gain from putting money into AI research would be correspondingly penalized by the amount of uncertainty (this is known as mean-variance optimization).

It's very much like a choice to invest in a solid, fundamental company which you know will give you 5% return next quarter, or invest in a very uncertain start-up which might give you 1000% return next quarter or might give you -50%.

The start-up investment might have a higher expected value, but also a much higher risk. At that point, the way to distinguish between whether you prefer to invest in the "sure bet" company or the "risky and ambiguous" start-up would be down to your personal tolerance for risk.

It could be perfectly rational to invest in the start-up in that situation, if you personally have a high risk tolerance. If you don't, then the start-up would look like a crazy, speculative bet with huge downside.

So some people might look at AI and say, "We don't have a good idea what the capabilities will be in ~50 years. So there is a huge risk that my charity donations will be wasted because in 50 years we realize we never needed to worry about this problem. Or we might use current AI research to thwart an unimaginable global crisis."

That person then looks into various sources of thought on putting actual numbers and actual error bars, as best we can, onto the problem, say by reading the Global Catastrophic Risks book, or stuff by Bostrom, or consulting surveys of current practitioners' estimates.

At the end of that process, it could totally end up being the rational course of action for that person to still invest in AI charities. Yes, their estimate for the expected value of the donation might have huge error bars around it, but under their particular beliefs about risk-return trade-offs, that might not cause their optimized decision to change.
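A minimal sketch in Python of that risk-penalised comparison (the option names, means and variances below are all made up for illustration):

    def risk_adjusted_value(mean, variance, risk_aversion):
        # Mean-variance style scoring: penalise uncertain options more heavily.
        return mean - risk_aversion * variance

    # Hypothetical "lives saved per $10k" estimates.
    options = {
        "bed nets (well measured)": {"mean": 2.0, "var": 0.1},
        "AI safety (speculative)":  {"mean": 5.0, "var": 400.0},
    }

    for risk_aversion in (0.001, 0.01, 0.1):
        best = max(options, key=lambda name: risk_adjusted_value(
            options[name]["mean"], options[name]["var"], risk_aversion))
        print(f"risk aversion {risk_aversion}: prefer {best}")

The same estimates flip the preferred option as risk tolerance changes, which is the point: the choice can be rational either way, depending on the donor's risk preferences.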


Even as a massive sceptic of Cyberpunk AI (and for that matter, the efficacy of preventing it by setting up charities to do AI research), I think people are perfectly entitled to spend their money on it if it's what they're interested in. I also think it's plausible some such programmes might create extremely commercially valuable byproducts even though they don't lead anywhere useful directly, much like the Apollo missions.

Trouble is, if you invest enormous amounts of time and energy in arguing that a general belief that stuff might work isn't good enough for philanthropy, and even write "why we don't recommend" articles about specific charities dispensing aid that don't deprive enough people of it to have a control group for RCTs, you deserve every criticism you get when you then funnel foundation money to very well-funded AI startups with unfalsifiable solutions to a purely hypothetical problem, even if you didn't have close personal relationships with people working for them.

Even though I consider some EA analysis to be good and well reasoned (and they're certainly not the only people making evidence-based bearish arguments about, say, microfinance), it's a little difficult to see how AI research could possibly pass a decision-making heuristic so supposedly rigorous that it writes off sanitation as an area to invest in because the estimations of diarrhea reduction aren't blinded. Ultimately, EAs have their cognitive biases just like anyone else. I'm more sympathetic than they are to aid organizations' view that conducting RCTs is difficult and expensive and that they don't want people suffering in control groups just to quantify how well common-sense healthcare interventions work, and I disagree that such organizations should be held to higher standards of analytical rigour than AI researchers hoping to be Sarah Connor.


I think your reply is interesting, as it is exactly the sort of thing that Effective Altruism talks about.

The example you gave is emblematic of the way we think of charity, precisely because it deals with both emotion and things we can individually see. Effective Altruism aims to make charity about things that don't directly affect us, that we can't see, and to find areas where the "bang per buck" is high.

Seattle spent $195 million on homelessness according to this: https://www.seattletimes.com/seattle-news/homeless/how-much-..., to help a group that this advocacy organisation http://www.homelessinfo.org/what_we_do/one_night_count/ puts at 10,000 people. I wonder how a person can rationally look at that - $195 million, or ~$19.5K per person - and think they can make a dent. Compare that to https://www.givedirectly.org/basic-income and doubling the poorest of the poor's daily income by giving them $1 a day for 12 years. These are people I can't see and don't have to think about, yet for less than $5000 I can provide someone with security and an income for over a decade. If I give $70 a week for 12 years, I can take 10 people out of abject poverty for 12 years. That's fractionally above my coffee budget.
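Spelling out that arithmetic (using only the figures quoted above):

    seattle_spend = 195_000_000       # dollars, as reported by the Seattle Times
    seattle_homeless = 10_000         # one-night count estimate
    print(seattle_spend / seattle_homeless)    # ~19,500 dollars per person

    cost_per_recipient = 1 * 365 * 12          # $1/day for 12 years
    print(cost_per_recipient)                  # 4,380 dollars, under $5,000

    people_supported = 70 / 7                  # $70/week covers $1/day for...
    print(people_supported)                    # ...10 people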

That's the question Effective Altruism wants people to ask themselves: how can I, with the amount of money I have, do the most tangible good in the world, and what is an equation that helps me decide?

Far from it being "absolutely morally repugnant" to walk past a homeless person and prioritise things we can't see, I think it is morally honest to look at the risks humanity faces, and to prioritise things that are underfunded, where $1 can go far. That takes several forms, both immediate - like the GiveDirectly example - and long term, where risks that could wipe out our species are considered. I think a strong argument can be made that, for minimal expense, it would be possible to have an effect on a potentially species-destroying technology like AI, and that the money spent there is more likely to do good than adding it to an already well-funded area.

That's catastrophic risk and minimal expense vs human suffering and high cost. How do you solve for the equilibrium there?


> However, to consider artificial intelligence to be a potential global catastrophe at all, let alone the single one requiring extra funding, is mostly unfounded.

As another commenter points out, EA is not a single unified group, but rather a disparate set of people who are united by the belief that we should put money where it has the largest effect. Some, but not all, of them are convinced by the arguments in favor of doing AI alignment research. If you aren't, then don't donate money for that, donate it to what you think is a more effective cause!

I'm curious, though, how people come to the conclusion that AI risk is nonsense? Is it a gut reaction, or does it come from thinking or reading about the problem? Every popsci article I've seen on the topic has been atrocious, so kudos if you thought "nonsense!" after reading such an article.

Scott Alexander has a pretty readable introduction to AI risk: http://slatestarcodex.com/superintelligence-faq/


I assume that investing in AI risk is nonsense for one simple reason: I have yet to see a single study showing that a single undesired behavior has been stopped or even slowed down.

DDOS attacks are as stupid as you can get from an intelligence point of view, and yet not a single proponent for AI risk has come up with a way to stop even a single one AFAIK. I haven't heard either of a single military program that has been thwarted, and those computers are killing people right now. I also doubt that they could get a Pentagon official to stop doing anything.

Until the AI risk community shows that it can do anything other than talk, no matter how small, I'll remain a skeptic.


I'm confused. The research in question is "how can we make sure that a superintelligence would be morally aligned with humans". Why would dealing with DDOS attacks be the right first step? It seems completely unrelated.


Because (IMHO) if they want to morally align a superintelligence, the first step would be to either morally align a dumb intelligence or to show that you can convince the people building this intelligence to steer it in this direction.

If they do neither, they risk either coming up with a plan that doesn't work (because it was not tested) or that no one cares about.


I don't think it's nonsense exactly. I think it merits some funding, and we'll probably be glad we did it even if it turns out there never was a risk of an unfriendly superintelligence.

But I have misgivings. In particular, the idea that marginal charitable dollars ought to be used to fund philosophers to think abstract thoughts is very counterintuitive!

(Every argument that it's THE MOST IMPORTANT THING IN THE WORLD seems to have the same form: first give me a bunch of weird historical and metaphysical assumptions, then make me admit I'd assign them a finite probability of being true, then multiply by a kajillion future simulated lives or whatever.)

Most of the philosophers who research this stuff, and most of the people involved in Effective Altruism, are part of the same niche subculture. I don't think this is a grift; I think they are quite honest in their convictions. But it makes me go "hmmmmm".


AI risk has a profile that is very, very difficult to deal with rationally. By that I don't just mean "think about rationally rather than emotionally", but that it is difficult even to use rational tools to analyze it. It is what most people would consider a very, very small probability of what is arguably the worst possible outcome (i.e., considered from a strictly materialist perspective, there are outcomes worse than "the total extinction of life on Earth"!). Tiny-magnitude probabilities of huge-magnitude disasters are mathematically unstable; very small shifts in the value of the probability, and relatively small changes in the log value of the disaster's size, result in radically different scenarios and correct risk mitigations.
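As a small numeric illustration of that instability (all numbers invented): shifting the probability by a few orders of magnitude, or the disaster size by one, moves the expected loss across roughly five orders of magnitude.

    # Purely illustrative probabilities and disaster sizes.
    probabilities = [1e-6, 1e-5, 1e-4, 1e-3]      # "tiny", but poorly known
    disaster_sizes = [1e9, 7e9, 1e11]             # lives (or life-equivalents) lost

    for p in probabilities:
        for size in disaster_sizes:
            print(f"p={p:.0e}, size={size:.0e} -> expected loss {p * size:,.0f}")

    # Expected losses range from ~1,000 to ~100,000,000 lives, so the "correct"
    # level of mitigation spending is extremely sensitive to inputs we cannot
    # estimate well.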

It's even worse if you try to draw the probability distribution of something like "given that strong AI will be achieved within the next 100 years, what is the distribution of the resulting likely outcomes?" In this case, the uncertainties are so large that, again, people can come to radically different conclusions even assuming both sides are being generally rational. It asks people trying to draw that distribution to consider the most likely outcome of at least one radically super-intelligent being, and more likely, an arbitrarily large community of radically super-intelligent beings. Who can seriously claim to have an accurate probability distribution of that? If we understood radically super-intelligent beings, we'd already be radically super-intelligent beings, so we are not good at modeling them almost by definition.

We certainly know this by observation; I am opaque in many ways even to my 7- and 10-year-old children, and, as with most humans, they are already exceptionally intelligent as "intelligent systems" go. (A 5-year-old of normal intelligence is already exceptionally intelligent as intelligent systems go.) The idea that I could model something that was even my clone, but operating at a hundred times the speed, is already absurd; add an actual increase in intelligence to that and I stand no chance. Our only commonality would be those things imposed on us by the universe; i.e., I can be confident it must consume some negentropy to survive, etc. But any higher-level actions would be impossible for me to model.

People who think intelligence is unlikely to emerge quickly are unlikely to consider it a serious risk. People who think it's unlikely for an intelligent being to augment itself, then use its augmentations to augment itself further, in an explosion of uncontrollable intelligence, will not consider AI risk a very likely outcome. And it's not as if that's an irrational or impossible outcome; for all we know, while there may be low-hanging fruit above human intelligence, it is entirely possible that O(work to increase intelligence) > O(ability to perform intelligence-increasing work as intelligence goes up). After all, I can't help but look up in the sky and observe that there does not seem to be a near-lightspeed expanding bubble of computronium coming my way, per the most dramatic fears of the singularity. We could end up with something very intelligent, even dangerously intelligent, but not end up with us waking up one day to a digital god among us.


The precise distribution of outcomes is not actually all that important for just figuring out whether the research is valuable. Alignment research would be valuable if it moved a 1% probability mass from worst-possible-world to human-extinction, a 1% probability mass from human-extinction to human-survival, or a 1% probability mass from human-survival to human-flourishing. As long as you can convince yourself that there is a probability of things being far from optimal in some sense at some time in a way that funding research now could nontrivially affect, you have a justification.

Remember that people were created by evolution, and evolution is dumb. Nonetheless, the returns on moderate deltas in intelligence are very strongly selected for, and major changes have happened with small perturbations in evolutionary history. This gradient is so sharp that the same species that produces perfectly healthy 80 IQ individuals also produces Feynman and Euler, the latter so mathematically productive that the Wikipedia page listing things named after him says that "In an effort to avoid naming everything after Euler, some discoveries and theorems are attributed to the first person to have proved them after Euler."

Note that society necessarily develops as soon as intelligence reaches the point at which it is achievable, not with some evolutionarily significant delay, so you cannot judge the gradient by looking at the cap on the intelligence of existing species; we are necessarily the smartest, because we are first. Rather, you have to look at those behind us, and there the gradient seems extremely steep: our closest intellectual competitors cannot so much as write a single coherent sentence.

The probable conclusion is that human intelligence is more than sufficient to build effective general intelligence, and improvements to intelligence are likely extremely steep on the slope to superintelligence, even ignoring the nine orders of magnitude improvement you get for free by merely running in silicon.


FYI, the majority of your arguments are explicitly discussed in Scott Alexander's post.


> I think it's absolutely morally repugnant to walk past them and not give them money, because instead you're paying people to sit around thinking about AI.

This is not a new argument. Sister Mary Jucunda, a Zambian nun, wrote to a NASA scientist (Ernst Stuhlinger) in the early 70's with a similar critique of funding for space research.

You can read Ernst's cogent reply here: http://www.lettersofnote.com/2012/08/why-explore-space.html

My favorite excerpt follows:

> About 400 years ago, there lived a count in a small town in Germany. He was one of the benign counts, and he gave a large part of his income to the poor in his town. This was much appreciated, because poverty was abundant during medieval times, and there were epidemics of the plague which ravaged the country frequently. One day, the count met a strange man. He had a workbench and little laboratory in his house, and he labored hard during the daytime so that he could afford a few hours every evening to work in his laboratory. He ground small lenses from pieces of glass; he mounted the lenses in tubes, and he used these gadgets to look at very small objects. The count was particularly fascinated by the tiny creatures that could be observed with the strong magnification, and which he had never seen before. He invited the man to move with his laboratory to the castle, to become a member of the count's household, and to devote henceforth all his time to the development and perfection of his optical gadgets as a special employee of the count.

> The townspeople, however, became angry when they realized that the count was wasting his money, as they thought, on a stunt without purpose. "We are suffering from this plague," they said, "while he is paying that man for a useless hobby!" But the count remained firm. "I give you as much as I can afford," he said, "but I will also support this man and his work, because I know that someday something will come out of it!"

> Indeed, something very good came out of this work, and also out of similar work done by others at other places: the microscope. It is well known that the microscope has contributed more than any other invention to the progress of medicine, and that the elimination of the plague and many other contagious diseases from most parts of the world is largely a result of studies which the microscope made possible.

I think there are many parallels between AI/ML and the microscope, and I think safety research is a very reasonable inquisitive lens for developing these new technologies.


Well, fighting poverty also grows the economy and the number of brains available for research in the future. Today we are bringing people out of poverty for good; much charity isn't money poured into a bottomless pit.

The pit has a bottom, as evidenced by the very realistic goal of ending extreme poverty by 2030.

Still, basic research is important, because it has such a long tail. Just saying that both approaches have validity and will bring massive progress.


I certainly won't argue with that! We should be spending much more money on poverty reduction than on basic science.


That's a rather condescending reply, lecturing her like a schoolchild. Funding space research is one thing, but spending billions of dollars on poor designs like the Shuttle is another, and on expensive American lifestyles is yet another.


> That's a rather condescending reply

The reply begins with a sincere "...First, however, I would like to express my great admiration for you...".

The denoted intent of the letter is certainly not condescending.

Perhaps the content is what makes the letter condescending? The letter, if written today, might come off as condescending. It states many obvious truths, such as the observation that space development provides valuable tools to Zambian nuns. Today, this is obvious [1], and to point out such obvious facts to an expert borders on condescension.

But GPS wasn't at all an obvious implication of NASA funding in 1970!

If the letter comes off as condescending today, it's only because time (and science funding) has turned the impossible into the pedestrian.

[1] http://www.slate.com/articles/technology/future_tense/2011/0...


Not remotely applicable.

For one, so far everyone predicting doom about AI has been a layman in the subject. Maybe you aren't aware, but this is a field that people do PhDs and get professorships in. No one respected agrees with the doom-sayers.


> For one, so far everyone predicting doom about AI has been a layman in the subject.

This is a myth. It was arguably true 5-10 years ago, but concern with AI safety is not a fringe position even among the highest levels of AI researchers now.

Stuart Russell (https://en.wikipedia.org/wiki/Stuart_J._Russell) is the co-author of one of the most popular AI textbooks in the world, and he has repeatedly said that he thinks the alignment problem is important and that AI presents an existential risk: https://www.technologyreview.com/s/602776/yes-we-are-worried... https://www.youtube.com/watch?v=WvmeTaFc_Qw

Marcus Hutter (https://en.wikipedia.org/wiki/Marcus_Hutter) is another respected AI researcher who, along with Tom Everitt (http://www.tomeveritt.se/) (a researcher at DeepMind, one of the most advanced AI companies in the world), is also working on the alignment problem: http://www.tomeveritt.se/papers/alignment.pdf

You can read through the list of grants made by the Future of Life Institute for AI safety research, almost all of which go to researchers associated with respected universities, not laymen, here: https://futureoflife.org/ai-safety-research/


> Maybe you aren't aware, but this is a field that people do PhDs and get professorships in.

I'm aware of my own existence, thanks ;-)

> Noone respected agrees with the doom-sayers

But many, many, many respected AI researchers (who I've talked to about this) certainly do agree that AI/ML safety/robustness are important topics of inquiry.

Including researchers at all the top CS departments, at Deepmind, at OpenAI, etc.

Consider critiquing the concrete projects that are funded with AI safety money. Very little of that money flows to research about "preventing malicious superintelligence" or whatever strawman you have in mind. And even projects that consider those questions also consider many much more near-term AI safety questions.


There is a difference between AI/ML safety/robustness (as in ensuring self driving cars don't behave erratically given unexpected inputs), and the skynet predictions.

> Consider critiquing the concrete projects that are funded with AI safety money. Very little of that money flows to research about "preventing malicious superintelligence" or whatever strawman you have in mind. And even projects that consider those questions also consider many much more near-term AI safety questions.

But this ("preventing malicious superintelligence") is EXACTLY what we are talking about. The comment in the OP was quoting the following

> Grants will likely go to organizations that seek to reduce global catastrophic risks, especially those relating to advanced artificial intelligence.


> It's one thing to walk past homeless people in my city and not give them money, because I know that money could much more easily and effectively save a life in malaria-ridden parts of the world.

The homeless in America live a life that's close to as miserable as that of the poor anywhere. One of the problems with most charities is bureaucracy eating up funds. Despite a lot of claims by bureaucracies, people who are homeless can often spend the money on themselves better than a bureaucracy can.

I don't think there's any particularly good reason not to give to the homeless if you feel like it and have the money.


Not to defend not giving money to the homeless, but isn't the calculation more complex than that? It's not just about how much money the bureaucracy eats, it's also how effectively the remaining portion is used, and I would think that would include actual resources delivered to people.

More simply put, which helps more people and in a larger amount, $10 in the US or $5 in some extremely poor part of the world, where food and services may be much cheaper?

I don't know the answer, but I think it's not as simple as "half is taken by bureaucracy so I shouldn't give to a bureaucracy".


> Not to defend not giving money to the homeless, but isn't the calculation more complex than that? It's not just about how much money the bureaucracy eats, it's also how effectively the remaining portion is used, and I would think that would include actual resources delivered to people.

Certainly, altogether the calculations are extremely complicated. There's whether a given organization knows what resources to provide, whether the provided resources are going to be used even if they are otherwise "right", etc.

With all the complexity and uncertainty, directly giving to people seems like one entirely legitimate approach since it guarantees that people get resources. Not that other approaches are necessarily problematic, but they come with more uncertainty. Direct giving at least has "what you see is what you get".


>Not to defend not giving money to the homeless

Then I'll defend it. The money would be much better utilized donated to programs that try to target homelessness as a whole. The old "they'll just spend it on drugs/alcohol/etc" cliche is actually backed by studies: https://www.theatlantic.com/business/archive/2011/03/should-...

Some points from the article:

"We choose to donate money based on the level of perceived need. Beggars known this, so there is an incentive on their part to exaggerate their need, by either lying about their circumstances or letting their appearance visibly deteriorate rather than seek help."

"If you travel to a poor city, for example, you'll find swarms of beggars by touristy locations. If the tourists become more generous, the local beggars don't get richer, they only multiply."

Further, a controversial issue is that of fake homeless people, who undoubtedly exist. Some say the easiest way to tell is if they'll accept food instead of money. From what I remember of someone's experience relayed in a reddit thread (which I'll admit has the potential to be exaggerated or unreliable) the majority of homeless people who said they needed "money for food" wouldn't accept food or would throw it away as soon as they thought the giver wasn't looking, indicating they were actually just after money.

If you feel like you absolutely must give directly to homeless people, carry around wrapped protein bars or something similar. Encouraging panhandling is just exacerbating the problems.

For some anecdotes, the last time I went to a convention in Baltimore, I heard stories about multiple people being assaulted or verbally harassed by panhandlers. In one case the provocation was that the person "didn't give them enough." The area is full of career-beggars who have no interest in actually improving themselves, like this one guy who pretends to be a youth baseball coach to collect fake donations every year. I try to stay indoors as much as possible but they even come inside the hotel lobbies to try to scam people sometimes. My experience and hearing those assault stories made me wonder how many of the people on this site decrying homeless-deterrent architecture have actually been in an affected area before.


>> Not to defend not giving money to the homeless

> Then I'll defend it.

I really included that to cut off possible misinterpretations of my point before they began so I didn't have to waste time on counterpoints to some argument other than the one I was making. That said, tangents with interesting info dumps are one of my favorite parts of HN, so thanks. :)


To provide the counter argument I imagine the Effective Altruism folks may give (not that I necessarily support this position):

A number of specialist AIs could potentially eliminate 50%+ of all jobs in the next 50 years (a generation). What does society look like when we don't have the safety nets to support a massive unemployed population? What could it mean for geopolitics if rich countries' populations are rich simply because they have the most advanced algorithms, but most people don't work? We're talking about potentially massive social unrest worldwide - capitalism cannot necessarily survive this paradigm shift, and since the fall of the Berlin Wall the world has had no other model of social organization in modern times, unless we'd like to revert to totalitarian rule.

This all presumes of course, that the first AGI they turn on doesn't just turn the universe into a pile of paperclips (ducks)


Re homelessness in the US, someone recently shared this with me and it looks promising, though it seems to only be in Seattle at the moment:

https://www.samaritan.city/


I don't know that I really need an app to give beggars money or buy the Spare Change News from someone.


You absolutely don't need an app to give homeless individuals money. But when I was homeless, I found that it was vastly more helpful for me if someone gave me a few bucks than if I had to stand in line for hours, fill out reams of paper, etc to try to get some assistance. I am happy to encourage any model that fosters a little more direct giving of this sort.

I am also looking to create a pilot program to try to help homeless individuals begin to develop an online income from the street as I did. Charity helps you survive another day, but you need an income of your own to get your life back and many programs seem to actively be against the idea of homeless people trying to work for a living. People seem to see it as cruel to expect them to. But earned income is the only real path out of dire poverty.


Fair enough. I also find it unfortunate that so many people seem to see themselves as freelance interrogators of the homeless asking about what they will do with the money they're given.


Yes, thank you for saying that. The problem runs deeper than you may realize. Even very well meaning people would walk up to me and ask me "Are you homeless? What's your name?" They didn't volunteer their name, address or socioeconomic status when asking such questions.

They were trying to ascertain if I would need help. A much better question format that isn't so problematic would be along the lines of "I have X (clothes, blankets, whatever). Would you like to have that?" If someone isn't actually homeless but would be happy to have a free blanket, what do you care? Maybe that blanket will help prevent them from becoming homeless.


The blanket example reminds me of all the handwringing about people "misusing" LLINs as a fishing aid, as though fighting malaria is more important than not starving.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2532690/


This study investigated the extent of bed net misuse in fishing villages.

Wow. There is something incredibly fucked up about funding a study into bed net misuse rather than a study on the dire need for more help with meeting basic necessities like adequate nourishment as evidenced by so-called bed net misuse.

Or, you know, the sarcastic reply: If you die of starvation, I guess you no longer need to worry about details like malaria.


No kidding. It's an illustration of a charity out of touch with the people it serves that you would call too overwrought if it appeared in a novel.


This seems like a good study that will save lives.

Using malarial nets to dry fish means not using them to prevent malaria.

Ok, good chance the answer is "give more nets". How are you supposed to know that without this study?

The insecticides might also be fish poison, this could lead to a collapse of an important food source. A study like this could help surface that before it becomes a disaster.


My problem is not with them trying to determine what is being done with bed nets and why. My problem is with the framing. It is incredibly judgy and it is the kind of language that goes along with policies that boil down to "the beatings shall continue until morale improves."

Such language tends to point to an agenda and to an underlying hostile attitude towards the population supposedly being served. I was homeless for a few years. A lot of homeless programs are actively hostile to homeless individuals. This helped sharpen my existing tendency to be critical of such details.

If you think how it is framed doesn't matter, perhaps we can discuss some choice words for you or your profession or your demographic and see if you still think details of that ilk don't matter. Hint: When you say it matters if it is done to you, but it is irrelevant when done to some downtrodden group receiving "assistance" (often of the "Please stop helping me!" variety), then you are prejudiced and the existence of this prejudice out in the world is likely one of the root causes of the group in question being downtrodden and unable to make their lives work.


It reminds me of an anecdote about CS Lewis. He is supposed to have given a beggar some change, and a friend asked him "what did you do that for? He's just going to spend it on booze." And Lewis is supposed to have replied "well, that's all I was going to use it for anyway."


I don't need an app to send email or shop on Amazon either, yet here we are.


The value proposition for the homelessness app is less clear to me.


The painful truth is that it is morally repugnant to donate. I hate saying this, by the way. If you had any experience with outreach, you would understand it is actually quite destructive to give them money, and it is much better to donate to, for example, Union Gospel Mission, a place in Seattle that does excellent homeless outreach. The city in fact contracts out to them for their services.


> The painful truth is that it is morally repugnant to donate. I hate saying this, by the way.

I hate that you said it without explaining what you meant. :P

Seriously, when you say "it is morally repugnant to donate" and "it is much better to donate to X" in the same paragraph, I am not sure what to think about it. Do you think that donating to X is still morally repugnant, although a bit less than donating to other causes? Or that donating can either be wrong or right depending on who you donate to (i.e. how the money is going to be used)? Because the latter seems like something that Effective Altruists would totally agree with.


Sorry, my comment was quite unclear. I meant it is dangerous to street people to give them money directly. It is much better to give it to an established organization that knows how to deal with their problems.


It's not clear why they see this as a nascent movement. My impression of the Gates Foundation is that it uses the funds it donates in a highly analytical fashion to maximize the 'net positive gain' in the welfare of humanity. First example that comes to mind is their commitment to eradicating polio.

This analytical approach has been their M.O. for 20 years.


Effective altruism is well-known among a small number of nerds and not at all known or used in many other sectors. I do grant writing for nonprofit and public agencies, and I just wrote about how little government is (really) interested in it: http://seliger.com/2018/06/01/youre-not-organization-isnt-wo...

Those of you interested should read the philanthropy chapter in Robin Hanson and Kevin Simler's book The Elephant in the Brain.


EA has been around for a while and the Gates foundation is the exception not the rule.


I guess the new aspect is the attempt to make it a mass movement. Not just something that Gates family does and everyone else goes "well, it's their choice how they spend their own money", but something that average people talk about.

Also, the outsourcing of research. If Gates Foundation wants to donate their millions effectively, they can use a part of the budget to find out what are the most effective ways to help. If an average person wants to donate $100, that's not enough budget to also include research. Now the person with $100 can simply read some Effective Altruist websites and follow their recommendations; thus getting the advantage of research they would not be able to finance themselves.


It's mentioned in the article several times, but if you're interested in making some analytically driven donations, quickly, https://www.givewell.org/ is the place to start.


For those interested in Effective Altruism, Will MacAskill's book Doing Good Better is a great read that raises a lot of thought-provoking questions.

- https://www.effectivealtruism.org/doing-good-better/


Its a great book!


This also assumes that decisions to donate to a cause are driven by logic based on an ROI calculation but they are often driven by emotion, not logic. Donors give to causes they care about and feel connected to. If you grew up homeless for example, you're more likely to give to a cause that helps alleviate homelessness vs buying nets to combat Malaria (where you may have no connection) even if the ROI for donating to buy Malaria nets is higher.


> If you grew up homeless for example, you're more likely to give to a cause that helps alleviate homelessness

But does your charity actually alleviate homelessness? Effective altruism is still useful even if you have picked the cause to which you wish to contribute. You picked a great example, because homeless charities are notably bad at alleviating homelessness, and indeed may exacerbate it.


Effective altruism is saying one should donate by logic/ROI.


The way I read it, people would follow this method if the data were more readily available; the issue is access to data.


They are both generally true. In the absence of data, people default towards their emotional connections. Ie, a cancer survivor donating to a cancer charity. But if someone could provide data showing persuasively that you can save 10x more lives via malaria charities, that would often override the emotional appeals for many people.

It's much easier to give in to your emotions, when you aren't encumbered by contradictory data.


That is an interesting comparison.

If they adopt either an Ayn Rand-style "all altruism is selfish" view, or else a cynical "altruism is just virtue signaling" one, then promoters of effective altruism should try to market it as sexy.

They could develop marketing materials portraying effective altruists as hard-working, hard-partying, sophisticated, game-winning, lady-killing, truly conscientious but aspirational millionaires who have developed such complete mastery over their own lives that they now treat effective altruism as the last game worth playing.

Like a mix between the hunter in "The Most Dangerous Game", Ozymandias from Watchmen, and Gandhi. Or something.


I would recommend checking out the 80,000 Hours podcast if you are interested in this kind of stuff. It's hosted by Rob Wiblin, who is the executive director at Effective Altruism. He talks to very interesting people (mostly academics) about "the world's most pressing problems and how you can use your career to solve them" (taken from his website).


Just to clarify, Rob is the Director of Research at the nonprofit, 80,000 Hours, whose aim is to help people have high impact careers. There is no organisation called Effective Altruism. Effective altruism is a social movement in the same way that environmentalism is a social movement, made up of many individuals, nonprofits and even for profits, working on different things, of which 80,000 Hours is just one. The podcast is excellent though and I second your recommendation.

http://www.robwiblin.com/ https://80000hours.org/podcast/


Technically, there kinda is an EA foundation: https://ea-foundation.org/

Tho for me it's mostly an umbrella org that allows me to easily donate to orgs following the EA movement (since many of them are not registered in Europe, I wouldn't be able to get tax deductions otherwise).


Yes you are right; I meant to say the Center for Effective Altruism.


Sign me up for some scientific rigor in all policy work and journalism.

State hypothesis, compare predictions to reality, show your work, cite your sources, peer review.

The "replication crisis" touches everything. What I like to call "governance technology".

We live in exciting times. I'm irrationally optimistic. Eager to see what happens next.


And publish your failures.

The part where we don't publish our failures is a huge failing.


And fund opposing research and opposing hypotheses -- we need diversity of thought more than ever. There's no sense in doing all of this science with nothing to challenge it. The best policies, or whatever it may be, should be able to stand up against it.


Maybe apply that specific bit of scientific rigor to regular science as well.


"Governance technology", awesome term, I'm stealing it, thanks


GiveWell’s CEO’s compensation is 200k: http://files.givewell.org/files/ClearFund/Meeting_2017_06_06...

That, after working at an investment firm and having a BA in religion.

Not sure those are the skills and salary I would trust with the words « reason » and « science ». Having a 200k salary in San Francisco definitely skews your vision of the world.


Wow. I remember when Elie and Holden started Givewell they made a huge deal about taking $70k/yr in salary.

https://issarice.com/givewell-executive-compensation

> From 2008–2014, their salary doubled from $60,000 to $120,000

> At the June 2015 GiveWell board meeting, Holden and Elie proposed a salary increase from $130,000 to $150,000

> At the June 2016 GiveWell board meeting, Holden and Elie proposed a salary increase from $150,000 to $175,000

> At the June 2017 GiveWell board meeting, Elie proposed a salary increase from $175,000 to $200,000. The proposal was approved.


If something goes wrong with an entity dealing with over a hundred million dollars, the CEO is expected to respond relatively immediately, as in, on call 24/7, because a hundred million dollars most likely represents the interests of a lot of people.

The variety of things that can go wrong is large, and the necessary responses are nuanced, potentially requiring sophisticated knowledge in several domains, from legal, accounting, personnel, leadership, to communication and reputation management.

I have no idea what such a role should pay, but 0.2% of the money moved by the organization doesn't sound crazy, insofar as money moved correlates somewhat to the risk of lawsuits or the death of the organization after an emergency is handled poorly.

Aside from the day to day management, look at it like an insurance policy on the whole enterprise. In those terms it's actually a pretty reasonable rate.

This theory doesn't justify any price whatsoever. Some CEO compensations are surely still too large. I just don't think this is the best case study for that point.


For charities, a pertinent question is: Why should I donate money to someone who takes home more money than I do? Why isn't he donating?


GiveWell is not the top-ranked GiveWell charity. Typically if you use GiveWell you don't donate to GiveWell, you give to AMF or whatever directly.

And he does donate:

https://blog.givewell.org/2016/12/22/front-loading-personal-...


Reminds me of Whit Stillman's Metropolitan, where one of the more bourgeois characters suggests to his middle class friend that he not worry so much about the less fortunate, and instead consider he might be one of the less fortunate himself.

If you're struggling, feed yourself first. Don't create new needs, robbing Peter to pay Paul. If you have something extra, then sit down and figure out where your dollars actually do the most good, and resist the temptation to donate randomly with no research, based on a heartfelt appeal or good promotional materials.

Maybe GiveWell doesn't meet that test for you, that's fine, if everyone just thought it through that much we'd be way better off than we are now.


It seems rather likely that he is donating.


200k in SF is...not a lot. At all.


I get where you’re coming from, I think, but the way you phrased this I can’t agree with.

200k in SF is vastly less than a bunch of smart techies are making. But it is far, far more than the median SF income, and it is a very livable salary in the Bay Area.

We should remember that even though SF is expensive, and tech employees can make much more than 200k, that much money is, in fact, after taxes, a lot of money. Even in SF.

The spirit of what you’re saying is, I think, that talented people who have demonstrated organizational/management skills could easily make twice that much. I think that’s true, and I think the $200k salary is reasonable for the role in the organization.


Why does the CEO of Givewell need to live in SF?


This is a question that should be taken seriously. I am not sure whether they did or did not. But I can imagine there could be valid reasons. For example, living close to potential major donors could increase the chance of convincing some of them to donate. But this too is a thing that should be measured.


What part of an investment firm and the academic study of religions is incompatible with reason and science? Both of those experiences suggest someone quite experienced at poring over large troves of documents to look for useful information, which is what GiveWell does.

And how does salary affect reason and science?


Well, measuring overhead isn't a great way to measure efficacy (trying to game this metric often leads to behaviors that are penny-wise and pound-foolish), but I don't find utilitarianism a compelling moral philosophy and really find the idea of trying to quantify how much good a charity does kind of silly.


Obviously, it's impossible to capture all the variables, but by quantifying outcomes we value all lives the same and strive to maximize the good we do.

If the EA movement forces big charities to focus a bit more on bang for buck, then that's a win. Perfect efficiency will never be possible anyways.


I have some experience in the nonprofit sector. While you obviously don't want to see people being cavalier with donor dollars, nonprofits that are obsessed with cost cutting are more effective at scolding people for wanting a chair that isn't falling apart or making a few too many photocopies than they are at their missions.


Looking at GiveWell's analysis, they don't seem too focused on office supplies, but rather on whether the charity did any follow-up studies, how far it scales, and what it costs.

But the follow-up studies seem far more important than the per-unit cost of mosquito nets.


> I don't find utilitarianism a compelling moral philosophy

Could you expand on this some more? The primary argument against utilitarianism I recall is the hypothetical situation of there being a being that gets infinitely happy if you donate to it, so that the maximally useful thing would always be to donate to that being (or something like that, but you get the gist).

But such a being does not exist, and if we accept that our utilitarian gifts are not going to be perfect, isn't it a reasonable assumption that by trying to maximise the amount of good that can be done for our bucks, on average, the amount of good that will be done for our bucks will be higher than if we did not approach it like that?


Here's a classic problem for utilitarianism:

You are a doctor. You have four patients. Three are young men in otherwise good health who need an organ or they will die very soon -- two of them need kidneys and one of them needs a heart. Your fourth patient is a homeless person who's broken his leg but is otherwise well. You know for a fact he has no friends or family who will notice he is gone, and that nobody will know it if you kill him. Under a strict utilitarian framework, it is not only permissible but actually mandatory that you kill this man and harvest his organs to maximize utility.

Another popular one is a variant on the trolley problem. Most people feel it is morally permissible to change the switch on the train tracks to go toward the side with fewer people, even though it will lead to some people dying. Most people do not feel it is permissible to push someone (say in a car) in front of the train in order that they may stop the train while being killed in the process.

There are others of this genre but you probably get the basic idea. Utilitarianism can lead to plainly monstrous conclusions. I'm suspicious of any "formula" to solve all ethical problems.


> I'm suspicious of any "formula" to solve all ethical problems.

Right, I think that's what I'm mostly interested in - what if you treat utilitarianism as a guideline? If you only apply it in cases where you think most people would agree on what is "most useful", would that not lead to better decisions on average, without influencing potential hypothetical edge cases?

(I also happen to think that the most useful thing in your situations is to respect the law, and have the law say that you're not allowed to exploit the homeless person, since that also influences the amount of trust we have in other people and hence the quality of life, but that's probably less relevant for what I'm interested in now.)


If utility is simply one factor of many then we're not doing utilitarianism.

I don't think the law thing works at all. Laws can be and often are unjust. Imagine I live in a twisted society where the doctor killing patients to harvest organs is legal. Is it now ethical? This is not purely theoretical -- the people operating concentration camps in Germany weren't violating German law.


I don't think "utilitarianism" incorporates any principle that homeless people are worthless. Really, depending on how you define utility, it can be molded into anything. Note that harvesting organs means he will definitely die, but the organ transplants may fail, or may only work for a limited time. It could be rather than assuming "two lives saved minus one life lost" you should use "probability/quality adjusted years of life gained". And incorporate uncertainty.

Also, in your example of pushing someone in front of the train, wouldn't your moral intuition depend on the stakes? Suppose that the train not being stopped was going to set off a chain reaction that would kill all life on earth?


Who said it did? The point is that saving multiple lives produces greater utility than allowing one person to live, so if utility is your only consideration it is fine to murder people to save lives. The point of making the donor homeless and a loner is just to remove objections about how eventually someone is going to notice nobody is paying the rent, or you'll make other people sad by killing him, or you'll reduce their trust in the medical system, or various other answers that sidestep the problem.


"The point is that saving multiple lives produces greater utility than allowing one person to live, so if utility is your only consideration it is fine to murder people to save lives."

That depends on what utility function you're using.


I can't think of one that avoids this problem. Can you?


I did throw out "probability/quality adjusted years of life gained" to address the obviously grossly inadequate metric of counting "lives gained/lost".


That can't really be it, can it? Then we just add patients who need a blood transfusion, two who need corneas, someone who needs a liver, and so on, until we're positive that the expected value of life is higher.


This seems to me kind of like saying that if you pick an arbitrary number k and keep summing a series from 1..n, there must be a point at which the partial sum exceeds k. But if the series converges, it may stay below k for every n. And if you're talking about real life, the total benefit is uncertain too. So I am not at all convinced that this kind of utilitarianism easily leads to decisions that we don't see in real life.

I kind of think that society does use something like the measure of utility I described - we just don't talk about it a lot. For instance, doctors will refuse to do an operation on someone who is sufficiently old and frail. Courts will award less to an elderly victim of medical error. And so on.

Edit/post script: Also, I think you implicitly are assuming that utility is one dimensional, and that you can add up quality of life changes and balance them against lives ended. That does not necessarily seem valid or essential to some form of utilitarianism.


> Edit/post script: Also, I think you implicitly are assuming that utility is one dimensional, and that you can add up quality of life changes and balance them against lives ended. That does not necessarily seem valid or essential to some form of utilitarianism.

It seems that way to me. What difference does it make to utility whether a life ended naturally or unnaturally? Indeed, death from organ failure is probably much more painful than whatever I might do as an unethical doctor. Once you're looking at utility + non-utilitarian considerations like "murder is wrong," it's simply not utilitarianism anymore.

Since we're hung up on details, let's look at other examples:

* You're a judge in a frontier town with only a rudimentary justice system wherein the only possible punishment is hanging the accused person on the spot after they are found guilty. A townsperson is accused of witchcraft. You know this defendant is not guilty, because there is no such thing as witchcraft, but you are also certain that if you rule that the defendant is not guilty a riot will break out and several people will be killed. Therefore, the utility-maximizing option would seem to be falsely finding the defendant guilty and ordering execution. Is that the ethical thing to do?

* You are preparing a body for burial. The body is to be buried with several pieces of precious jewelry, per the family's request. You know that if you steal the jewelry, replace it with costume jewelry, pawn the real jewelry, and use the funds to buy your friends dinner, nobody will be any the wiser, and you will have maximized utility by using the jewelry to bring your friends happiness rather than burying it. Ethical?


You seem to be hung up on utilitarianism implying a particular one dimensional utility metric. I don't see why that is necessary, to call something "utilitarianism". Why can't utilitarianism be conceived of as parameterizable? Without arguing for any particular ethical system, I think the world as it exists operates largely on a utilitarian basis, just with a less simplistic utility metric, similar to what I mentioned twice. Is acting according to this "ethical"? I have no interest in debating that. Your examples are starting to sound like things I believe have happened frequently in history, unethical or not.

Edit/post script: I am not so sure that rules like "don't murder" are in conflict with utilitarianism as I think of it existing. Maybe if you look at moral choices as an optimization process, simple rules function as heuristics to deal with uncertainty, bias and lack of information on utility.


I don't follow. The world may often operate on intuitive, inchoate, proto-utilitarian principles, but I don't think most of us accept a just-world principle, so that observation doesn't take us very far. Utilitarianism is an ethical framework used to evaluate ethical claims. Why do you even care about it if you're not interested in talking about ethics?


Even if there are things that quantitative measurement can't capture, perhaps we could at least use it to rule out donating to some charities?


If you want to answer a question like "who distributes the most malaria nets per dollar?" then quantitative measure is very effective. I don't think that's most questions facing someone who wants to do good, though.


I want to make sure it's said that effective altruism is intertwined with the rationalist movement (Scott Alexander[1], Eliezer Yudkowsky[2], Robin Hanson[3]) and that, among these so called rationalists and the parts of the effective altruist community where they hold sway, there are a lot of advocates of AI risk mitigation research (what this means, who knows). These people see AI as the greatest risk to humanity, if you agree to a very long list of tenuous assumptions and implications.

I don't think the one paragraph in this article where this is mentioned does enough to emphasize this part of the community. Many members of effective altruism use it as a front to recruit people into their belief system, which is centered around devotion to / fear of future AIs that exist solely as thought experiments. And while their leaders don't necessarily explicitly endorse it, the communities they foster tend to also be fairly right wing / race realist / misogynistic / bigoted.

Many of the people in the rationalist and effective altruist community believe that if they don't help create AI any future AIs will create a hell and punish them in it. Seriously. That is a serious belief.

Check out more: https://rationalwiki.org/wiki/Effective_altruism

http://benjaminrosshoffman.com/effective-altruism-is-self-re...

[1] http://slatestarcodex.com/2018/03/26/book-review-twelve-rule...

[2] https://www.lesswrong.com/posts/3wYTFWY3LKQCnAptN/torture-vs...

[3] https://twitter.com/robinhanson/status/989535565895864320?la...


I feel like this criticism is largely unfair.

First off, yes, there is overlap between effective altruism and the rationalism community. I think that makes sense when you want to try to use reason instead of intuition to make decisions.

I fail to see why you should qualify them as "so called" rationalists. Though besides Yudkowsky (and maybe even him) I honestly haven't heard enough of them to defend them. If you do want to sling in some defamatory remarks at their expense I feel like they should be backed up (or left unsaid), though.

You mention that some people see AI as the greatest risk to humanity. Perhaps I misunderstand but the way you phrase this, it sounds like you think this is absolutely ridiculous. If so, why would that be? And what long list of assumptions would you need to agree to? And why would it be bad for there to be a common starting point for discussing this? I think there's a fair amount of uncertainty in how any superintelligent entity would act, so a certainty of AI being terrible seems silly. However, a strong belief that it can pose a large threat seems, honestly, evident.

You say people use "AI risk" as a front to recruit people into their belief system. There is so much wrong with this... first off, why is it a front? That implies deception. Secondly, it is one of many facets. The core of EA is the desire to have a (large) positive impact. If some people think that they can make their impact by working on the AI safety issue, why do you feel the need to portray that as nefarious? Finally, "belief system" sounds incredibly dogmatic. EA is not a church. Yes, there is a set of beliefs that most people in the EA community would subscribe to. But I don't experience EA as some echo chamber where everyone is forced into some kind of mold. Rather, people challenge both each other's and their own ideas. There are inevitably going to be some biases and filters, but your portrayal of EA as a cult (purposefully or not) is inaccurate.

As far as EA communities tending to be alt-right... what on earth are you smoking? I help run a local chapter and the focus is highly left-wing. And anyone I've noticed that's slightly more right wing is definitely not of the misogynistic or racist side. I recently listened to an 80,000 Hours podcast with Bryan Caplan, and noticed he's libertarian. While I think libertarian views are mostly bonkers, at the very least the way his libertarian views showed (e.g. arguing for open borders) was not insane on the level I'm used to from libertarians. Either way, this is an exception. Even if you can list some well known names that also have some strange views, I can say with a high degree of certainty that it is not even remotely representative of the community as a whole, especially not as I've seen it in the Netherlands.

FINALLY: I've honestly only ever seen Roko's Basilisk being mentioned on a meme page for EA. So much for taking it seriously.


I am a bit skeptical about the "scientific rigor" part. Sure, all charities are not equal and there is a big difference between donating to musical education in the US and curing blindness in Africa. But estimating the impact usually involves a lot of guesswork and quantifying the "quality of life" factors. What is the impact of George Soros' pro-democracy foundations? What is the impact of Amnesty International? What is the impact of me supporting the education of one child in Bolivia via ActionAid?


I'm probably missing something, but I have some serious concerns about the effectiveness of this approach to charity.

First of all, without some coordinated effort to spread out donations, won't some of the more effective charities still suffer? If everyone gives to the top 4 charities, for example, won't the 5th most effective one suffer? I don't believe this movement yet has the traction behind it to have this kind of negative effect, but won't it get worse as the movement grows?

Second of all, who decides what is the most effective cause (or who interprets the studies that indicate which cause is most effective)? If I could spend 1 dollar and know it would save 5 lives or prevent 50 people from living with horrible mental illness, I would have trouble deciding which one to give to, even if given empirical evidence as to which cause is the most beneficial to society. Any field dealing with this many human variables is bound to produce at least some skewed results. To me, it seems like the trolley problem with a limitless number of tracks with different situations.

Third of all, some of the causes that are not effective might still benefit from some amount of involvement. The example they give - volunteering at a soup kitchen - is one such example. Say giving to the homeless is effective, but volunteering at a soup kitchen is not. Someone still needs to volunteer at the soup kitchen, even if it's not many people.


Your first paragraph is something I think about from time to time, but as you point out, we're not to the point where it's a problem.

Right now we're in a competitive market where we select for emotional impact and marketing prowess. EA, for me, is about moving to a market where we select for effectiveness. We should, once we have the resources as a movement, include funding for experimenting with new organizations.

FWIW, it's worth noting that Givewell.org (the EA organization I pay the most attention to) includes "room for more funding" as a major decision point in its recommendations. Funding is a constraint for most organizations, but they recognize that dropping a hundred million dollars on a small organization isn't always the best thing, even if that organization does the best work.

> The example they give - volunteering at a soup kitchen - is one such example. Say giving to the homeless is effective, but volunteering at a soup kitchen is not. Someone still needs to volunteer at the soup kitchen, even if it's not many people.

80000hours.org talks about this some, mostly in the consideration of working in direct service vs. earning to give. Part of this is what you're passionate about -- if high paying professions would make you miserable, working in direct service might be for you. Part of it is opportunity. Not everyone is cut out to be a lawyer/doctor/engineer.

I do disagree with "someone still needs to volunteer". Someone needs to volunteer or get paid to work at the soup kitchen. If the choice is between having a doctor spend their time volunteering, or having that doctor donate an extra $100k/year, and $40k of that paying someone to replace the missing volunteer time, the soup kitchen is still up $60k, and will probably get better results from having someone experienced doing the work.
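
A rough back-of-the-envelope sketch of that comparison, in Python (the donation amount, replacement cost, volunteer hours, and per-hour value below are all illustrative assumptions, not real figures):

    # Back-of-the-envelope: volunteering vs. earning to give.
    # All figures below are illustrative assumptions, not real data.

    def value_of_volunteering(hours, value_per_hour):
        # What the soup kitchen gets from the doctor's donated time.
        return hours * value_per_hour

    def value_of_earning_to_give(extra_donation, replacement_cost):
        # Donation left over after paying someone to cover the volunteer shifts.
        return extra_donation - replacement_cost

    print(value_of_volunteering(200, 25))             # 5000
    print(value_of_earning_to_give(100_000, 40_000))  # 60000

On those made-up numbers the earning-to-give route leaves the kitchen far better off; flip the assumptions and the answer can flip too.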

In the way we traditionally evaluated charities, this would be a worse outcome, because the overhead just went way up. One of EA's main things is to stop worrying about overhead. When I buy a car, I want the best car per dollar, not the car from whoever pays the CEO the least. When I buy good, I want the most good I can get for my dollar.

I disagree with other EAs in a ton of areas, but what makes us EAs is that one principle.


GiveWell actually uses this in their rankings; the charity needs to have the ability to scale up with more money.


I can see you don't miss much. You've made some excellent points!

> First of all, without some coordinated effort to spread out donations, won't some of the more effective charities still suffer?

This is almost certainly true. However, it's also true that this is the case in any scenario where there are less than infinite resources. The starvation issue is very real today.

With that said, I think adherents to EA hope that their movement will incentivize effectiveness in charities. Some might postulate that there are worse possible outcomes than charities competing for effectiveness.

> Second of all, who decides what is the most effective cause (or who interprets the studies that indicate which cause is most effective)? If I could spend 1 dollar and know it would save 5 lives or prevent 50 people from living with horrible mental illness, I would have trouble deciding which one to give to, even if given empirical evidence as to which is cause is the most beneficial to society.

This is another excellent point! How people pick causes is a real concern. It's also one people face today. I think it's largely still a matter of personal choice and priorities. The EA approach tries to put a framework in place for explicitly reasoning about what people currently tend to do implicitly.

It absolutely is a trolley problem with infinite tracks and infinite victims. Yet, we already live that trolley problem.

> Third of all, some of the causes that are not effective might still benefit from some amount of involvement. The example they give - volunteering at a soup kitchen - is one such example. Say giving to the homeless is effective, but volunteering at a soup kitchen is not. Someone still needs to volunteer at the soup kitchen, even if it's not many people.

Not to nitpick your example, but a sufficiently well-funded soup kitchen could likely afford to hire some of the homeless to staff it. Then it could be free of the need for volunteers and lift people off of having to live on the streets! It wouldn't be many people, but it also seems like it might be preferable to staffing the kitchen entirely with unpaid volunteers.

More broadly, you're once again absolutely right. This is a real and pressing concern in a scenario where Effective Altruism becomes literally the only way any person thinks about charity. However, some might postulate that we are not currently in that scenario.


I suspect, but don't know for sure, that most charities experience diminishing marginal utility for each dollar received after some point. Therefore if people give too much to a single charity, its rank will fall. If there is no diminishing utility, you would have a utility monster[1], which would cause the problem you pointed out.

[1] https://en.m.wikipedia.org/wiki/Utility_monster


> I suspect, but don't know for sure, that most charities experience diminishing marginal utility for each dollar received after some point.

This is absolutely true. If you're a charity and your primary activity is distributing anti-malarial bed nets, once you have enough bed nets to meet demand, you're going to have to move on to other activities that won't necessarily be exactly as effective as the bed nets.

In a contrived thought experiment with "perfect" information about the effectiveness of activities, EA would suggest a "Greedy" (algorithm) style of allocation. For each incoming dollar, you send it to whichever activity has the greatest utility per dollar given, and repeat that for as many dollars as you have.
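
A minimal sketch of that greedy allocation in Python, assuming each activity has a known marginal utility per dollar that decays as it absorbs funding (the activity names, utilities, and decay rate are made up for illustration):

    import heapq

    # Toy greedy allocator: always send the next dollar to whichever
    # activity currently offers the most utility per dollar.
    # Utilities and the decay factor are illustrative assumptions.
    def allocate(budget, utilities, decay=0.999):
        heap = [(-u, name) for name, u in utilities.items()]  # max-heap via negation
        heapq.heapify(heap)
        spent = {name: 0 for name in utilities}
        for _ in range(budget):
            neg_u, name = heapq.heappop(heap)            # current best activity
            spent[name] += 1                             # allocate this dollar there
            heapq.heappush(heap, (neg_u * decay, name))  # diminishing returns
        return spent

    print(allocate(10_000, {"bed_nets": 10.0, "deworming": 8.0, "cash_transfers": 5.0}))

With diminishing returns baked in, the allocator naturally spreads money across activities once the best one's marginal utility drops below the runner-up's, which is roughly the "room for more funding" idea mentioned elsewhere in the thread.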


I first heard of this movement in the book Strangers Drowning. The book is structured as a varied series of accounts of extreme altruism along with some philosophical discussion of cultural attitudes towards it. This movement appears in one chapter about members of Giving What We Can, an organization of individuals donating at least 10% of their income to charity. The chapter follows one woman who gives away almost all of her six-figure salary every year.

I'd highly recommend Strangers Drowning if this sort of thing interests you. It's a fairly short read and made me reconsider how I regard altruism by looking at the most extreme examples of this human impulse. One of the best points the book makes is the surprisingly cynical views towards altruism which prevail in the US.


One of the best points the book makes is the surprisingly cynical views towards altruism which prevail in the US.

This comment echoes something I have felt for some time. I feel that aspects of US economic development which are rooted in 'self-reliance' have encouraged this. Ayn Randian world views?


The book talks about Ayn Rand a bit, but spends more time discussing the damage done to altruistic ideals by Adam Smith's The Wealth of Nations and much of Freud's work. Freud considered altruism and philanthropy a form of masochistic pathology.


Freud/fraud


An entertaining, particularly biting, and horribly shortcoming-riddled statement of the case against EA that nevertheless captures a common set of views:

https://ssir.org/articles/entry/the_elitist_philanthropy_of_...

(This is the "defective altruism" article.)

Is there a name for a set of views that are correlated with each other in prevalence, in the sense that they occur together more often than could be explained by chance alone? It's similar to the idea of a syndrome in medicine: "opinion syndrome", perhaps?



I think the Effective Altruism movement is great. I was wondering recently what the most effective way would be to donate money specifically to help the less fortunate within California, and unfortunately it is not clear to me how to do it. GiveWell seems to do a pretty good job evaluating charities fighting diseases in poor countries, so hopefully similar operations spring up to evaluate other types of charity as well.


That's because saving a life in a poor country is cheaper than in a rich country. If your goal is to maximize the number of lives saved, you'd naturally look at poor countries.


> Not all of this money was given with the intention of maximising human welfare. Take, for instance, the Make-A-Wish Foundation, which helps children stricken with life-threatening illnesses, by granting “wishes”, such as meeting celebrities or visiting theme parks. The typical wish costs the foundation around $10,000 to fulfil—heartwarming for the recipient but of little help in improving health generally.

This pretty much captures my main complaint about EA. Being mindful about your philanthropy is a good thing, but EA seems to turn into shaming people who value things differently.

[edit] I would add that the Make-A-Wish foundation has a very significant quality of life impact on the kids AND their support network

[edit2] EA = Effective Altruism (the term was in the original post title, but has been edited out)


There's nothing shameful about donating to the Make-A-Wish foundation. It's certainly better to donate there than to spend the money on booze, video games, fancy clothes, etc., or the vast majority of things competing for your money. But you still get far less moral credit for donating there than donating somewhere more effective. And yes, you can count the effects of the Foundation on the kid's support network, but remember that a kid who dies of malaria also has friends and family who will be affected by their death.


> But you still get far less moral credit for donating there than donating somewhere more effective.

I take issue with this attitude. How is this anything but degrading and shaming those who give their time and money to a noble cause? I frankly don't see how your opinion on efficacy matters, it appears you want to make people feel bad for doing good-but-not-good-enough things.


Because I want to make the world a better place it's incumbent upon me to encourage people to do so efficiently. It's sad that this might make some people feel lower status than they think they deserve but compared to children dying it's really not my main concern.


The Make-A-Wish Foundation is mostly useless. People have a moral obligation to set aside their feelings in order to do the right thing. Giving to the Make-A-Wish Foundation is motivated mostly by a desire to feel good.

If you want me to stop saying that, prove me wrong. Calling me mean isn't an argument. If what I say makes people feel bad, the solution isn't to start lying to make them feel better.


> Giving to the Make-A-Wish Foundation is motivated mostly by a desire to feel good.

Me spending $40 to go out to dinner with my wife is motivated by a desire to feel good. Should I instead "set aside my feelings" and spend that on your list of approved charities "in order to do the right thing"? Have you given up all frivolous spending and donate it to causes you believe to be effective?

I should have started by saying that it's good that you're thinking of which charities you find to be a more effective target for donations. It turns from thoughtful to unnecessary when you pass judgment on others for giving differently than you would give.


Giving you any credit at all for any of your donations is still passing judgement on you.

EA is just doing the same moral judgement using a different accounting metric.

By going out to dinner with your wife, you are making both you and your wife happy. For that, you get 1 morality point in my book.

If you instead took that money and gave it to the make a wish foundation, you get 10 morality points.

And by giving it to the gates foundation, to help prevent Malaria, you get 100 morality points.

It is exactly the same as if you yourself were to congratulate someone else on donating to a charity that you like. You are making a moral judgement on them.

Me making a moral judgement on you is no different.

Nobody "has" to do anything. I will fully admit that the average doctor is a better person than I am. Just like how I am probably a better person than the average pay day loan scammer.

Everyone makes judgements every day about other people. EA is just saying that these judgements should be evidence based.


It's useless according to your metric of usefulness, not according to someone else's.

I think it is 100% a valid moral stance to care about your community or your nation, and not feel any moral obligation to help people outside of that. The world is too big and chaotic for every human to feel the same degree of moral obligation for every other human.

Moreover, maybe Make-A-Wish's main positive outcome is not that one kid gets a wish, but that everyone gets to 'live in a society where Make-A-Wish exists', thus kindling optimism and community and reminding us that the world is still kind in big ways. It is not hard to see that the net result on one's society of Make-a-Wish could, for some systems of evaluating morality of actions, be far higher than well-intentioned throwing money at some faraway place where you would never see any result, either if you care much more about local effects, or if you think signaling effects can be comparable in magnitude to actual 'good'.

Anyway, the crux of it is that your way of accounting for the morality of actions is actually pretty particular and you're demanding everyone else follow it and then shaming them if they don't. Not your morality, your system of accounting, which is far less 'obviously' right.


> The Make-A-Wish Foundation is mostly useless. People have a moral obligation to set aside their feelings in order to do the right thing. Giving to the Make-A-Wish Foundation is motivated mostly by a desire to feel good.

> If you want me to stop saying that, prove me wrong. Calling me mean isn't an argument. If what I say makes people feel bad, the solution isn't to start lying to make them feel better.

People aren't robots, and there's no moral obligation for them to act like they are.


I've never seen an EA advocate berate a donor giving money to the Make-A-Wish Foundation.

I do often see people injecting themselves into discussions of EA once they get defensive about their charity choices.

The people who seem to be judgmental are those who dislike how EA advocates choose to operate.


"...seems to turn into shaming people who value things differently."

Freedom of association.

Why would I (willingly) do business with people working against me?


Are you suggesting donors to make-a-wish actually value making one dying kid (and his support network) happy more than saving the lives of several?

I doubt that's usually the case because if you consider it for a moment it seems so unfair and cruel. No, it's probably something else that motivates their decision...


They probably do value spending $10,000 on making one dying kid happy higher than $10,000 worth of medical research.

If you claim $10,000 worth of medical research will save the lives of several people then clearly we're doing something wrong with the other billions we spend on medical research.


$10k worth of malaria nets will save a couple of lives.


That's a fair point, but ultimately people are going to care more about children that are 'closer' to them. You're free to criticize people for this, but it probably won't do you any favours.


> That's a fair point, but ultimately people are going to care more about children that are 'closer' to them. You're free to criticize people for this, but it probably won't do you any favours.

Also, making charity too much about distant problems and far off research outcomes factors out the civic and community-level thinking that actually drives and sustains a lot of charity.

It also makes charity solely the domain of experts. Instead of seeing a problem firsthand in your first-world community and reacting to fix it, you have to delegate to someone who you hope knows the highest needs in far off lands.


Our intuition regarding communities stems from a history where we only ever communicated with those near us. Right now, we are able to go anywhere in the world within a day. We are able to communicate worldwide within seconds. Clearly our intuition has not caught up, but maybe given these changes in our community (which is now far less defined just by some kind of radius around our location), we should update our actions to reflect this?

It makes sense not to feel morally responsible for things that happen out of view when you cannot know what is going on and have no way to impact it. Thing is, we do know a lot about what is going on and we do have tools to impact it. It just doesn't feel intuitively satisfying. It is possible to internalize this satisfaction regardless, though.


That’s true but also what makes EA interesting. Ineffective charity is as good as nothing and sometimes worse. All the money poured into former colonies is a good example where it may be doing more harm than good.

The hard part of EA is that finding out what's effective is so difficult. Maybe local is much better.


Not if they never reach their target. Effective altruism is a joke; it's just rationalism applied to charity, with all the idiocy the rationalist movement possesses. People who talk endlessly about problems like dolphins who obsess over hang gliding.


Are you saying the Against Malaria Foundation is a scam and they don't actually get nets to people?


Medical Research saves a lot of lives over time.

A drug that saves 1,000 lives per year worldwide at the cost of 1 billion dollars seems expensive. But that's 100,000 lives over the next 100 years at $10,000 each, and we would still know how to make that drug in 100 years, dropping that to $1,000 per life over ~1,000 years.

And if you consider 1 billion dollars per 1 thousand lives/year, then cancer's 8.2 million deaths/year is clearly worth ~$8.2+ trillion in research to cure, or ~$4.1 trillion to cut in half.
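
Spelling out that arithmetic in Python (the figures are the same illustrative assumptions as above, not real data):

    # Cost per life saved by a one-time research investment, amortized over time.
    # All figures are illustrative assumptions from the text above.
    def cost_per_life(research_cost, lives_saved_per_year, years):
        return research_cost / (lives_saved_per_year * years)

    print(cost_per_life(1e9, 1_000, 100))    # 10000.0 -> $10,000/life over a century
    print(cost_per_life(1e9, 1_000, 1_000))  # 1000.0  -> $1,000/life over a millennium

    # Scaling the same $1B-per-1,000-lives/year rate to cancer's ~8.2M deaths/year:
    print(8.2e6 / 1_000)                     # ~8200 units of $1B, i.e. ~$8.2 trillion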

PS: Global cancer research only recently hit $100 billion per year, so it's going to take a long time to reach $4.1 trillion, yet we already save a large chunk of potential cancer victims.


While that's true, something like $10,000 would at best accelerate medical research a bit; ultimately it will only save those people for whom medical research would otherwise have been too late. It's not entirely clear what this figure would be, but with billions spent each year it's probably not several lives per $10,000.


I guess it just comes down to at what number the effectiveness flips. I'd much rather spend €2 on a bar of chocolate for some kid I encounter who just heard that he's terminally ill or something, than donate that to research. (Though perhaps at that point it's the gesture that's most valuable, not the money.)

Of course, that's also not often a choice I (have to) make. I've a set amount of donations to a few regular charities, whereas a situation as above would "count" towards my regular daily expenses rather than charitable givings.

It's probably also related to always being able to do more. Yes, maybe giving €x to one of the Givewell organisations is more effective than giving it to Make-A-Wish, but often it's not a game of either-or. I'm sure a lot of people donate to Make-A-Wish because they encounter it and sympathise with it, rather than spending effort looking for the most efficient way to spend <arbitrary amount designated for charitable donations>, so it's not even a question of effective charity vs. MAW, but MAW vs. sitting there in your savings account.


Discovery is by its nature a random process, so a single $10,000 investment could, with some low probability, make a 30-year difference for some drug.

People only investigate a finite number of things; adding just one more early-stage candidate to the millions already tried is not likely to be useful, but we simply don't know.

PS: After that point much larger sums are involved, but even then only a finite number of tests get done, and they are very dependent on funding.


Yeah, how dare donors use their money in ways they see fit! Let's make an organization that shames other organizations for spending in ways we find unsavory or inefficient.


The donors should be free to use their money in ways they see fit, and everyone else should be free to point out that there are several orders of magnitude more effective ways to help people.


You appear to be using innuendo, but I don't know what you're implying. Here is my answer reading you literally:

Yes. Your argument, while presented against Make-A-Wish, actually applies to all artistic endeavors. Yes, it is good and right to make life worth living.


I think it boils down to having or not having, let's call it, a "religious mindset".

If you have a religious mindset (it doesn't matter if you are an atheist) you donate to "save your soul", or, to be less vague, to alleviate your moral distress. If that's the goal it doesn't really matter whether the charity you give to is effective or not. It only matters whether it makes you feel like a better person.

If you, on the other hand, care about actually helping people, it matters very much who you donate to.


>Are you suggesting donors to make-a-wish actually value making one dying kid (and his support network) happy more than saving the lives of several?

Yes.

>I doubt that's usually the case because if you consider it for a moment is seems so unfair and cruel.

Hmm? What about all the discretionary money you spend on dates, living in a nicer than necessary apartment, money spent on hobbies, etc? Do you value those things more than the lives of African children who will die if you don't donate your discretionary spending? Well of course you do because you do one and not the other.

>No, it's probably something else that motivates their decision...

And what would that be?


EA?

EDIT: Make-A-Wish seems more like a giant PR machine. Granted, it's helping some, but it's wasting a lot of resources that could be better allocated. I guess I am part of the movement without ever having thought about it.


"Effective altruism is a philosophy and social movement that uses evidence and reason to determine the most effective ways to benefit others."

https://en.wikipedia.org/wiki/Effective_altruism


There is motivational value in Make-A-Wish. I'm not saying that extensive resources should be committed to it, but if the emotional impact of a Make-A-Wish story spurs people to be more altruistic or to become childhood leukemia researchers, then it's not wasted money.


I have thought about it... but I don't really know whether that's the case. There is no encouragement or any instruction on how to help yourself. We aren't all famous singers. The videos are great to watch, but to me it really feels more like a PR machine.


It may be useful to consider the Make-a-Wish foundation as a performing arts organization that focuses on a large number of intimate performances.


> who value things differently

Can you try to formulate those values?

> I would add that the Make-A-Wish foundation has a very significant quality of life impact on the kids AND their support network

But given the cost per instance, that impact reaches only a small fraction of all cases. A smaller impact for everyone, or one that only materializes in the future, would likely generate more utility if you add it all up.


> Can you try to formulate those values?

I donate money to the AMF and SCI charities mentioned in the article.

But I might also donate to the charity that supports my cousin who has cerebral palsy, because I can see with my own eyes that they're doing good work; and because it's work I'd like to see continue.

Or I might donate to the charity that gave me a scholarship when I was younger, to repay them and so other people can benefit like I did.

Or I might donate to Wikipedia, because although it's hard to value wikipedia page views in terms of quality-adjusted life years saved, you can surely get a great many page views for your dollar.

Or maybe my colleague in work is running a marathon for charity, and although our professional relationship wouldn't be damaged at all if I didn't donate, our professional relationship would be improved if I did; perhaps I see a possibility to leverage better relationships into success at work.

Or I might consider that I have fully discharged my moral duty for this financial year by donations I've already made; and hence I can spend money on things that are utterly uncharitable if I feel like it, and spending money on things that are inefficiently charitable is relatively better than that.

Or I might donate to a political organisation, because I think I directly benefit from their activism....


> Can you try to formulate those values?

This is an issue I see with effective altruism and other hyper-rational movements (new atheism, veganism). It's not enough to establish a viable alternative; they also have to exterminate every other line of thinking, in pursuit of the almighty rationalization.

There's a lot of charities out there and I think it's enough if EA creates a marketing channel for some of them to have a unique advantage. "No one paid attention to us until we showed we save the most lives per dollar". If everyone agreed donating was entirely about maximizing lives/dollar then maybe this truly is the most important stat. But people donate for scads of less-than-rational but still well-meaning reasons.

It works against EA's (new atheism's, veganism's) best interests to draw a line in the sand and alienate everyone else. To me this feels as rational a conclusion as you can get, which makes it funny to me that the people particularly taken up by hyper-rationalism behave the way they do.


> Can you try to formulate those values?

Perhaps your question is phrased poorly, but it appears you're asking OP to rationalize every value a donor has when they donate to an organization like Make-A-Wish. Is that what you meant? If so, where would one even start to answer this question?


Well, I wanted a list of values that, for them, rank higher than human lives. I mean, every donation has an opportunity cost, and that cost can be lives not saved. People are implicitly willing to make such tradeoffs. What I am asking is for people to think about it and make those preferences explicit.


Here is a value that I have which runs counter to effective altruism:

I have a small amount of cash in my pocket, which (given enough time) I will spend on junk food. However, I see a random charity donation box, so I put my money in that and feel happy that I've done two good things.


Why is The Economist being upvoted so much lately? I was a subscriber and they were a pain to unsubscribe from. The articles are OK but do not contain actual information or news, imho.


This is something close to what I work with. Ten years ago we started an organisation to help international development organisations use IT tools more effectively. In 2012 we took over the development and operation of a field data collection system from Water for People and together with them released it under an open source license. We work with 20+ governments and 200+ international NGOs in providing them with data solutions.

In my opinion, using data in international development work to see what is effective and how the work can be improved has not been seen as particularly important by a surprising number of people working in the sector. This is changing.

The awareness that more and better data are needed, along with an understanding of how to handle that data and the need for sustainable technical infrastructure, has existed for some time in the sector, but has been slow to come to the forefront. At the end of the work with the Millennium Development Goals [1], an independent expert group was set up to give advice to the UN Secretary General [2] and to highlight this problem (Tim O’Reilly was probably the only member of the group you would recognise).

But actually getting the sector to change has been very slow going. An analysis by a group of organisations has identified the underspending on data and analytics as essentially endemic. [3] They estimate that to be able to track the indicators chosen for the Sustainable Development Goals [4] we need to spend some US$350-500 million/year more than we do today.

(Not that all the indicators in the SDGs are that easy to measure and track. A colleague in the sector said “Of the full set of indicators (232), 88 are not backed up by available data or a suitable method to gather it, and 55 have some form of method but no data. […] Many of them are bordering on the realm of the unmeasurable”.) [5]

There is a change coming, but I don't think it is happening fast enough. My opinion is that we won't be able to achieve the SDGs by 2030 without data, and we are not investing enough.

As part of our work we have since 2012 helped 11 governments collect data about water access and sanitation to cover about 130 million people, mainly in West and East Africa, in South East Asia and the Pacific. And we are working with organisations such as UNICEF and WHO, together with governments, to approach this in a systematic way with long term sustainability for the processes and systems that get put in place, but it is an uphill struggle.

[1] https://en.wikipedia.org/wiki/Millennium_Development_Goals

[2] http://www.undatarevolution.org

[3] Paris21, UNICEF, World Bank, ODI, Earth Institute, Open Data Watch, Simon Fraser University, UNIDO https://sustainabledevelopment.un.org/index.php?page=view&ty...

[4] https://sustainabledevelopment.un.org/index.php?page=view&ty...

[5] http://news.trust.org/item/20180524172651-1sjk4/


Let’s see how long the reproducibility crisis takes to show up here too


Not new, and not helpful in my opinion. For a bunch of reasons, including Goodhart's/Campbell's Law (look em up), and http://prospect.org/article/state-debate-lessons-right-wing-...


I, for one, will never trust the rich (especially when it is clear they pay people to promote their deeds and paint them as angels).

I think whatever they do, they do it to gather more wealth (even if it is not apparent to the common man).

For an example, see how the Gates Foundation tried to operate in India [1].

[1] http://jacob.puliyel.com/paper.php?id=370


There are wealthy individuals and organizations that seek to make a profit from charitable projects, and the example presented is far from the worst; the corruption seen in the Red Cross is one such issue. However, I expect the proportion of those instances relative to genuinely good efforts is overblown by the media.

The wealthy are generally wealthy because they look to build sustainable models, where the goal is to avoid simply throwing resources into a pit just because it feels good to help. Doing that bankrupts everyone - consider a broader analysis before throwing the baby out with the bathwater.

Give a man a fish and he eats for a day; teach a man to fish and he'll eat for life.


> Give a man a fish and he eats for a day; teach a man to fish and he'll eat for life.

Which is exactly why we not only do the whole handouts-as-inexpensive-penance thing but also make sure to keep the system 'sustainable' as you put it by gating off access to the fish reserves, polluting and over-exploiting communal stocks, 'teaching' fishing as a method of being a reliant user of charity-approved fishing gear with built in planned obsolescence of increasingly confusing and complex dependencies, and of course 'supplement' the rudimentary knowledge with a complete moral reconditioning program stressing the virtues of submissive piety and gratefulness as prerequisite to rather dubious advice on self-reliance.


I cannot make much sense of your comment. But let me expand on my original comment and say this.

The only charitable work that I consider genuine is the kind you do not know about. When you read about the work of a person 'x' or a foundation in the papers/media, their work ceases to be charity. It just means the person is gaining something in return other than the "feeling good" you mention. Superficially, or as far as public perception goes, it will appear benevolent, and oftentimes it can be. But there is nothing that guarantees it, since the end goal is not benevolence but something materialistic or sinister.


If the only genuine charity is invisible, how can we maintain a society with a culture of genuine charity? If charity is secret, it's too easy to simply not do charity, since there are no consequences, neither positive nor negative reinforcement.

I have my suspicions about the PR motives of https://givingpledge.org/ , but ultimately, doing good in exchange for positive PR (a form of "buying advertising") is a deal I'd take every day.


> If the only genuine charity is invisible, how can we maintain a society with a culture of genuine charity?

Why would you want to maintain a society that extols charity, 'genuine' or not? Wouldn't you rather live in a society that doesn't need to 'do good' as a cleanup procedure for the 'bad' already done?

Before you say that's a strawman argument, consider that charity exists because things that should be considered human rights are often tossed aside as impractical to implement due to the fundamental poverty of the governance structure.

Here are some interesting tables of 'charitable giving' numbers that may reveal your assumptions and biases:

https://www.treasurydirect.gov/govt/reports/pd/gift/gift.htm

https://cdn.vcapps.org/sites/default/files/upload/VCEP%20-%2...

https://www.fidelitycharitable.org/docs/Fidelity-Charitable-...


Can you clarify this post? I don't understand what it's arguing.

How are fundamental human rights "implemented"? Why would a "right" need an "implementation"?

You are making some argument about government working poorly? How is that relevant to charitable spending?



