I used to feel similarly until I realized that the EA movement isn't a hierarchical organization: it's just a bunch of totally separate orgs who have a common philosophy about how to do good in the world.
Sure, OpenPhil funds AI research. But GiveWell (which, last I checked, shares the same office) lists only charities that work in poor countries in its Top Charities.
Agreed. I've loosely held an EA-like philosophy for about a decade, and I think OpenPhil's orientation towards AI is pretty disappointing.
I account for my time in terms of things like number of people saved from blindness or death due to malaria, and I definitely do not count future simulated persons as worthy of any of the same concern as actual humans who exist today.
I watched a debate involving William MacAskill last summer in which he posed this hypothetical question:
"You are outside a burning building and are told that inside one room is a child and inside another is a painting by Picasso. You can save one of them. To do the most good, which do you choose?"
The point he's trying to illustrate is that, if you knew for certain you could turn around and sell the Picasso for millions and use that money to buy malaria bed nets, the expected number of lives saved by selling the Picasso could be in the hundreds, so there is a real moral dilemma present.
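The arithmetic behind the dilemma is just a back-of-envelope division; here's a sketch with made-up numbers (the sale price and cost-per-life figures below are assumptions for illustration, not GiveWell's actual estimates):

```python
# Hypothetical figures, chosen only to illustrate the expected-value argument.
painting_sale_price = 2_000_000   # assumed auction proceeds, in dollars
cost_per_life_saved = 5_000       # assumed cost to save one life via bed nets

expected_lives_saved = painting_sale_price / cost_per_life_saved
print(expected_lives_saved)  # 400.0 -- "hundreds" of lives, per the argument
```

The real figures are uncertain and contested, but any plausible pair of numbers makes the ratio large, which is what gives the hypothetical its force.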
Like many hypothetical questions, this one feels a bit "off" or "unrealistic", but if you don't get hung up on the oddities, I think one can sense the essence of his question, and it reminded me of the point you're making here as well as a responder's question asking you why you think the way you do.
I do think these questions are hard for us to wrap our heads around -- how to weigh a high-probability, immediate good against uncertain, distant things that might be "much higher value". Part of my human brain goes splat when I try to weigh these things.
In terms of the moral dilemma with the painting, I do have quite a bit of sympathy for the argument that one should do what they believe will produce the most good, which might be to save the painting and buy malaria nets. My father, on the other hand, seemed to believe that to be absolutely morally wrong, which sides with your sentiments. Practically speaking, I think I'd almost certainly save the child, because one's human impulses would be so strong that they would override any high-and-lofty rationality, and one wouldn't have time for deep analysis anyway. But in a hypothetical sense the question does seem quite valid and hard.
> But the question in a hypothetical sense does seem quite valid and hard.
I appreciate this sentiment, and tend to think likewise. However, the situation is a hypothetical. In the end, perhaps what matters most are the practical decisions we make, which almost never involve situations like the one you describe, but are more like "which cause should I donate to?" In that sense, it might be hard to discern between different causes in the GiveWell top lists, but if you're starting from the point of donating €x to a charity, picking one of those at random is probably a good heuristic, and it already beats the fairly widespread heuristic of just giving to something like Make-A-Wish.
https://www.givewell.org/charities/top-charities