My Left Kidney (astralcodexten.com)
284 points by impish9208 on Oct 27, 2023 | 314 comments



"But looking more closely at the increased deaths, they were mostly from autoimmune diseases that couldn’t plausibly be related to their donations."

Kidneys make calcitriol. One kidney means you'll make less, and if you don't supplement, you'll become deficient. Deficiency would (possibly) lead to autoimmune diseases.

Also EPO (erythropoietin). One kidney means less, and less (possibly) means fewer new red blood cells, then (possibly) anemia. Anemia wreaks havoc.

And bicarbonate.

To be clear: if you want to donate a kidney and it's the right choice for you, then do it. It's very kind. Just know what to pay attention to so you can keep yourself in good health.


One small advantage of kidney donation is that it comes with lifetime screening without some of the gatekeeping normally attached to that. I dunno exactly how it works in the US, but here in Canada you can't really just demand blood tests purely for screening reasons; there needs to be a reason for them. But if you're a donor, your transplant clinic will insist on periodic screening, both to monitor for a dropping GFR and for other issues like the ones you mention.

Transplant centres vary for sure (to a surprisingly huge degree), but at least where I am if you do donate the clinic will be watching out for this stuff on your behalf.


Some screenings have gone direct to consumer in the US. Fairly easy to purchase online, make an appointment, then show up for a quick collection.

https://www.ondemand.labcorp.com/

https://www.questhealth.com/sale

Quest is even running a sale right now on some tests!


There are also at-home versions like this YC company https://siphoxhealth.com (full disclosure, I have a conflict of interest here)


cries in New York


About erythropoietin (EPO), I understand it's a hormone whose amount is regulated by the need for oxygen in the blood (i.e. a feedback loop which regulates the amount of red blood cells to be just what is needed to satisfy the oxygen needs of the body).

So EPO is not limited by the kidneys' ability to produce it (I expect, by a large margin); one kidney would be just as able to produce all the EPO you may need. The kidney is not the limiting factor.


A tour full of competitive cyclists would beg to differ with this analysis.


Cyclists who dope with EPO are raising their levels far beyond what is healthy in order to get their red blood cell count to superhuman levels. It has the side effect of massively raising their risk of stroke.


My point is that cyclists dope with EPO precisely because the body doesn't produce enough for your oxygen needs. There is a cap, probably for good reasons as you say.


>they were mostly from autoimmune diseases that couldn’t plausibly be related to their donations

Sounds like extremely brittle logic. Why couldn't they be "related to their donations"?

For starters, the donations could make their immune system weaker (and thus make any autoimmune diseases they already have more impactful) or make it go haywire to combat the post-operation stress...


The idea that "the donations could make their immune system weaker" seems pretty brittle itself. Why? What biological mechanism would support that hypothesis? What kidney function, if halved (assuming the remaining kidney didn't "step up" to fill the gap), could lead to an autoimmune disorder?

Not saying there's nothing, but you haven't provided any more evidence or logic than OP has provided.


I was born with one kidney, a condition called renal agenesis. I’m 53 years old and in perfect health. Doctors don’t recommend any particular action.

As an aside, I’m also Buddhist so if the kidney goes I’m going with it. This obsession with prolonging life is just a distraction from living it.


> This obsession with prolonging life is just a distraction from living it.

Does brushing teeth twice a day also distract you from living?

I think the opposite.


> Everyone knows we need a systemic solution, and everyone knows what that solution will eventually have to be: financial compensation for kidney donors.

Another solution would be to tackle type 2 diabetes, since a large share of kidney failures is caused by it. Apparently, 1/3 of Americans are at risk: https://www.health.com/cardiovascular-kidney-metabolic-syndr...

Having worked for a few years in the diabetes space, I've been hearing that cheaper CGMs are coming from the big players, apparently for <= $1 a day. This will allow people with diabetes to spend more time in the ideal BG range, reducing the risk of kidney disease _significantly_.


The GLP-1 drugs should have a big impact on type 2 diabetes. Interestingly, Fresenius, a massive dialysis provider, has taken a beating in terms of stock price on the GLP-1 data.

That said, diabetes is a long-term disease and it's going to take a decade or two to see a measurable difference.


Or, human-compatible kidneys grown in bioengineered pigs.

GLP1 drugs are also very promising for reducing the rate of kidney disease.


It's a good read. I enjoyed the data analysis and his snarky wit about a few things. I enjoyed him finding out he could apply to another hospital and doing it "out of spite."

I know he wants to encourage you to donate a kidney. I still wish we tried harder to heal the "original equipment from the manufacturer" we are all born with. I have been on the record a long time with this opinion. I get a lot of hatred for it, though I have a condition that frequently results in being a transplant recipient and my only wish here is for better baseline care for people like me instead of razzle-dazzle headlines about flashy "tech" solutions.

Good solutions tend to be boring. I want a boring life where I just get to have good health and not an "exciting" one where I get to tell you how I was dying until some kind stranger died in a motorcycle accident at a shockingly young age and let me have their leftover body parts. (Because people like me need body parts that are a little hard to spare as an act of charity.)


>I still wish we tried harder to heal the "original equipment from the manufacturer" we are all born with

You know that this is hard. Are you saying we should just spend more in that particular area instead of spending on kidney transplants?

Eventually research will improve and we may need fewer transplants, but in the meantime they're a great solution, and according to a quick search those transplants actually save the healthcare system money.

Research is great. But the flashy solution turns out to be also great, pays for itself, and yields concrete results today. Sometimes I think there's too much of a reaction against flashy, to the point that people discount the flashy things that are good.

We should judge things on the outcomes and the costs, not superficially on whether they feel boring or flashy!


We can do both. I don't think anyone who encourages kidney donation wouldn't also advocate for encouraging healthy lifestyles and slowing down the progression of acquired kidney disease.

I have a genetic disease that ends in kidney failure 100% of the time, with no real way to prevent the progression of the disease. I'm healthy, exercise often, eat well. I've never been overweight or had the slightest whiff of hypertension, I run every day along with other exercise, and my kidneys still failed. Some percentage of kidney disease is going to be like me.

I will say one thing that absolutely should change is better screening for kidney disease. A lot of times it's discovered because blood testing from some other thing coincidentally found a low GFR or high creatinine levels, and even then something like a 60-70% GFR is often dismissed by doctors. Fortunately we discovered the gene that causes my family's disease, so in our case we can be screened via blood tests, but for kidney disease without such a simple genetic cause we shouldn't leave identification and diagnosis up to luck.


> I still wish we tried harder to heal the "original equipment from the manufacturer" we are all born with

I mean, I don't think it's for lack of trying.


I was bluntly told "People like you don't get well. Symptom management is the name of the game." I replied "It may be true that I will always be infected with something but this particular infection has to go as it's killing me."

My doctor physically took a step back as if I had slapped him.

I'm quite confident it's due to lack of trying. If you have zero goal of actually getting me better, please don't blame my condition for your failure to get me better.

That seems blindingly obvious to me. I want to spit nails that it gets so casually hand-waved off.


You're both sort of wrong.

Solely symptom management should not be the recommended course of action for anything short of stage 5, and there is a ton you can do to slow or stop progression in the early stages, perhaps even avoiding the need for dialysis / transplant altogether.

But as much as you say kidney disease "has to go", it won't. Reduced kidney function from chronic kidney disease is permanent, and no matter how hard you try, you're not getting to the point where your kidneys get better. You can only pause progression.


I don't have a diagnosis of kidney disease. I never claimed to.


Fair enough. In whatever your case was, it sounds like you may have had the ability to eliminate the disease / infection / whatever. That's not the case for kidney disease, and while encouraging healthy habits is always a good thing, CKD patients (i.e. the ones who would need kidney transplants) can't just exercise or diet away the disease.


> "It may be true that I will always be infected with something but this particular infection has to go as it's killing me."

> My doctor physically took a step back as if I had slapped him.

Not to be cold, and I'm sorry, but what did you expect? You are talking to a doctor, not Jesus. Everyone with an incurable disease wants a treatment, but just because you want something doesn't mean it's possible. If it did, nobody would die.

And of course your doctor isn't trying to come up with novel treatments. Your doctor is not a medical researcher.


Grant Genereux claims his chronic kidney disease ("incurable, fatal") was healed by his low vitamin A diet. We know from studying technical outages and systems failures that the more checks and balances we put in place, the weirder the cause of any outage will be - because only the weird things get through where the gaps between the checks all line up. Of course the claim isn't wrapped up in a simple package with conclusive proof and widely agreed on - if it were, it would be mainstream accepted knowledge.

Why hasn't it been noticed? Because the damage is cumulative and long term - years or decades. Because it's fat soluble, so the effects are modified by the amount of fat eaten at the same time. Because the effects are diffuse - it's not showing up in one single organ. Because the effects look like other things - autoimmune attacks, for example. Because historic mistakes in studies have caused 'vitamin A toxicity' to get mixed up with 'vitamin A deficiency'. Because it varies with different people with different liver storage capacity and historic retinol compound buildup. Because it may be masked by protective effects of Vitamin C or other dietary components. Because stopping eating it offers no quick fix; healing might be "wait years for damaged cells to die and new ones to grow". Because there's no money in telling people to eat less of something, so there's little industry funding incentive or research grants.

Here for example, one of the other people who agrees with the claim of Vitamin A being harmful shows their working: https://ggenereux.blog/discussion/topic/biotransformation-of... - Under Discussion it says "It is now no longer difficult to imagine how retinoic acid [ed: Vitamin A/retinol related compound] could cause auto-immune disease, since the data appears to show that retinoic acid is essentially a pro-inflammatory cytokine that is stored in the liver and, once the liver is saturated, in different types of tissue all over the body, especially in epithelial cells that constitute blood–tissue barriers, and in adipocytes, both of which were shown to express STRA6 (Amengual, Zhang et al. 2014). [... explanation continues]"

I am never going to study enough chemistry, biology, biochemistry, statistics, medicine, to be able to make a judgement call of my own about that page, let alone all the other studies people have linked to and pulled apart. At least it's cheap and relatively easy and low risk to try, but even if my health improves (I don't have kidney damage) it could easily be attributable to many other factors - placebo, for example.


The author states: "the risk of dying from the screening exam was 1/660"

And demonstrates with: "This involves a radiation dose of about 30 milli-Sieverts. The usual rule of thumb is that one extra Sievert = 5% higher risk of dying from cancer, so a 30 mSv dose increases death risk about one part in 660."

Sorry, but there is a flaw here: the calculation seems right but the conclusion is completely wrong.

Calculation: increased risk ratio of cancer-related death for 30 mSv = 1.05^0.03 = 1.001465... So +0.15% = +0.0015 = around +1/660 (with less rounding, +1/682)... fine!

Conclusion: this is not your risk of dying, but the increase in your risk of dying. If it was X%, the exam brings your risk to X% x 1.0015.

X depends on the medicine quality in your country, your access to it, your health, your exposure to cancer-triggers (pollution, tobacco, food...), your DNA, your gender...

Let's state a depressing 1%; then the screening exam brings you to 1.0015%, i.e. +0.0015% additional risk due to the screening exam = 0.000015 = roughly 1/67,000. So your chance of dying from an exam-related cancer is absolutely not 1/660.
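In code, for concreteness (a sketch of my multiplicative reading; the 1% baseline is just the number I assumed above):

    # Multiplicative reading of "1 Sv = 5% higher risk" (assumed numbers).
    dose_sv = 0.030                    # 30 mSv multiphase CT dose
    rr = 1.05 ** dose_sv               # relative risk, ~1.001465
    baseline = 0.01                    # the "depressing 1%" baseline from above

    excess = baseline * (rr - 1)       # added absolute risk under this reading
    print(f"relative risk: {rr:.6f}")
    print(f"added absolute risk: ~1/{1 / excess:,.0f}")   # ~1/68,000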

Please correct me if I did it wrong...


You did it wrong.

One Sv increases your absolute risk of fatal cancer by an added 5% or so. It doesn't multiply it by 1.05.

Quoting Wikipedia: "According to the International Commission on Radiological Protection (ICRP), one sievert results in a 5.5% probability of eventually developing fatal cancer based on the disputed linear no-threshold model of ionizing radiation exposure."

Also, where on earth did you get 1% as a "depressing" upper bound from? For lifetime risk of dying of cancer? It's over 15% in the US.
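For concreteness, the additive version of the arithmetic (a sketch; the 5%/Sv rule of thumb is the rounded ICRP figure quoted above):

    # Additive (ICRP-style) reading: each sievert adds ~5% absolute lifetime risk.
    dose_sv = 0.030               # 30 mSv multiphase CT dose
    excess = 0.05 * dose_sv       # added absolute risk: 0.0015

    print(f"added absolute risk: {excess:.4f} (~1/{1 / excess:.0f})")   # ~1/667, i.e. ~1/660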


The real logical problem with his approach is not the relative risk. It's not the linear no-threshold model.

It's the use of effective whole body dose to estimate the risk associated with a dose to part of the body. Exactly zero radiation biology organizations recommend this. Most explicitly caution against using effective whole body dose to estimate radiation risk. Effective whole body dose is only used for population-level estimates.

For example, one of the most radiosensitive (wrt cancer) organs is the thyroid. But his thyroid is not in the beam. Also the skin is exposed more than the interior on CT, which increases the risk of skin cancer. These corrections are standard, while alternatives to LNTM are not standard.

Then there is the effect of age. Most radiation-related cancers are delayed by a long time, and the faster that cells are dividing, the greater the risk of DNA damage. But Scott is old, which is also why older workers were preferred for cleanup at Fukushima.

D_eff can get you within an order of magnitude I suppose, but you shouldn't express it with two significant figures — it's misleading precision. You could say 1/1000 or maybe 1/700? But you really need more detail for any kind of meaningful medical decision.

Anyway, that's my rant as someone studying for the board exam.


I tried googling the risk and it's all a bit inconclusive but:

>The linear no-threshold model is disputed by several health organizations, including the American Association of Physicists in Medicine and the Health Physics Society, both of which concluded that cancer risk estimation should be limited to doses greater than 50 mSv. Both organizations state that risks from doses below this threshold are too small to be detectable and may be nonexistent.


"the risk of dying from the screening exam was 1/660"

For someone as smart as Scott Alexander, this is an astonishing mistake.

If CTs were such death machines, we would have seen a worldwide epidemic of CT-related cancers. There is no way you could cover up such a strong signal, given that people are screened all the time.

Edit: thanks for unleashing such an interesting debate. I guess the problem is in my perception. "Risk of dying from the exam" means lifetime risk, while my perception was "1 of 660 people who get the exam drops dead pretty soon afterwards".


It would be astonishing if it were a mistake.

And yes, it's a detectable signal. "in a large population-based cohort it was found that up to 4% of brain cancers were caused by CT scan radiation" --somewhere on Wikipedia

CT scans vary in dosage. Wiki gives ~10 mSv for an abdominal CT; I don't know where Scott got 30, but maybe the kidney screening is multiple scans or an otherwise higher dose. Or he was wrong by a factor of 3, which is not a factor of 100.

CT scans aren't done frivolously, and the current rate of scans is hotly debated for exactly this reason. I'm a little surprised that kidney donation involves CT over MRI by default, but I'm not an expert.


> I don't know where Scott got 30, but maybe the kidney screening is multiple scans or otherwise higher dose.

Scott called it "multiphase abdominal CT". Quick searching on-line suggests[0] that the "multiphase" here stands for doing 3-4 scans within a minute or two of each other, as the contrast agent diffuses through the organ, giving you multiple images that inform you about different parts of the target structure.

> Wiki gives ~10 mSv for an abdominal CT; (...) Or he was wrong by a factor of 3

If what I wrote above is correct, then it tracks - ~10 mSv for one CT, multiplied by 3-4 scans done in a multiphase CT, gives you ~30-40 mSv, which matches the number Scott posted.

--

[0] - https://www.barnardhealth.us/dynamic-contrast/multiphasic-im... - First link I found; CTRL+F "multiphase", as the relevant information is spread throughout the comments section.


> I don't know where Scott got 30

The text where he says that is a clickable link! https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4635397/


> CT scans aren't done frivolously

As someone who works in the ED, we pretty much give CTs out like candy to avoid a lawsuit if we miss something.


I stand corrected.


Abdominal stuff often involves CT scans that need to be done in phases. For brain stuff, some things require CT vs MRI.

Another risk is the contrast dye that’s often used in these studies. If you’re dealing with cancer monitoring or something that requires monitoring, you can develop allergies and poor reactions to that as well.


The paper he links to [1] broadly agrees with his statement, eg "An estimated 1 in 270 women who underwent a coronary angiography CT at age 40 will develop cancer from that CT (1 in 600 men), compared with an estimated 1 in 8,100 women who had routine head CT at the same age (1 in 11,080 men)".

1. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4635397/


I felt less compelled by the paper after reading "1,119" as the sample size - how on earth they think they can get to estimates like 1:11,080 with a sample size of 1,119 I do not know.

Going the other way: on the internets I read that there are 5 million CT scans in the UK every year. If there were a 1:10k rate of cancer from these, we would see 500 fatal cancers a year. If there were a 1:600 risk, we would see about 8,300 cancers a year. There are 400k cases of cancer per year in the UK as it is. So, at the top end, about 2% of cancers could be hypothesized as coming from CT scans based on these numbers, however they were extracted kicking and screaming from the case notes.
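The same back-of-envelope in code (UK figures from above, all rough):

    # Back-of-envelope from the UK numbers above (all figures rough).
    scans_per_year = 5_000_000        # CT scans in the UK per year
    all_cancers_per_year = 400_000    # cancer cases in the UK per year

    for label, risk in [("1 in 10,000", 1 / 10_000), ("1 in 600", 1 / 600)]:
        attributable = scans_per_year * risk
        share = attributable / all_cancers_per_year
        print(f"{label}: ~{attributable:,.0f} cancers/yr, ~{share:.1%} of all UK cancers")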

There is an interesting twist on this though - the mortality of people who get CT scans is probably much higher than the mortality of people who don't as there is probably a reason why they are getting the scan. One reason I have seen for people to get a CT scan is that they have metastasizing cancer. If you have metastasizing cancer you are probably going to get radiotherapy. Now, radiotherapy doses are quite difficult to understand as there is a big difference in the way it gets absorbed and handled, but as a layperson I look at the numbers and think that radiotherapy doses seem much bigger than CT scan doses. But I don't even know how I would go about comparing them and controlling for them in the stats.

I personally would have to sit and think for a long time about how to sort the causal factors out in the stats around this, I think I would not be doing that on a sample of 1k people.


> The paper he links to [1] broadly agrees with his statement, eg "An estimated 1 in 270 women who underwent a coronary angiography CT at age 40 will develop cancer from that CT (1 in 600 men), compared with an estimated 1 in 8,100 women who had routine head CT at the same age (1 in 11,080 men)".

Develop cancer, or die of cancer? Alexander seems to be claiming the latter.


The thing I don't understand is that if the CT scan is more dangerous than having a kidney removed, then surely they'd take the kidney out to see if it was compatible with the recipient rather than give you such a dangerous scan.


The kidney surgery only looks so low risk (partially) because they only do it on people that passed the CT scan.

(To give an even more extreme example for illustration:

Suppose the scan could perfectly predict who will die from the surgery and who will live without any side effects. Suppose 90% of people fall into the former and 10% of people fall into the latter category. Suppose further that the scan has a 0.1% chance of killing you.

If you scan people beforehand, it will look like the surgery has 0% chance of complications against 0.1% of the scan. But if you dropped the scan, all of a sudden the surgery would have a 90% death rate.)
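A tiny sketch with the made-up numbers above, to make the selection effect concrete:

    # Selection-effect illustration (made-up numbers from the example above).
    population = 100_000
    p_would_die_in_surgery = 0.90    # the scan perfectly identifies these people
    p_scan_kills = 0.001             # the scan itself kills 0.1% of those scanned

    scan_deaths = population * p_scan_kills                               # 100 deaths
    cleared = (population - scan_deaths) * (1 - p_would_die_in_surgery)   # ~9,990 get surgery

    # With screening: ~0% surgery deaths among the cleared, at the cost of 100 scan deaths.
    # Without screening: ~90% of all surgeries would end in death.
    print(f"scan deaths: {scan_deaths:,.0f}; cleared for surgery: {cleared:,.0f}")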


One of the reasons they do the scan is that they look at both kidneys and only take your WORST one


If that's the case, is it possible Scott is lying or leaving out information on what kidney it was?


I'm guessing that <CT scan> is more dangerous than <having a kidney removed, given you've cleared the CT scan and other tests>. Plus, <having a kidney removed, studied, and then reinserted after failing some tests>, might be more dangerous than either.


> Conclusion: this is not your risk of dying, but the increase of your risk of dying.

No, it does not increase your risk of dying. Your risk of dying was 100% before the procedure and is 100% after it. We all die in the end, with certainty. Risk of dying is only meaningful when you qualify it with a timeframe (say, the next 5 years) or a cause (say, terminal cancer).


Um, yes, the entire point of that section of the article was to talk about the increase of risk of death by cancer.


You're not wrong, but he's probably farther off than that. Danger from radiation doesn't scale linearly, although (to be extra safe) standards are set as if it did. In fact, there is some evidence that small doses of radiation can even be beneficial (hormesis).


That discussion is already in the linked article.


He also assumed that the risk increases linearly.


Not blindly; there is a pretty extensive footnote (footnote 2) covering the linear-risk assumption.


I think you are correct.


An anecdote to go along with his rejection. I once got rejected from a medical donation in the US because, in the giant table of family illnesses they gave me, I marked an X next to one of the mental conditions for my father, who was an alcoholic, since apparently alcoholism technically is that kind of mental condition (yes, the X had to be explained in the form, which I did). As you can imagine alcoholism is not that heritable, and has more to do with the environment (I grew up in Russia, where it was extremely prevalent), but for the clerks an X is an X.


> As you can imagine alcoholism is not that heritable

Alcoholism is that heritable: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4345133/ At ~50% heritability, it is in no way unusually low, and entirely comparable with many other traits.


This is actually more heritable than appendicitis or leukemia. Hard to imagine what the mechanism could be.


Mechanism(s), it's going to be highly polygenic & mediated through many pathways. I don't see it as needing to be any more inherently mysterious than appendicitis: it's all cells doing very complicated chemical stuff in long causal pathways, in the end. 'Appendicitis' need be no more 'direct' or 'simple' than 'alcoholism'.

For example, I expect that I have a very low genetic risk for alcoholism - no family history, and drinking more alcohol than a beer or two makes me miserably depressed. There is surely some sort of neurochemical explanation there, something something GABA depressive neurotransmitter positive reinforcement of alcohol-avoidance behavior yadda yadda, but the net effect is the same: I'm not going to become an alcoholic. If you could trace it out, it's not going to look any more mysterious than some appendicitis explanation starting with an immune-modulating SNP affecting T-cells modifying risk of random hepatitis virus infecting the appendix and triggering an autoimmune reaction... or whatever it is that is between 'SNP variant #123' and 'developed appendicitis at age 14'. It's all cause-and-effect, atoms-and-void, in the end.


I've always enjoyed this author's work, but there's something almost pathological about the way he assumes that the risks of this act are well-quantified enough to make a rational mathematical decision about it, or that those risks are even well-quantifiable at all; there are ten thousand paths to medical complication here that aren't captured in "this is the percentage of healthy kidney donors who died during the operation or died of kidney disease later".

Honestly, I would have respected "I wanted to do it, I became obsessed with doing it, I knew it was likely dangerous and I can't quantify how dangerous, but I did it anyway because I thought it was important" more than this many-page attempt to mathematize the un-mathematizable.

Which I guess raises a larger point about how the rationalist and EA and other similar communities have a tendency to try to reduce fiendishly complex multivariable human issues to equations and statistics without realizing that some problems are not math problems and will defy any attempt to be quantified, statisticized, or calculated.


Yes, from the outside his fervent desire to donate a kidney looks like a compensation for his rationalism. Personally, I find EA to be uncompelling because it implicitly assumes both that there are objective universal values and that price and value are in direct proportion. Even if we could agree on our values (historically we can't), I don't believe that there is a pricing model that could possibly get this right. But even if such a model existed, it still would be unsatisfying for me to do proxy work to satisfy the model (i.e. earn to give) rather than the direct embodied work in the world that needs to be done.

It is unsurprising to me that people attracted to rationalism/EA get stuck in this tension and find a need to demonstrate their values in an embodied way. Interestingly, anonymous kidney donation is both embodied and abstract unless you have a relationship with the recipient. I admire that the author had the conviction to act on their principles, but I also have a suspicion that this will not truly resolve the internal conflict that led to their act.


This comment reminds me of the central thesis of Red Plenty [0] - that all the mathematical or algorithmic sorcery in the world frequently fails on contact with actual human beliefs and desires. We seldom have simple enough utility functions to meaningfully optimize in real life - all of our philosophies, theologies, and other cultural debris have evolved to manage those complexities (often imperfectly).

[0] https://www.theguardian.com/books/2010/aug/08/red-plenty-fra...


It makes me wonder what the world would be like if Bishop Joseph Butler was more popular among rationalists, and generally more well known as an Enlightenment thinker. It's been a while so excuse my spotty memory, but in summary his work challenged prevailing Enlightenment ideas about self-interest and centered human sentiment in understanding ethics. It goes to show you can be a rationalist and still talk about human morality with words like compassion and resentment.

I understand that the rationalist project has been to bypass the emotional and get to some kind of cold hard truth of right and wrong, but I've long suspected that this is impossible.


> Honestly, I would have respected "I wanted to do it, I became obsessed with doing it, I knew it was likely dangerous and I can't quantify how dangerous, but it did it anyway because I thought it was important" more than this many-page attempt to mathematize the un-mathematizable.

Eh, I felt that already came across strongly in the article itself. He attempted to quantify the risk, but every avenue resulted in "it's fine, maybe?", and he chose to go ahead with it despite not having any dependable numbers.


> Which I guess raises a larger point about how the rationalist and EA and other similar communities have a tendency to try to reduce fiendishly complex multivariable human issues to equations and statistics without realizing that some problems are not math problems and will defy any attempt to be quantified, statistitized, or calculated.

Some of my old coworkers were really into EA, LessWrong, and the rationalist community.

At first it was fun to read some of the articles they shared, but over time I observed the same pattern you described: Much of the rationalist writing felt like it started with a conclusion and then worked backwards to find some numbers or logic that supported the conclusion. They had already made up their minds, after which the rationalist blogging was all about publicly rationalizing it.

The other trend I noticed was how much they liked to discard other people's research when it didn't agree with something they wanted to write. They were very good at finding some obscure source that had a different number or conclusion that supported their claims, which was then held up as an unquestioned source with no further scrutiny.

Someone once described it to me as "second option bias": Rationalist writings have a theme of taking a commonly held belief and then proposing a slightly contrarian take that the closest alternative explanation is actually the correct one.

Once you start seeing it, the pattern shows up everywhere in rationalist writings. In this article for example:

1. Argument that kidney donation is actually much safer than people think because the author picks 1 or 2 specific failure modes and cites those as if they captured all possible downsides.

2. Argument that CT scans are actually much more deadly than people think because there are many possible downsides that we can't account for. Author admits to surveying a contentious field and selecting the most conservative risk estimate he found.

3. Argument that the Center for EA buying their own expensive castle to host their own meetings is good, actually. Anyone who questions the Center for EA spending millions on a remote castle so they could meet there must be wrong and misinformed and outsiders, with little more than "just trust them that the math is right" as the argument.

This author is a very good writer so he's masterful at breezing past the details, but it's hard to miss the pattern once you start seeing it. The pattern is more obvious in a lot of the less popular rationalist writings where it's clear the authors discarded any sources that were inconvenient to their conclusion but elevated any sources that told them what they wanted to hear.

Another common pattern in rationalist writing is to make strong claims based on weak evidence, then to hedge in the footnotes as a way to preemptively defuse counterarguments. Sure enough, this article has some footnotes that acknowledge that the radiation risk numbers he used in the main article are actually highly disputed and he chose the most conservative ones. This point is conveniently separated from the main article to avoid detracting from the point he wanted to make. The main article confidently makes one claim, then anything that might weaken that claim is hidden in a footnote.

Predictably, many of the comments on HN questioning the radiation numbers are met with "he addressed that in the footnote" comments that try to shut down the debate, so the strategy clearly works. Something about hedging in footnotes inoculates certain readers against questioning the main article. It's another pattern that becomes obvious once you start seeing it.


Thank you for taking the time to write up this take. I couldn't quite put my finger on what I found off-putting about EA, but the second-option bias and the hedging with footnotes, in retrospect, are what have made me uneasy about EA (and rationalist writing in general) since Yudkowsky.


At the same time, don't you risk punishing him for dissecting and revealing his thought process in public, however flawed it may be? I don't think you want to discourage that, do you?

The whole article is an exercise in motivated reasoning, and he comes right out and says as much.


Revealing the thought process is part of the persuasive technique.

It’s another rationalist writing theme: By taking the reader through a meandering path to the conclusion, sometimes with twists and turns and backtracks, you give the impression that you’ve covered every possible angle already. The conclusion feels unarguable because the author has walked you from beginning to end with clear logical steps.

The hidden problem is in what has been omitted. We’re supposed to assume the author included all relevant information and presented it faithfully. What really happens is that writers tend to downplay evidence that disagrees with their opinions. If they can’t avoid it, they include it with a suggestion that they tried to consider it, but imply that it wasn’t reasonable.

That’s where the footnotes come in: By acknowledging it in the footnotes they can signal that they are aware of it but relegated it to the footnotes as a defensive measure.

Revealing the thought process gives misleading confidence in the conclusion if you’re assuming that the thought process isn’t prone to the same opinions and biases as the conclusion.


If it was possible to discourage him from revealing his thought process in public wouldn't that have happened already? He's not been exactly shy about controversial takes on various topics.


If your philosophy presupposes that all problems are nails, then you will probably try to hit them with a hammer regardless of what other tools you have.


I respect the British organizers’ willingness to sacrifice their reputation on the altar of doing what was actually good instead of just good-looking.

Surely their reputation is a factor in their ability to do good? Optics sometimes necessitate sacrificing the "mathematically superior choice" for one that's worse in the name of not pissing off a ton of people the support of whom you rely upon.


Reputation has a value but that value is not infinite. Presumably they made a guess at how bad the hit would be. (Also, with the right audience, being willing to make tough choices like this would enhance their reputation)


The problem here is that, in order to believe this, we would have to accept that "the castle was the cheapest choice". There are near-infinite ways to present (and omit) numbers in a way that makes your point and your biased choice look like the best one, especially for smart people with a messiah complex. Unless there is someone on the other side actively interested in refuting it, they can massage the numbers as much as they want. It is reasonable for an average person to doubt that buying a castle was indeed the cheapest choice they had.


Nobody really objects to them holding conferences. (At least not too many people do.)

Renting venues for the conferences vs buying a venue seems like a straightforward financial calculation one can make and decide on fairly objectively.

If you want to have a moral discussion, we should probably talk about what kind of venues conferences should be held at; not on whose balance sheets the assets are held.


Sure. I don’t object to them doing anything they want, but if you tag yourself as an “altruist” (effective or not) then other people’s opinions and perspectives are at the very core of who you are (or pretend to be). The idea is that they “don’t care” because they believe the numbers, and the numbers only; if the numbers are telling you you did the right thing, then it doesn’t matter what people think, right? Well yes, if you’re naive enough to believe that the numbers can be infallible (in which case you’re being used), or if you’re smart enough to massage them for your own good (in which case you’re using others). Seriously, in the end it’s just a bunch of people in a big circle jerk looking for social media points, like so many others. “Oh look at how objective we are,” give me a break.


> I don’t object to them doing anything they want, but if you tag yourself as an “altruist” (effective or not) then other people’s opinions and perspectives are at the very core of who you are (or pretend to be).

I don't understand that. You can, e.g., measure relatively easily and relatively objectively how many people die from malaria and how many life-years that costs.


Yes, but you can’t measure objectively the context around these people and what causes that to happen. Why do so many people die of malaria in certain places but not in others? That points to the fact that it is actually possible to avoid these deaths, with the right effort. Then you might be tempted to think: well, they don’t have the money to handle it, or the know-how; then you send them money and you send them specialists and you realize it doesn’t change anything. At this point you might be thinking: if they get money and support to deal with the problem, why are they not doing it? Jumping from this to some sort of racist/Darwinist conclusion is then almost inevitable, if you are naive enough to think that the numbers explain everything. There is a very complex social-historical-cultural system of multiple intertwined factors that goes way beyond the numbers when it comes to “altruism”. It is human. Relying on the numbers is just a way for people to a) think of themselves as better than other “non-effective” altruists, b) mask their prejudice with “it’s not me, it’s the numbers saying you’re inferior”, and c) just circle jerk and find themselves two/three girlfriends at the same time. It’s not that different from right-wing ngo-hating bigots, really.


For example, East Germany and North Korea had and have vastly inferior outcomes to West Germany and South Korea. No racism or Darwinism required to see that.

I'm not sure why you are so confident that racist conclusions are inevitable? You also seem to suggest that the facts are racist, but that we have to blind our minds to that reality?

> [...] if you are naive enough to think that the numbers explain everything.

Our knowledge, including our quantitative knowledge, is very limited, but it is not zero either. There are some areas where numbers explain enough.

And, of course, just because numbers don't explain everything perfectly doesn't mean that our personal non-numeric pet theory is automatically any better.


(I don’t know how to quote, sorry.)

1. I guess blaming the government is one alternative to blaming DNA, but what happens if the government changes and the problem remains? Over and over again?

2. You are right, I can’t be sure that racist conclusions are inevitable, but that’s what I have experienced from people who fail to analyze the context along with the numbers and to accept that a) you can’t explain everything and b) the numbers very often lie, are biased, manipulated, and/or simply incomplete.

3. The facts are not racist; your answer is exactly what I mean. You can’t get a causal conclusion from a post hoc analysis. Have you heard of eugenics?

By the way, thanks for the engaging conversation.


It's easy to say in retrospect, but it's not like all the movement ever did in its entire existence was buy a single castle. Do you really think they were expecting EA to explode in the public consciousness after the SBF thing and to have all of their line-items exhaustively audited for what could make the best outrage-bait article? That's quite a black swan event.


>and to have all of their line-items exhaustively audited [...]?

That part they should have expected (and I'm sure they did). Scott claims they expected negative social reactions, just maybe not to this extent.

Even in the absence of journalist attention, EAs love to exhaustively audit line-items. Something as big as Wytham Abbey was clearly not going to escape commentary.

I don't follow why SBF somehow excuses this. It seems you're suggesting buying the abbey would have been fine if only it could have been kept quiet, but because of this unforeseeable black swan event people heard about the Secret EA Castle and now we have all this outrage.

I heard about it from the EA Adjacent forum, and I don't see what SBF has to do with the argument that this burned a lot of social capital and reputation within the community for very nebulous gains (compared to buying a counterfactual property). The abbey might be relatively cheap for what it is, but not absolutely cheap. And it turned out to be very expensive in other ways.


My point is that you could take an action which seems reasonable and justified and thought out within the community that you're in (e.g. EA) because you know everyone's working off the same set of axioms and can follow your reasoning. But when you suddenly get the entire world watching, that's not true any more, and you realize you can't explain the axioms to them, because it's a lot easier to say haha castle stupid EA people than to actually think it through and besides they're already scrolling past on their feed to the next bit of outrage-bait.


Thanks. I didn't experience that part, but I can definitely see that this would happen. My position is still that it was a very questionable idea even without outside attention, but this definitely didn't help


William MacAskill wrote and promoted a book in August 2022, in an effort to popularize and justify the sorts of “longtermist” utilitarian views held by (some, increasingly dominant) EA folks. Allegedly this was backed by a multi-million dollar PR budget, which is why it got so much press at the time, and also why so many people were talking about EA philosophies last summer — even before FTX.

I think your response is strange. The EA/longtermist folks have been making a very deliberate and considered effort to promote and popularize their movement. This wasn’t something that “just happened to them.” And FTX blowing up was a major event in the course of that debate, since it starkly illustrated the weaknesses of a moral philosophy that centers numbers and dollars over the kind of traditional ethical judgement practiced by other charitable movements.

This piece, in turn, feels like more of the same promotional effort. Nominally it’s about the author’s kidney donation, but it immediately and prominently shifts to arguing about how EAs are great people who will donate their kidneys to strangers and how unfair the world is to criticize them over castles, which incidentally were the best possible use of money. It’s not subtle or incidental at all, and it felt like I was reading a piece of promotional religious material or something.


> And FTX blowing up was a major event in the course of that debate, since it starkly illustrated the weaknesses of a moral philosophy that centers numbers and dollars over the kind of traditional ethical judgement practiced by other charitable movements.

Honestly, I think this just doesn't make sense (and I have no ties with EA whatsoever). You've written it nicely, but it just doesn't follow. It makes no sense to judge a movement by its worst possible member, and it doesn't make sense to say that the overall philosophy doesn't work when one guy obviously didn't follow the philosophy and then had it explode in his face.

The argument, to me, feels akin to "well, vegetarians think they're morally superior, because they don't kill animals, but just look at PETA! PETA does this horrible stuff where they kill animals in shelters[1]." And then perhaps follow it up with "this shows the weaknesses of a moral philosophy that centers around saving animals lives..." but it doesn't. PETA doing shady stuff doesn't illustrate any philosophical failures any more than SBF doing shady stuff. If you want to address the philosophical failures of EA, you may, but I don't see any of that in your comment.

> but it immediately and prominently shifts to arguing about how EAs are great people who will donate their kidneys to strangers

Is this so surprising? Look at the immediate and negative response that EA receives today. Of course any mention of EA would want to be brought with a caveat that "hey, EAs do some good things too, you know - we're not all SBF!"

But I question if your interpretation of the article is even correct. Simply Ctrl-F "effectiv" gives a bunch of hits in section IV, and then no more hits for the rest of the article, except for a stray one in section VII, which was essentially my impression when reading the first time. He talked about it enough to address the controversy, then moved on. Like a reasonable person, not an author of "promotional religious material".

[1]: Actually true, btw (not that it has any bearing on vegetarianism): https://www.theatlantic.com/health/archive/2012/03/petas-ter...


I like the general idea of EA. But it's a human movement, and thus vulnerable to the mismanagement and corruption that are characteristic of distributed organizations that manage large sums of money. To that end, I've observed four worrying trends in the EA movement over the past couple of years. They are (in no particular order):

1. To focus EA efforts on donations from high-net-worth individuals, often at the cost of giving these individuals massive influence over organizational priorities.

2. To shift (more) towards a longtermist philosophy, wherein "effectiveness" is determined by the wellness of hypothetical future beings, rather than measurable near-term impacts like "lives saved, bed nets distributed." This measurability was supposed to be the bedrock of EA, what kept it from becoming like other wasteful organizations.

3. As a consequence of (1) and (2), to shift the balance of internal priorities away from practical and measurable efforts, towards work like "AI alignment"; spending millions on book tours to promote EA/longtermist ideas; and spending on charities that provide facilities to help EA organizations "come up with ideas about the future."

4. To close ranks against outside criticism of EA's priorities, and to refuse any efforts for a community-wide re-evaluation of these new priorities, or to pose tough internal questions about donors or spending.

In this new regime, spending on luxurious meeting facilities ("castles") sits on equal footing with malaria nets. Because perhaps the ideas developed therein will save billions of future lives, or the facilities will encourage big new donations, and that's an organizational priority now. In any case, there's no way you can prove this won't happen, because nothing is empirically measurable. Also: castles are awesome.

None of these priorities represent the entirety of EA, but it's obvious that the opinionated people who control these (huge!) purse-strings are gradually gaining organizational control of the movement, to the (I suspect) long-term detriment of the "let's spend effectively on Malaria nets" wing. It's quite sad, but it's also exactly what I'd expect of an organization that is insufficiently defended against this kind of drift.

Far from "everything is great but SBF couldn't have been predicted," you see evidence of all this mismanagement in the events that I mention. First, there's the well-funded MacAskill book, which attempted to mainstream EA/longtermist priorities. This would not have been possible without (1) [and quite possibly, without stolen FTX deposits.] Then there's the presence of obvious grifters like SBF within the inner circles of the community, and the fact that nobody with power was asking the obvious question about whether these people should be such an important part of the movement. (It did not take a lot of looking, apparently.)

And finally, you see it in the orgy of undisciplined, poorly-justified spending by EA organizations that occurred right at a time when they were deliberately courting increased prominence, including two different castles. All of this would be perfectly normal in a cult like Scientology, but has no place whatsoever in a mass-scale effective altruist movement.


As someone in the "malaria nets wing", I think you're directionally correct, but overstating things.

> it's quite obvious that the opinionated people who control the (huge!) purse-strings are gradually gaining organizational control of the movement

This is to some extent true of Open Philanthropy, although the effect looks larger than it is because they're consciously committed to not throwing all of their resources behind whatever they think is the best option. I'm not a fan in principle, but it's not insane. See https://www.openphilanthropy.org/research/worldview-diversif... for their take.

GiveWell remains firmly on the global health side, and I don't see that changing. Here's the first my-screen-worth of organizations they've funded in the last year, with approximate numbers:

- 87 million to the Malaria Consortium

- 77 million to Helen Keller International (Vitamin A supplementation)

- 42 million to New Incentives (infant vaccination)

- 17 million to Sightsavers (parasitic worms)

- 8 million to PATH (malaria)

- 6 million to Nutrition International (Vitamin A)

- 5 million to IRD Global (healthcare infrastructure)

- 5 million to Miracle Feet (clubfoot)

- 7 million to Deworm the World (parasitic worms)

- 2 million to RICE (postnatal care)

From https://airtable.com/appGuFtOIb1eodoBu/shr1EzngorAlEzziP/tbl...


Like I said, I don’t think EA is bad or that the situation is irreparable. I just think there are people within “the movement” who are taking it in a worrying direction. And by “taking it” I don’t mean they’ll brainwash everyone in the org, but I do believe they might succeed in capturing the EA brand and a lot of its organizational capacity towards their priorities.

I think in the medium/long term, the Givewell wing of the EA movement will either need to (1) develop better organizational strategies to defend against this kind of takeover and keep priorities balanced, or (2) consider breaking off from the rest of the EA movement and recognizing that the brand now means something different. But that new movement will also need to develop some defenses to prevent the same thing from happening.

To use an excellent rationalist phrase: there’s a “Chesterton’s fence” that I think a lot of EA folks have torn down in their attempt to refactor charity and make it more efficient: namely, that the intentions of leadership really matter. And in any human organization that manages large sums of money and power, you have to have sharply-enforced defenses against charismatic leaders who say they’re your friends and share your priorities, but actually want to take the movement in a very different direction.


> consider breaking off from the rest of the EA movement and recognizing that the brand now means something different.

Yes, this seems fairly likely to happen.

> the intentions of leadership really matter. And in any human organization that manages large sums of money and power, you have to have sharply-enforced defenses against charismatic leaders who say they’re your friends and share your priorities, but actually want to take the movement in a very different direction.

I don't think this is exactly the issue. Openphil's leadership is, as far as I can tell, sincerely not trying to dominate the movement. The problem is that they're such an important funding source for charities that they can't not do so: even if they would never actually withdraw funding to punish people, the mere fact that they could creates a chilling effect.

In principle the same dynamic could apply to GiveWell and global health charities, it's just that there aren't the same sorts of deep ideological differences there: e.g. maybe Alice thinks parasitic worms are the most important problem, and Bob thinks it's malaria, but they're always going to agree that both are extremely bad and should get significant funding.


I think the notion of “sincerity” should be viewed very skeptically here. Not because I believe you’re wrong about anyone’s intentions: but because intentions don’t matter. If I sincerely believe issue A is the most important issue in the field, and I sincerely believe my “obtain donor funds at all costs” strategy is the best strategy to pursue it, then I can end up dominating the movement without ever intending to do so. It takes a strong and explicit effort to prevent this from happening, and that defense won’t happen if everyone is more concerned about being amicable than about vociferously defending the mission.

And of course, once one branch of the movement dominates it, then you’re at the mercy of their continued sincerity. This means you have to assume they’ll always continue to behave sincerely, and their organization won’t be captured by opportunists in the future.


I am an EA of long standing, though not very active on the movement side of things. I think SBF is a big deal, at least in so far as it prominently exposed a moral weakness in the movement. A heavy use of Bayes' rule coupled with a focus on the best uses of dollars led, in my eyes, to SBF being seen less as a successful EA celebrity and more as a moral exemplar. We're trying to effectively improve the world! SBF has developed a magic money printing machine which will make the world awesomely better by generating dollars for EA causes! Earn-To-Give proven to be the best strategy because of unlimited upside risk! We should be more like SBF!

I had two problems with this: firstly, that a movement focusing on extracting money from the megarich rather than 10% tithes from the public more generally may potentially generate more cash (good!) but will probably do so most effectively with thought leadership and fundraising teams and donor care and castles for conferences of important people. This loses a distinctive simplicity and non-hierarchy that feels important.

The second is that I had much less sympathy with the 'longtermism' sub-sect than SBF and many of the richer and increasingly more prominent Californian types do[0]. And that the Good Old-Fashioned EA focus on cheap but unglamorous interventions (malaria, cash transfers, etc.) was being increasingly overshadowed in the public eye.

So I don't think it reflects badly on EAs that SBF turned out to be shady (as you say, all groups have a worst member). But it should prompt some awkward questions about the extent that the movement was taken in by the smoke-and-mirrors act. And ideally a reconsideration about whether chasing the money and interests of SBF-types is the right direction for the movement.

[0]: I find it suspicious that the equations demonstrating the infinite importance of fairly recherché concerns on the specifics of AI safety, for example, just happen to line up with the research interests of some EA-adjacent people. That suggests people aren't discounting sufficiently for their own group biases.


This was my real takeaway from the essay. I haven't read anything by the author in years and was surprised how _aggrieved_ and superior he came off.


>to have all of their line-items exhaustively audited

Historically, yes, that's the entire point of the movement. Audit everything in the goal of doing the most good and not wasting money on frivolities. The auditing approach faded as the movement has grown and became more longtermist, though I (somewhat) expect it to come back now that we're post-ZIRP, post-SBF.


I'm a bit taken aback at the replies saying that EA never takes that into consideration. That was actually exactly the kind of discourse I was seeing back in 2019 in the EA sphere. Actually, the reason this changed is precisely to avoid another SBF disaster: if you focus too much on consequentialism and convoluted reasons for why what you're doing has the highest EV, you incentivize very unlawful and unwanted behaviours. EA pivoted to a more deontological framework at that point to avoid those kinds of dangerous reasoning.

On the object level, about your point, Scott alludes to it in his linked article: https://www.astralcodexten.com/p/the-prophet-and-caesars-wif...


This stuck out to me too. If Effective Altruism is about using math to figure out how to do the most good, they seem to have ignored the higher-order effects of their actions. What if buying the castle hurt Effective Altruism's reputation, so fewer people got into EA, so fewer people donated money to buy mosquito nets for malarial regions in Africa, resulting in less overall good?


It's hard to trust a community that claims to focus on effectiveness but, it turns out, puts great effort into looking good. That kind of deception is more damaging than a bit of bad press.

There's plenty of charities with great marketing. EA doesn't need to be another one.


But the point is, you're just asserting that. I think the parent poster was observing that, as effective altruists, they might attempt to quantify the pros and cons of such reputational factors (some game theoretic calculation, perhaps?) and include it in their determinations.


Prove it with the maths. I'd be willing to bet that the maths comes out in favour of those who put effort into looking good.


Not when you factor in how this strategy affects those who employ it. Sure, compromising your ethics and lying to people in order to secure more donations will bring in more money for the good cause - short-term. Longer term, how long until you start thinking, since you're already lying to the donors anyway, why not also lie about the whole charity thing and start pocketing the donations for yourself?

"Ends don't justify the means" isn't true on paper, for perfectly rational actors - but is true for actual humans.


There's a danger here, of which I imagine they're aware - higher-order effects are very hard to estimate in chance and magnitude, or in how your actions specifically contributed to them. This makes them perfect for justifying whatever you want, intentionally or accidentally. Overemphasize some positive second-order effects, fail to notice some negative second-order effects, and suddenly your first-order selfish choice looks like a charity. Societies and markets are dynamic systems, so predicting outcomes is less like predicting the path of a rocket from Newton's laws, and more like predicting the weather.


It does not take a genius to understand how bad it looks to buy a castle in the UK.


The previous owners were using it as a single family home (according to Wikipedia). Make of that what you will. It's definitely a bigger 'waste' of resources than using it for conferences.


No-one cares about the waste of resources though. A castle is a symbol of power. It's the combination of lots of big, bold ideas, lots of public speaking, giving money out, THEN buying a castle. People begin to worry that these people want to rule them.


> Surely their reputation is a factor in their ability to do good?

True. But the EA crowd has a hard time understanding perception or empathy and has to resort to mathematical calculations.

I can think of a couple of ways better than "buying a castle" to hold conferences.


e/acc is short for effective accelerationism and not effective altruism. Those groups are interested in very different things.


True, fixed. But the groups have some overlap.


Do they? Aren't EAs mostly doomers? I haven't followed the debate closely.


EA and e/acc have a lot of overlap in being free-market, pro-technology, urban, fairly online groups. The "doomer" and charity parts are the difference. EAs mostly believe that the current trajectory of AGI will "paperclip-maximize" humans, regardless of our suffering, while e/acc believe that ushering in AGI is the next frontier of tech, and that more tech is always good, that tech = positive progress. EAs like to donate towards helping humanity; e/acc believe in getting really rich through tech and markets, which helps other people too, via Randian market prosperity.


> Do they?

Not really, but they're both descended (in part) from the same early 2000s Bay Area counterculture-technoutopian-libertarian milieu.


The thing is, perception and empathy often are included in their "mathematical calculations", or at least some kind of simulation or guess of it. It wasn't in this case, and that's very strange.


The way this post leaves no room for debate about how the decision was good and paints all of the people who disagree as ignorant outsiders feels very disingenuous.

There was a lot of criticism coming from inside the EA community, too. It became taboo to criticize it with multiple EA figureheads (author of this article included) making definitive statements that any critics were wrong and the decision was unequivocally right.


I have found it somewhat... amusing... how this decision resulted in downgrading the entire community discussion section at EA Forum.


A link would have been nice, but it’s a side point anyway. If you care enough, maybe do a little research?


> The way this post leaves no room for debate about how the decision was good and paints all of the people who disagree as ignorant outsiders feels very disingenuous.

Agreed, I went to look for these thinkpieces to see what the arguments were since the guy with footnotes longer than some articles didn't cite any. Searching Google News for '"effective altruism" ( "castle" OR "Wytham" OR "£15m" )' netted 10 total results:

  - Three articles about a DIFFERENT castle in the Czech Republic also bought by effective altruists (What's the marginal utility of castles?)
  - One article about EVF (formerly CEA) claiming FTX funds weren't used to buy Wytham Abbey
  - A smattering of anti-EA or anti-SBF pieces that make single-line references to the castle
Even just googling the same query, you get one New Yorker article that makes offhand mention, then mostly forum threads and a couple of blogs that are debating (pretty fairly imo) whether it was an effective use of money.

I'm sure I could have missed some pieces but this looks like the classic and all too common "got mad about some random hyper-niche tweets but can't admit it"


I agree. It seems that EA forgets to factor PR into its mathematics. Really, the mathematically superior choice should take optics, and the effects of good and bad optics, into account.


This was an enjoyable read.

The solution of giving people money to donate their kidneys is terrifying to me, however. I do not like the thought of desperate people selling their internal organs to survive one bit.

Of course, if inequality was dealt with and nobody was desperately poor, it would be a different question. If everyone had enough money to comfortably get by, and the extra $100,000 from a kidney donation would just go to luxury goods, I wouldn't have a problem with it. But that's not the world we live in.


Iran allows paying compensation to kidney donors. You can study how they are doing. Hint: they don't have much of a wait list.

> I do not like the thought of desperate people selling their internal organs to survive one bit.

See https://en.wikipedia.org/wiki/Repugnant_market

Do you prefer desperate people have fewer options? In particular, do you prefer taking away options that people would themselves prefer to have?


> Do you prefer desperate people have fewer options?

Yes.

For an individual poor person, one could argue that the individual would be better off if they were allowed to sell their internal organs. But this isn't just about individuals, this is about societies. A society which pushes poor people towards selling their organs to survive is worse than a society which doesn't do that, in my opinion.


The extreme example would be: I think an IRL version of something like the Squid Game should be illegal, even though that means "desperate people have fewer options".


Remember, if you can sell it, then a debt collector can force you to sell it. I'd much rather say "People can't sell their organs" than have a future where debt collectors manipulate people into parting out their bodies.


> Remember, if you can sell it, then a debt collector can force you to sell it.

This is not true at all. Debt collectors acting on their own can't take anything. A court can force you to sell certain things if you lose a lawsuit, but many assets are protected. They can't seize your 401k, in most states they can't take your primary residence unless the debt was taken on to purchase it in the first place, they can't garnish more than 25% of your income (and in some states can't garnish anything for consumer debt), they can't take social security income, and so on.


You can justify anything with that argument, no matter how horrific. Why not permit people to allow themselves be killed in entertaining ways, for a pay per view audience, with their surviving family getting 80% of the proceeds? It's possible that that would be the most money they will ever "earn". Don't you want to give poor people as many options as possible?


> Why not permit people to allow themselves be killed in entertaining ways, for a pay per view audience, with their surviving family getting 80% of the proceeds?

We already allow shades of that, and society hasn't collapsed.

Ie we seldom allow people to consent to outright death, but we certainly allow them to take on a great deal of risk of injury or death in return for fame or money. If you repeat a 1% chance often enough, you are going to hit the jackpot sooner or later.

So I'm not sure your reductio ad absurdum works here.


We certainly do allow such activities (MMA fights, extreme sports competitions etc). But such activities are fundamentally different in my view. While risky, they are not designed to permanently damage the competitor. If a participant emerges victorious and unscathed, they can enter next year's competition. That's not the case with selling a kidney (nor with my admittedly extreme example above).


Yes. Because the second-order effects of giving those desperate people the option of selling their organs are worse for society going forward.


What is it that makes "giving people money to donate their time" so much less repugnant? Is it that time is finely divisible, whereas kidneys come in twos?


It's not necessarily much less repugnant. The fact that rich people can spend their time doing whatever they want, while a lot of people have to burn themselves out working two jobs to make ends meet, is abhorrent. However, the ability to make ends meet through selling your organs is a line most societies haven't crossed, and I'd prefer to keep it that way to be honest.

And for better or worse, we do tend to see a difference between forcing people to sell their time and forcing people to "sell their body". It's one (among many) reasons why societies tend to look down on prostitution, for example. Prostitution, like paid organ donation, is the kind of thing that I see no issue with people doing voluntarily, but I take issue with systems which "force" people to do it out of desperation.

I think society would be better if we started viewing the selling of one's time more like we currently view the "selling of one's body". I think society would be worse if we started viewing the selling of one's organs more like we currently view the selling of one's time. We want people to labour in order to keep society going, just like we want people to donate kidneys to help people in need, but we don't want an unfair division of labour where an underclass is working 80 hours per week while an upper class has all the free time they could wish for, and we don't want an unfair division of kidney donations where an underclass feels forced to donate their kidneys while the rest of society never even has to think about it.


As always, society will push a problem caused by its economic system away by making it illegal and create a new one in the black market.

Sourcing organs is an especially difficult problem since, for the people needing one, it is a literal matter of life and death. I wouldn't be surprised if people travel to countries where buying organs is legal/easier.


Yes, I think it is that. Any such "time for cash" arrangement will have an end date, and can be terminated or renewed regularly. Whereas selling a kidney - once it's gone, it's gone. The transaction can't be repeated.


What about lazy people? I'm not desperate at all, just regular depressed, desensitized, uncaring, unmotivated, due to all the shit going on constantly in this nice fancy interconnected world, blablabla.


selling kidneys doesn't preclude us having the same thorough screening process we have now, although it means there may be people who try harder to game the screening process.

i just don't see kidney donation as being a significant enough harm to outweigh the harm of the recipient not getting the kidney. if we're worried about people being financially desperate, we should allow organs to be sold and build a stronger social safety net (with the money we save on dialysis?)


> selling kidneys doesn't preclude us having the same thorough screening process we have now, although it means there may be people who try harder to game the screening process.

Nothing I have said really touches on the possibility that safety could be impacted by people trying to game the screening process, though it is of course a legitimate concern.

> i just don't see kidney donation as being a significant enough harm to outweigh the harm of the recipient not getting the kidney.

I don't disagree that kidney donation is a net good. The question is what you do with that opinion. You could defend the idea of a "kidney draft", where some random selection of the population gets forced by the state to "donate" one of their kidneys; though I suspect a lot of the proponents of a free organ market would be uncomfortable with that idea, as would I. The difference is that I view financial coercion as just as much of a problem as state coercion.

> if we're worried about people being financially desperate, we should allow organs to be sold and build a stronger social safety net

I agree! If there was a social safety net which reliably kept people out of desperate poverty, I would be less worried about financial incentives.

The safety net needs to come first, however.


I agree, and would extend your point even further. Drafting people into the army against their will is a grave harm, for obvious reasons. But the existence of a "volunteer" army that nevertheless gets paid for their participation represents no less financial coercion. People on a financial precipice, given the option to join the army, might take it -- and thus increase their risk of being shot or blown up abroad. Removing those payments removes that coercion.

I was originally going to post this as a facile gotcha, until I thought about it slightly harder and realized that effectively disbanding the military would actually be a pretty good idea, all considered. However, I would still argue that we should accept monetary rewards for altruistic yet dangerous acts, like fighting fires and rescuing cats from trees (and donating kidneys).


The screening isn't just for safety, it's for biological compatibility with the recipient - a mismatched organ will not function and will be destroyed by the recipient's (even suppressed) immune system. It's worse than just failing, you damaged the recipient and you wasted an organ that could have gone to a proper match. When I received my kidney transplant I had to acknowledge a risk of contracting an undetected bloodborne illness from the (deceased) donor's history of drug usage. If the primary concern driving screening was simply safety the organ would not have been considered available. The primary concern is compatibility. Other resulting conditions can be managed afterward. The risk of me dying as a result of the aforementioned undetected bloodborne illness was some tiny fraction of a percent - The risk of me dying on extended dialysis is one hundred percent.


The post mentioned a tax credit though. In theory the poorest people would benefit the least from this. But there could still be issues as you increase your income, I agree.


The modify-NOTA people call for it to be a tax credit ($10k/yr for 10 years) or a check (also $100k, split up over 10 years) from the federal government if you do not make enough money to pay federal taxes.


While I've appreciated some of the author's articles in the past, I think the numbers in this one miss the mark quite a bit.

> the risk of dying from the screening exam was 1/660

> The usual rule of thumb is that one extra Sievert = 5% higher risk of dying from cancer, so a 30 mSv dose increases death risk about one part in 660.

Well not quite. First of all, this assumes that exposure events are additive over a lifetime (rather than individual events being more damaging at higher doses). This does seem to be the case, though I'm curious if we really know.

More to the point, it's a 5% increase in the chance of death from cancer. Combining 5% with 30 mSv/1 Sv, you get an extra 0.15% chance of dying of cancer indeed.

But the chance of dying of cancer in the U.S. is something like 17.5%, although that does increase (and then decrease) with advanced age. According to https://usafacts.org/articles/americans-causes-of-death-by-a... , something like 29% deaths in the 65-74 age group are cancer, while in the 75-84 year age group it drops again to 25%.

Regardless, this means if you figure out your lifetime chance of dying by cancer based on your current age, it's 30% at the most (in aggregate, obviously personal risk factors vary a lot from person to person).

A 0.15% increase in that is 0.045% additional chance of dying by cancer, or 1 in 2,222. Using the average mortality incidence of 17.5%, it's more like 1 in 3,800. And for someone who's this concerned about dying of cancer, and I assume limiting risk factors much more than the average citizen, perhaps it's a 1 in 5,000 chance.

All in all you're much more likely to reduce your chance of death by cancer by much more than a CT will increase it, via a number of lifestyle changes: healthier eating, quitting alcohol, staying out of the sun.
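To make the arithmetic above easier to check, here it is as a minimal Python sketch. It assumes the relative-risk reading of the 5%/Sv figure (i.e. that it multiplies your baseline cancer-death incidence), which the reply below disputes:

  # Relative-risk reading of the "5% per Sv" rule of thumb (disputed below).
  dose_sv = 0.030                  # 30 mSv screening CT
  risk_per_sv = 0.05               # +5% cancer-death risk per sievert
  rel_increase = dose_sv * risk_per_sv   # 0.0015, i.e. 0.15%

  for baseline in (0.30, 0.175):   # assumed lifetime cancer-death incidence
      extra = baseline * rel_increase
      print(f"baseline {baseline:.1%}: extra risk ~1 in {1 / extra:,.0f}")
  # baseline 30.0%: extra risk ~1 in 2,222
  # baseline 17.5%: extra risk ~1 in 3,810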


No, multiplying the incidence by 1.05 is incorrect. The 5% figure is additive.

ICRP Publication 103 is pretty unambiguous; e.g. Table A.4.2 gives total figures of about 400-600 "cases per 10,000 persons per Sv".
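As a minimal sketch (same assumed numbers as above), the additive reading recovers the article's figure directly:

  # Additive reading: ~5% absolute risk added per sievert, consistent
  # with ICRP's ~400-600 cases per 10,000 persons per Sv.
  dose_sv = 0.030
  risk_per_sv = 0.05
  print(f"extra risk ~1 in {1 / (dose_sv * risk_per_sv):.0f}")   # ~1 in 667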


> a number of lifestyle changes: healthier eating, quitting alcohol, staying out of the sun.

On the other hand, maintaining these sounds like a much larger effort than avoiding one CT scan.

(That said, they do have other benefits so they might still make sense.)


Staying out of the sun has significant downsides.


Like what? I assume he's taking vitamin D3.


Sunlight exposure is associated with reduced all-cause mortality, as well as reduced incidence of several specific diseases including some types of cancer.

It’s also associated with higher endorphin levels, and the maintenance of proper circadian rhythm through melatonin synthesis.

Most of these effects have been found to be unrelated to vitamin D, but that’s also a significant benefit considering the poor bioavailability of supplements.

The reduction in all-cause mortality from sunlight exposure is significant enough that people with an early diagnosis and treatment of non-melanoma skin cancer have a higher average life expectancy than the general population.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5129901

https://onlinelibrary.wiley.com/doi/10.1111/joim.12496

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7400257/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2290997/


I've always assumed that those results are caused by exercise: since many people exercise outdoors, being outdoors more is correlated with better health, but the better health is actually caused by the exercise, not the being outdoors. As for the circadian rhythm, SAS should embrace More Dakka [0] and cover his entire ceiling in bright white LED modules. He has Substack money to spend, after all. I thought D3 was pretty good with absorption, though he could start drinking milk and take GLP-1 agonists to offset that if needed.

[0]: https://thezvi.wordpress.com/2017/12/02/more-dakka/


> The vast majority of donors, 98 to 99 percent, don’t have kidney failure later on. And those who do get bumped up to the top of the waiting list due to their donation.

Can’t help but picture a roller coaster queue wrapping around the back of a hospital, and some guy is cutting the “Singles” line, defending himself from scowls with, “I’m not cutting! I’m just getting my kidney back!”


Just imagining a surgery-gone-sideways situation where they just decide to trade the one still in for the one they just took out.

(Not even sure this is an option, orientation and all, just imagining)


Transplanted kidneys generally go lower in your belly than your "native" kidneys and they aren't swapped, the original kidneys are left in and they add another. Anyone who's had a transplant most likely has 3 or even 4 kidneys inside them.


This is probably the most concrete “I had no idea it worked that way” this year for me. So you can collect kidneys!


They only remove them if they have some kind of immediate risk of damage to the body - Necrosis or similar. On top of that, while your original kidneys won't have enough function to sustain life, any function they do have will take some burden off of the transplant kidney(s).


It makes complete sense once I think about it. My mental framing is too often based on technology. I.e. there are two kidney slots that you can populate with one kidney each.


If it helps, you can think of all medicine as reverse-engineering and hacking a machine the vendor has ceased supporting and won't provide documentation for.


Mine is located on my right side just above the midline of my pelvis.


Yup, same here! Mine is (somewhat unfortunately) pretty superficial so you can even see a small bulge where it is.


They're just making sure everyone has 2 kidneys before they give anyone a third. :)


I dunno, I feel like this whole article just makes it clear how fundamentally flawed the idea of calculating your way to good (in both the moral and the practical sense) decisions is. It's clear from the article that even the first-order effects of a kidney donation decision are incalculable: no one really knows the exact risk accrued from the CT scan, or the surgery, or the upside to the recipient. It doesn't seem OP included the direct costs of the transplant itself at all (all those tests and scans and screening interviews, the travel and lost time from work, and then the cost of the two surgeries). Then one could go on and on with calculating higher-order effects forever. ("I'm married to someone who gives $500K a year of their income to EA causes, but if I die in surgery there's a 5% chance they will commit suicide out of grief, denying XYZ dollars in future income to the cause...")

Indeed, OP tries to present a bunch of arithmetic to justify their decision, but admits that it ultimately rested on an emotional gut feeling: "It starts with wanting, just once, do a good thing that will make people like you more instead of less."


Or the risks of hospital-acquired infection (maybe his girlfriend considered that). What about effects like if it aches for years afterwards but won't kill him; at what level of ache would that turn from "reminder that I did a good thing" to "damnit"?

Hand transplants are a thing - would he give up his non-dominant hand to someone who lost both of theirs in an accident?

Eye transplants aren't a thing, but if they were, would he give up one to a blind person?

If it was really easy to see the damage (hand) and lose an obvious piece of functionality (eye) it would be much harder to argue with numbers "studies show that most people who lose a hand survive more than 7 years". Would he lose an eye, a hand, a kidney, a lung, a lower leg, a foot of intestine, a chunk of liver, a section of skin, a few litres of blood, if it did more good to someone else than to him?

> "Indeed, OP tries to present a bunch of arithmetic to justify their decision, but admits that it ultimately rested on an emotional gut feeling: "It starts with wanting, just once, do a good thing that will make people like you more instead of less.""

The Last Psychiatrist wrote that you shouldn't need validation from other people, shouldn't want it, and if you do want it - fix that. Doing it hoping you gain some indelible "society values me" token to carry around with you forever so you can use it as a trump card and people will like you more, feeling like you've never done anything good and that you need to to be likeable, doesn't sound much like "altruism: Unselfish concern for the welfare of others; selflessness.", does it?


Maybe they've worked out some sort of precommitment to prevent that, or he put poison pills into his will or something.


I’m keeping both my kidneys for if someone I love needs one.

When I am dead however, they can take my kidneys, heart, lungs, liver and whatever else that is salvageable and give them to whoever needs them (assuming I die in the right circumstances that this is possible).


He covers that in the post. If you donate your kidney, you can nominate up to 5 people to be first in line if they (or you) ever need a transplant.

Given compatibility concerns, it's probably safer for your family member for you to donate than not.


If you donate a kidney knowing that, if something goes wrong with the one you have left, you can skip the queue of people who have been suffering potentially longer than you have with end-stage renal failure, is your donation truly an altruistic one?


But can you change the names later? What if you divorce and remarry, or have a kid later?


just so you know, around 1% of people die in conditions where their organs are eligible for transplantation. then there's the factor that kidneys from living donors generally last longer for the recipient


My grandmother donated both her kidneys before her life support was switched off. She was brain dead, tragically, due to a brain aneurysm at 51, but she was technically living when she donated, since they keep people alive until they've completed the organ harvest. She also donated her heart, but unfortunately the transplant wasn't successful at all.

One kidney failed after 3 years; the other kidney is still going strong. The person she saved has gone on to have 17 years of good health and counting, and our family gets anonymous letters letting us know the kidney is still going strong. It's kind of nice.

If I don't get to donate my organs to save lives, I'll still donate them to science. I was born with two kidneys and I like to think wanting to keep them for myself isn't being greedy. If someone I love needs a kidney, I'll donate either to them or to someone else so they can hopefully receive a kidney quicker, but otherwise I have no desire to have a major surgical procedure, recovery, etc. Maybe I'm selfish, but I'll at least be honest about it.


Good for Scott for actually looking at the research and then following through in a charitable action that is consistent with his beliefs.

I'm surprised by how relatively risky the screening scan seems to be, though.


Well, he found a less risky substitute.


Can anyone (with a better understanding of radiation risks) confirm the math for the added mortality risk from a single CT? I know a CT is a lot of radiation, but the mortality risk seems very high.


His footnote says as much as I could:

> Maybe. Kind of. Our knowledge of how radiation causes cancer comes primarily from Hiroshima and Nagasaki; we can follow survivors who were one mile, two miles, etc, from the center of the blast, calculate how much radiation exposure they sustained, and see how much cancer they got years later. But by the time we’re dealing with CAT scan levels of radiation, cancer levels are so close to background that it’s hard to adjust for possible confounders. So the first scientists to study the problem just drew a line through their high-radiation data points and extended it to the low radiation levels - ie if 1 Sievert caused one thousand extra cancers, probably 1 milli-Sievert would cause one extra cancer. This is called the Linear Dose No Threshold (LDNT) model, and has become a subject of intense and acrimonious debate. Some people think that at some very small dose, radiation stops being bad for you at all. Other people think maybe at low enough doses radiation is good for you - see this claim that the atomic bomb “elongated lifespan” in survivors far enough away from the blast. If this were true, CTs probably wouldn’t increase cancer risk at all. I didn’t consider myself knowledgeable enough to take a firm position, and I noticed eminent scientists on both sides, so I am using the more cautious estimate here.


There is this story: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2477708/

The conventional approach for radiation protection is based on the ICRP's linear, no threshold (LNT) model of radiation carcinogenesis, which implies that ionizing radiation is always harmful, no matter how small the dose. But a different approach can be derived from the observed health effects of the serendipitous contamination of 1700 apartments in Taiwan with cobalt-60 (T1/2 = 5.3 y). This experience indicates that chronic exposure of the whole body to low-dose-rate radiation, even accumulated to a high annual dose, may be beneficial to human health.

... though some evidence against it with leukemia it seems, but this is not my field.


The non-linearity intuitively makes sense. If I (naively) assume that DNA has some built-in error-correcting codes (iirc it at least has some redundancy), one could damage it up to the error rate that can be corrected, without any deleterious effect.


I think there's a very obvious error in the calculation:

The author jumps from "30 mSv increases the risk of cancer by 1/660" to "1 in 660 get cancer after the procedure".


I thought of this too, but then it also seems like too simple a mistake for someone like Scott Alexander to make in public, so I thought I must be misunderstanding his argument.


> People got so mad at some British EAs who used donor money to “buy a castle”. I read the Brits’ arguments: they’d been running lots of conferences with policy-makers, researchers, etc; those conferences have gone really well and produced some of the systemic change everyone keeps wanting. But conference venues kept ripping them off, having a nice venue of their own would be cheaper in the long run, and after looking at many options, the “castle” was the cheapest. Their math checked out, and I believe them when they say this was the most effective use for that money. For their work, they got a million sneering thinkpieces on how “EA just takes people’s money to buy castles, then sit in them wearing crowns and waving scepters and laughing at poor people”. I respect the British organizers’ willingness to sacrifice their reputation on the altar of doing what was actually good instead of just good-looking.

When other organizations do so, they are wasting money that could have been well-spent. When effective altruists do so, they are “sacrificing their reputation for a good cause”.


Whenever I read anything from these people I just keep thinking "Lies, damn lies, and statistics." They rely on "the math checks out" as an excuse for basically doing whatever they want. Here is some real-world math: how about "the math doesn't check out because people will look at your castle and think you're full of shit?"


Have you donated a kidney recently?


No, I'm an academic researcher, which means I don't blindly trust two or three papers on the subject. I don't even trust them with my two eyes wide open.


There are so many unknowns and unknown unknowns in medical science, especially regarding the human body. Did evolution really keep two kidneys if you actually just need one? I doubt it. When considering a donation you should also not only think about whether you might die. There are more factors, like quality of life. In the end, a whole organ will be removed from your body. This will have some effect.


I was wondering about that, but not enough to do any actual research. I figure that hunter-gatherers' diet consisted of a whole lot of meat (seasonally, perhaps), and then two kidneys were an advantage. My doctors recommended against eating big steaks. I wasn't eating much of those before and had no need to adjust.

I can't tell the difference (six years after the procedure), but then, I'm no athlete monitoring his performance closely.


I remember reading an article about people in poor countries selling a kidney and thereafter being too sick to work as fishermen or whatever. Some charity was trying to discourage poor people from selling body parts as a get-rich-quick scheme.

I'm skeptical that this will have little to no impact on quality of life and productivity for the author.


Kidneys are relatively close to the surface. So you can injure them mechanically.

Perhaps our ancestors kept their two kidneys around, but we can do with less, because our physical environment isn't as dangerous?


> Did evolution really keep two kidneys if you actually just need one? I doubt it.

Aside from the nervous system, pairing is the default condition of all mammalian organs which develop from the mesoderm or ectoderm: they can then go on to fuse into a single larger structure, as with the heart, but as far as I know they're never simply lost on one side during normal development.


The real question is why didn't evolution give us two hearts and two brains?


Kind of obvious for both of them, though: evolving two hearts means putting two pumping systems in parallel on the same fluid reservoir. Given how the pump works (evolution can't handle making gears in general), you'd need a precise synchronization system to keep everything working properly - you're basically much more likely to die from the "high availability" system than from a very robust single system (and the usual proviso: evolution doesn't need you alive, it needs you to reproduce such that the offspring have some decent chance of survival. Percentage survival rates are just fine.)

Same for the brain, but I'd say writ large: easier to make an extra human than to try to coordinate a redundant brain.


Those ones are much easier to answer: they require very good coordination between each other. "Split-brain" is literally a canonical failure mode in the study of distributed systems, for example. (We kind of do have two brains, right, linked by the corpus callosum; there's a reason they're so close together.)


> Yet only about 200 people (0.0001%) donate kidneys to strangers per year. Why the gap between 25-50% and 0.0001%?

Scott is right to point out the gap, but I don't expect him to donate another kidney next year. Even for those who follow through on their good intentions, kidney donation is a one-time event, not an annual event, so it shouldn't be compared to an annual rate. Assuming that would-be kidney donors have a window of about 50 years to follow through on their good intentions, the gap should be between 25-50% and 50 x 0.0001% = 0.005%.
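As a minimal sketch of that adjustment (the 50-year window is of course an assumption):

  # Convert a one-time decision into a roughly comparable lifetime rate.
  annual_rate = 0.000001         # 0.0001% of the population donate per year
  window_years = 50              # assumed window in which to follow through
  lifetime_rate = annual_rate * window_years
  print(f"{lifetime_rate:.4%}")  # 0.0050%, vs. the 25-50% who say they would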


> Obviously this kind of thing is why everyone hates effective altruists. People got so mad at some British EAs who used donor money to “buy a castle”.

This feels like a spin on how this all went down. A lot of the anger about that group utilizing funds to, literally, buy a castle came from EA groups too.

Trying to downplay the situation and pretend that the anger was only coming from uninformed outsiders feels like a dishonest attempt to rewrite history.

If you’re not familiar with this story, here’s a source: https://www.newyorker.com/magazine/2022/08/15/the-reluctant-...

There was a lot of debate about it in EA circles. A lot of it turned into mental gymnastics as they tried to find ways to explain how spending that amount of money on a literal castle, so they could meet in it, somehow had an expected value that was positive for humanity or something.

Weird to see this incident pop up in an article about kidney donation, but even weirder to see the critics of the decision downplayed and sneered at as uninformed outsiders when so much of the anger came from inside the EA community. Really feels like a subtle signal that if you don't toe the line on every EA organization decision, you must be an outsider.


>Really feels like a subtle signal that if you don’t toe the line on every EA organization decision, you must be an outsider.

Bingo. I like Scott's writing in general, I've interacted with him a few times, and EA criticism is one of three or so topics where disagreement gets you the outsider treatment.

He's slightly more amenable if you can provide an example that would've been "more effective," but "this was a bad decision"-type critiques will default to "the bigwigs are smarter than you."


The Vox article refers to the donation chain effect from the US kidney exchange, where one donor can trigger a cascade of donations. Economist Alvin Roth won the Nobel memorial prize in economics for this in 2012.

https://www.bbc.com/news/business-50632630


Hopefully the Ozempic breakthroughs will alleviate some of the need for kidney donation, as it lowers the occurrence of End-Stage Renal Disease (ESRD).


There was recent news on an Ozempic study that shows reduced risk of kidney conditions for diabetic patients.

> Based on the trial’s design, Ozempic needed to reduce the absolute risk of one of those kidney-related conditions by at least 3.5% when compared to placebo, or a relative risk reduction of around 16.5%, Akash Tewari, a Jefferies analyst who covers Lilly, wrote in a note to clients.

https://www.biopharmadive.com/news/novo-nordisk-ozempic-kidn...


This does honestly kind of shock me. It's respectable that he did something like this for a random person, but I could never imagine myself doing it for anyone except close family.

I used to think EA was some kooky AI safety Cult but if so many EA folks donate their kidney, my opinion of them has improved far more. They at least have balls!


> I used to think EA was some kooky AI safety Cult but if so many EA folks donate their kidney, my opinion of them has improved far more. They at least have balls!

They are both a kooky AI safety Cult and they do things like donate kidneys and fund malaria nets.

They are a kooky AI safety cult because there is a reasonable argument to make that AGI is the most plausible, preventable human extinction event[1] in the near future. If you believe that argument, then starting (or joining) a kooky AI safety cult seems like a really good idea.

1: It's not clear that an asteroid large enough to wipe out all humans could be prevented with technology we can develop in the next 100 years or so. Global warming is bad and all, but it might kill a couple billion people, not wipe us all out. Total global nuclear war probably won't even kill 100% of the population of all of the countries involved, and large parts of Africa are unlikely to get bombed at all.


I don't think it's very reasonable to think "AI" is more of an existential threat than global warming and nuclear weapons. In fact I'd say that's a ridiculous claim.

The only way I can see AI causing total extinction is a Terminator-like scenario where an AI completely takes over, self-sustains by manufacturing killer robots and running power plants, etc. It's literally science fiction. ChatGPT is cool and all, really impressive stuff, but it's nowhere near some sort of superintelligent singularity that wipes us out.

We don't even know if it's possible to build something like that, and even if we did there's a huge gap between creating it and it actually taking over somehow.

Global warming and nukes are two things we know could wipe out pretty much everyone. Sure it might not be a complete extinction but we know for a fact it can be a near extinction which is more than can be said for "AI". And as far as I'm concerned a full extinction and a near extinction are basically equally bad.

I also think you're underestimating by saying they won't kill everyone. They might. Nuclear fallout is a thing, you don't have to be nuked to die from nukes. Nuclear winter is another thing. Climate change could end up making the atmosphere toxic or shut down most oxygen production which would certainly be a total extinction.

These are real threats, AI is hypothetical.


> I don't think it's very reasonable to think "AI" is more of an existential threat than global warming and nuclear weapons. In fact I'd say that's a ridiculous claim.

AI x-risk is effectively a superset of global warming, nuclear war, engineered bioweapons, the grey goo scenario, lethal geoengineering, and pretty much anything else that isn't just Earth winning the cosmic extinction lottery (asteroids, gamma ray bursts, a supernova within a couple dozen LY from us, etc.). That's because all those x-risks are caused by humanity using intelligence to create tools that endanger its own survival. A powerful enough[0] general AI will have all those same tools at its disposal, and might even invent some new ones[1].

As for chances of this happening any time soon, I always found the argument persuasive for the time frames of "eventually, maybe a 100 years from now". GPT-4 made me revise it down to "impending; definitely earlier than climate change would get us", because of the failure mode I mentioned in footnote [0], but also because of how the community reacted to it: "oh, this thing almost looks intelligent; quick, let's loop it on itself to maybe get it a bit smarter, give it long-term memory, and give it unrestricted Internet access plus an ability to run arbitrary code on network-connected VMs". So much arguing over the years as to whether you can or can't box up a dangerous AI - only to now learn that we won't even try.

--

[0] - Which doesn't necessarily mean superhuman intelligence. Might be dumb as a proverbial bag of bricks, but able to trick people some of the times (a standard already met by ChatGPT), and to think and act much faster than humans can think and coordinate (a standard met by software long ago). Intentionally or accidentally tricking humans into extincting themselves is in the scope of this x-risk, too. But the smarter it gets, the more danger potential it has.

[1] - AI models are already being employed to augment all kinds of research, and there's a huge demand for improving the models so they can aid research even better.


This is mostly hypotheticals. I can't argue against hypothetical problems, so all I'm going to say is I'm not convinced this is a danger.

I also don't agree that helping humans make scientific progress is a danger. We already have the tools to wipe ourselves out; adding more of them doesn't really change much. It might well help us discover ways to improve things, and whatever we discover, it's up to us how we use it.

We don't know what the future holds. GPT-4 may be close to the limit of current possibility. It is not a given that we will discover significant improvements. Even if we do discover significant theoretical improvements, it is not a given that they will be feasible with current or near-future hardware.

I can agree that there exists hypothetical potential for danger, but to put that hypothetical risk higher than real threats is in my view exaggeration.


> We already have the tools to wipe ourselves out, adding more of them doesn't really change much

We do already have the tools, but they're mostly in the hands of people responsible enough to not use them that way, or bound into larger systems that collectively act like that.

A friendly and helpful assistant that does exactly what you ask for without ever stopping to ask "why" or to comment "that seems immoral" is absolutely going to put those tools in the hands of people who think genocide is big and clever.

The two questions I have are: (1) When does it get to that level? (2) Can we make an AI-based police force to stop that, which isn't a disaster waiting to happen all by itself?


> an AI completely takes over, self-sustains by manufacturing killer robots and running power plants etc.

You don't need any killer robots at all if you possess a superhuman level of persuasion. You can use killer humans instead.


Well... maybe they're both. Plus a kooky crypto-embezzlement cult. Probably depends on who you get. My skepticism toward the EA movement is of the form: I can respect the underlying idea, but I have very little faith in many of the actual people to do it.

Also, EA sorta bakes in utilitarianism as a premise and (from my experiences in the ACX comment section) basically finds any non-utilitarian argument to be literally unintelligible, which doesn't work for me because I think there are more important things in life than optimizing numbers (... but since they evaluate the worth of things by number-optimization, they seem to be unable to understand this perspective at all).


This is a quite uncharitable view of things. EA utilitarians aren't spending their lives on optimizing numbers; they're trying to use numbers to guide decisions on how to better impact life.

People can follow a value system and still understand that other value systems exist.

The EA view of things is pretty simple to understand. Given the premise of limited resources, and a belief that all lives are worth the same, how can you best improve human livelihood?

Different people approach giving back to society in different ways. The EA way to approach the above is to crunch numbers and find what they think is the place where their limited resources can have the largest impact.

My best friend's family does their part by joining their church to volunteer at food kitchens in poorer neighborhoods and hosting fundraisers for various causes throughout the year.

A Vietnamese coworker of mine used to give back by donating to a charity that gave scholarship opportunities to high achieving low income students in Vietnam.

It's not that complex to understand that different people have different value systems and how they view their tribe, people and the world.

And yes, I agree that lots of people find those who hold differing views incomprehensible, but that's also a normal aspect of humanity and not unique to those in EA.

From political differences, to philosophical differences, to religious differences, to any topic, many people have a hard time comprehending the worldview of others.

You can even just take the perspective from this article. There are a whole swath of people I've known who could not comprehend the idea that someone would be willing to give their kidney to a total stranger. They might understand if it's someone the person knows, but a total stranger? Some might say that's insane and irrational behavior.

Lots of people can't see past their own perspectives on things, but I think it's uncharitable to suggest that EA is not just like any other group with some portion of people like that.


> The EA view of things is pretty simple to understand. Given the premise of limited resources, and a belief that all lives are worth the same, how can you best improve human livelihood?

Not quuiiite. Many other groups would accept that value, framed that way, maybe even a majority of people. What differentiates EA isn't their intention to improve livelihood, but their belief that it is possible to know how to do that.

And in fact other groups also have high confidence in their understanding of how to achieve this goal. It's not obvious to me that EA's approach to the constraints is more effective than the noble eightfold path or love your neighbor as yourself.


> It's not obvious to me that EA's approach to the constraints is more effective than the noble eightfold path or love your neighbor as yourself.

Ok, but that's not the competing option here. However good being a bodhisattva is, being a bodhisattva and saving someone from kidney failure is even better. And most people, of course, aren't going to become bodhisattvas at all: we're only choosing between being ordinary flawed people... and ordinary flawed people who also saved someone from kidney failure.


Do all EAs donate kidneys? Or even at higher rates than other groups? Everyone thinks their religion makes them better at being good. EAs might be uniquely positioned to demonstrate it statistically, if it's true.


The number of people who altruistically donate kidneys per year in the USA is like 1-200, so the fact that Scott knows multiple EAs who did so (and that the kidney donor people are used to EAs) is pretty high-tier evidence that either there are a LOT more EAs than I thought, or they do it at MUCH higher rates than the general population.


Scott alone donating would probably set the rate of altruistic kidney donation at a higher rate among EAs than the general population, at least for this year; there'd have to be around 2M EAs in the USA to match the baseline rate, while the real number is almost certainly significantly lower.
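A rough back-of-the-envelope check, assuming ~200 altruistic donors a year and a US population of ~330M (both approximations, not exact figures):

  # How many EAs would it take for one donation/year to merely match baseline?
  donors_per_year = 200
  us_population = 330_000_000
  baseline_rate = donors_per_year / us_population   # ~6e-7 per person per year
  print(f"EAs needed: {1 / baseline_rate:,.0f}")    # EAs needed: 1,650,000

which lands in the same ballpark as the ~2M figure above.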


> People can follow a value systems and still understand that other value systems exist.

I'm aware of this! I am fairly anti-utilitarian, and understand that the EA folks I've talked to are deeply utilitarian. What is so frustrating is that they don't seem to be able to understand me back. Any conversation about the ethics of character, duty, or virtue is translated back into utilitarianism, a framework in which non-utilitarian motives can't possibly be valid. Of course I'm not characterizing every EA-ascribing person, but it's ... very common in the community, to say the least, and it makes e.g. engaging with their forums / comment sections / subreddits agonizing.

> This is a quite uncharitable view of things. EA utilitiarians aren't spending their life on optimizing numbers, they're trying to use numbers to guide decisions on how to better impact life.

"better impact life".... as determined by... numbers.

This is a group of people who look at the world and think that the best things to do are things like optimizing QALYs or the number of animal lives or, in extreme cases, their personal lifespan including cryogenic extension on the off chance it works, or "the number of humans who will die when a superintelligent AI Roko's Basilisks / Pascal-mugs them", or other sorts of things like that. And in a world where you are only capable of measuring worth by holding up numbers against each other and comparing them, those arguments become seductive.

But outside of that framework, for instance in a moral philosophy in which the best thing to do is not "the thing with the highest +EV" but "the most noble action", those stances are absurd. It's not, IMO, a person's job to single-handedly maximize +EV on lifespans or net suffering; it is (to some approximation) to live a good life and do the right thing in your local journey. I would reject the notion that a person is directly responsible for far-away people's suffering. I think the world is direly short of leadership, character, and compassion, and for me goodness is about those things.

When it comes to large institutions, like governments or large charities, I feel differently, and the calculus switches over to being more +EV --- but ultimately is still about the moral compass of the organization. Like I think SBF was a scumbag and totally wrong, and would still be wrong if his bets had worked out. It is not common that people are operating at a scale where utilitarianism starts to become morally appropriate, and even when it becomes appropriate it's never entirely appropriate, because actual leadership is ultimately about morality even if the organization is doing practical things.

If the human race was completely moral, and then eventually died out due to some X-risk, that is mostly a Fine Result to me and we would all be able to sleep well at night. (but if like, the dying out was because we didn't do our moral duty and handle e.g. climate change or AI or nuclear war or building an asteroid-defense system or dealing with our own in-fighting and squabbling, then that wasn't completely moral, was it?)

To be clear, I have a lot of respect for the kidney donation stuff, a slight amount of respect for giving money to charity, and massive disrespect for the hordes of smart people who have divested from the real world and instead smugly pat themselves on their backs that they're doing important work on AI safety.


not meaning to attack you on this point--it's your life and you can do what you want--but why would you only consider being a donor for your family? i'm guessing the reason might be 1) you have some condition and you're worried about your health (in which case you might not be an eligible donor anyway) or 2) you're worried about surgery complications or long-term health impacts. but the impacts to your health are usually much more minor than you expect!

here's an analogy i've used before: if you were walking by a burning building, and you heard a stranger inside saying "help! help!", would you run in and save them? i think a lot of people would say "yes" or at least consider it, despite running into a burning building being a lot riskier than organ donation IMO


I think that one of Effective Altruism's axioms - that all lives are equally valuable - is not accepted by much of the population.

The number of people that would rush into a building to save a family member is likely far higher than the number of people that would rush into a building to save a stranger.


>They at least have balls!

For the moment.


A lot of EAs, including the author of this piece, are also really into IQ, eugenics and "human biodiversity" (aka race science). Consider that Scott Alexander (aka squid314) once expressed a desire to donate to "Project Prevention", a eugenics charity formulated to pay undesirables to sterilize themselves [1][2].

[1] https://en.wikipedia.org/wiki/Project_Prevention [2] https://web.archive.org/web/20131230050925/http://squid314.l...


I don't see anything wrong with the charity you mentioned. It certainly isn't what normal people consider eugenics, though technically that charity might meet its definition. If you are a crack addict, you shouldn't be anywhere close to having children. You will ruin the child's life first and foremost. Apart from that, you will create an undue burden on society, neither of which is fair. First they should cure themselves of their addiction, and then have children.


If you don't see anything wrong with a charity whose founder compares people to dogs and calls their children "litters", then I doubt there's anything I could say to make you disapprove of it.


Maybe you should answer the question I and the charity pose rather than deflecting. Do you think it's correct for crack addicts to have litters of children, who will likely be neglected, emotionally stunted, and left suffering from mental health problems?


That is what normal people consider eugenics.


It was completely baffling to me how someone as brilliant as SA could conclude that "the risk of dying from the screening exam was 1/660".

However, the paper he references does contain the specific example:

> For example, a 40-year-old woman undergoing CT coronary angiography is estimated to have a 1 in 270 chance of developing cancer at a radiation dose of approximately 20 mSv

At this point I'm completely lost here.
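One way to see why the two figures are hard to reconcile is to put both on the same per-sievert scale (a rough comparison only: the paper's example is for a specific scan, age, and sex, and is for developing cancer rather than dying of it, so some difference is expected):

  # Implied absolute cancer risk per sievert under each quoted figure.
  article_per_sv = 0.05              # article's rule of thumb: 5% per Sv
  paper_per_sv = (1 / 270) / 0.020   # 1-in-270 at 20 mSv
  print(f"{article_per_sv:.1%} vs {paper_per_sv:.1%} per Sv")  # 5.0% vs 18.5%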



I know a few people who have donated kidneys (both to friends and strangers), and I find it admirable (and supererogatory).

At the same time, I think this post demonstrates a kind of selective unawareness that I've seen members of the EA community demonstrate. A significantly less charitable summary of Scott's situation is "our prospective kidney donor is a newly-married, high-stress professional with a history of SSRI usage who has recently experienced significant negative public attention and belongs to an actively-imploding community/cult." This is almost certainly what the UCSF administrators perceived, and his latent hostility to them almost certainly didn't help.

This doesn’t necessarily justify their decision (maybe they are just bureaucrats), but it’s a framing that’s roughly as plausible as the bucolic EA one that’s presented. A community that has a self-stated mission in rationality should be able to see that.


We have a long waiting list for recipients, and we don't have an excess of donors to be picky with.

Why are you assuming Scott was not "able to see" the explanation you laid out, when in all likelihood that thought has crossed his mind (he's thought about this far more than you, a commenter who came up with that speculation in 5 minutes)?

Perhaps he thought of it, and decided saying it would be even meaner to the hospital... and frankly, I think that framing is far crueler to the hospital. Like, if they rejected him from bureaucratic incompetence as they tried to filter out people who can't consent to surgery? Fine, incompetence is bad, but fine.

If they rejected him because they could give someone with kidney failure 10 more years of good life, they thought he had a healthy kidney and could consent, but they had a personal grudge or moral judgement... That's a much more repugnant way to operate a hospital than simple incompetence. Reducing the quality of life of a potential kidney recipient for any of those reasons intentionally is far crueler than mere incompetence.

I also suspect that saying anything other than the hospital's purported reason would be grounds for a libel claim.


> Like, if they rejected him from bureaucratic incompetence as they tried to filter out people who can't consent to surgery? Fine, incompetence is bad, but fine.

I'd be even more charitable to them: it's not incompetence, but waste inherent in a bureaucratic process, which is a consequence of scale. Sure, it would be better if Scott were interviewed by someone dedicated and empowered, who could hear and verify the full story and understand the context. But there are only so many people able to do this and willing to accept responsibility for making a bad call - and the healthcare system can't afford them anyway. A bureaucratic process focused on minimizing false positives will introduce a lot of false negatives - but it's also much more streamlined and cheaper to run at scale. And sure, it sucks to be the one rejected by an effectively automated process[0], but it's better for everyone than not having any process at all.

--

[0] - I maintain that bureaucracy is software running on a distributed system made of meat instead of networked silicon. It has the same dumb failure modes because it's the same thing.
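
To make that trade-off concrete, here's a minimal sketch with entirely invented numbers (the 2% unsuitable rate, the noise level, and the cutoffs are all assumptions, not real transplant data): driving bad approvals to zero necessarily rejects a growing pile of perfectly good donors.

    # toy model with invented numbers: the clinic only sees a noisy
    # screening score, and must pick an acceptance cutoff
    import random
    random.seed(0)
    truly_ok = [random.random() > 0.02 for _ in range(10_000)]     # assume 2% unsuitable
    score = [0.8 * ok + random.gauss(0, 0.15) for ok in truly_ok]  # noisy screening signal
    for cutoff in (0.3, 0.5, 0.7):
        bad_approvals = sum(s >= cutoff and not ok for s, ok in zip(score, truly_ok))
        good_rejected = sum(s < cutoff and ok for s, ok in zip(score, truly_ok))
        print(f"cutoff {cutoff}: {bad_approvals} bad approvals, "
              f"{good_rejected} good donors rejected")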


> And sure, it sucks to be the one rejected by an effectively automated process[0], but it's better for everyone than not having any process at all.

You could have a much lighter-weight process that leaves more of the decision-making power with individuals and families. That would be cheaper and would suck less overall.


Placing the responsibility with families results in compelled donations (“you love your Dad, don’t you?”). Our medical system is (imperfectly) built around consent; that consent needs to be in evidence.


Yeah. When my dad had kidney failure, I had a lot of fun proving to the doctors that I did not in fact love my dad, and that's why I was giving him a kidney.

Really though, the definition of "compelled" is fuzzy, and mostly for medical stuff we take "consent" to mean "sound of mind" and "not overtly coerced, i.e. by threats of violence or such".

Is someone able to "consent" if they're only donating blood or a kidney because their religion tells them to do good? That sounds like a cult, huh. What about because they love someone? What about because the local blood drive literally gives you 10 bucks and a candy for donating blood?

Like, you can argue around enough and say everyone is always compelled at all times by "society" or "their upbringing", but that's an obviously useless definition.

What we have now, you getting a private meeting with a nurse or such who asks "are you really sure? Is anyone making you do this?"... that, plus doing basic mental health checks of "does not have dementia or mental retardation", that seems like it should be enough for any real situation.

Trying to say "No, someone who's famous might be doing this to write a blog post about it, that's not okay" is an interpretation of consent that simply is too far gone to be useful.


> Trying to say "No, someone who's famous might be doing this to write a blog post about it, that's not okay" is an interpretation of consent that simply is too far gone to be useful.

Nobody has said this, as far as I can tell. My claim was that there were other things about his situation that might suggest to the bureaucrats in question that he was a risky donor. I don’t think Scott gave a kidney so that he could write a blog post about it.


There are two people in surgery and two families involved in this transaction. The recipient side is relying on the healthcare system to make the transplant work out well, and it's the medical staff that will be blamed and held responsible if the recipient suffers or dies due to an avoidable mistake. Because of that, there's only so much decision making powers that can be granted to the donating side.


> We have a long waiting list for recipients, and we don't have an excess of donors to be picky with.

That’s not how this works. Organ donation is inherently picky and fallible: without that pickiness, you get coerced organ donation and people undergoing mental crises seeking a form of salvation.

I don’t understand where the moral judgement or grudge determination came from: everything I’ve said is a prima facie plausible reason to reject a donation candidate on basic medical ethics grounds.


> Organ donation is inherently picky and fallible: without that pickiness, you get coerced organ donation [...]

What is your evidence for this claim?


The international organ trade[1], for one. Survival is our strongest instinct; otherwise reasonable people will do reprehensible things to stay alive.

(There’s also familial coercion, but that wasn’t present in this case presumably.)

[1]: https://sgp.fas.org/crs/row/R46996.pdf


That international organ trade still operates as a black market. So it's hard to compare that with being less picky about organ donors.

The very pdf you linked to even mentions 'Reducing U.S. Demand for Trafficked Organs' as one of the policy avenues to pursue to help combat trafficking of organs. Being less picky about donors in the US seems exactly like what you'd need to do to reduce that kind of demand.

You might also want to look into the situation in Iran, where financial compensation for organ donors is legal.


Your summary literally doesn't even sound more stressful than the median American adult life. He has a job? He got married? He has taken psychiatric drugs before? Some people don't like him? There exists social drama sort of nearby him? So what? These are incredibly superficial assessments.

Why is the standard for donating a kidney that your life has to be, in the opinion of some bureaucrats who just met you, super chill? Nobody applies this standard to any other similarly-risky (i.e. not very risky) decision. The standard should be whether you are a person in sound mind who wants to donate it and is physically fit to donate it.


The average American isn’t donating half of their kidneys, as the post points out. It’s entirely possible that the average American isn’t psychologically suited to do so without an extensive (and fallible) filtration process.

But beyond that: the author is not an average American, and we both know that. Most Americans don’t find themselves in the NYT, or somewhat central to a large, wealthy controversial community that’s actively imploding.


Why does being a minor celebrity mean you shouldn't donate your kidney?

You don't think most Americans are "psychologically suited" to donate a kidney? Again, I don't think you would apply this level of paternalism to other situations. Would you tell the median person that they aren't "psychologically suited" to have a kid and therefore aren't allowed to? That's a dramatically more impactful and psychologically stressful decision, but we admit that the person best suited to make it is the person living it.

I think you are just trying to justify a dumb bureaucratic decision that doesn't respect the suffering of people who actually need kidneys.


I didn’t say that. I think anybody should be able to donate their kidneys, provided that they are deemed competent to do so. Famous people and controversial people can be competent, and Scott almost certainly is.

As a society, we treat organ donation differently because it comes with significant opportunity for abuse and regret.

The sole point of my comment was to highlight how prominent EA writers can demonstrate a hypocritical blindness in a way that dispassionate observation can reveal. I’m not especially interested in UCSF’s actual decision.


OK, acknowledged.


He was officially rejected due to mild childhood OCD. If you take that stated reason at face value, his rant about it is completely reasonable.

If their "real reason" is more like your description, perhaps they should have found a way to try to say that.


I know that, at least for directed organ donation, transplant teams will frequently lie - for instance, if a donor backs out, the clinic will default to telling the recipient the donor isn't a match rather than that they backed out. I wonder if something similar is going on here.


The non-PC version of my thoughts is that you have to be nuts to want to give away a perfectly good kidney to a random stranger. The system doesn't agree with me and has criteria for who can "reasonably" do that.

My experience is that insanity is essentially rooted in "buying the BS" and actual reality is the best antidote for it.

If they really are concerned for the mental health of donors whose reasoning worries them, they should find a means to effectively express that to them.

The policy you describe is likely a factor here. It's also a form of gaslighting which is an effective means to drive someone insane.

Sure, it's not gaslighting or headfuckery to come up with polite explanations for someone changing their mind; it's just a practical matter. But telling someone repeatedly "get therapy" if* you have misrepresented what you see as mentally wrong with them is pointless.

* Granted, as someone else pointed out, we are only hearing his version of events and it may not be "The truth, the whole truth and nothing but the truth."


I agree. But we also only have Scott’s telling; it’s entirely possible that they expressed other concerns to him and only concluded with a final point.


This is true.


You make a good case, but I can't help but wonder whether the people who will die for lack of a transplant agree with UCSF's decision.


I’m absolutely positive they don’t. That’s why they don’t get to choose other people’s organs, and why we don’t let the families of victims determine the suitable punishment for the criminal.


What’s the "community/cult" here? I’m apparently missing context.


Effective altruism, which is both a general (and not especially objectionable) variant of act utilitarianism, and also a specific community that has (fairly, to my eye) been accused of diverting money towards a particular set of reactionary worldviews held by its leadership.


EA isn't a form of act utilitarianism. Though EA certainly has a consequentialist flavor, it isn't the case that all (or even most?) EAs endorse utilitarianism (utilitarians aren't the only ones who care about scale). For example, Holden Karnofsky is explicitly non-utilitarian and Will MacAskill advocates making decisions based on considering (fundamentally, not just for optics) non-utilitarian ethical systems, though he is more utilitarian than someone like Karnofsky.


The main recipients of EA political funding (what little there is: the majority of money goes to global health and development, with AI safety as a distant second) are centrist Democrats. They're too right-wing for my taste too, but calling them "reactionaries" dilutes the word into meaninglessness.


I'm not sure woodruffw necessarily talked about donations to US politicians when they mentioned 'diverting money towards a particular set of reactionary worldviews held by its leadership.'

Perhaps they just don't like malaria nets? See https://www.givewell.org/charities/top-charities


I meant more the accelerationist and AI derisking contingents, not political donations. Both are explicitly reactionary in the most basic sense (and accelerationism is also reactionary in the far-right sense).


> Both are explicitly reactionary in the most basic sense

In the most basic sense, as a reaction? Are weapons regulations "reactionary"? I think it's well established what the term "reactionary" means, and it doesn't apply to X-risk concerns. Also, Eliezer Yudkowsky has been "reacting" against AI X-risk since you were a toddler, in case you thought this was some recent phenomenon...


The idea that we should plow charitable donations into various low-value and ill-conceived projects because the moral calculus weighs them against a malicious general AI strikes me as reactionary.

That’s independent of (but connected to) Yudkowsky’s whole thing, which to my understanding never really attempted to apply the logical extreme here.


Effective Altruists. See also SBF/FTX, and https://time.com/6252617/effective-altruism-sexual-harassmen...


Now that we're talking about kidney donation -- you might also consider becoming a blood donor! It's easy, and you get free tracking of cholesterol, blood pressure, and a battery of other blood tests when you do so.

If you're in the US and have been told in the past that you're not qualified, you might want to look and see if that's still the case. The US relaxed a bunch of restrictions during the COVID-induced blood donation shortages of the last few years, and is in the process of removing some more.

https://www.hhs.gov/givingequalsliving/giveblood/can-i-give


This was actually really helpful to me as a person with CKD. Some of the possible risks & issues I’ve had clicked after reading it. I hadn’t considered that, even if I’m not actively dying of CKD, I could be experiencing degraded function in other ways. Useful to know.


I skimmed over this so I am unsure if this was addressed in the article: what if you donate an organ to an asshole? If they're weak or sickly, there is less chance of them being an asshole. However, a recovered and healthy asshole has a relatively greater chance of subjecting the world to themselves.

The above may appear to be a crude (and maybe overly harsh and pessimistic) argument against organ donation, but I struggle with encouraging altruism because some people suck and don't deserve your kindness. My question is this: does showing kindness to someone who is incapable of it themselves only perpetuate more unkindness? I worry that it does.


Some people suck. Some people need kidneys. Some people who need kidneys are people that suck.

I guess the answer to your question lies in whether you believe that humans are predisposed towards good or evil.

Evolutionarily the argument goes that humans are predisposed towards cooperation (i.e. good) which also seems to be backed up by archeological evidence (e.g. unremarkable individuals clearly being cared for up to old age despite evident disabilities or injuries that would have made them unable to contribute to the group's survival). Thus it seems that a general predisposition towards "evil" (for lack of a better word) is rare or otherwise the result of circumstances, socialization and socio-economics (e.g. living in a system incentivizing competition over cooperation).

On the other hand, a devout Catholic would argue that man is inherently corrupt and sinful and requires deliberate salvation to become good. Or an economist might argue that a human is ideally a rational actor only interested in optimizing their own benefit (which most would probably characterize as "being a bit of a dick"). And then there's the old dead Roman's saying that "man is wolf to man" (i.e. people suck). And I'm sure, depending on your life experiences and circumstances, you have met plenty of people who suck as well.

Of course there's the odd chance that your act of kindness as a donor may change the outlook of a person who sucks and make them suck less, rather than letting them die as a miserable husk of a person. But that's just rolling the dice at that point, so the question still remains.


> Evolutionarily the argument goes that humans are predisposed towards cooperation (i.e. good) which also seems to be backed up by archeological evidence (e.g. unremarkable individuals clearly being cared for up to old age despite evident disabilities or injuries that would have made them unable to contribute to the group's survival).

Animals developed cooperative behaviors because selective pressures rewarded them at the population level, but I don't think this helps us assume anything about one person's predisposition for cooperation. We can make assumptions about an individual's behavior on average based on their past actions and assumed personality characteristics. I'd like to believe that it means something to be human (ideologically, philosophically, morally, ethically, etc.), but it's hard to judge what it means for another person when their decision making is distorted by past trauma or cognitive biases (I fall prey to this myself). Ideally, one should reciprocate the behavior one is afforded. But as you said, societal pressures sometimes do make good behavior harder to afford.

For organ donation, most people donate because they want their death to mean something, or they do it while they're living because they assume they probably won't need two kidneys or a whole liver for the rest of their life (a bad assumption given the standard American lifestyle and diet; non-Americans might have better luck here). If they want to do it, whatever. For most people, a relatively healthy body and mind is their most useful asset.

I'll never encourage someone to donate organs while they're living, and even less after their death, because they lose autonomy over their body. Why does post-death autonomy matter? For sentient, conscious and intelligent beings, post-death autonomy is an extension of their pre-death consciousness. The rights of an individual don't and shouldn't end at their death, I think. If we don't help each other in life and live selfishly, I don't see how and why death changes that?

If the idea is that doing selfless deeds will beget more selflessness... um, not in my experience, sadly. There's an entrenched individualistic tendency (almost bordering on solipsism) which often makes the receiver of the selfless deed believe that "they got something good because they're special", and not that someone else did something selfless. I think this individualistic tendency also prevents "passing it on", as you were hoping the receiver of a selfless deed might do. So for organ donation, if every receiver were automatically and irrevocably enrolled in organ donation themselves, then I wouldn't mind donating organs or encouraging other people to do it.


Do you think the average kindness of a person is positive? Or negative? If positive, and not knowing who your kidney is going to, donating a kidney is positive in expectation.


I think the average kindness of a person depends on how much they can get away with without social, financial or legal repercussions. Maybe I am just pessimistic? Insofar as organ donation is concerned, I think it's nice to ask people to be considerate of each other, and we should encourage that behavior. I am just saying that there are people who are abusive, opportunistic and uncaring. I don't like the idea of an altruistic person falling victim to that kind of person.


"What if I save a Nazi's life?" is an argument that can be used against anything, though.


What makes a weak and sickly person less likely to be an asshole?


Really long article, so I will only comment on one of the points that came to mind. He talks about the whole EA castle thing, and that buying the castle was actually mathematically calculated to be the better option, but it seems to me that something is always left out of the EA calculation: the PR. It's better if there are more EAs, but the PR has been so bad that the number of EAs probably decreased quite significantly. Shouldn't that have been part of the calculation? If you only optimise for immediate effect and burn all of your PR capital to do it, that's just bad EA.
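
As a toy illustration of that point (every number here is invented purely for the sketch, not taken from any real EA accounting): once even a crude PR term enters the expected-value formula, it can easily dominate the venue savings.

    # all numbers invented purely for illustration
    venue_savings_per_year = 300_000    # assumed savings vs. renting conference venues
    value_per_ea = 50_000               # assumed lifetime donations a typical EA directs
    eas_lost_to_bad_pr = 20             # assumed people put off by the castle coverage
    net = venue_savings_per_year - value_per_ea * eas_lost_to_bad_pr
    print(net)                          # -700000: the PR term flips the sign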


Great article! I'd never considered the mathematical risks associated with this.

Now I'm curious what the numbers would look like if it were a cultural norm for everyone (who passed the screening exam) to donate their kidneys in this non-directed fashion. My incredibly unscientific gut-feeling, back-of-the-napkin math seems like it would be plausible to reduce kidney failures to zero in no time: most people have two functioning kidneys and there are significantly less than 50% of people that need donated kidneys. Of course that gut-feeling math doesn't factor in the increase in risk for the donors (the radiation during screening, botched surgeries, etc), but those risks seem low enough that I imagine we'd net positive on a large scale.
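
Putting rough numbers on that gut feeling: a minimal napkin sketch, where the waitlist size and the screening pass rate are assumed round figures, not numbers from the article.

    # napkin math with assumed round numbers, not a real epidemiological model
    us_adults = 250_000_000      # rough US adult population
    waitlist = 90_000            # rough size of the US kidney waitlist
    pass_screening = 0.5         # assume half of adults would qualify as donors
    fraction_needed = waitlist / (us_adults * pass_screening)
    print(f"{fraction_needed:.4%}")   # ~0.0720% of eligible adults would need to donate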


The linear no-threshold model is... quite contested, however.

The truth is that a threshold likely exists, probably under 0.1 Sv, but it's really, really hard to determine because at those levels the possible causes of a cancer tend to blur together. I think the US and German environmental agencies support the no-threshold model as a tool for minimizing radiation contamination in the environment, but I know French doctors (or at least radiologists) think this is bullshit and consider the model both wrong (like all models) and useless in their discipline.

I just checked to be certain I was right, and I am: French doctors did in fact reject the model in 2005, citing a boatload of work on radiation as well as epidemiologic studies (apparently background radiation is higher in some areas whose populations have lower rates of cancer, which is weird, but I'm taking a break and don't have time to check further).
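
To make the disagreement concrete, a minimal sketch contrasting the two models, using the comment's 0.1 Sv (100 mSv) as the assumed threshold and an assumed ~8 mSv screening dose (both assumptions for illustration):

    # minimal sketch: LNT vs. an assumed 100 mSv threshold model
    dose_msv = 8.2                          # assumed donor-screening dose
    lnt_risk = dose_msv * (1 / 270) / 20    # scale the paper's 20 mSv figure linearly
    threshold_msv = 100                     # "probably under 0.1 Sv" per the comment
    threshold_risk = 0.0 if dose_msv < threshold_msv else lnt_risk
    print(lnt_risk, threshold_risk)         # ~0.0015 vs. 0.0 excess cancer risk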


(This is discussed in footnote 2 of the article.)


> I make fun of Vox journalists a lot, but I want to give them credit where credit is due: they contain valuable organs, which can be harvested and given to others.

One of the funniest things Scott has ever written...


By the same logic, the whole benefit of donating a kidney to a stranger comes down to the equivalent of an effective donation of $10,000.

How is that effective when he could probably have earned twice that amount by staying at work instead?


He addressed this

>But it only costs about $5,000 - $10,000 to produce this many QALYs through bog-standard effective altruist interventions, like buying mosquito nets for malarial regions in Africa. In a Philosophy 101 Thought Experiment sense, if you’re going to miss a lot of work recovering from your surgery, you might as well skip the surgery, do the work, and donate the extra money to Against Malaria Foundation instead.

>Obviously this kind of thing is why everyone hates effective altruists...

>It starts with wanting, just once, do a good thing that will make people like you more instead of less. It would be morally fraught to do this with money, since any money you spent on improving your self-image would be denied to the people in malarial regions of Africa who need it the most. But it’s not like there’s anything else you can do with that spare kidney.

>Still, it’s not just about that. All of this calculating and funging takes a psychic toll. Your brain uses the same emotional heuristics as everyone else’s. No matter how contrarian you pretend to be, deep down it’s hard to make your emotions track what you know is right and not what the rest of the world is telling you


I'm inclined to think there might be an undiscovered evolutionary advantage to our still having two kidneys today. (Most of our paired parts come in twos for good reasons.)


It's... not undiscovered; they screen you for kidney issues before they let you donate precisely because having only one kidney does have a negative impact on kidney function.


There is a flaw in the calculation of benefit. It makes no allowance for any domino effect resulting in other non-directed kidney donations.


One thing to note for HN comment readers is that this guy is an MD with experience in kidney function, despite not being a nephrologist. He's also not a teenager or in his 20s. That matters because of remaining expected lifespan, as well as the vital importance of functioning kidneys (preferably plural) after a car crash or a gunshot wound.


"It starts with wanting, just once, do a good thing that will make people like you more instead of less." Oh this poor soul. The people who do not like you will not care what you do. I know he's been through the ringer with the Blue Tribe and all but this is the reaction that just makes them keep on pounding away.


Well he really inspired me to consider kidney donation, if only I hadn't just discovered that I have only one.


Good for him. However, I feel bad that it seems like a misplaced sense of personal guilt over being an effective altruist compelled him to do this. Many empathetic people worry far too much over the slight possibility that they are a bad person, even though they're miles away from being one. Evil, psychopathic people never worry about being a bad person at all, and can sleep soundly even in perfect understanding of the deliberate suffering they've caused.


there's a lot of suffering in the world, and it doesn't matter too much to me whose fault it is (in the case of kidney failure, is it anyone's fault?). for me, morality isn't about avoiding being evil, or being better than my peers, it's just about increasing the amount of good in the world; particularly doing so when it's not too costly or inconvenient to me


Campaigning for live organ donations and donating yourself is such a waste compared to the real solution, which is opt-out organ donation.


I looked into this and it isn't true - countries which switch to opt-out don't get significantly more donors than we do. I agree this is surprising. The best explanation I was able to find was that, of the few people who end up in accidents/comas where their organs are useful, most have already opted in, and there's so much negotiation with the family around exactly what that means that an opt-in/opt-out checkbox doesn't make a big difference. See eg https://www.hopkinsmedicine.org/news/media/releases/presumed... .


it's not either/or, tho


tldr;


[flagged]


HN isn't really the place for this kind of vitriolic diatribe, because it's not all that conducive to thoughtful commentary.

That aside,

> As if you can reduce "societal benefit" to some quantifiable number

I assume your business doesn't mark any Goodwill value or other non-tangibles in its GAAP-compliant financial statements then?


GAAP compliance has no bearing on accuracy. Accounting intangibles are suggestions at best; the market discovers the true value of assets. Where's the market on bad EA ideas?


The linked article was explicitly stating a philosophical argument in favor of effective altruism as a good philosophy to live by. I see this as equivalent to eugenics - the ends justify the means, after all.

If we are going to live in a society where some people pretend they can calculate the benefit of all their actions, then I am free to say how absolutely fucked that entire process is, and to point out the real-world damage it has explicitly caused.

If someone was openly a National Socialist, we wouldn't have any problem pointing out the problems the Nazis caused. But with EAs who recently lit tens of billions of dollars on fire to buy penthouses on Caribbean islands and castles in rural England, we have to give them a pass? Excuse me if I'm not open to listening to their thoughtful commentary.


It looks like you've interacted with some nasty folks who were EAs - I don't think this was a typical cross-section of EAs. Sorry to hear about your bad experiences


As one example, a friend of mine lost over $20 million to SBF / FTX, which funded said castle. However he did not sell his claim and is optimistic he will get most of it back. Money seems like a number on a page until you see how it affects real people!

As another example, I had an EA VC investor nuke one of my seed investments in order to have a different one of his portcos acquire the IP for scrap because he had a far higher stake in the other portco, which was in the same vertical. Ruined a lot of employees' ISOs as well as my equity. But hey he's an EA - can't blame him! His expected value was simply higher.


No, you can blame EAs for making stupid decisions and being generally bad people if they are making stupid decisions and being generally bad people. I'll second that I'm sorry for the bad experiences you've had with EAs.

In my experiences, lots of them are simply good people trying to do more good. I hope you can have better experiences with them in the future.


I won't have better experiences with them in the future because the moment I discover someone is an EA I no longer have experiences with them, period.


This is deeply fallacious binary thinking.

Take anything you like. A cause, religious group, political party, sports fanbase, etc.

Some subset of each group are generally decent people, who do their best to be good to the people around them, and to live their lives in a generally moral and ethical manner.

Some subset of each group have no interest in morals or ethics, and attach themselves to the group for purely selfish reasons.

Judging the entire group based on the selfish or amoral subset is a logical fallacy of the most basic sort, and borders on religiosity. This is even more problematic when you look at the kinds of negative EA situations that have (rightly) caused controversy: the high-profile, big-money cases get the attention.

If confronted with someone who subscribes to the EA philosophy and by all measurable indicators has done incredibly good things for the world, would you be willing to change your mind?


FTX didn't actually fund the castle, from what I've heard. It was another group.


The ends do justify the means. It's just that the means are also ends.


Yes! This was the realization that started moving me away from many flavors of consequentialist utilitarianism.

There is, to me, no ontological distinction between means and ends, or between the cause and the effect of an action. There is a vast literature prioritizing the latter at the expense of the former, an illusion brought upon us by the arrow of time, which our consciousness happens to travel along in the same direction. But if we're going to privilege actions along that axis, we might as well claim that actions to the left of us are privileged relative to actions to the right of us.

I admit this is splitting philosophical hairs. "Doing X at time T means at time T + 1 many many sentient beings will benefit" is not really a hard-consequentialist argument - we don't prioritize X because it is at time T. But now consider: "if future sentient beings no longer care about Y, then doing X now will benefit nobody; deciding on not-X, however, saves resources we can then use towards whatever we do care about." Now totalize that, and add in a modesty claim: as the people at the start of history, we (maybe) have no control over what moral concerns future beings will have at all, and if we do, then we can (maybe) choose to craft a future where no moral concerns are detected by any sentient beings at all.

Is that good? I have no idea, but it's a lot more fun to try to puzzle through.


Also, there's the Achilles' heel of utilitarianism in general to deal with: "Even the very wise cannot see all ends."

A cynic might say "Even the very wise cannot see any ends."


Then the cynic jumped out of a window.

(Clearly, there is some ability to see ends.)


... and landed on some tracks just seconds ahead of a runaway trolley, derailing it before it could hit a busload of nuns.


How many kidneys have you donated?


My brother has a kidney ailment, so the expected value of me donating to a stranger is less than the expected value of me donating to my brother. This is because my brother has the potential for children. If he has children, they may become doctors. If they become doctors, they will potentially save a huge number of people. Of course the risk of them not becoming doctors is high. So I should investigate who around me is a doctor that needs my kidney. But then I need to assess if their doctoring is based upon sound methodology, and the correlation of their GPA vs. their schooljg mathcex the ideal numberr of utils to balances dfcsslmksn;kzknfvknzlgn;lz;lnzglzll


As the article said, if you donate, you can get your brother prioritized for a donation later.


Your point being?


This is about as bad and ethically compromised a take as someone who hates crypto-bros immediately ceasing all contact with anyone they identify as pro-cryptocurrency (say, anyone who believes it superior to the current financial system) and blackballing them from all business interests, justifying it as protecting their wealth because they know for a fact that all those damn crypto-bros are utter pieces of shit.

I have a good friend who is a communist, with whom I strongly disagree. I believe quite bad things about communism as a philosophy and about what it inevitably leads to. As of yet I have not excluded all communists from my life or business interests on the belief that their philosophy makes them all bad people. I can say the same about Christians or Jehovah's Witnesses or Randian libertarians or Kantians.

I strongly disagree with all their ethical worldviews and metaphysics, but I don't automatically believe people in those categories are evil, nor that they would stab me in the back if they could, even if I believe the philosophies in question can lead to more backstabbing behavior.

It's simply discrimination, and it would morally compromise me, to hate any group of people for what they believe. It wouldn't surprise me if it were also very illegal, especially if you decline to hire someone who identifies as EA.


I would be moderately surprised if association with Effective Altruism was very much more predictive of sociopathy than association with Monero.

Which isn't to say that either is so unforgiveably horrible, just that it's very funny to see someone try to make this case without a shred of self-awareness.


[flagged]


Why?


If your spouse does not oppose your decision to donate your kidney to a random person, get a divorce asap.


This is why no firefighters are married: people don't want their spouse to take any risks to save others' lives.


Comparing the two makes no sense. Your "analogy" is silly beyond belief.


Yeah but, what if his wife eventually needs a kidney transplant?

"Oopsie, I already gave it to a stranger."


He addressed that in the article.


A man has so much cognitive dissonance that he has to donate a kidney to realign his internal narrative and restore resonance with the castles.


Wow. Bonkers. Also, prolix.

Evolution has equipped us with two kidneys for presumably good reasons. Don't go giving one away unless you really have to.


This is a pretty odd remark. The effect of living with one kidney isn't some sort of unknown thing we need to reason about from first principles, we know pretty well what life is like for kidney donors. And I don't know about 'really have to', but it doesn't seem like 'saving someone's life' is a trivial or stupid reason.


the article struck me as something Hannibal Lecter might have used as a case study in his paper on surgical addiction.


Evolution doesn't imply a cause behind every attribute. It's totally plausible that a single kidney would serve just as well: our bodies have general bilateral symmetry of organs, there is no strong selection pressure against that symmetry producing two kidneys, hence we have two. Most of your DNA is "non-functional". Evolution does not have a plan.


> Evolution has ... for presumably good reasons

That's not how evolution works either in theory or in practice


> Evolution has equipped us with two kidneys for presumably good reasons

As the article pointed out, on average you live a totally fine life with only 1 kidney, so clearly no.

Like, that same argument would be "evolution gave us poor eyesight, so glasses must be bad. Why else would evolution make so many people's vision fail? Blurry eyesight must have an evolutionary advantage", or "evolution made some people white for a reason, so don't use sunscreen; skin-cancer research says to sunscreen up, but we shouldn't second-guess evolution and play god".

Evolution is random mutation with a very coarse-grained utility function. Rejecting evidence in favor of it gives you nothing but nonsense. And fad diets, but mostly nonsense.


Going off on a tangent: the modern myopia epidemic seems to be related to lots of time spent indoors while growing up. People in the past weren't so myopic, and the differences between modern countries also track pretty well with that explanation.

However the science isn't quite settled on that.


evolution also programmed us to yawn, yet doctors can't figure out what purpose or benefit it has


It's human nature to feel good about yourself by doing certain "altruistic" things. Make no mistake: the donor benefits more than the recipient.


You almost make it sound like a bad deal for the recipient.


Speaking as a kidney transplant recipient it is most certainly not a walk in the park. I mean that literally; I cannot walk in the park without taking precautions against covid/influenza and my increased risk of skin cancer...


Sounds like a win-win then.


> So in total, a donation produces about 10-20 extra quality-adjusted life years.

How much would you say the donor benefits?


It depends on the QoL improvement factor you assign to self-satisfaction I suppose? Just multiply that by the expected number of years left in their life.


Damn, moral self-righteousness must be better than fuckin' heroin, shocking more people aren't out here donating kidneys given that.


Yeah, I'd cap that factor at 1% (and it's probably less), which would mean you'd have to live 1,000+ years to get more out of it than the recipient does.
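
Spelling out that arithmetic, with the article's 10-QALY lower bound and the 1% factor assumed above:

    # sanity check on the comparison above
    recipient_gain_qalys = 10        # article's lower bound for the recipient
    donor_factor = 0.01              # assumed QoL boost from self-satisfaction
    years_needed = recipient_gain_qalys / donor_factor
    print(years_needed)              # 1000.0 -- the donor needs 1,000+ years left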



