Since, as you say, this utilitarian view is rather common, perhaps it would be good to show _why_ it is problematic by presenting a counterargument.

The basic premise underlying GP's statements is that, although the technology is not perfect, we should use it in a way that maximizes the well-being of the largest number of people, even if that comes at the expense of a few.

But therein lies a problem: we cannot really measure well-being (or utility). This becomes obvious if you look at individuals instead of the aggregate: imagine LLM therapy becomes widespread, and a famous high-profile person and your (not famous) daughter both end up among "the few" for whom LLM therapy goes terribly wrong, and both commit suicide. The loss of the famous person will make thousands (perhaps millions) of people a bit sad, while the loss of your daughter will cause you unimaginable pain. Which loss is greater? Can they even be compared? And how many people with a successful LLM therapy are enough to compensate for either one?

If well-being is unmeasurable, these moral calculations are at best inexact and at worst completely meaningless. And if they are truly meaningless, how can they inform your LLM therapy policy decisions?

Now suppose, for the sake of argument, that we set the above aside and there is a way to measure well-being. Would the result be just? Justice is a fuzzy concept, but imagine we reverse the example above: many people lose their lives to bad LLM therapy, but one very famous person in the entertainment industry is saved by it. Suppose further that this famous person's well-being, plus the millions of spectators' improved well-being (through their entertainment), is worth enough to compensate for the people who died.

This means saving one famous funny person justifies the deaths of many. That does not feel just, does it?

There is a vast amount of literature on this topic (criticisms of utilitarianism).



We have no problem doing this in other areas. Airline safety, for example, is analyzed quantitatively by assigning a monetary value to an individual human life and then running the numbers. If some new safety equipment costs more than the value of the lives it would save, it's not used. If a rule would save lives in one way but cost more lives in another, it's not enacted.

A famous example is the rule for lap infants. Requiring proper child seats for infants on airliners would improve safety and save lives. It would also increase cost and hassle for families with infants, pushing some of those families to drive instead of fly. Driving is much more dangerous, so this would cost lives. The FAA studied this and determined that requiring child seats would be a net negative for that reason, which is why it's not mandated.
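To make that trade-off concrete, here is a minimal back-of-envelope sketch in Python. All the numbers (fatality rates, lives saved, diverted miles) are made up for illustration; the actual FAA study used empirical estimates:

    # Back-of-envelope version of the lap-infant trade-off described above.
    # All numbers are hypothetical; the real analysis used empirical
    # estimates of diversion rates and per-mile fatality risk.

    FLYING_DEATHS_PER_BILLION_MILES = 0.07   # hypothetical
    DRIVING_DEATHS_PER_BILLION_MILES = 7.0   # hypothetical; driving is far riskier

    def net_lives_saved(infants_saved_by_seats, miles_diverted_to_driving):
        # Positive: the mandate saves lives overall.
        # Negative: diverted road trips kill more people than the seats save.
        extra_road_deaths = (miles_diverted_to_driving / 1e9) * (
            DRIVING_DEATHS_PER_BILLION_MILES - FLYING_DEATHS_PER_BILLION_MILES)
        return infants_saved_by_seats - extra_road_deaths

    # e.g. seats save ~5 infants/year but push 10B passenger-miles onto roads:
    print(net_lives_saved(5, 10e9))   # -64.3, a net loss of lives

Once each life is given equal weight, only the sign of the result matters: the mandate stands or falls on whether it saves more lives than the diversion to driving costs.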

There's no need to overcomplicate it. Assume each life has equal value and proceed from there.


This is either incredible satire or you’re a lunatic.


I'm just showing the logical consequences of utilitarian thinking, not endorsing it.



