Hacker News

The problem is making assumptions about why people are experiencing certain emotions, or telling people they are wrong when they say, "I'm actually not angry."


Yes, but this can happen just as easily with human actors as it can with non-human actors.

I suppose the benefit of a human actor is that you can theoretically fine or jail them if they're found to be malicious or sufficiently incompetent.

On the other hand, human actors can explain why they're doing the right thing, even when they are in fact doing the wrong thing. An AI that is incomprehensibly broken can still be determined to be broken, whereas a human actor causing issues can produce very convincing arguments to avoid termination. They can also bring in donuts every Thursday to stay at the bottom of the termination list.

[And to be clear: I don't trust the technology at all. I just don't trust the human system either. A system isn't better because innocent people are oppressed by humans instead of by a computer.]



