But usually there’s a one-way flow of intent from the human to the tool. With a lot of AI the feedback loop gets closed, and people are using it to help them make decisions, which can lead them far from the good outcome they were seeking.

You can already see this on today’s internet. I’m sure the pizzagate people genuinely believed they were doing a good thing.

This isn’t the same as an amoral tool like a knife, where a human decides between cutting vegetables or stabbing people.




> With a lot of AI the feedback loop gets closed, and people are using it to help them make decisions, which can lead them far from the good outcome they were seeking.

The answer to this is simple: don't use a tool you don't understand. You can't fix this problem by nerfing the tool. You have to fix it by holding humans responsible for how they use tools, so they have an incentive to use them properly, and to not use them if they can't meet that requirement.



