
We don't need a fire alarm for AGI. The problem is not AGI. Machines will be motivated to do exactly what we tell them to do. It's called classical and operant conditioning. The problem is not AGI for the same reason that the problem is not knives, nuclear power, dynamite or gunpowder. The problem is us. The problem has always been us.

Those who are running around screaming about the danger of AGI and insisting that the government regulate it before it even exists are just scared that someone else may gain control of it before they do. This is too bad, because anybody smart enough to figure out AGI is much smarter than they are.




Yes, an AI will do exactly what we tell it to do. But the incredible difficulty programmers have writing bug-free code demonstrates that doing exactly what it's told isn't sufficient to guarantee it'll do what we want.
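A minimal, hypothetical Python sketch of that gap between "what it was told" and "what we meant" (the function name and values are just for illustration):

    # Hypothetical illustration: the code below does exactly what it was told,
    # yet not what the author intended.

    def average(scores):
        # Intended: the arithmetic mean of the scores.
        # Told: sum the scores, then use integer (floor) division by the count.
        return sum(scores) // len(scores)

    print(average([1, 2]))  # Intended 1.5, but prints 1: the machine obeyed the spec, not the intent.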

Classical and operant conditioning are psychological concepts that aren't applicable to non-humans.


"Classical and operant conditioning are psychological concepts that aren't applicable to non-humans."

You're kidding?


Sorry, I misspoke haha. They're not applicable to things without brains.





