
First, don't count on AI being aligned at all. States that are behind in the AI race will take increasing risks with alignment to catch up. Without a doubt, one of the first use cases of AI will be as a cyberweapon to hack and disrupt critical systems. If you are in a race to achieve that, alignment will be very narrow to begin with.

Regarding pets vs. humans: the main difference is really that humans are capable of understanding and communicating the long-term consequences of AI and unchecked power, which makes them a threat. It's not a big leap to see where this is heading.



> First, don't count on AI being aligned at all.

I don't. Even in the ideal case: aligned with whom? Even if we knew what we were doing, which we don't, it's all the unsolved problems in ethics, law, governance, economics, and the meaning of the word "good", rolled into one.

> Without a doubt, one of the first use cases of AI will be as a cyberweapon to hack and disrupt critical systems.

AI or AGI? You don't even need an LLM to automate hacking; even the Morris worm performed automated attacks.

> humans are capable of understanding and communicating the long term consequences of AI and unchecked power

The evidence does not support this as a generalisation over all humans: even though I can see many possible ways AI might go wrong, the reason I believe in the danger is that I expect at least one such long-term consequence to be missed.

But also, I'm not sure you got my point about humans being treated like pets: it's not a cause of a bad outcome; it's one of the better outcomes.


It's always nice to see someone else on Hacker News who has pretty much independently derived most of my conclusions on their own terms. I have little to add except nodding in agreement.

Kudos, unless we both turn out to be wrong of course.



