
Right. The point isn't that there's an actionable strategy we know of right now that would give a superhuman AI world domination.

It's that we're not sure there isn't one. People who first encounter the problem tend to quickly come up with reasons the AI would fail, but whenever you examine those reasons more closely, you usually find they're not as bulletproof as we'd like.

It's like playing a novel chess variant against an advanced chess engine while starting with a large pawn advantage. Maybe the advantage is enough to overcome the massive skill gap, but until you've actually played, it's hard to guess how much margin is enough.




We don't need to know or examine counter-strategies for every hypothetical risk we can come up with, nor could we.




