So your suggestion is that we ignore the existential risk until after it occurs?

> literally no A->B->C line drawn to it

Spend a couple of hours educating yourself about the issue, then please make a convincing counterargument:

https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...

https://astralcodexten.substack.com/p/why-i-am-not-as-much-o...

https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...

https://www.youtube.com/@RobertMilesAI/videos

Posting links to Yudkowsky is just not going to convince people who don't already see it his way. He doesn't even try to be persuasive.
