
We live in a golden age. Worldwide poverty is at historic lows. Billions of people don't have to worry about where their next meal is coming from or whether they'll have a roof over their head. Billions of people have access to more knowledge and entertainment options than anyone had 100 years ago.

This is not the time to risk it all.




Staying the course is risking it all. We've built a system of incentives that is asleep at the wheel and heading towards a cliff. If we don't find a different way to coordinate our aggregate behavior--one that acknowledges and avoids existential threats--then this golden age will be a short one.


Maybe. But I'm wary of the argument "we need to lean into the existential threat of AI because of those other existential threats over there that haven't arrived yet but definitely will."

It all depends on what exactly you mean by those other threats, of course. I'm a natural pessimist and I see threats everywhere, but I've also learned that I can overestimate them. I've been worried about nuclear proliferation for the last 40 years, and I'm more worried about it than ever, but we haven't had a nuclear war yet.



