Most of the responses in this thread remind me of why I don't typically go into the comment section of these announcements. It's way too easy to fall into the trap set by the doomsday-predicting armchair experts, who make it sound like we're on the brink of some apocalypse. But anyone attempting to predict the future right now is wasting time at best, or intentionally fear-mongering at worst.
Sure, for all we know, OpenAI might just drop the AGI bomb on us one day. But wasting time worrying about all the "what ifs" doesn't help anyone.
Like you said, there is so much work out there to be done, _even if_ AGI has been achieved. Not to get sidetracked from your original comment, but I've seen AGI repeatedly mentioned in this thread. It's really all just noise until proven otherwise.
Build, adapt, and learn. So much opportunity is out there.
> But wasting time worrying about all the "what ifs" doesn't help anyone.
Worrying about the "what ifs" is all we have as a species. If we don't worry about how to stop global warming, or how we can prevent a nuclear holocaust, these things become far more likely.
If OpenAI drops an AGI bomb on us, then there's a good chance that's it for us. From there it will just be a matter of time before a rogue AGI, or a human working with an AGI, causes mass destruction. This is every bit as dangerous as nuclear weapons, if not more dangerous, yet people seem unable to take the matter as seriously as it needs to be taken.
I fear millions of people will need to die, or tens of millions will need to be made unemployable, before we even begin asking the right questions.
Isn't the alternative worse though? We could try to shut Pandora's box and continue to worsen the situation gradually and never start asking the right questions. Isn't that a recipe for even more hardship overall, just spread out a bit more evenly?
It seems like maybe it's time for the devil we don't know.
We live in a golden age. Worldwide poverty is at historic lows. Billions of people don't have to worry about where their next meal is coming from or whether they'll have a roof over their head. Billions of people have access to more knowledge and entertainment options than anyone had 100 years ago.
Staying the course is risking it all. We've built a system of incentives which is asleep at the wheel and heading towards a cliff. If we don't find a different way to coordinate our aggregate behavior--one that acknowledges and avoids existential threats--then this golden age will be a short one.
Maybe. But I'm wary of the argument "we need to lean into the existential threat of AI because of those other existential threats over there that haven't arrived yet but definitely will".
It all depends on what exactly you mean by those other threats, of course. I'm a natural pessimist and I see threats everywhere, but I've also learned I can overestimate them. I've been worried about nuclear proliferation for the last 40 years, and I'm more worried about it than ever, but we still haven't had a nuclear war.