I personally think that AI is a realistic extinction threat to our whole species within this century, while a full nuclear war never was (and probably never will be). Neither is climate change.

Collapse of our current civilizations? Sure. Extinction? No.

And I honestly see stronger incentives on the road towards us being outcompeted by AI than towards our leaders starting a nuclear war.



Why? There's nothing in the current process of developing AI that would lead to an AI acting against humanity of its own accord. The development process is hyper-optimised to make AIs that do exactly what humans tell them to do. Sure, an LLM can role-play as an evil super-AI out to kill humans. But it can just as easily role-play as one that defends humanity. So that tells us nothing about what will happen.

We could just as well have thought that exploding the first nuclear bomb would ignite the atmosphere and kill all of humanity. There was nothing from physics that indicated it was possible, but some still thought about it. IMO that kind of thinking is pointless. Same with thinking the LHC would create a black hole.

As far as I can tell, the fear that superintelligent AI will kill humans all boils down to: something utterly magical happens, and then somehow a superintelligent evil AI appears.


> Why? There's nothing in the current process of developing AI that would lead to an AI acting against humanity of its own accord.

If we had certainty that our designs were categorically incapable of acting in their own interest then I would agree with you, but we absolutely don't, and I'd argue that we don't even have that certainty for current-generation LLMs.

Long term, we're fundamentally competing with AI for resources.

> We could just as well have thought that exploding the first nuclear bomb would ignite the atmosphere and kill all of humanity. There was nothing from physics that indicated it was possible, but some still thought about it.

This is incorrect. Atmospheric ignition was a genuine concern among nuclear physicists, grounded in physics, but it was dismissed as unlikely after they did the math (see https://en.wikipedia.org/wiki/Effects_of_nuclear_explosions#...).

> As far as I can tell, the fear that superintelligent AI will kill humans all boils down to: something utterly magical happens, and then somehow a superintelligent evil AI appears.

Nothing magical is necessary at all. An AI acting in its own interests and competing "honestly" with humans is already enough. That is exactly how we outcompeted every other animal on the planet, after all.


Total extinction of a dominant species is really hard. Very few post-apocalyptic settings even suggest full extinction; most show at least a few thousand survivors struggling with the new normal. Humans in particular are very adaptable, so thoroughly killing all 8 billion of us would be difficult no matter the scenario. I think only the Sun can do that, and that's assuming we fail to find an exit strategy 5 billion years from now (on that timescale, we're less than a hundredth of a percent of the way into humanity's run).

As such, I'd say "extinction" here is more a colloquial shorthand for "a massive point in history that kills off billions in short order".


Personally I don't believe in a collapse or an extinction, just a slow spiral into more and more enshittification. You'll have to talk to an "ai" doctor because real doctors will treat people with money, you'll have to face an "ai" administration because the real administration will work for people with money, you'll have to be a flesh-and-blood robot taking orders from an "ai" (already the case for Amazon warehouse workers, food delivery people, &c.), and some "ai" will determine whether you qualify for X or Y benefits, X or Y treatment, X or Y job.

Basically everything wrong with today's productivism, but 100 times worse and powered by a shitty ai that's very far from agi.



