Yes, just like "our nuclear bombs are so powerful, they could wipe out civilisation", which led to strict regulation around them and a lack of open-source nuclear bombs.
It will never stop being funny to me that people are straight-facedly drawing a straight line between shitty text completion computer programs and nuclear weapon level existential risk.
There's a certain kind of psyche that finds it utterly impossible to extrapolate trends into the future. It renders them completely incapable of anticipating significant changes regardless of how clear the trends are.
No, no one is afraid of LLMs as they currently exist. The fear is about what comes next.
> There's a certain kind of psyche that finds it utterly impossible to extrapolate trends into the future.
It is refreshing to see somebody explicitly call out people that disagree with me about AI as having fundamentally inferior psyches. Their inability to picture the same exact future that terrifies me is indicative of a structural flaw.
One day society will suffer at the hands of people that have the hubris to consider reality as observed as a thing separate from what I see in my dreams and thought experiments. I know this is true because I’ve taken great pains to meticulously pre-imagine it happening ahead of time — something that lesser psyches simply cannot do.
"Looks at all the other species 'intelligent' humans have driven extinct" --ha ha ha ha
Why the shit would we not draw a straight line?
If we fail to create digital intelligence then yeah, we can hem and haw in conversations like this online forever, but you neglect that if we succeed, then shit gets real quick. Closing your eyes and ears and saying "This can't actually happen" is a pretty damned dumb approach to future risk assessment of technology, when pretty much every serious take on AI says "well, yeah, this is something that could potentially happen".
Literally the thing people are calling "AI" is a program that, given some words, predicts the next word. I refuse to entertain the absolutely absurd idea that we're approaching a general intelligence. It's ludicrous beyond belief.
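The "given some words, predict the next word" loop described above can be sketched with a toy, hypothetical bigram model. This is not how real LLMs work internally (they use neural networks over subword tokens), but it illustrates the shape of the loop: score candidates for the next word, append one, repeat.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def complete(counts, prompt, max_new=5):
    """Greedily append the most frequent next word, one word at a time."""
    words = prompt.split()
    for _ in range(max_new):
        candidates = counts.get(words[-1])
        if not candidates:
            break  # no known continuation; stop generating
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# Tiny made-up corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ran"
model = train_bigrams(corpus)
print(complete(model, "the", max_new=3))
```

Whether scaling this basic recipe up (with far better models than a bigram counter) yields anything like general intelligence is exactly the point being argued in this thread.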
Then this is your failure, not mine, and not a failure of current technology.
I can, right now, upload an image to an AI, ask "Hey, what do you think the emotional state of the person in this image is?", and get a pretty damned accurate answer. Given other images, I can have the AI describe the scene and make pretty damned accurate assessments of how the image could have come about.
If this is not general intelligence I simply have no guess as to what will be enough in your case.
Which is interesting because after the fall of the Soviet Union, there was rampant fear of where their nukes ended up and if some rogue country could get their hands on them via some black market means.
Then through the 90's, it was the fear of a briefcase-bomb terrorist attack, and how easy it would be for certain countries with the resources to pull off an attack like that in the NYC subway or in the heart of another densely populated city.
Then 9/11 happened and people suddenly realized you don't need a nuke to take out a few thousand innocent people and cripple a nation with fear.
Yes, just like... the exact opposite. One is a bomb, the other a series of mostly open source statistical models. What kind of weed are you guys on that's made you so paranoid about statistics?