I agree the latter part is a risk to consider, but I really think getting an AI to replace human jobs on a vast scale will take much more than just training a bit more.
You need to train on a fundamentally different task, which is to be good at the adversarial game of pursuing one's needs and desires in a social environment.
And that doesn't yet take into account that the interface to our lives is largely physical: we need bodies.
I'm seeing us on track to AGI in the sense of building a universal question-answering machine, a system that will be able to answer any unambiguously stated question if given enough time and energy.
Stating questions unambiguously gets difficult fast even where it's possible (and often it isn't possible at all), and getting those answers is just a small part of being a successful human.
PS: Needs and desires are totally orthogonal to AI/AGI. Every animal has them, but many animals don't have high intelligence. Needs and desires are a consequence of our evolutionary history, not our intelligence. AGI does not need to mean an artificial human. Whether or not to pursue that research program is up to us; it's not inevitable.
To be clear, I'm not arguing humans will stop being involved in software engineering completely. What I fear is that the pool of employable humans (as code reviewers, prompt engineers, and high-level "solution architects") will shrink, because fewer will be needed, and that this will cause ripples in our industry and affect employment.
We know this isn't far-fetched. We have strong evidence to suspect that, during the big layoffs of a couple of years ago, FAANG companies and startups colluded to lower engineer salaries across the board, and that their excuse ("the economy is shrinking") was flimsy at best. Now AI presents them with another powerful tool to push salaries down even further, with a side dish of shrinking the cost center that is programmers and engineers.
Honestly, I wasn't even talking about jobs with that. I worry about an intelligent IOT controlled by authoritarian governments or corporate interests. Our phones have already turned society into a panopticon, and that can only get worse when AGI lands.
But yes, the job thing is concerning as well. AI won't scrub a toilet, but it will cheaply and inexhaustibly do every job that humans find meaningful today. It seems that we're heading inexorably towards dystopia.
> AI won't scrub a toilet, but it will cheaply and inexhaustibly do every job that humans find meaningful today
That's the part I really don't believe. I'm open to being wrong about this; the risk is probably large enough to warrant considering it even if the probability of it happening is low. But I do think it's quite low.
We don't actually have to build artificial humans. It's very difficult and very far away. It's a research program that is related to but not identical to the research program leading to tools that have intelligence as a feature.
We should be, and in fact we are, building tools. I'm convinced that the mental model many people here and elsewhere are applying is essentially "AGI = artificial human", simply because the human is the only thing we know of that appears to have general intelligence.
But that mental model is flawed. We'll be putting intelligence in all sorts of places that are not similar to a human at all, without those devices competing with us at being human.
To be clear, I'm much more concerned about the rise of techno-authoritarianism than about employment.
And further ahead, where I said your original take might not age well: I'm also not worried about AI making humanoid bodies. I'd be worried about a future where mines, factories, and logistics are fully automated: an AI for whom we've constructed a body which is effectively the entire planet.
And nobody needs to set out to build that. We just need to build tools. And then, one day, an AGI writes a virus and hacks the all-too-networked and all-too-insecure planet.
> I'm also not worried about AI making humanoid bodies. I'd be worried about a future where mines, factories, and logistics are fully automated: an AI for whom we've constructed a body which is effectively the entire planet.
I know scifi is not authoritative, and is no more than human fears made into fiction, but have you read Philip K. Dick's short story "Autofac"?
It's exactly what you describe. The AI he describes isn't evil, nor does it seek our extinction. It actually wants our well-being! It's just that it has taken over all of the planet's resources and insists on producing and making everything for us, so that humans have nothing left to do. And they cannot break the cycle, because the AI is programmed to transition power back to humans only "when they can replicate Autofac output", which of course they cannot, because all the raw resources are hoarded by the AI, which is vastly more efficient!
I think that science fiction plays an important role in discourse. Science fiction authors dedicate years to deeply contemplating the potential future consequences of technology and packaging them into compelling stories. This gives us a shorthand for talking about positive outcomes we want to see, and negative outcomes that we want to avoid. People who argue against scifi with a dismissal that "it's just fiction" aren't participating in good faith.
On the other hand, it's important not to pay too close attention to the details of scifi. I'm writing a novel myself, and I'm definitely making decisions in support of a narrative arc. Having written the comment above... that planetary factory may very well become the third faction I need for a proper space opera. I'll have to avoid that PKD story for the moment; I don't want the influence.
Though to be clear, in this case, that potentiality arose from an examination of technological progress already underway. For example, I'd be very surprised if people aren't already training LLMs on troves of viruses, metasploit, etc. today.
I think we're talking about different time scales - I'm talking about the next two or three decades, essentially the future of our generation specifically. I don't think what you're describing is relevant on that time scale, and possibly you don't either.
I'd add though that I feel like your dystopian scenario probably reduces to a Marxist dystopia where a big monopolist controls everything.
In other words, I'm not sure whether that Earth-spanning autonomous system really needs to be an AI or requires the development of AI or fancy new technology in general.
In practice, monopolies like that haven't emerged, thanks to competition and regulation, and there isn't a good reason to assume it would be different with AI either.
In other words, the enemies of that autonomous system would have very fancy tech available to fight it, too.
I'm not fussy about who's in control. Be it global or national; corporate or governmental; communist or fascist. But technology progresses more or less uniformly across the globe and systems are increasingly interconnected. An AGI, or even a poor simulacrum cobbled together from LLMs with internet access, can eventually hack anything that isn't airgapped. Even if it doesn't have "thoughts" or "wants" or "needs" in some philosophical sense, the result can still be an all-consuming paperclip maximizer (but GPUs, not paperclips). And every software tool and every networked automated system we make can be used by such a "mind."
And while I want to agree that we won't see this happen in the next 3 decades, networked automated cars have already been deployed on the streets of several cities, and people are eagerly integrating LLMs into what seems to be any project that needs funding.
It's tempting to speculate about what might happen in the very long run. And unlike the jobs question, I don't really have strong opinions on this one.
But it seems to me like you might not be sufficiently taking into account that this is an adversarial game; i.e., it's not sufficient for something just to replicate, it also needs to out-compete everything else decisively.
It's not clear at all to me why an AI controlled by humans, to the benefit of humans, would be at a disadvantage to an AI working against our benefit.
Agreed on all but one detail. Not to put too fine a point on it, but I do believe that the more emergent concern is AI controlled by a small number of humans, working against the benefit of the rest of humanity.
In the AI age, those who own the problems stand to own the AI benefits. Utility is in the application layer, not the hosting or development of AI models.
I fear that this won't age well. But to shamelessly riff on Marx, those who control the means of computation will control society.