In this scenario the person who wants to be paid owns the output of the agent. So it’s closer to a contractor and subcontractor arrangement than employment.
1. They built the agent and it's somehow competitive. If so, they shouldn't just replace their own job with it; they should replace a lot more jobs and get far richer than one salary would make them.
2. They rent the agent. If so, why would the renting company not rent directly to their boss, maybe even at a business premium?
I see no scenario where there's an "agent to do my work while I keep getting a paycheck."
The problem is the organizing principle for our entire global society is competition.
This is the default, the law of the jungle or tribal warfare. But within families or corporations we do have cooperation, or a command structure.
The problem is that this principle inevitably leads to the tragedy of the unmanaged commons. This is why we are overfishing, polluting the Earth, why some people are free-riding and having 7 children with no contraception, etc. Why ecosystems — rainforests, kelp forests, coral reefs, and even insects — are being decimated. Why one third of arable farmland is desertified, just like in the US Dust Bowl. Back then it was a race to the bottom and the US Govt had to step in and pay farmers NOT to plant.
We are racing to an AIpocalypse because what if China does it first?
In case you think the world doesn't have real solutions: there have actually been a few examples of us cooperating to prevent catastrophe.
1. Banning CFCs via the Montreal Protocol, repairing the hole in the ozone layer
2. Nuclear non-proliferation treaty
3. Ban on chemical weapons
4. Ban on viral bioweapons research
So number 2 is what I would hope would happen with huge GPU farms. As a global community we know the supply chains exactly; heck, there is only one company in Europe doing the lithography.
And I would also want a global ban on AGI development, or at least a ban on leaking model weights. Otherwise it is almost exactly like giving everyone the means to make chemical weapons, designer viruses, etc. The probability that NO ONE does anything that gets out of hand will be infinitesimally small. The probability that we will be overrun by tons of destructive bot swarms and robots is practically 100%.
In short — this is the ultimate negative externality. Corporations and countries are in a race to outdo each other in AGI even if they destroy humanity doing it. All because as a species, we are drawn to competition and don't do the work to establish frameworks for cooperation the way we have done on local scales like cities.
PS: meanwhile, having limited tools and not AGI or ASI can be very helpful. Like protein folding or chess playing. But why, why have AGI proliferate?
It's the equivalent of outsourcing your job. People have done this before, to China, to India, etc. There are stories about the people that got caught, e.g. with China because of security concerns, and with India because they got greedy, were overemployed, and failed in their opsec.
This is no different, it's just a different mechanism of outsourcing your job.
And yes, if you can find a way to get AI to do 90% of your job for you, you should totally get 4 more jobs and 5x your earnings for a 50% reduction in hours spent working.
Maybe a few people managed to outsource their own job and sit in the middle for a bit. But that's not the common story, the common story is that your employer cut out the middle man and outsourced all the jobs. The same thing will happen here.
The trick is to register an LLC, and then get your employer to outsource the work to your consulting company. You get laid off, and then continue to work through your company.
Only mild sarcasm, as this is essentially what happens.
A question is which side agents will achieve human-level skill at first. It wouldn’t surprise me if doing the work itself end-to-end (to a market-ready standard) remains in the uncanny valley for quite some time, while “fuzzier” roles like management can be more readily replaced.
It’s like how we all once thought blue collar work would be first, but it turned out that knowledge work is much easier. Right now everyone imagines managers replacing their employees with AI, but we might have the order reversed.
> This begs the question of which side agents will achieve human-level skill at first.
I don't agree; it's perfectly possible, given chasing0entropy's... let's say 'feature request', that either side might gain that skill level first.
> It wouldn’t surprise me if doing the work itself end-to-end (to a market-ready standard) remains in the uncanny valley for quite some time, while “fuzzier” roles like management can be more readily replaced.
Agreed - and for many of us, that's exactly what seems to be happening. My agent is vaguely closer to the role that a good manager has played for me in the past than it is to the role I myself have played - it keeps better TODO lists than I can, that's for sure. :-)
> It’s like how we all once thought blue collar work would be first, but it turned out that knowledge work is much easier. Right now everyone imagines managers replacing their employees with AI, but we might have the order reversed.
Some humans will be rich and they'll buy things. For example, those humans who own AI or fabs. And those humans who serve them (assuming there will be services not replaced by AI, for example prostitution) will also buy things.
If 99.99% of other humans become poor and eventually die, it will certainly change the economy a lot.
Not a lot of difference between an F-35 and a fleet of drones when it comes to it tbh. If F-35s are not enough then I don't see how drones will fare better.
IMO drones are just a way for Elon to get his foot in the door.
> How are businesses going to get money if there are no humans that are able to pay for goods?
By transacting with other businesses. In theory comparative advantage will always ensure that some degree of trade takes place between completely automated enterprises and comparatively inefficient human labor; in practice the utility an AI could derive from these transactions might not be worth it for either party—the AI because the utility is so minimal, and the humans because the transactions cannot sustain their needs. This gets even more fraught if we assume an AGI takes control before cheaply available space flight, because at a certain point having insufficiently productive humans living on any area of sea or land becomes less efficient than replacing the humans with automatons (particularly when you account for the risk of their behaving in unexpected ways).
There is a set of people who own, well, in the past we could say "means of production" but let's not. So, they own the physical capital and AI worker-robots, and this combination produces various goods for human use. So they (the people who own that stuff) trade those goods between each other, since nobody owns the full range of production chains.
The people who used to be hired workers? Eh, they still own their ability to work (which is now completely useless in the market economy) and nothing much more so... well, they can go and sleep under the bridge or go extinct or do whatever else peacefully, as long as they don't try to trespass on the private property, sanctity and inviolability of which is obviously crucial for the societal harmony.
So yeah, the global population would probably shrink down to something in the hundreds of millions or so in the end, and ironically, the economy may very well end up being self-sustainable and environmentally green and all that nice stuff, since it won't have to support the living standards of ~10 billion, although the process of getting there could be quite tumultuous.
That is more or less what I fear. If the top 10 percent already account for half of all consumer spending, and inequality keeps getting worse and worse, that's probably where it will end.
Funny thing. No need for drama. Just give people education and a wage, and a grind, and populations will go down on their own. While we pretend that the value of money still means something.
I didn't read the parent comment as endorsing that outcome, simply predicting that if people chase profits without regard for the well being of their fellow man, that's where we might end up heading. I think the question we have to answer is "how can we prevent that?", because history has shown us that humans are very happy to run roughshod over others to enrich themselves.
It for sure is endorsed by the tech billionaires... Humans greed is just so tiring to me. I am so fucking tired of seeing good people suffer while some tech bros wipe their asses with pure gold.
The AI agents don’t appear to know how & where to be economically productive. That still appears to be a uniquely human domain of expertise.
So the human is there to decide which job is economically productive to take on. The AI is there to execute the day-to-day tasks involved in the job.
It’s symbiotic. The human doesn’t labour unnecessarily. The AI has some avenue of productive output & revenue generating opportunity for OpenAI/Anthropic/whoever.
It’s a fundamental principle of modern economics that humans are assumed to act in their own economic interests - for which they need to know how and where to be economically productive.
humans are assumed to act, and some activities may generate consequences, to which a human may react somehow.
certainly there is a "survivor bias" but the rationality, long-term viability, and "economic benefit" of those activities and reactions is an open question. any judgement of "economic benefit" is arbitrary and often made in aggregate after the fact.
if humans knew how to create "economic benefit" in some fundamental and true way, game theory and most regulatory infrastructure would not exist, and i'm saying that as an anarchist.
You are welcome to try to cut them out and start your own business. But I suspect you might find it a bit harder than your employer signing up for a SaaS AI agent. Actually wait, isn't that what this website is? Does it work?
This is backwards. Those people got into the positions they have by having money to spend, not because someone wanted to pay them to do something. (Or they had a way to have control over spending someone else's money.)
Do people on Hacker News actually believe this? Each one of the four people named built a product I happily pay for! Then they used investment and profits to hire people to build more products and better products.
There's a lot of scammers in the world, but OpenAI, Tesla, Amazon, and Microsoft have mostly made my life better. It's not about having money, look at all the startups that have raised billions and gone kaput. Vs say Amazon who raised just $9M before their $54M IPO and is still around today bringing tons of stuff to my door.
The most successful scammers will provide you with something of value and then act to swindle you and many others out of multiple times the amount of "value" they're generating. With Musk and his friends that seems to be the pattern.
Musk sells several things. Electric cars for $40k-$100k. Satellite internet for $40-$120 per month. X/Grok premium for $8/mo. And space launch services for about $2,500 per kg. Which one(s) of these are the scam? Prices seem decent to me, but if you tell me where I can get cheaper and better I'm open to it.
The "scam" part of Tesla has been well documented, from their failure to deliver reliable full self-driving to the Cybertruck's low-quality manufacturing; there is a lot of information out there about it.
comma.ai owns a lot of cars, including a Tesla, so I have tried most cars in the price range. Tesla is certainly no more of a scam than the other cars, and compared to say, the Chevy Bolt, it's a lot better. Can you suggest a better car for the value? Is there another car I can buy with better full self driving?
They are a bridge between those with money and those with skill. Plus they can aggregate information and act as a repository of knowledge and decision maker for their teams.
These are valuable skills, though perhaps nowhere near as valuable as they end up being in a free market.
The free market is an analyzable simplification of the real market, however I think the assumptions hold in this case.
If a CEO delivers a certain advantage (a profit multiplier), it's rational that a bidding war will ensue for that CEO until they are paid the entire apparent advantage of their presence for the company. A similar effect happens for salespeople.
The key difference between free and real markets in this case is information and distortions of lobbying. That plus legal restrictions on the company. The CEO is incentivized to find ways around these issues to maximize their own pay.
> Just let me subscribe to an agent to do my work while I keep getting a paycheck.
I've already done this. It's just a Teams bot that responds to messages with:
"Yeah that looks okay, but it should probably be a database rather than an Excel spreadsheet. Have you run it past the dev team? If you need anything else just raise a ticket and get Helpdesk to tag me in it"
"I'm pretty sure you'll be fine with that, but check with {{ senior_manager }} first, and if you need further support just raise a ticket and Helpdesk will pass it over"
"Yes, quite so, and indeed if you refer to my previous email from about six months ago you'll see I mentioned that at the time"
"Okay, you should be good to go. Just remember, we have Change Management Process for a reason so the next time try to raise a CR so one of us can review it, before anyone touches anything"
and then
"If you've any further questions please stick them in an email and I'll look at it as a priority.
Mòran taing,
EB."
(notice that I don't say how high a priority?)
No AI needed. Just good old-fashioned scripting, and organic stupidity.
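The joke above can be sketched in a few lines. Everything here is a hypothetical illustration: the reply list, the `reply_to` function, and the placeholder names are all made up, and the actual Teams webhook wiring is omitted.

```python
import random

# Canned replies for a hypothetical "answer as me" chat bot. The message
# content is ignored entirely; any incoming message gets a generic
# middle-management deflection plus a sign-off.
CANNED_REPLIES = [
    "Yeah that looks okay, but have you run it past the dev team? "
    "If you need anything else just raise a ticket and tag me in it.",
    "I'm pretty sure you'll be fine with that, but check with your "
    "senior manager first.",
    "If you refer to my previous email from about six months ago "
    "you'll see I mentioned that at the time.",
    "Remember, we have a Change Management Process for a reason, so "
    "raise a CR and one of us can review it before anyone touches anything.",
]

SIGN_OFF = (
    "\n\nIf you've any further questions please stick them in an email "
    "and I'll look at it as a priority.\n\nM\u00f2ran taing,\nEB."
)


def reply_to(message: str) -> str:
    """Pick a canned answer at random; the question itself is irrelevant."""
    return random.choice(CANNED_REPLIES) + SIGN_OFF
```

Note that `reply_to` never inspects its argument, which is rather the point.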
Reminded me of an episode of the IT Crowd where they put a recording of "Have you tried turning it off and on again?" as the answering machine message for an IT department.
What would you actually do if you got that? I like watching movies and playing games, but that lifestyle quickly leads to depression. I like travelling too, but imagine if everyone could do it all the time. There's only so many good places.
I would use the AI to build a robot that could build copies of itself and then once there are a sufficient number of robots I'd use them to build more good places to go to.
What happens when "your" AI wants to build something where someone else's AI wants to build it? I suppose you are thinking of something like Banks's Culture? The trouble is for that we're probably going to need real AI, not just LLMs, and we have no reason to believe a real AI would actually keep us as pets like in the Culture. We have no idea what it would want to do...
Isn't this kind of the same as an AI copilot, just with higher autonomy?
I think the limiting factor is that the AI still isn't good enough to be fully autonomous, so it needs your input. That's why it's still in copilot form
Just let me subscribe to an agent to do my work while I keep getting a paycheck.