
Humans can do all these things, and much better than machines, and yet no one has conquered the world.



Humans have very limited bandwidth, knowledge, and speed relative to an AGI in a world of abundant GPUs. Humans are easy to capture relative to software. They cannot make copies of themselves. A small group cannot be in many places at once, and usually not without being seen/known about.

A larger group is bad at coordination without anyone leaking critical info to an outsider. Most people are not deeply malicious and are instinctively repelled by very immoral things. An AGI may not by default have such an instinct.


Humans can reproduce with pretty much no infrastructure, and they can self-repair without infrastructure.

An AGI has no physical manifestation to start with; it needs electricity, is susceptible to various weapons that humans are not, etc.


Are you seriously comparing the cost and speed to reproduce a piece of software with a living, functioning human being? Intelligent software can also hide in an existing infrastructure much more easily than a human can.

Also, we're talking about the world in which the cost of GPUs is dropping and GPUs becoming much more abundant over time.


No, but any hypothetical AGI would be reliant on machinery to survive, and humans are not. Humans could EMP all electronics (heck, the sun could do that for us), blow up all infrastructure, factories, etc. and still survive.

Being able to copy itself quickly while being totally reliant on artificial structures is a massive weakness. Software is just software - it doesn't generate electricity on its own.


That level of coordination among all, even most, human groups is extremely unlikely.

Humanity and civilization depend on many critical infrastructures. We also try to outcompete other groups all the time. In a near future with abundant GPUs, an AGI or ASI could easily threaten or bribe some groups to keep it alive in exchange for powerful inventions/technologies.


> Humanity and civilization depend on many critical infrastructures

As does an AGI. Being the most intelligent, fastest-thinking engine in the world is worth exactly squat when the opposing side can get together 10,000 guys with crowbars and a bad attitude in a hurry and knows where the server is.

Anyone who disagrees with that statement is welcome to explain how infectious particles without a mind, without a metabolism, without even a reproductive system (aka viruses) can pose such a goddamn hard-to-solve problem to a species that has atomic bombs, rocket engines, and knows about quantum physics.

All the doomsday scenarios about AGI rely on it having AGENCY in the REAL WORLD. That agency isn't software, it's hardware, and as such limited by physical laws. And a lot of that agency has to go through humans.


I answered your comment above here: https://news.ycombinator.com/item?id=38377302


So? That's just another assumption about capabilities of an AGI.

> it would make many backups of itself to different networks before starting its scheme

What would make me assume that would work? We have effective countermeasures against small malware programs infiltrating critical systems, so why should I assume that a potentially massive ML model could just copy itself wherever it likes without being noticed and stopped?

Such scenarios are cool in a SciFi movie, but in the real world there are firewalls, there are IDS, there are honeypots, and there are lots and lots and lots of sysadmins who, unlike the AGI, can pull an ethernet cable or flip the breaker switch.

And yes, if push came to shove, and humanity was actually under a massive threat, we CAN shut down everything. It would be a massive problem for everyone involved, it would cause worldwide chaos, massive economic loss, and everyone would have a very bad day. But at the end of the day, we can exist without power or being online. We have agency and can manipulate our environment directly, because we are a physical part of that environment.

An AGI can't and isn't.

> We haven't managed to eliminate most dumb infectious diseases.

You do realise that this is a perfect argument for why humans would win against a rogue AGI?

We haven't managed to wipe out bacteria and viruses that threaten us. We, who carry around in our skulls the most complex structure in the known universe, who developed quantum physics, split the atom, developed theories about the birth of the cosmos, and changed the path of a meteorite, are apparently unable to destroy something, that doesn't even have a brain, or, in the case of viruses, a metabolism.

So forgive me if I don't think a rogue AGI has a good chance against us.


You're implying all of humanity would agree to the sacrifice, or even to the need to eliminate a rogue AGI in the first place.

A 2022 AI can already beat most humans in the game Diplomacy: https://www.science.org/content/article/ai-learns-art-diplom...

Moreover, in the near future when GPUs become abundant, many small groups of people can harbor a copy of an AGI in their basement, where it can plan and re-spawn whenever the situation becomes accommodating again.


> You're implying

Well, this entire discussion is built on assumptions about what would happen in very speculative circumstances for which no precedent exists, so yeah, I am allowed to make as many assumptions of my own as I please ;-)

> A 2022 AI can already beat most humans in the game Diplomacy:

And a 1698 Savery Engine can pump water better than even the strongest human.

> in the near future when GPUs become abundant, many small groups of people can harbor a copy of an AGI in their basement

Interesting. On what data is the emergence of AGI "in the near future" based, if I may ask, given that there is still no definition of the term "intelligence" that doesn't involve pointing at ourselves? When is "near future"? Is it 1 year, 2, 10, 100? How does anyone measure how far away something is, if we have no metric to determine the distance between the existing and the assumed result?

Oh, and of course, that is before we even ask whether an AGI is possible at all, which would be another very interesting question to which there is currently no answer.


That is just not true. Large-scale coordination isn't that unheard of (see WWII, treaties, and cooperation on various issues).

An AGI might not even have magic technologies to offer, and the idea that whoever sides with the AGI has the power to subdue the rest of humanity is bold speculation. Given that humans haven't even subdued viruses, there is no reason to assume that to be true.


It doesn’t need to have magic technology, just access to critical pieces of software it hides in or merges with.

See how quite a few people reacted when Replika was nerfed. Imagine what happens when more important pieces of software are supposed to be turned off to eliminate a rogue AGI. (Quite a few of those who argue against AGI danger will be the first to argue against shutting it down.)

Have we even managed to eliminate all the dumb computer viruses from the world?


I don't think that aligns with history: we recently went through large shutdowns and lockdowns, and we had and have people doing their daily work during war and massive destruction. Shutting things down will not be too difficult if the need arises.


Wars are a terrible analogy because they imply there are multiple sides. Why wouldn't any intelligent being, AGI included, take advantage of that?

Not to mention the fact that in most wars there are spies and double agents. In the near future when GPUs become abundant, many small groups of people can harbor a copy of an AGI in their basement, where it can plan and re-spawn whenever the situation becomes accommodating.


AGIs hiding in basements are not an existential threat. Joking aside, what you describe isn't really different from the present (or past), so it doesn't warrant much concern in relation to AGI. People have followed ideologies and ideas into doom throughout history; it is not clear that AGI changes anything there.

That is, if that type of AGI ever exists in the first place. Maybe real AGI has different desires?


All that is true. But humans have had thousands of years to do such things, and yet: No world dominator.


Because at the end of the day individual humans are within an order of magnitude of intelligence of one another.

Also, humans die. Human systems have been 'weak' without technology. Human thought and travel are slow.

If we take AI out of the equation and just add technology there is significant risk that a world dominator could arise. Either a negative dominator (no humans left to control due to nuclear war) or positive dominator (1984 cameras always watching you). There simply hasn't been enough time/luck for these to shake out yet.

Now, add something that is over an order of magnitude smarter and could copy itself a nearly unlimited number of times, and you are in new and incalculable territory.


> humans are within an order of magnitude of intelligence of one another.

Yes, and maybe that will be different with an AGI. Maybe AGI is physically possible. And maybe that advantage in intelligence will make AGI vastly more powerful than us.

Those are a lot of "maybes" however, and thus all of this remains highly speculative.

Just one example that puts a pretty big dent into many scenarios of all-powerful AGIs:

I think everyone agrees that even the dumbest human still outsmarts bacteria, protists, and viruses by several orders of magnitude. And yet, we haven't been able to rid the world of things like measles, cholera, malaria, or HIV. Even the common cold and influenza are still around.

So, if we, with our big brains that split the atom, went to the moon, and developed calculus, cannot beat tiny infectious agents that don't even have a brain, then I remain kinda sceptical that being a very, very, very smart ML model means an Automatic Win over the human species.


I would actually argue that yes, humans _have_ conquered the world.

As in: we are now at the top of the food chain and decide which habitats and which animals get to live and which don't. Because we are the most intelligent beings on the planet. In our pursuit of that place, we've used other animals of lesser intelligence to our advantage (dogs, horses, ...)

The premise is that an AI with super-human intelligence could use us in the same way. And to be honest, we're really not that hard to manipulate or persuade (money, religion, blackmail, power, ...)


I thought we were considering a worst-case scenario, where some superhuman AGI, which has access to a lot more data than any single human being, can cross-reference everything much more quickly than a network of humans ever could.

It can already write much faster than a human. Imagine what an AGI could do if it didn't need to painstakingly write papers, publish them, read them, meet at conferences, and so on, to make progress.


> I thought we were considering a worst-case scenario

Well, I am not.

Unless someone can show me evidence of an AGI a) being possible, b) being within near future reach and c) being an existential threat.

Most humans are not leaving their house under the assumption that they will get hit by a meteorite either. And that is actually an occurrence that we know for a certainty is physically possible.


> Humans can do all these things, and much better than machines

No they can't. Show me a human that doesn't need to sleep.



