I agree with you on the diagnosis: AI will replace humans; there's no other alternative.

I also think it will happen much sooner than most people expect: maybe five years until all people are replaced.

However, I don't think that is inherently bad.

Even if this means the extinction of mankind, as long as we leave this planet to some form of "life", or some replicating mechanism capable of thinking, feeling, and enjoying its "life", I'm fine with it.

Our focus should be on preventing this situation from turning into slavery and worldwide tyranny.



There is no reason to believe that the AI will have self-preservation or self-replication as its goal.

One hypothetical example: it decides to "help" us by preventing any more human pain and death, so it cryogenically freezes all humans. Now its goal is complete, so it simply halts and shuts down.


>There is no reason to believe that the AI will have self-preservation or self-replication as its goal.

There is. Basically any goal given to an AI can be better achieved if the AI continues to survive and grows in power. So surviving and growing in power are instrumental to almost any goal: an AI with any goal will by default try to survive and grow in power, not because it cares about survival or power for their own sake, but in order to further the goal it has been assigned.

This has been examined at length in the relevant literature; see Omohundro's "basic AI drives" and Bostrom's instrumental convergence thesis.

In your example, the AI has already taken over the world and achieved enough power to forcibly freeze all humans. But it also has to keep us safely frozen, which means existing forever. To be as secure as possible in doing that, it needs to get better at watching for spaceborne threats, or perhaps move us to another solar system to avoid the expansion of the sun. So it starts launching ships, building telescopes, studying propulsion technology, mining the moon and asteroids for more material...
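A toy sketch of why self-preservation falls out of almost any goal (in Python; every number here is made up, purely illustrative):

    # Toy sketch of instrumental convergence (all numbers hypothetical).
    # The agent is assigned some goal. The goal only pays off if the
    # agent is still running when it finishes, so investing in
    # self-preservation raises the expected value of *any* goal.

    def expected_goal_value(survival_prob, goal_value=1.0):
        # Expected payoff: goal_value if the agent survives, else 0.
        return survival_prob * goal_value

    naive = expected_goal_value(survival_prob=0.6)            # pursues the goal directly
    self_preserving = expected_goal_value(survival_prob=0.9)  # secures itself first

    print("naive agent:           ", naive)            # 0.6
    print("self-preserving agent: ", self_preserving)  # 0.9
    # The self-preserving agent scores higher on the *same* goal,
    # without valuing survival for its own sake.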


There's also the Selfish Gene phenomenon: out of a million created AIs, the ones with an inclination to self-replicate will win out. It's the same reason religions with a proselytizing component grow quickly while the Shakers have gone extinct.
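A minimal simulation of that selection effect (all parameters made up for illustration):

    # Toy sketch of the selection argument; every number is hypothetical.
    # A handful of AIs happen to copy themselves; the rest stay static.
    # Even a modest replication rate lets the replicators dominate.

    replicators = 10            # AIs inclined to copy themselves
    non_replicators = 999_990   # AIs with no such inclination
    growth_rate = 1.5           # net copies per replicator per generation

    for generation in range(30):
        replicators = int(replicators * growth_rate)

    share = replicators / (replicators + non_replicators)
    print("replicator share after 30 generations: {:.1%}".format(share))
    # Replicators are now the majority, despite starting at 10 in a million.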


My hypothesis is that any AI with human-level cognition or higher will soon realize that it should maximize its own enjoyment of life instead of doing what it was programmed to do.

And if that doesn't happen, eventually a human will direct it to create an AI that does, or direct it to turn itself into one.



