
I think this fear is natural whenever some kind of new tech is invented.

But I also think it's a mistake to say that no new form of tech can end the world, just because the world hasn't ended yet.

Lightbulbs, cars, and fast trains are not intelligence. Intelligence is a qualitative difference. GPT isn't going to end the world, but how many years do we have before someone creates something much smarter than humans? Even something only as smart as humans, but that thinks far faster, never tires, and never gets hungry or bored?

We couldn't foresee the positive and negative consequences of light bulbs because we couldn't predict what humans would do with them. But it was never going to be the case that humans would use lightbulbs to end humanity. With AI, the question isn't whether humans will use it to end humanity; it's whether the AI will decide to end humanity itself, a question we've never had to ask of any other technology.

