
After dabbling in and reading about AI for over a decade, I have to laugh at these articles claiming a connection between OpenAI, AI research, and the risk of superintelligence. We're so far from thinking, human-level machines that we'll probably see superintelligence coming long before it's a threat. And we'll be ready with solutions.

Plus, from what I see, the problem reduces to a form of computer security against a clever, malicious threat. You contain it, control what it gets to learn, and only let it interact with the world through a simplified language or interface that's easy to analyse or monitor for safety. Eliminate the advantages of its superintelligence outside the intended domain of application.

That's not easy by any means, amounting to high-assurance security against a high-end adversary. Yet it's a vastly easier problem than beating a superintelligence in an open-ended way. Eliminate the open-ended part, apply security-engineering knowledge, and win with acceptable effort. I think people are just making this concept way more difficult than it needs to be.
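
To make that concrete, here's a minimal sketch in Python of the kind of narrow, analysable interface I mean. The message schema, the action whitelist, and the validate/mediate helpers are all hypothetical placeholders, not any real containment API; the point is just that every interaction passes through one small choke point that fails closed.

    import json

    ALLOWED_ACTIONS = {"query_database", "report_result"}  # closed set of verbs
    MAX_MESSAGE_BYTES = 4096                                # crude bandwidth cap

    def validate(raw: bytes) -> dict:
        """Reject anything outside the tiny, analysable message language."""
        if len(raw) > MAX_MESSAGE_BYTES:
            raise ValueError("message too large")
        msg = json.loads(raw)                  # must be well-formed JSON
        if not isinstance(msg, dict):          # and a single object,
            raise ValueError("expected an object")
        if set(msg) != {"action", "payload"}:  # with exactly these two fields
            raise ValueError("unexpected fields")
        if msg["action"] not in ALLOWED_ACTIONS:
            raise ValueError("action not whitelisted")
        if not isinstance(msg["payload"], str):
            raise ValueError("payload must be a plain string")
        return msg

    def mediate(raw: bytes, handlers: dict) -> str:
        """Every interaction passes through this one auditable choke point."""
        msg = validate(raw)   # fail closed on anything unexpected
        print("AUDIT:", msg)  # log each message for offline review
        return handlers[msg["action"]](msg["payload"])

The payoff of this design is that the defender only has to verify the mediator, not the model behind it.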

The biggest risk is some morons in stock trading plugging greedy ones into the trading floor with no understanding of the long-term disruption potential of the clever trades they try. We've already seen what damage simple algorithms can do. People are already plugging in NLP learning systems. They'll do it with deep learning, self-aware AI, whatever. Just wait.
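
To illustrate, here's a minimal sketch of the kind of external guardrail that's missing: hard position and rate limits wrapped around any strategy, learned or not. TradingGuard and its submit() method are hypothetical placeholders, not a real broker or exchange API; the design point is that the caps live outside the strategy, so no clever trade can negotiate past them.

    import time

    class TradingGuard:
        """Dumb, external limits a strategy cannot rewrite or talk around."""

        def __init__(self, max_position: int, max_orders_per_min: int):
            self.max_position = max_position
            self.max_orders_per_min = max_orders_per_min
            self.position = 0
            self.order_times = []  # timestamps of recently accepted orders

        def submit(self, side: str, qty: int) -> bool:
            """Enforce position and rate limits before anything reaches the market."""
            now = time.time()
            # Keep only orders from the last minute for the rate check.
            self.order_times = [t for t in self.order_times if now - t < 60]
            if len(self.order_times) >= self.max_orders_per_min:
                return False  # rate limit hit: refuse the order
            signed = qty if side == "buy" else -qty
            if abs(self.position + signed) > self.max_position:
                return False  # position limit hit: refuse the order
            self.position += signed
            self.order_times.append(now)
            return True       # order passes through to the exchange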

> The biggest risk is some morons in stock trading plugging greedy ones into the trading floor with no understanding of the long-term disruption potential of the clever trades they try.

Actually, it's not the lack of understanding; it's the lack of moral responsibility.

We've spent the last few centuries transitioning from a society ruled by strongmen driven by personal aggrandizement to one where people spend the majority of their adult lives as servants to paperclip-maximizing organizations (aka corporations) [1]. Much of what you see in the world today, from the machines that look at you naked at the airport to the drones dropping bombs on the other side of the planet to kill brown people, is the result of trying to maximize some number on a spreadsheet.

When we install real AI devices into these paperclip-maximizing organizations, we'll have the same problem we have today with people, except that the machines will be less incompetent, less inclined to feather their own nests, and more focused on continually rewriting their software with the express goal of impoverishing every human on the planet to maximize a particular number on a particular balance sheet.

[1] https://wiki.lesswrong.com/wiki/Paperclip_maximizer
