
While this is a cool result, I wonder if the focus on games rather than real-world tasks is a mistake. It was a sign of past AI hype cycles when researchers focused their attention on artificial worlds - SHRDLU in 1970, Deep Blue for chess in the late 1990s. We may look back in retrospect and say that the attention DeepMind got for winning at Go signaled a similar peak. The problem is that it's too hard to measure progress when your results don't have economic importance. It's clearer that the progress in image processing was important because it resulted in self-driving cars.


Firstly, research into Chess AI has had a surprising number of beneficial spin-offs, even if we don't call the results "AI".

Secondly, while it's still a simplification and abstraction, DotA's ruleset is orders of magnitude more similar to operating in the real world than Chess's is.

Thirdly, I'd argue that the adversarial nature of games makes it _easier_ to track progress, and to ensure that the measure of progress is honest.

There are a lot of ways you can define "progress" in self-driving cars. Passengers killed per year in self-driving vs. human-driven cars? Passengers killed per passenger-mile? Average travel time per passenger-mile in a city? etc.

With games, you either win, or you don't.
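
To make that contrast concrete, here's a minimal Python sketch with made-up, purely hypothetical numbers (not real safety data) showing how two equally reasonable metric definitions can point in opposite directions for the same two fleets, whereas a game outcome requires no such choice:

    # Hypothetical numbers for illustration only -- not real statistics.
    fleets = {
        "self-driving": {"deaths_per_year": 10,  "passenger_miles": 1e8},
        "human-driven": {"deaths_per_year": 100, "passenger_miles": 1e10},
    }

    for name, f in fleets.items():
        per_mile = f["deaths_per_year"] / f["passenger_miles"]
        print(f"{name}: {f['deaths_per_year']} deaths/year, "
              f"{per_mile:.1e} deaths per passenger-mile")

    # "Deaths per year" favors the self-driving fleet (10 vs. 100),
    # while "deaths per passenger-mile" favors the human-driven one
    # (1e-7 vs. 1e-8): the ranking flips depending on which metric you pick.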


Another benefit of showing off progress with games is that it allows the everyday reader to follow and understand it as well. It works great from a public awareness standpoint, especially when an AI can beat a human (e.g. Garry Kasparov vs. Deep Blue). Awareness is a good thing in the space.



