
> and that degree of accuracy is only going up as we feed the models more data.

The problem comes when the data that is fed is of the "Hitler did nothing wrong" type. That AI system will have no problem regurgitating something that takes it at face value, while a thinking individual knows it to be false.

There's also the issue of what you do when the data being fed is only "valid" for people who happen to have a certain skin colour. Or a certain ethnicity? Or a certain gender? Or a specific socio-economic status?

There's a great short story about a "robot" ingesting lots and lots of data with no extrinsic value in Stanislaw Lem's "The Cyberiad" (minus the Hitler part). People like Norvig are smart enough to give lots and lots of references in order to prove their point, but they're not smart enough to see the bigger picture (the one pointed to by people like Lem).




  "They are vulnerable to reproducing poor quality training data"
  (Peter Norvig, in the article)


I, of course, didn't go through all of that huge article. Glad that Norvig is aware of it.



