
Generally, it's a superhuman task. No one has yet come up with an explanation of how to tell apart a picture of a kitten and a picture of a puppy that doesn't rely on our shared low-level visual processing.

That is, the explanation mechanism should take into account what kinds of explanations humans consider understandable. For tasks that can be represented as clean mathematical structures (like a two-bishop ending), that's probably simple enough. For tasks we don't really know how we perform ourselves (vision, hearing, locomotion planning), the explanation mechanism will have to somehow learn what we consider a good explanation, and somehow translate the internal workings of the decision network, and of its own network (we'll want to know why it explains things the way it does, right?), into those terms.
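One existing technique that at least gestures at the translation step (while punting on the hard part of learning what humans find understandable) is a global surrogate model: fit a simple, readable model to the black box's outputs and present that as the explanation. A minimal sketch with scikit-learn, using synthetic data and made-up feature names as a stand-in for the kitten/puppy classifier:

    # Sketch of a global surrogate explanation, not anyone's actual system.
    # Synthetic features stand in for pixels; feature names are invented.
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
    feature_names = [f"feature_{i}" for i in range(X.shape[1])]

    # The opaque decision network we want explained.
    black_box = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
    black_box.fit(X, y)

    # Surrogate: a shallow tree trained on the black box's *predictions*,
    # so it approximates the network's behaviour with human-readable rules.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # How faithful the explanation is to the network, and what it says.
    fidelity = surrogate.score(X, black_box.predict(X))
    print(f"surrogate agrees with the network on {fidelity:.0%} of inputs")
    print(export_text(surrogate, feature_names=feature_names))

The catch, of course, is that "shallow decision tree over features" is itself an assumption about what counts as understandable, which is exactly the part that breaks down for vision-like tasks.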



