
It's impressive that you were able to teach it so much, and that it learned from its mistakes when they were pointed out.

I wonder what the reason is for this missing "last mile" of understanding. Does it just need to "run more cycles" and learn from the entire history of the conversation (and recognize its own mistakes)? Or is there an insurmountable technical limitation in how it works? I suppose I'm asking how to make it smarter: whether it's a matter of adjusting parameters, giving it more training data, or something more fundamental in the way it learns.


