
I'm not sure where your analogy is applicable. There are lots of people doing image recognition, voice recognition, and NLP. None of it, on its own, relates much to reinforcement learning or multitask solving. In fact, in the last year I saw only a few papers trying to do nearly all of the above with a single NN.


And is that single NN better at any of these tasks when compared to specialized approaches? Don't get me wrong, I agree that AGI is the end goal; I'm just not convinced that trying to solve the general problem before the simpler problems are solved is the most productive way forward.

To go back to my analogy: physicists don't have a unified theory yet, but they have a good understanding of, say, quantum mechanics or planetary motion. Solving these sub-problems got them closer to the end goal, and brought a lot of value along the way. Why should we tackle AGI any differently?


I would have to find the paper, but generally yes. If I remember correctly, one paper presented a network that was able to recognize giraffes in images without ever having seen a giraffe during training.

It was trained to embed sentences and images into the same latent space.
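
Roughly, the mechanism looks like this (a minimal sketch in the spirit of DeViSE/CLIP-style joint embeddings; the encoders, dimensions, and data below are hypothetical toy stand-ins, not the paper's actual model):

    # Zero-shot classification via a shared image/text latent space (sketch).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    DIM = 64  # shared latent dimension (arbitrary choice for this sketch)

    # Toy encoders: in practice these would be a CNN and a sentence encoder.
    image_encoder = nn.Linear(128, DIM)  # stand-in for an image model
    text_encoder = nn.Linear(32, DIM)    # stand-in for a sentence model

    def embed(encoder, x):
        # L2-normalize so cosine similarity is a plain dot product
        return F.normalize(encoder(x), dim=-1)

    def contrastive_loss(img_emb, txt_emb, temperature=0.07):
        # Matched (image, caption) pairs sit on the diagonal of the
        # similarity matrix; train both directions to prefer the diagonal.
        logits = img_emb @ txt_emb.t() / temperature
        targets = torch.arange(len(logits))
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2

    # One training step on a fake batch of matched (image, caption) features.
    opt = torch.optim.Adam(list(image_encoder.parameters()) +
                           list(text_encoder.parameters()), lr=1e-3)
    img_batch = torch.randn(16, 128)  # fake image features
    txt_batch = torch.randn(16, 32)   # fake caption features
    loss = contrastive_loss(embed(image_encoder, img_batch),
                            embed(text_encoder, txt_batch))
    opt.zero_grad(); loss.backward(); opt.step()

    # Zero-shot step: embed class *descriptions* that were never used as
    # training labels, then classify an image by its nearest text embedding.
    class_texts = torch.randn(3, 32)  # fake features for e.g. "a giraffe", ...
    class_emb = embed(text_encoder, class_texts)
    query_img = embed(image_encoder, torch.randn(1, 128))
    pred = (query_img @ class_emb.t()).argmax(dim=-1)
    print("predicted class index:", pred.item())

The point is that recognition of an unseen class reduces to nearest-neighbor lookup in the shared space: as long as the text encoder can embed a sentence describing a giraffe near where giraffe images land, no giraffe training images are needed.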



