
But you won't be able to train it to the same accuracy.

I'm not sure I agree with this bit in theory. A neural network is a stack of nonlinear functions, and that stack can equally be viewed as a set of learned basis functions. And basis functions are exactly what kernels represent. Trivially, you could then "copy" the weights that an ANN would learn into a kernel and obtain the same accuracy.
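
To make the "copy" concrete, here's a minimal sketch (my construction, not the commenter's; the dataset and network are illustrative assumptions): treat a trained network's penultimate-layer activations as a feature map phi, define the kernel k(x, x') = phi(x) . phi(x'), and train an SVM on it. Since the network's own output layer is linear in phi, a kernel machine over k can match its accuracy:

    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Train an ordinary feed-forward network (ReLU hidden layers).
    net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)

    def phi(A):
        # Forward pass up to the penultimate layer: the learned basis functions.
        H = A
        for W, b in zip(net.coefs_[:-1], net.intercepts_[:-1]):
            H = np.maximum(H @ W + b, 0.0)  # ReLU, matching the MLP's activation
        return H

    # "Copy" the learned basis into a kernel k(x, x') = phi(x) . phi(x')
    # and train an SVM with it.
    svm = SVC(kernel=lambda A, B: phi(A) @ phi(B).T).fit(X_tr, y_tr)

    print("ANN test accuracy:         ", net.score(X_te, y_te))
    print("copied-kernel SVM accuracy:", svm.score(X_te, y_te))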

The reason this doesn't work in practice is that, with SVMs, you tend not to learn kernels from scratch but instead use (possibly a combination of) standard parameterized kernels [1]. The learning step in the SVM adapts this standard kernel to your dataset as much as its parameters allow, but that is sub-optimal compared to learning from scratch a kernel (or the corresponding basis functions) built just for your data. A well-trained ANN gives you the latter.
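
For contrast, this is roughly what the standard-kernel workflow looks like: the kernel family is fixed up front (RBF here), and all the adaptation to the data happens through a few hyperparameters. The grid and dataset below are illustrative assumptions, not anything from the thread:

    from sklearn.datasets import make_moons
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.svm import SVC

    X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # The kernel family is chosen a priori; only its hyperparameters adapt.
    search = GridSearchCV(
        SVC(kernel="rbf"),
        param_grid={"gamma": [0.01, 0.1, 1.0, 10.0], "C": [0.1, 1.0, 10.0]},
        cv=5,
    ).fit(X_tr, y_tr)

    print("best hyperparameters: ", search.best_params_)
    print("RBF-SVM test accuracy:", search.score(X_te, y_te))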

[1] There has been a fair amount of work on learning kernels too, but it's not as mainstream as using standard kernels.
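
One simple flavor of kernel learning, sketched under assumed choices (the base kernels and the mixing grid are mine): fix a small dictionary of standard kernels and learn a convex combination weight by cross-validation:

    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

    def mixed_kernel(alpha):
        # Convex combination of two standard kernels:
        # k = alpha * RBF + (1 - alpha) * polynomial.
        return lambda A, B: (alpha * rbf_kernel(A, B, gamma=1.0)
                             + (1 - alpha) * polynomial_kernel(A, B, degree=3))

    # "Learn" the kernel by picking the best mixing weight via cross-validation.
    scores = [(np.mean(cross_val_score(SVC(kernel=mixed_kernel(a)), X, y, cv=5)), a)
              for a in np.linspace(0.0, 1.0, 11)]
    best_score, best_alpha = max(scores)
    print("best CV accuracy %.3f at alpha=%.1f" % (best_score, best_alpha))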




Trivially, you could then "copy" the weights that an ANN would learn into a kernel and obtain the same accuracy.

Sure.

I'm not sure I agree with this bit in theory.

No one really agrees with it in theory - I'm not aware of a good theoretical explanation of why some deep networks are easier to train. And yet there is a growing body of real, generalizable practical heuristics that work pretty reliably.

This is pretty exciting! There are undiscovered ideas here. But it is unsatisfying from a theoretical standpoint at the moment.



