Hacker News

It's useful only for predictions. Unless GPU support appears (possibly via WebCL), it will be impractical to use for training.


I don't suppose you can expand on the above? How would one go about training with one stack and then running predictions with this?

Wouldn't the libraries/algorithms be slightly different, throwing off the weights?
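Not necessarily: weights are just arrays of numbers, so as long as the browser-side forward pass uses the same math as the training framework, the results match. A minimal sketch, assuming a hypothetical JSON export of a single dense layer (the names and format here are illustrative, not any real library's API):

```typescript
// Hypothetical sketch: weights trained elsewhere (e.g. in Python) are
// exported as plain JSON and replayed with the same forward-pass math.

type DenseLayer = { weights: number[][]; bias: number[] };

function relu(x: number): number {
  return Math.max(0, x);
}

// Forward pass of one fully connected layer: y = relu(W·x + b).
function forward(layer: DenseLayer, input: number[]): number[] {
  return layer.weights.map((row, i) =>
    relu(row.reduce((sum, w, j) => sum + w * input[j], layer.bias[i]))
  );
}

// Toy "exported" model: two inputs, two hidden units.
const layer: DenseLayer = {
  weights: [[1, -1], [0.5, 0.5]],
  bias: [0, -1],
};

console.log(forward(layer, [2, 1])); // [1, 0.5]
```

Numerical drift between float implementations is usually negligible for inference; the real risk is subtle differences in layer semantics (padding, ordering), not the weights themselves.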


Even for predictions, you need the model (what was learned in the training phase). Deep learning models can be multiple gigabytes in size, so running them in the browser wouldn't be practical, except for toy stuff.

Sending the input data to the server, doing the computations there, and getting the answers back will be the only practical approach for remotely serious applications for a while yet.
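The round trip described above can be sketched from the browser side; the endpoint URL and JSON shape are assumptions for illustration, not a real service:

```typescript
// Hedged sketch of server-side inference: the browser posts raw input
// to a hypothetical /api/predict endpoint and receives predictions.
// The model never leaves the server.

async function predict(input: number[]): Promise<number[]> {
  const res = await fetch("https://example.com/api/predict", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input }),
  });
  if (!res.ok) throw new Error(`inference failed: ${res.status}`);
  const { predictions } = await res.json();
  return predictions;
}
```

Latency of the network hop is usually dwarfed by the inference time of any non-trivial model, which is part of why this split remains practical.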


And you would also have to make the model available to the client, which is not reasonable in many commercial applications.



