v1 used a very limited (albeit very easy and already quite impressive) form of transfer learning: take a pretrained network's 1000-dimensional output vectors (1000 because the original network was trained on ImageNet's 1000 classes) for a bunch of images belonging to three sets, and then just use k-NN to predict which set a new image falls into.
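
Roughly, the v1 flow looks like the minimal sketch below. It assumes TensorFlow.js with the @tensorflow-models/mobilenet and @tensorflow-models/knn-classifier packages as stand-ins; the actual project may well have used different libraries, and the three-class setup is just illustrative.

  import * as mobilenet from '@tensorflow-models/mobilenet';
  import * as knnClassifier from '@tensorflow-models/knn-classifier';

  // Classify a new image into one of a few user-defined sets by running
  // k-NN over a frozen pretrained network's output vectors.
  async function classifyBySimilarity(
    labeled: { img: HTMLImageElement; label: string }[],
    newImage: HTMLImageElement
  ): Promise<string> {
    const net = await mobilenet.load();        // pretrained on ImageNet; weights stay frozen
    const classifier = knnClassifier.create();

    // Store one 1000-way output vector per labeled example; no training step at all.
    for (const { img, label } of labeled) {
      classifier.addExample(net.infer(img), label);
    }

    // The nearest stored vectors decide which set the new image falls into.
    const result = await classifier.predictClass(net.infer(newImage), 3); // k = 3
    return result.label;
  }

The whole "model" is just the stored vectors plus a distance comparison, which is why it was so easy to run in the browser.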

v2 actually fine-tunes the weights of a pretrained network. At the time, it was a nice showcase of how fast JS ML libraries were evolving.
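
For comparison, v2-style fine-tuning in TensorFlow.js would look roughly like this sketch. The hosted model URL and the 'conv_pw_13_relu' cut point are borrowed from the common tfjs MobileNet transfer-learning example, so treat them as assumptions rather than what the project actually shipped.

  import * as tf from '@tensorflow/tfjs';

  const MOBILENET_URL =
    'https://storage.googleapis.com/tfjs-models/tfjs/mobilenet_v1_0.25_224/model.json';

  async function buildFinetunableModel(numClasses: number): Promise<tf.LayersModel> {
    const base = await tf.loadLayersModel(MOBILENET_URL);

    // Cut the pretrained network at an intermediate activation and stack a new head on top.
    const bottleneck = base.getLayer('conv_pw_13_relu');        // assumed layer name
    let x = tf.layers.flatten().apply(bottleneck.output) as tf.SymbolicTensor;
    x = tf.layers.dense({ units: 64, activation: 'relu' }).apply(x) as tf.SymbolicTensor;
    const out = tf.layers
      .dense({ units: numClasses, activation: 'softmax' })
      .apply(x) as tf.SymbolicTensor;

    const model = tf.model({ inputs: base.inputs, outputs: out });

    // Unlike v1, the pretrained weights are trainable, so fit() updates them too.
    for (const layer of model.layers) {
      layer.trainable = true;
    }

    model.compile({
      optimizer: tf.train.adam(1e-4),          // small learning rate for fine-tuning
      loss: 'categoricalCrossentropy',
      metrics: ['accuracy'],
    });
    return model;
  }

  // Usage: model.fit(images, oneHotLabels, { epochs: 10, batchSize: 16 })
  // where images is a [N, 224, 224, 3] tensor and the labels are one-hot encoded.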



