Hacker News

It looks like not only was the data ingested, but the neural network was also trained from scratch to 94% (?) accuracy in that time.



For the original competition, you can check out https://dawn.cs.stanford.edu/benchmark/#cifar10 . Of special note is that these are multi-GPU setups, which are much more complicated and out of reach for most average Joes. They're also a pain to engineer around (though necessary at scale, I suppose).

96% is another candidate target but much harder to hit, and I think it robs you of the 4-6x speedup in experimentation cycle time that 94% allows.
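For context, DAWNBench-style speedruns score wall-clock time to a fixed validation accuracy, not loss after a fixed number of epochs. A minimal sketch of that harness below, using a toy pure-Python logistic regression on synthetic data as a stand-in for the real CIFAR-10 network (the model, data, and hyperparameters here are illustrative assumptions, not the benchmark's actual setup):

```python
import math
import random
import time

def train_to_target(target_acc, max_epochs=200, lr=0.5, seed=0):
    """Train until validation accuracy reaches target_acc; return
    (elapsed_seconds, epochs_used, final_accuracy). This mirrors the
    time-to-accuracy metric; the model here is a toy stand-in."""
    rng = random.Random(seed)
    # Linearly separable synthetic data: label is 1 iff x0 + x1 > 0.
    points = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(400)]
    labeled = [(x, 1.0 if x[0] + x[1] > 0 else 0.0) for x in points]
    train, val = labeled[:300], labeled[300:]

    w = [0.0, 0.0]
    b = 0.0

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def accuracy(split):
        hits = sum(1 for x, y in split
                   if (sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5) == (y == 1.0))
        return hits / len(split)

    start = time.perf_counter()
    for epoch in range(1, max_epochs + 1):
        for x, y in train:  # plain SGD on the log loss
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            g = p - y
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
        acc = accuracy(val)
        if acc >= target_acc:  # stop the clock at the target, as the benchmark does
            return time.perf_counter() - start, epoch, acc
    return time.perf_counter() - start, max_epochs, accuracy(val)

if __name__ == "__main__":
    secs, epochs, acc = train_to_target(0.94)
    print(f"reached {acc:.1%} in {epochs} epoch(s), {secs:.4f}s")
```

The stopping rule is the whole point: lowering the target from 96% to 94% shortens every run, which is where the faster experimentation cycle comes from.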


Yes, sorry, I assumed it would be understood that it did the full training run and maintained quality.

The irony of pointing out missing context while providing insufficient context myself.



