> ...and repeat, until the model is as tiny as you want.

Cool! So if I repeat long enough I can get any network down to a single neuron (as long as I really want to)? That is awesome!



Not quite. The Lottery Ticket Hypothesis paper showed that models could shrink to around 10% of their original parameter count without loss of accuracy [0]. So, loosely, a million weights instead of ten million. (A sketch of the prune-retrain loop is below.)

[0] https://arxiv.org/abs/1903.01611v1
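

For anyone curious what "prune, retrain, repeat" actually looks like, here's a minimal sketch of iterative magnitude pruning in PyTorch using torch.nn.utils.prune. The model, the layer sizes, and the empty train() stub are all placeholders, and it omits the Lottery Ticket paper's extra step of rewinding the surviving weights to their original initialization:

  import torch.nn as nn
  import torch.nn.utils.prune as prune

  # Toy stand-in for whatever network is being pruned (sizes are made up).
  model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))

  def train(model):
      """Placeholder: your usual training loop goes here."""
      pass

  # Each round: train, then zero out the 20% of *remaining* weights with
  # the smallest L1 magnitude. After n rounds the surviving fraction is
  # 0.8**n, so ~10.7% is left after 10 rounds -- roughly the regime the
  # paper reports as lossless.
  for _ in range(10):
      train(model)
      for module in model.modules():
          if isinstance(module, nn.Linear):
              prune.l1_unstructured(module, name="weight", amount=0.2)

  # Fraction of weights still alive in the first layer:
  mask = dict(model[0].named_buffers())["weight_mask"]
  print(mask.mean().item())  # ~0.107 after 10 rounds

The point of the exponent is the answer to the parent's joke: each pass prunes a fraction of what's left, so you approach zero but the paper only claims accuracy holds down to the ~10-20% range, not down to one neuron.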


Read the original papers on the perceptron for similar claims.


I mean, yeah... but the performance will degrade accordingly.



