
No. The outcome is the goal.

It's rapidly becoming apparent that some algorithms (e.g., deep-learning models) work much better at scale than on small amounts of data. It doesn't make sense to discount these better algorithms just because they don't compete with other models on smaller datasets.

It is also apparent that these models require significantly more computing power than other models to perform well. That doesn't make them less worthy, just a cost people must consider.
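The claim can be illustrated with a minimal, stdlib-only sketch (all names here are illustrative, not from the thread): on a task with a non-linear structure, a simple linear rule plateaus at chance no matter how much data it sees, while a flexible, memorization-heavy model keeps improving as the training set grows, at the cost of more compute per prediction.

```python
import random

def make_data(n, rng):
    # Synthetic XOR-style task: label is 1 when x and y share a sign.
    # No single linear threshold can beat chance here, but a flexible
    # model (1-nearest-neighbour) keeps improving as data grows.
    pts = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n)]
    return [((x, y), 1 if x * y > 0 else 0) for x, y in pts]

def linear_acc(test):
    # A linear rule ("predict 1 when x > 0"): stuck near 50% on XOR,
    # and more data cannot fix that.
    return sum((1 if x > 0 else 0) == lab for (x, _), lab in test) / len(test)

def knn_acc(train, test):
    # 1-nearest-neighbour: pure memorization. Accuracy scales with the
    # training set, but each prediction scans all of it (the compute cost).
    def predict(p):
        nearest = min(train,
                      key=lambda t: (t[0][0] - p[0]) ** 2 + (t[0][1] - p[1]) ** 2)
        return nearest[1]
    return sum(predict(p) == lab for p, lab in test) / len(test)

rng = random.Random(0)
test = make_data(500, rng)
small, large = make_data(20, rng), make_data(2000, rng)
print(f"linear:       {linear_acc(test):.2f}")
print(f"1-NN, n=20:   {knn_acc(small, test):.2f}")
print(f"1-NN, n=2000: {knn_acc(large, test):.2f}")
```

The flexible model only pulls ahead once enough data is available, which is the commenter's point: judging it on the small-data regime misses where it shines.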

It turns out that intelligence is hard.



That's another good point. Computing power that is now cheap and widely available has changed our ability to even try these methods.



