
PGMs are interesting in how they represent distributions over the values of their nodes. In neural networks, (for the most part) those nodes are deterministic, so from a PGM perspective the distribution is trivial (up until the final output prediction). Performing exact inference in a neural net with stochastic nodes would be intractable, so the best you can usually do is Monte Carlo estimation with some kind of reparametrization trick to keep your gradients around.
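To make the last point concrete, here's a minimal sketch of the reparametrization trick on a toy objective (the objective f(z) = z² and the Gaussian parameters are my own illustrative choices, not from the comment): instead of sampling z ~ N(mu, sigma²) directly, you sample eps ~ N(0, 1) and write z = mu + sigma·eps, so the sample is a differentiable function of mu and the gradient can flow through it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective: estimate grad_mu E[f(z)] with z ~ N(mu, sigma^2)
# and f(z) = z^2. Analytically E[z^2] = mu^2 + sigma^2, so the
# true gradient w.r.t. mu is 2*mu.
mu, sigma = 1.5, 0.8
n = 100_000

# Reparametrization: z = mu + sigma * eps with eps ~ N(0, 1).
# The randomness (eps) no longer depends on mu, so we can
# differentiate through the sample: d f(z)/d mu = 2*z * dz/dmu = 2*z.
eps = rng.standard_normal(n)
z = mu + sigma * eps
grad_est = np.mean(2 * z)

print(grad_est)  # Monte Carlo estimate, close to 2*mu = 3.0
```

Without the reparametrization, mu sits inside the sampling distribution and the naive gradient of a sampled value is zero; this pathwise estimator is what makes stochastic nodes trainable by backprop in practice (e.g. in VAEs).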

