Hacker News

Newbie question: I've heard that PGMs are a superset of neural networks. In the PGM materials that I've read, the example networks have topologies made of nodes that are manually chosen and represent concepts (smart student, good grades, difficult subject, etc.). Whereas a neural network example is usually a huge set of nodes that end up finding their meaning on their own. I also vaguely recall a tutorial in which you can highlight the nodes that contributed to a classification - the only thing is that they don't have meaning for a human. Then when the article states:

> restrict the way nodes in a neural network consider things to ‘concepts’ like colour and shapes and textures.

Aren't these just PGMs? Are they NNs? Is it just a methodological approach to selecting the topology? Don't you lose the automatic search for meaning / structure? I'm a little bit confused...



PGMs are interesting in how they represent distributions over the values of their nodes. In neural networks, (for the most part) those nodes are deterministic, so from a PGM perspective the distribution is trivial (up until the final output prediction). Performing exact inference in a neural net with stochastic nodes would be crazy hard, so the best you can usually do is Monte Carlo estimation with some kind of reparameterization trick to keep your gradients around.
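A minimal sketch of the reparameterization trick mentioned above (all numbers made up for illustration): instead of sampling z ~ Normal(mu, sigma) directly, which blocks gradients, write z = mu + sigma * eps with eps ~ Normal(0, 1), so z is a deterministic function of the parameters and a Monte Carlo objective stays differentiable.

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 1.5, 0.5

# Reparameterized sampling: z = mu + sigma * eps with eps ~ N(0, 1).
# z is now a deterministic function of (mu, sigma), so the gradient
# of any Monte Carlo average over z flows through both parameters.
eps = rng.standard_normal(100_000)
z = mu + sigma * eps

# Example objective: E[z^2] = mu^2 + sigma^2, whose gradient w.r.t. mu
# is 2*mu. The Monte Carlo gradient estimate is mean(2 * z).
grad_mu_mc = np.mean(2 * z)
print(round(grad_mu_mc, 1))  # close to 2 * mu = 3.0
```

With the naive (non-reparameterized) sampler there is no path from mu and sigma to the sample, so this gradient estimate would not exist.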


PGMs and neural networks are usually two completely different things. Neural networks involve graphs, but those graphs usually represent continuous relaxations of circuits -- rather than probability models.
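To make the contrast concrete, here is a tiny Bayesian network (a PGM) using the question's own example variables. The graph Smart -> Grade <- Difficult encodes a joint distribution, and inference means summing out unobserved variables; every probability below is invented for illustration.

```python
# Priors (made-up numbers).
P_smart = {True: 0.3, False: 0.7}
P_difficult = {True: 0.4, False: 0.6}

# Conditional probability table: P(good grade | smart, difficult).
P_good = {
    (True, True): 0.8,
    (True, False): 0.95,
    (False, True): 0.3,
    (False, False): 0.6,
}

def p_smart_given_good():
    """Exact inference by enumeration: P(smart | good grade)."""
    # Numerator: sum over the unobserved 'difficult' variable.
    num = sum(P_smart[True] * P_difficult[d] * P_good[(True, d)]
              for d in (True, False))
    # Denominator: P(good grade), summing over both hidden variables.
    den = sum(P_smart[s] * P_difficult[d] * P_good[(s, d)]
              for s in (True, False) for d in (True, False))
    return num / den

print(round(p_smart_given_good(), 3))  # 0.443: evidence raises P(smart) from 0.3
```

Nothing like this happens in a feedforward net: its nodes compute fixed functions of their inputs, and there is no joint distribution over hidden nodes to marginalize.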



