It certainly seems incomprehensible now, but that isn't necessarily inherent to neural networks. The striking thing about some of the experiments that probe the intermediate layers of image-recognition networks is that the layers appear to build up progressively higher-order "understanding" as you go deeper, in a way that correlates directly with how a human might explain their own strategy.

You can imagine interpreting the intermediate layers of AlphaGo's evaluation network in the spirit of the second image in [1] (not the best example, I apologise): producing images or other abstract representations of the strategy each layer encodes, much as a human might classify moves or patterns into categories. A rough sketch of that kind of layer inspection follows the link below.

[1] http://cs231n.github.io/convolutional-networks/
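To make this concrete, here is a minimal sketch of pulling activations out of intermediate layers with forward hooks. AlphaGo's networks aren't public, so an off-the-shelf torchvision image classifier stands in; the model choice, layer names, and "example.jpg" are all assumptions for illustration, not anything from the comment above.

```python
# Minimal sketch: capture intermediate-layer activations of a CNN via forward
# hooks, to compare early (low-level) vs. late (high-level) representations.
# Assumes PyTorch + torchvision and a hypothetical local image "example.jpg".
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations = {}

def save_activation(name):
    # Forward hook: stash the layer's output so we can inspect it afterwards.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Hook an early and a late stage to compare edge/texture-like features
# against larger, more abstract structures.
model.layer1.register_forward_hook(save_activation("layer1"))
model.layer4.register_forward_hook(save_activation("layer4"))

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("example.jpg")).unsqueeze(0)  # hypothetical input

with torch.no_grad():
    model(img)

for name, act in activations.items():
    # Shape is (batch, channels, height, width); each channel is a feature map
    # you could render as an image, as in the cs231n visualisations.
    print(name, tuple(act.shape), "first channel means:", act.mean(dim=(0, 2, 3))[:5])
```

The same idea applies to a game-evaluation network: hook its convolutional stages and render the per-channel activations over the board, then look for layers whose maps line up with human concepts like territory or influence.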




There's a name for this phenomenon: "subsystem inscrutability". See the presentation on AlphaGo linked from here: http://nextbigfuture.com/2016/03/what-is-different-about-alp...



