We have a very good idea of the mammalian eye-V1 pathway. It's not a straightforward path, and describing it in an HN comment is not going to be useful to anyone; explaining the complex but well-known pathway is a job best left to others. We are well aware of what 'data' processing the mammalian brain does at nearly every synapse, and it is a LOT. Right from the cones and rods, data is being heavily processed (synapses are VERY noisy, as it turns out). So while more research always turns up more questions, we are fairly confident about the eye-V1 pathway (contrast that with other organs, like the vestibular system, where we're still fairly in the dark).
That said, wait, what? We can't do the back-propagation calculus in a NN? Since when, and for how long has that been true? I thought it was fairly straightforward to read off the weights of the connections between the nodes in your network. It's just a tensor you grep for, right?
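For what it's worth, here's a minimal numpy sketch (a toy two-layer sigmoid net on made-up data, with an arbitrary learning rate of 0.5) illustrating the point: the weights really are just plain arrays you can print at any time, and backprop is explicit chain-rule calculus over those same arrays.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: an XOR-like mapping (illustrative only)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights: an ordinary array
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: explicit chain-rule gradients, nothing hidden
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

# The weights are inspectable at any point -- just arrays in memory.
print(W1.shape, W2.shape)
```

The catch, of course, is that being able to *read* every weight isn't the same as being able to *interpret* what the network computes with them, which is presumably the gap the original comment was pointing at.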