
I wonder how our brain can train billions of neurons without matrix multiplication. What is the biological process that gets a similar result?


Perhaps addition and multiplication are the tools we use to get a computer to appear to act like a biological system with all its chemical pathways and simultaneous stimuli.

Let's take that one step further. Who taught orbiting bodies how to do differential equations?


That’s a good one! I feel there is a difference, as the cells are actively doing something: they are fighting entropy and creating structure. In a similar way, so are we with our programs; of course, what we do is a crude approximation of what neurons do. In other words, I wonder if there is a process we could use to speed up neural training by looking at how the brain does it.


I feel that the presented algorithm is actually somewhat close to what our brains do in similar tasks.

As mentioned elsethread, the problem is essentially reduced to determining angles between two vectors (one of which is known ahead of time) in high-dimensional space. This is done by projecting the other vector into different, specially chosen subspaces and classifying it in those, then summing up the "scores" from each subspace (rough toy sketch at the end of this comment).

Given the similar task of determining the angle between two lines in 3D space, we tend to do something very similar (I feel): We look at the pair of lines (or vectors) from a few different perspectives (="subspaces") and get a feel for how well-aligned they are in each (="score"). We can then guess pretty well how large the angle between the two is.

Of course, we frequently use other cues too (e.g. perspective and texture when talking about alignment of real-world 3D objects). And even when you are considering plain lines displayed on a screen (in which case these cues don't work), we tend to also take into account the relative movement of the objects as we (continuously) shift our perspective (e.g. drag-to-rotate a scene).

Maybe the latter part could also be a hint towards further algorithmic ideas: perhaps involving (signs of) derivatives (finite differences?) or similar would be a cheap way to improve accuracy. Just spitballing here, though.
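
To make the first paragraph concrete, here is a toy product-quantization-style sketch (my own illustration, not the paper's actual algorithm; all names are made up): each subspace gets a handful of prototypes, the known vector's partial dot products with those prototypes are precomputed, and at query time the other vector is just classified per subspace and the table entries summed.

    import numpy as np

    rng = np.random.default_rng(0)
    D, n_sub, n_protos = 64, 8, 16            # dims, subspaces, prototypes per subspace
    sub = D // n_sub

    w = rng.normal(size=D)                    # vector known ahead of time
    train = rng.normal(size=(1000, D))        # data used to pick prototypes

    # prototypes per subspace (here just random training rows, for simplicity)
    protos = np.stack([train[rng.choice(len(train), n_protos), j*sub:(j+1)*sub]
                       for j in range(n_sub)])               # (n_sub, n_protos, sub)

    # precompute each prototype's partial dot product with the known vector
    tables = np.einsum('jps,js->jp', protos, w.reshape(n_sub, sub))

    def approx_dot(x):
        # classify x in each subspace (nearest prototype), then sum the "scores"
        total = 0.0
        for j in range(n_sub):
            xj = x[j*sub:(j+1)*sub]
            k = np.argmin(((protos[j] - xj) ** 2).sum(axis=1))
            total += tables[j, k]             # table lookup instead of a multiply
        return total

    x = rng.normal(size=D)
    print(approx_dot(x), "vs exact", x @ w)

With random prototypes the estimate is crude; the only point is that the query-time work is classification plus additions, and the angle follows from the (approximate) dot product and the norms.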


The "original" and easy-to-grasp idea behind neural learning is reinforcement: every time a good result is obtained, the connections that contributed get strengthened (reinforced). Bad results lead to weakening. Back-prop is a specific implementation, usually expressed in terms of matrix operations, but you can describe it at the level of single connections as well. Implementing it that way isn't efficient, though, hence the matrix operations.
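
As an illustration of the "no matrices needed" point (my own toy example, not from the article): a single sigmoid neuron trained connection by connection with plain loops, each weight strengthened or weakened according to how much it contributed to the error.

    import math, random

    random.seed(0)
    n_in = 4
    w = [random.uniform(-0.1, 0.1) for _ in range(n_in)]
    b, lr = 0.0, 0.5

    def forward(x):
        z = b + sum(wi * xi for wi, xi in zip(w, x))
        return 1.0 / (1.0 + math.exp(-z))      # sigmoid activation

    # learn OR of the first two inputs, one connection at a time
    for _ in range(2000):
        x = [random.randint(0, 1) for _ in range(n_in)]
        target = float(x[0] or x[1])
        y = forward(x)
        delta = (y - target) * y * (1 - y)     # how wrong, scaled by sensitivity
        for i in range(n_in):                  # strengthen/weaken each connection
            w[i] -= lr * delta * x[i]
        b -= lr * delta

    print([round(wi, 2) for wi in w])          # the first two weights should dominate

Same math as the matrix form of back-prop, just written per connection, and correspondingly slow.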


btw neurons are not the only units of computation in the human body.

allosteric activation in enzyme binding sites acts almost like a transistor. inside every cell there are various computations occurring



