That’s a good one!
I feel there is a difference, as the cells are actively doing something: they are fighting entropy and creating structure.
In a similar way, so are we with our programs; of course, what we do is only a crude approximation of what neuron cells do. In other words, I wonder if there is a process we could use to speed up neural training by looking at how the brain does it.
I feel that the presented algorithm is actually somewhat close to what our brains do in similar tasks.
As mentioned elsethread, the problem essentially reduces to determining the angle between two vectors (one of which is known ahead of time) in high-dimensional space. This is done by projecting the other vector into several specially chosen subspaces, classifying it in each, and then summing up the per-subspace "scores".
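Here's a minimal NumPy sketch of that idea. Everything concrete in it is my own assumption for illustration (random orthonormal bases standing in for the "specially chosen" subspaces, cosine similarity as the per-subspace score, a weighted average as the summing step), not the actual algorithm:

    import numpy as np

    rng = np.random.default_rng(0)

    def estimate_angle(known, query, n_subspaces=16, dim=4):
        # Score the alignment of `known` and `query` in several randomly
        # chosen `dim`-dimensional subspaces, then combine the scores
        # into a single angle estimate.
        d = known.shape[0]
        scores, weights = [], []
        for _ in range(n_subspaces):
            # Orthonormal basis for a random subspace (QR of a Gaussian matrix).
            basis, _ = np.linalg.qr(rng.standard_normal((d, dim)))
            k = basis.T @ known    # projection of the known vector
            q = basis.T @ query    # projection of the other vector
            w = np.linalg.norm(k) * np.linalg.norm(q)
            if w > 0:
                scores.append((k @ q) / w)   # cosine "score" inside this subspace
                weights.append(w)            # trust subspaces that capture more of the vectors
        cos_est = np.average(scores, weights=weights)
        return np.degrees(np.arccos(np.clip(cos_est, -1.0, 1.0)))

    a, b = rng.standard_normal(256), rng.standard_normal(256)
    true = np.degrees(np.arccos(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
    print(f"true {true:.1f} deg vs estimated {estimate_angle(a, b):.1f} deg")

Weighting by how much of each vector a subspace captures is just one plausible combining rule; presumably the real scheme picks its subspaces so that a cheap per-subspace classification is already enough.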
Given the analogous task of determining the angle between two lines in 3D space, we tend (I feel) to do something very similar: we look at the pair of lines (or vectors) from a few different perspectives (="subspaces") and get a feel for how well-aligned they are in each (="score"). From that we can guess pretty well how large the angle between the two is.
Of course, we frequently use other cues too (e.g. perspective and texture when judging the alignment of real-world 3D objects). And even for plain lines displayed on a screen (where those cues don't work), we tend to also take into account the relative movement of the objects as we continuously shift our perspective (e.g. drag-to-rotate a scene).
Maybe that latter part could also hint at further algorithmic ideas: somehow involving (signs of) derivatives (finite differences?) or similar might be a cheap way to improve accuracy. Just spitballing here, though.
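To make the spitballing concrete, here's one entirely hypothetical way to fold that in, continuing the sketch above (reusing its `np` and `rng`): nudge a subspace slightly, re-score, and keep the sign of the finite difference as an extra feature, a crude analog of dragging the view and watching how the alignment changes:

    def score_with_drift(known, query, basis, delta=1e-2):
        # Cosine score in the subspace spanned by `basis`, plus the sign of
        # how that score changes under a tiny random rotation of the
        # subspace -- the "drag-to-rotate" cue as a finite difference.
        def cos_in(b):
            k, q = b.T @ known, b.T @ query
            return (k @ q) / (np.linalg.norm(k) * np.linalg.norm(q) + 1e-12)

        s0 = cos_in(basis)
        # Perturb the basis and re-orthonormalize: a small "camera drag".
        nudged, _ = np.linalg.qr(basis + delta * rng.standard_normal(basis.shape))
        return s0, np.sign(cos_in(nudged) - s0)

Whether that drift sign actually carries usable information about the full-dimensional angle (and whether it's cheap enough to be worth it) is exactly the open question.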