
I assume there are some neuroscientists here. I was trying to imagine how the brain thinks, and how it comes to conclusions and taps into a breadth of information so quickly.

One way I tried to conceive of it was that when the neurons in your brain fire, they compose patterns. These patterns -- the orders and timing of neurons firing -- might be likened to something like a hash table, wherein you represent data as a serialized pattern.

For instance, when I think of a dog, my brain fires some base neurons that are associated with the size of a normal dog, and some of its most basic attributes: 4 legs, fur. These could also be the same regions of the brain that would fire when I think of a cat, or a raccoon.
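To make the analogy concrete, here is a toy Python sketch of the hash-table idea; the "neuron" labels and concepts are made up for illustration:

    # A "thought" is looked up by the pattern of neurons that fired,
    # serialized into a hashable key. Labels are invented for illustration.

    def pattern_key(firing_events):
        """Serialize (neuron_id, spike_time) pairs into a hashable key."""
        return tuple(sorted(firing_events))

    memory = {}

    # "Storing" a concept under the pattern that evoked it.
    dog_pattern = [("size_medium", 1), ("legs_4", 2), ("fur", 3)]
    memory[pattern_key(dog_pattern)] = "dog"

    # Re-firing the same pattern retrieves the same concept.
    print(memory[pattern_key(dog_pattern)])  # -> dog

One important way real brains differ from this sketch: a dict needs an exact key match, whereas neural retrieval is content-addressable, so a partial or noisy pattern can still activate the stored concept.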

Does this in any way represent how the brain actually works?



I'm a computational neuroscientist (and data scientist and software engineer and startup founder).

The general principle in the brain is that individual neurons (there are roughly 25 billion in the cerebral cortex) represent statistically frequent occurrences of experienced phenomena. So if you know a lot about dogs and interact with dogs a lot, you will have a lot of neurons that "represent" different things about dogs (dog categories, dog behaviors, aspects of dog appearance, etc.).

The neurons "represent" perceptual experiences in a collective way called a population code. In one study on humans, a neuron was found that fired when viewing pictures of Jennifer Aniston but only when Brad Pitt was not in the photo. This does not mean the neuron had the sole job of representing Jennifer Aniston, but only that it was "tuned" to this perceptual occurrence. The tunings of neurons are distributed to "cover" in a statistical fashion the range and components of experiences. This particular human subject had perhaps watched many episodes of Friends.

What is still unknown is how the structure of a visual scene is represented. Neurons have been found for edges, contours, shapes, motion, depth, and objects. The mystery is how they all work together to parse and compose the scene. This is hard to determine because it is usually only possible to "listen" in on a few neurons at a time with electrodes, whereas it takes hundreds of millions to represent the visual world.

Regarding how visual mental imagery works, here is a post I made on Quora about that in case it is of interest: https://www.quora.com/How-can-we-see-images-in-our-minds


Can I float a conceptual conversion by you? I'm a pathologist (I look at human brain regularly and interact with neurologists, neurosurgeons, and neuropathologists) and my undergrad is in physics. I like your general representation. Here's my conversion:

Let's start with a large matrix operation, something a deep neural net neuron would do. Let's imagine that matrix is a sheet with colored dots instead of numbers.

We don't care so much about the order of the rows, but the direction of the columns holds some meaning. So we can connect the top of the matrix to the bottom, like a tube. Now the left side isn't exactly a beginning, and the right side isn't exactly the end, but there's this sort of polarity.

These ends of our tube (which was a sheet) are rings, and those rings can be reduced to points, something like Grothendieck's point with a different perspective in every direction (or in this case, many directions, one direction per row). But the left point and the right point are still different.

Now I have a polarized bag.

Like a neuron.

I could be silly and imagine the gates and channels on the neuron surfaces could be mapped to elements in the matrix like those colored dots, but I rather doubt the analogy extends quite that far...

And neurons don't absorb many giant tensors and emit one giant tensor. But they do receive their signals at many different points on the surface. So there is this spatial element to it. And there are many different kinds of signals (electrical, neurochemical, all the way to simple biochemical, like glucose). So there's this complexity that an inbound tensor would represent nicely. And they do in fact emit a single signal sent to many neighbors.

Anyway, that's my personal matrix-to-neuron conversion.

Is that sensical?


It feels like the wrong analogy: the large matrix operations are not really what a deep neural network is doing. They're an implementation detail, an artifact of how it's efficient to represent a large number of neurons and their connections.

The results of those tensor operations (not in their totality, but each particular output number) may have some very rough analogy to the changes in a particular single synapse's "connection strength" caused by various biochemical factors as a result of neuron operation. But the whole tensor operation doesn't map to any biological concept smaller than, say, "this is a bit similar to how a layer of brain cells changes its behavior over weeks/months of learning". A machine-learning iteration that updates all of the NN weights corresponds roughly to the changes that appear in our brains over time as a result of experience and learning (and of "normal operation" as well).
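To make the "implementation detail" point concrete, here is a minimal numpy sketch: the big matrix multiply is just batched bookkeeping for many individual units, and one row of the matrix is the closest thing to a single "neuron" (sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 8))  # 4 units, each with 8 input "synapses"
    x = rng.standard_normal(8)       # one input pattern

    # The whole-tensor view: one matrix-vector product...
    y = np.maximum(W @ x, 0.0)

    # ...is identical to looping over the units one at a time, which is the
    # level where any neuron analogy would live (one row ~ one unit).
    y_loop = np.array([max(float(w_i @ x), 0.0) for w_i in W])
    assert np.allclose(y, y_loop)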

I have seen an interesting hypothesis on how the particular layout of a dendritic tree and its synapses can encode a complicated de facto "boolean-like" formula over all the "input" synapses (e.g. a particular chain of and/or/not operations on 100+ inputs), instead of essentially adding all the inputs together as much of artificial neural network practice assumes. But I'm not sure if, or how, such hypothetical "calculations" are actually used in our brains.
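A toy contrast of the two models on the same binary inputs; the branch groupings and thresholds are made up for illustration:

    inputs = {"a": 1, "b": 0, "c": 1, "d": 1}

    # Standard point-neuron model: weighted sum, then threshold.
    def point_neuron(x, threshold=2):
        return sum(x.values()) >= threshold

    # Hypothetical dendritic model: each branch computes a local AND,
    # and the soma fires if any branch is active (OR across branches).
    def dendritic_neuron(x):
        branch1 = x["a"] and x["b"]  # one dendritic branch
        branch2 = x["c"] and x["d"]  # another branch
        return bool(branch1 or branch2)

    print(point_neuron(inputs), dendritic_neuron(inputs))  # True True

The point of the contrast: on {"a": 1, "b": 1, "c": 0, "d": 0} versus {"a": 1, "b": 0, "c": 1, "d": 0}, the sum-and-threshold model fires for both, while the tree distinguishes them, so the dendritic layout adds expressive power.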


I imagined the neurons themselves as the "colored dots" in the matrix, but this makes so much more sense... My scaling was way off.


I recognized you were Paul King of Quora from the very first paragraph. Good to see you here.


I certainly wouldn't argue with the neuroscientists and network experts here, but as a clinician dealing with the products of brain function (thinking, problem solving, sensory phenomena, etc.), I have long found what happens in brains a subject of intense interest and study.

I have the idea that what gets lost in the various hypothetical models of neuron/brain mechanisms is the sheer magnitude of neuronal circuit complexity. Not only is there a vast number of "nodes" in human brain circuits, something like 100 trillion synapses, but each node/synapse is itself enormously complex.

It's hard to quantify the number and variety of receptors "owned" by each neuron, but consider that while neurons work by communicating with certain "partner" neurons, they are also connected by very complex channels to all parts of the body. Furthermore, neuron signal transmission involves a cascade of intracellular "downstream" effects mediated by intricate genomic events.

We know something about some of these channels; about others, not so much. "I/O" occurs through nerves, hormones, and immune-system signalling. Each of these systems is highly complex in isolation, and considering the nature and meaning of the interactions among them is overwhelming.

Given the immense scale of this complexity, it's unsurprising that we have only a primitive understanding of how neuronal systems work. I don't anticipate science will "figure it out" in my lifetime.

OTOH no doubt science will continue to discover intriguing clues in the coming decades, and that will be valuable. But I think it's important to appreciate just how vast the "problem space" is when it comes to grasping how brains actually work.


You may very much enjoy the book "On Intelligence" by Jeff Hawkins. He was the founder of Palm Computing, has a strong computer science background, and went on to found his own neuroscience institute.

He has been working on AI via a system he terms Hierarchical Temporal Memory, which he describes in his book. The interesting point is that his algorithms and data structures are based on existing neuroscience research and on "reverse engineering the neocortex". It sounds very buzzwordy, but I found the book an excellent cross-disciplinary read.
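For a taste of one core ingredient, here is a minimal sketch of the sparse distributed representations that HTM-style models build on; the vector size and sparsity here are toy values of my choosing, not the book's:

    import numpy as np

    rng = np.random.default_rng(1)

    def random_sdr(size=2048, active=40):
        """A sparse distributed representation: a long binary vector
        with only a few active bits."""
        sdr = np.zeros(size, dtype=np.uint8)
        sdr[rng.choice(size, active, replace=False)] = 1
        return sdr

    a, b = random_sdr(), random_sdr()
    overlap = int(np.sum(a & b))  # shared active bits = similarity score
    print(overlap)  # near zero for unrelated patterns; high for related ones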


I appreciate the reco -- I'll def. check it out.


Yes, this is roughly how it is believed to work. In fact, one can identify neurons that are reliably involved in representing concepts like location and orientation [1], places [2], and persons [3]. Usually, multiple neurons are involved in representing a single concept (population coding [4] and distributed representations [5]). Concepts are constructed by successively extracting features and combining simpler features into more complex ones, from the sensory input regions toward the association regions in the cortex (see feature hierarchies [6]; a toy sketch follows the references). You can visualize thoughts as population-rate-coded probability distributions that interact very quickly in single "torrents" of activation.

[1]: https://en.wikipedia.org/wiki/Grid_cell

[2]: https://en.wikipedia.org/wiki/Place_cell

[3]: https://en.wikipedia.org/wiki/Grandmother_cell

[4]: https://en.wikipedia.org/wiki/Neural_coding#Population_codin...

[5]: http://psych.stanford.edu/~jlm/papers/PDP/Chapter3.pdf

[6]: http://www.scholarpedia.org/article/Neocognitron
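Here is the toy sketch mentioned above: each stage detects conjunctions of the features from the stage below, from "pixels" up to an object-level response. The features and thresholds are invented purely for illustration:

    # Simplest features: adjacent contrasts ("edges").
    def edge_detectors(pixels):
        return [abs(pixels[i] - pixels[i + 1]) > 0.5
                for i in range(len(pixels) - 1)]

    # Mid-level features: runs of co-active edges ("contours").
    def contour_detectors(edges):
        return [edges[i] and edges[i + 1] for i in range(len(edges) - 1)]

    # Top level: enough structure present -> the "object" unit fires.
    def object_detector(contours):
        return sum(contours) >= 2

    pixels = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
    print(object_detector(contour_detectors(edge_detectors(pixels))))  # True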


You can find a neuron that fires when you see your dog. But whether it is a general dog detector, tuned just to your dog, or part of a distributed sparse encoding, etc., is not conclusively known.

https://en.wikipedia.org/wiki/Grandmother_cell


Wow, your idea is heading in remarkably the same general direction as sparse distributed memory, described at https://en.wikipedia.org/wiki/Sparse_distributed_memory. Excellent!
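For a taste of Kanerva's model, here is a minimal numpy sketch: a write to a binary address lands in every "hard location" within a Hamming radius, and a read sums those locations' counters back. The sizes and radius are toy values, not a faithful implementation:

    import numpy as np

    rng = np.random.default_rng(2)
    N, M, R = 64, 500, 24  # address bits, hard locations, Hamming radius

    hard_addresses = rng.integers(0, 2, (M, N))
    counters = np.zeros((M, N), dtype=int)

    def active(addr):
        """Hard locations within Hamming distance R of the address."""
        return np.sum(hard_addresses != addr, axis=1) <= R

    def write(addr, data):
        counters[active(addr)] += 2 * data - 1  # +1 for 1-bits, -1 for 0-bits

    def read(addr):
        return (counters[active(addr)].sum(axis=0) > 0).astype(int)

    addr = rng.integers(0, 2, N)
    write(addr, addr)  # auto-associative store
    noisy = addr.copy()
    noisy[:5] ^= 1     # flip 5 bits of the cue
    print(np.array_equal(read(noisy), addr))  # usually True

The recall from a noisy cue works because the hard locations activated by both the original and the noisy address "vote" the stored pattern back out, while the other activated locations contribute nothing.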


Oh, surprising, I'd never heard of this before. I'm definitely going to dig into the concepts a bit more. Thanks for the link!




