Hacker News | bl's comments

I'll give it a try...

Imagine three 4-by-1 vectors, each "one-dimensional". Twelve scalars in total, each vector with four rows and one column. Arrange these three vectors side by side and merge them into a single 4-by-3 matrix. This matrix is "two-dimensional".

Now, let's imagine five such matrices, each 4-by-3. Stack the five matrices one on top of the other. We currently have a 4x3x5 matrix. This matrix, which contains 60 scalars, is "3-dimensional".

Repeat a similar exercise 997 more times and you have a 1000-dimensional matrix.

Compare that matrix to this: 1000 of our original 4-by-1 vectors arranged side by side, which gives a 4x1000 matrix, which is simply a "two-dimensional" matrix with 4000 elements.
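For concreteness, here is a sketch of those shapes in NumPy terms (the choice of stacking axis here is arbitrary):

```python
import numpy as np

v = np.zeros((4, 1))                      # one 4-by-1 column vector
M = np.hstack([v, v, v])                  # three side by side -> 4-by-3
T = np.stack([M] * 5, axis=2)             # five stacked -> 4x3x5
W = np.hstack([np.zeros((4, 1))] * 1000)  # 1000 columns -> 4x1000

assert M.shape == (4, 3)
assert T.shape == (4, 3, 5) and T.size == 60
assert W.shape == (4, 1000) and W.size == 4000
```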


A vector with four rows and one column is a four-dimensional vector. A one-dimensional vector can be described with a single number, a 1x1 matrix if you like.


Oops. You're correct: a 2x1 vector is two-dimensional, a 3x1 vector is three-dimensional, etc. <Trying to remember the terminology from linear algebra 15 years ago.> Each element of the mx1 vector represents a magnitude along an orthogonal dimension ('scalars' for a set of 'basis vectors'). So then a 1000x1 vector would be "thousand-dimensional"; each element represents a magnitude along an axis. But is this strictly equivalent to 1000 single-dimensional vectors? `eli173 suggests not, and I agree.

In constructing my incorrect answer in the grandparent comment, my thought process was being guided by the way Matlab/numpy treats these items (and I think I'm on solid ground that Matlab/numpy treat them differently because mathematicians consider them differently). The built-in functions operate very differently (if they work at all) for

    size(A) = (m,1)
and

    size(A) = (m,n≠1)
So there may be 1000 numbers floating in the ether, but conceptually they're not the same. Multiplying a 1000x1 vector by a 1xp vector has a completely different result than multiplying one thousand 1x1 vectors by that same 1xp vector.
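A minimal NumPy sketch of that difference (Matlab behaves analogously), shrunk to a 3x1 column and a 1x4 row for brevity:

```python
import numpy as np

col = np.arange(3).reshape(3, 1)   # a 3x1 column vector: [[0], [1], [2]]
row = np.ones((1, 4))              # a 1x4 row vector

# The matrix product of an mx1 and a 1xp is an mxp matrix (outer product):
outer = col @ row
assert outer.shape == (3, 4)

# By contrast, m separate 1x1 "vectors" each merely scale the row:
scaled = [np.array([[s]]) @ row for s in (0, 1, 2)]
assert all(x.shape == (1, 4) for x in scaled)
```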

Although, only many hours later do I realize that the original submission title might've been wordplay on the phrase "a picture is worth a thousand words", so my brain is not reliable today. I shall refrain from spewing more-likely-than-not incorrect statements concerning linear algebra.


The Mosers' work is very much deserving of recognition. They've made fundamental discoveries with really beautiful experiments (well-designed, well-executed, thorough). On a personal note, I referenced their work on place cells as a possible use of phenomena I was studying in my own research. And I wasn't even aware of their ideas when I started (they occupied a slightly different sub-field of neuroscience). It's delightful to see the Mosers rewarded.

Another item of note: During my introductory neuroscience class, the idea was "Hippocampus, learning & memory. Hippocampus, learning & memory. Hippocampus, learning & memory." Perhaps not so brutally, and I may have missed some subtleties. I found it quite refreshing to learn that the hippocampus was involved so deeply in a task that wasn't so starkly "This arbitrary experimental task is aversive; I, mouse, must avoid it." Hippocampus...is there anything it can't do? <Leonard Nimoy: "The answer is yes.">

Following is a neuroscientist's get-off-my-lawn rant

or

A Comment Wherein `bl Gripes about Stretched Tech Analogies Concerning Neuroscience.

A GPS device receives signals from a set of beacons with well-established, fixed locations (i.e., geostationary satellites). Knowing the beacons' fixed locations, the device is able to triangulate its own position. A GPS device can go to a completely unfamiliar location and work exactly as well as when it's at a location it's been to hundreds of times.

One could conceivably think of a mammalian visual system surmounted on a position-encoded cranium as equivalent to the GPS beacon system. But hippocampal place cells would have absolutely no contribution to navigating an unfamiliar environment because they had not been "trained".


Better to avoid the gripe... GPS satellites are not in fixed locations, and triangulation is not used to determine position (it is calculated based on time of flight to 4+ satellites.)
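For the curious, here is a very simplified sketch of that time-of-flight idea: treat each measured pseudorange as distance plus a shared receiver-clock term, and solve by Gauss-Newton. Every position and number below is invented for illustration; real GPS involves far more (orbit models, atmospheric delays, relativistic corrections):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pseudoranges, iters=50):
    """Gauss-Newton on the model: rho_i = |p - s_i| + C * bias,
    with unknowns p = (x, y, z) and the receiver clock bias."""
    est = np.zeros(4)  # x, y, z, C*bias
    for _ in range(iters):
        d = np.linalg.norm(sat_pos - est[:3], axis=1)
        residual = pseudoranges - (d + est[3])
        # Jacobian rows: d(rho_i)/d(p) = (p - s_i)/d_i, d(rho_i)/d(C*bias) = 1
        J = np.hstack([(est[:3] - sat_pos) / d[:, None],
                       np.ones((len(sat_pos), 1))])
        est += np.linalg.lstsq(J, residual, rcond=None)[0]
    return est[:3], est[3] / C

# Four invented satellite positions at roughly medium-Earth-orbit distances
sats = np.array([[15e6, 10e6, 18e6],
                 [-12e6, 14e6, 17e6],
                 [10e6, -16e6, 15e6],
                 [-8e6, -9e6, 21e6]])
truth = np.array([1.2e6, -2.3e6, 6.0e6])  # invented receiver position
bias = 1e-3                               # 1 ms receiver clock error
rho = np.linalg.norm(sats - truth, axis=1) + C * bias

pos, b = solve_position(sats, rho)
```

With four or more satellites the clock term is solvable alongside position, which is why an ordinary receiver needs no atomic clock of its own.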

Sadly the ship has sailed on the use of GPS as a metaphor for "internal map". Since people don't know how it works there seems to be no possibility of correcting this trend. Another example is in the endorsement for this book: http://books.wwnorton.com/books/Just-Freedom/


Drat! I committed an error similar to the one I was griping against: being technically sloppy.

Believe it or not, I paused for half a heartbeat while typing "fixed" and "triangulation" and thought to myself "Should I go double-check this on some hard-to-access reference, say Wikipedia? Nah. Better blurt this out before I head out the door for the day".

I ought to be careful how I phrase my gripes so that they convey my intention: keeping discussions accurate about subjects I know a bit about. Poor metaphors seem to aggravate me more than most.

Thanks for setting me straight.


I like your gist of putting the hypothesis to the test, but there are serious practical factors that would prevent your proposed experiments. Small- and medium-sized mammals (e.g., rodents and primates, respectively) are somewhat convenient for experimenting in that they can be housed/fed humanely and fit into an fMRI machine whose aperture is ~0.5 meters in diameter. I do not see how one could reasonably do the same for elephants and whales.

But let's do a thought experiment and see if we can reasonably dispense with the need for experiment itself. The article (via the researchers' statements) zoomed past a detail: cortical surface area is much more indicative of neural processing capability than gross brain volume. In many contexts, a neuroscientist might use "brain size" as shorthand for cortical surface area. Also consider that more "advanced" mammals tend to have more convoluted cortices, and thus larger cortical surface areas. So it's quite possible for a large mammal's (whale's or elephant's) brain to be volumetrically larger than a human's, but to have relatively smaller surface area because it is less convoluted.

In the event that we could actually accomplish such a comparative study as you propose, we'd probably find that "tethering" does not monotonically increase with surface area. Then we'd determine that the authors' hypothesis is overly simplified.


The voltage change (i.e., depolarization) is not strictly local.

In some cases, depending on the actual geometry of the dendrite and the particular complement of voltage-activated ion channels, the voltage change as a result of neurotransmitter release might lead to quite a distributed depolarization even without triggering a dendritic action potential.

Conversely, an action potential initiated in the dendrites doesn't necessarily faithfully propagate to the cell body (soma). This is also dependent on the local geometry and ion channel distribution. Dendritic action potentials are not all-or-nothing events like those of the axon.

To answer your question: Smith, et al., did observe dendritic action potentials (spikes) by measuring a proxy: calcium influx indicated by a fluorescent dye whose fluorescence changes when bound to calcium. This calcium influx, and by extension, the dendritic spike, is what was spatially restricted. The authors are extrapolating information processing from the spatially restricted dendritic spike.


Thanks for the answer.

So just to close the loop and make sure I got it, a couple of follow-ups:

'processing' in this case would refer to integrating signals/voltages/neurotransmitters from more than one neighboring neuron?

How do they show that this was processing/integrating and not just particular sensitivity to one external stimulus?

For 'processing' to be meaningful, would it not have to share the result? In other words propagate the action potential or release neurotransmitter?


I'm not a biologist and the parent poster seems to know in far more detail, but from a bunch of neuroscience lectures on how the dendritic spikes travel up to the soma, my takeaway (as a computer guy) was 'hmmm, it looks like a system implemented in FPGA layouts - the geometry features can work as logic gates or delays'; and 'hmmmm, it looks like I could design a dendritic tree geometry for almost any boolean function of the inputs, so any computer-chip-like-functionality could be built out of them'.

I mean, if I needed (A xor B) and (C or D), then my impression is that a single neuron with rather simple geometry and appropriate dendritic connections could calculate that in the sense that this neuron would spike iff the A,B,C,D neurons spiked as required by that formula; but since neurons tend to have many, many more connections, each neuron is technically capable of much more complex calculations, even if many of them in the end do something like 'spike iff any 100+ of my 1000 inputs are spiking'.
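As a toy sketch of that idea (every weight and threshold below is invented, not measured biology): model each dendritic branch as a thresholded weighted sum, with negative weights standing in for inhibition, and let the soma threshold the sum of the branch outputs:

```python
def branch(weights, inputs, threshold):
    # One dendritic subunit: fire (1) iff the weighted sum reaches threshold.
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def soma(a, b, c, d):
    b1 = branch([1, -1], [a, b], 1)   # fires iff A and not B
    b2 = branch([-1, 1], [a, b], 1)   # fires iff B and not A
    b3 = branch([1, 1], [c, d], 1)    # fires iff C or D
    # b1 + b2 encodes (A xor B); the soma requires it together with b3.
    return (b1 + b2) + b3 >= 2

# Exhaustively check against (A xor B) and (C or D):
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            for d in (0, 1):
                assert soma(a, b, c, d) == bool((a ^ b) and (c or d))
```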

It's not so simple as that, because timing is also relevant, and there were examples of known dendritic structures that do "processing" in the sense that a neuron spikes if it receives A slightly before B, but doesn't spike if it receives A slightly after B; so it can be used for detecting motion direction and such.


"[I]t looks like I could design a dendritic tree geometry for almost any boolean function of the inputs".

That's my outlook on the structure-function link between dendritic morphology and dendritic information processing, with the modification that I'd not restrict it to boolean functions. There are very many more types of functions, linear and non-linear, that can conceivably be built out of neuronal dendrites.

And I like the nuance of your second paragraph. There are all sorts of wacky, complex calculations one can imagine being possible, but any one neuron may implement only a subset. Now, across a few hundred billion neurons in a mammalian nervous system...

You're spot on with regard to timing, too. All this "information processing" with branched dendrites + non-linear ion channels is greatly expanded by a timing component.


Well, AFAIK you don't need anything more than boolean functions, since if we're talking about single spikes (not spike frequency), then there either is or isn't a spike; you don't get some spikes larger than others.

The linear/nondigital functions IMHO seem to be used as implementation details - for example, a neuron "fire iff 1+ VIP-inputs fire or 3+ normal inputs fire" can be implemented in wetware by giving the 'vip-inputs' a synaptic connection three times as strong, summing all input values in the dendrite, and adjusting so that the firing threshold is appropriate (i.e., a linear function); but in silicon the same thing can (should?) be implemented as a boolean function / logic gates.
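That rule sketches directly as a weighted sum with a threshold (the weights and threshold are made up for illustration):

```python
def fires(vip_inputs, normal_inputs, threshold=3):
    # VIP inputs count triple; the unit fires iff the weighted sum
    # reaches the threshold.
    total = 3 * sum(vip_inputs) + 1 * sum(normal_inputs)
    return total >= threshold

assert fires([1], [0, 0, 0])        # one VIP input suffices
assert fires([0], [1, 1, 1])        # or three normal inputs
assert not fires([0], [1, 1, 0])    # two normal inputs are not enough
```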


I hope no one interpreted my statements to suggest that anything you said was wrong. Just trying to fill in details.

I merely want to avoid prematurely narrowing the range of functions that are possible. If we, for the moment, think of the neural computation of a single neuron as a neural network, then the spike/no-spike decision would be in the last layer, and a whole host of linear/non-linear (some not necessarily boolean) functions could be implemented by the dendrites. And we already know that some single-neuron processing behaves in a non-boolean manner.

Be aware: just because arbitrarily powerful logic could be constructed solely out of boolean components (I don't even know if this is true. Isn't this kinda what is going on in an FPGA?) doesn't mean that neural hardware is purposed the same way. It may very well be analog, at least for some computations.

And to speak to your second paragraph, I should declare my personal biases. As a dendritic physiologist, I wasn't much interested in whole-cell firing characteristics, but in the dendrite's sub-threshold behavior.

How do a smattering of synaptic inputs, each with varying strengths, interact within the complex electrophysiological scaffolding provided by a branched dendrite layered with non-uniform, non-linear ion channel distributions?

So my perspective is somewhat inverted: To me, neuron firing is the implementation detail! <smiley face>


"Processing" doesn't have a consensus definition in the neuroscience community, but a neuroscientist could, with good justification, use that definition if they had a particular experimental scenario. In this article, Smith, et al., use a narrower definition based on a well-known response in which a neuron is selectively sensitive to a bar of light at a particular angle. How a neuron becomes selectively sensitive (i.e., how it fires to that angle and not to all the others) is the "processing".

It is overly simplified to say that for the processing to be useful it has to share the result. If a particular part of the dendritic sub-tree were stimulated enough, it could bring the neuron into a particular electrophysiological state in which succeeding synaptic input would cause a wholly different computation to occur. Thus, you can see the importance of timing discussed in the sister comment.


If you are interested in making a very stretched analogy, demonstrating dendritic information processing is like realizing that a CPU's transistor is actually itself a little CPU that is itself capable of quite sophisticated computation. In fact, most of a neuron's computation may be carried out by the dendrites. Don't get tied up in the over-simplified model of dendrite=antenna, soma=computer, axon=wires.

Active dendritic information processing has, for several decades, been theorized and modeled. The combination of two-photon microscopy and more "classical" electrophysiology techniques (like patch clamping used in this article) is finally opening the theories to experimentation.

[Not to be too critical, but this paper is far from the first to experimentally investigate dendritic information processing. I, personally, am glad some segment of HN is interested in neural computation.]


Mr. Park's original piece on slopegraphs: http://charliepark.org/slopegraphs/

Previous Hacker News discussion of Mr. Park's original piece: http://news.ycombinator.com/item?id=2753343


Strictly speaking, there is still some cell division shortly after birth and into the early phases of post-natal development. So the brain increases in size, slightly, via that method. But mostly it is from the elaboration of the dendritic arbors of the existing neurons. The size and branching factors increase many times during development. Check out the beautiful (and stunningly accurate) drawings of the famous neuroanatomist Santiago Ramón y Cajal.

For example, the Purkinje neurons of the cerebellum:

http://www.gladstone.ucsf.edu/wp/2009/06/knowledgegrows/

Note how the number of branches increases to sample more of the volume.


Synapse "rewiring" is not typically how we think memories are formed in adult animals. Mostly it is done by modulating the strengths of the existing connections (this process involves signaling cascades and protein expression, so it does take some time). So if you want to form a "memory", a particular connection is strengthened. There isn't a concomitant loss of another connection. It's not a zero sum game.

All my statements are based on my understanding of mammalian learning and memory. But I think you hit on the key with "Perhaps her neurons are different".

Indeed, invertebrate neurons are wildly different from those of mammals. In fact, if you are accustomed to looking at mammalian neurons [1], invertebrate neurons can look positively alien. For example, check out the Lobula Giant Movement Detector (LGMD) neuron of the locust [2] and other insects [3].

A) The scale is different: The thickness of some of its branches is about the size of the cell body of a mammalian neuron.

B) The organization is different: The dendritic arbor is divided into nearly independent subfields with very independent functions.

C) The behavior is different: The spike output patterns of an LGMD would be distinguishable even to a first-year neuroscience student. And the output connections are extremely strong, pretty much one-to-one.

Add it all together, and this one neuron does the job of at least a few dozen mammalian neurons. How many, exactly, is difficult to tell. Not every insect neuron is as fantastical as the LGMD, but I would say the "600000" value ought to be scaled by some number greater than five. Given that, one could say our spider friend has the equivalent of several million mammalian neurons.

Raw neuron count is merely the crudest of measures of neural processing capability. How sophisticated the processing nodes are (i.e., the neurons) and how they are wired together (i.e., the network topology) are way more critical.

[1] Note that http://en.wikipedia.org/wiki/Neuron depicts exclusively mammalian neurons.

[2] Locust version: http://jn.physiology.org/content/97/1/159/F1.large.jpg ; figure from this article: http://jn.physiology.org/content/97/1/159.full

[3] Fly version, top row; rat (i.e., representative mammalian neurons) for comparison, bottom row: http://c431376.r76.cf2.rackcdn.com/995/fnsys-03-017/image_m/... ; figure from this article: http://www.frontiersin.org/systems_neuroscience/10.3389/neur...


"[I]t's a bit of a stretch to say that they've rewritten the rules of refraction."

Indeed. It appears more akin to the situation we have with classical and quantum mechanics: classical (Newtonian) mechanics fairly accurately describes the behavior of macroscopic objects at relatively slow velocities. We acknowledge that the theory is not valid outside of that range. But since that range encompasses the bulk of our everyday, practical experience, classical mechanics is exceedingly useful.

Classical optics are not suddenly outdated with these discoveries. Snell's law and the lens-maker's formula are just as relevant as they were yesterday. We just need to add a few more terms to the equation if we etch a gradient of nano-scale resonators onto the surface of our optical element. (Boy, do I feel like Geordi in Star Trek reading that last sentence aloud.)
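For concreteness, the extra term is (as I recall it) the phase-gradient term of the generalized law of refraction, for an interface imparting a position-dependent phase shift Φ(x):

```latex
% Generalized Snell's law for a phase-gradient surface;
% with d\Phi/dx = 0 it reduces to ordinary Snell's law.
n_t \sin\theta_t - n_i \sin\theta_i = \frac{\lambda_0}{2\pi}\,\frac{d\Phi}{dx}
```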

Negative index of refraction materials are not quite new: wikipedia indicates that they are already being used in commercially-available products (http://en.wikipedia.org/wiki/Negative_index_metamaterials). For me, the most exciting application is creating a lens that circumvents the diffraction limit on optical imaging resolution. Right now, the most sophisticated, expensive microscope objective lenses can just barely resolve sub-cellular structures, and only under very particular conditions (i.e., not alive). A diffraction-unlimited "superlens" made out of this stuff could enable us to see even smaller objects under physiological conditions. It would be fantastic if we could capture the release of individual neurotransmitter vesicles at a diseased synapse, for example.

Besides negative index materials, there are a whole class of non-linear optics (http://en.wikipedia.org/wiki/Nonlinear_optics) that allow engineers to do all sorts of funky things in their instruments, like doubling the light frequency or self-focusing.


"[T]hese companies do nothing for us."

I agree 99%. The one thing they do for us, and the one thing that any alternative system would have to replicate, is provide a signal of quality through their history and prestige. The whole constellation of scientific publications, grant writing, grant reviews, tenure committees, faculty searches, and dissertation committees revolves around publications. But it seems nobody can, or desires to, read all of an individual's publications, synthesize the content of what they've produced, and compare it to the other work in increasingly specialized sub-fields.

Instead, to evaluate the quality of work and the scientists themselves, we rely on the number of articles published multiplied by the journal's impact factor (http://en.wikipedia.org/wiki/Impact_factor), roughly speaking. This we've used as a proxy for scientific quality. And besides being lazy, it's insidious, too. Before even sitting down to read an article, I've already absorbed which journal the article appeared in. If a prestigious journal, how can I help being predisposed to think favorably of it?

It's a seductive shortcut. This is why the academic journals are so entrenched. And it shows what any alternative system must replicate. It is relatively easy to create a website that unites authors, peer reviewers, editors, and a publishing/editing system. What is not straightforward is to create a system with that last 1%: the external quality signal.


To maintain the psychological impact, one could have different outlets (websites) in which to publish, based on a ranking by the editorial committee...

