Could someone explain what a neuromorphic chip is?
The article assumes we know, but I haven't heard of it. And the Wikipedia article on "neuromorphic engineering" talks about stuff like analog circuits, copying how neurons work, and memristors, none of which seem that related.
The best definition I can come up with is hardware that implements a neural network architecture directly, especially spiking neurons, which have a temporal component that the classic McCulloch–Pitts unit lacks. In neuromorphic chips, neurons are an actual component of the hardware; you can ask questions like "how many neurons does this chip have?". Contrast this with neural nets as we use them today, which are actually implemented as a computation graph over tensors. It turns out that a special kind of neural network can be abstracted well as a series of tensor ops (dense feedforward layered networks[1]), but this is not necessarily the case for an arbitrary neural network. So neuromorphic chips have a possibility of being far more general. (A rough sketch of the contrast is below, after the footnote.)
[1]: Which are wired something like this: http://neuralnetworksanddeeplearning.com/images/tikz40.png - Notice the dense connections and layered architecture. For all intents and purposes, this is what neural nets look like today, because of how easy it is to treat a NN with this specific wiring as a chain of tensor computations and thus execute it on more conventional hardware.
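To make that contrast concrete, here is a minimal sketch in plain NumPy (all parameter values are illustrative, not from the article): the spiking neuron carries state from one time step to the next, while the dense layer is a single stateless tensor op.

    import numpy as np

    # Leaky integrate-and-fire (LIF) neuron: state persists across time
    # steps and a spike fires when the membrane potential crosses a
    # threshold. Parameter values are illustrative only.
    def lif_run(input_current, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0):
        v = v_rest
        spikes = []
        for i_t in input_current:
            v += dt / tau * (v_rest - v) + i_t  # leak toward rest, add input
            if v >= v_thresh:
                spikes.append(1)
                v = v_rest                      # reset after the spike
            else:
                spikes.append(0)
        return spikes

    # Dense feedforward layer: no state, no time -- one tensor op.
    def dense(x, W, b):
        return np.maximum(0.0, W @ x + b)       # ReLU(Wx + b)

    rng = np.random.default_rng(0)
    print(lif_run(rng.uniform(0.0, 0.2, size=100)))  # spike train, 100 steps
    print(dense(rng.standard_normal(4), rng.standard_normal((3, 4)), np.zeros(3)))

The temporal state in the first function is exactly what a chain of feedforward tensor ops doesn't capture, and what neuromorphic hardware implements natively.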
What I remember from my neuromorphic engineering course (or analog VLSI course) is that we designed the silicon layout (with n and p doping regions) so that the transistors operate in the subthreshold regime of their I-V characteristics. If I remember correctly, the drain current depends exponentially on the gate voltage in the subthreshold region, much like the Boltzmann statistics that govern ion channels. In contrast, normal digital chips use only the super-threshold region (a voltage above a certain threshold switches the transistor completely on). Using the subthreshold region, it is possible to implement spiking neurons with only very few transistors. It works completely differently than digital circuits: the connections between the transistors don't transmit just 0's and 1's; instead, all wires carry analog signals where the exact voltage matters. This makes these chips extremely energy and space efficient. These chips can also work much faster than biological neurons (obviously using some assumptions and simplifications, such as neglecting certain special kinds of ion channels found in real neurons).
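For reference, a rough numerical sketch of that exponential subthreshold law, plus the tanh transfer of a subthreshold differential pair (all parameter values here are illustrative and device-dependent, not from any particular process):

    import numpy as np

    U_T = 0.02585   # thermal voltage kT/q at ~300 K, in volts
    n = 1.5         # subthreshold slope factor (typically ~1.2-1.6)
    I0 = 1e-15      # leakage prefactor in amps (device-dependent guess)

    # Subthreshold drain current grows exponentially with gate voltage,
    # echoing the Boltzmann statistics behind ion-channel conductances.
    def i_subthreshold(v_gs):
        return I0 * np.exp(v_gs / (n * U_T))

    # A differential pair biased in subthreshold gives a tanh transfer
    # function "for free" -- one reason a spiking neuron needs so few
    # transistors.
    def diff_pair_out(v1, v2, i_bias=1e-9):
        return i_bias * np.tanh((v1 - v2) / (2 * n * U_T))

    for v in (0.1, 0.2, 0.3):
        print(f"Vgs = {v:.1f} V -> Id = {i_subthreshold(v):.3e} A")
    print(f"diff pair at 10 mV imbalance: {diff_pair_out(0.01, 0.0):.3e} A")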
Theoretically, analog is by far the best fit for neural networks. But why aren't we starting to see chips offered? Heck, even an old process like 130nm could have some practical uses.
Loosely defined, it's various chips that implement simplified computational models of neurons, plus some plasticity functions. They are usually simplified because going into the greatest detail (modeling Hodgkin–Huxley-type channels) would require too much computation. In the neuroscience community there is not yet an agreed-upon model for a simplified neuron, so everyone picks some spiking neuron model or makes up their own. Even less is known about plasticity functions.
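For example, the Izhikevich model (2003) is one common pick: four tunable parameters, cheap to integrate, and able to reproduce many observed spiking patterns. A minimal Euler-integration sketch (the a, b, c, d values are the paper's "regular spiking" parameters; the time step and input current are illustrative):

    # Izhikevich (2003) spiking neuron: a popular middle ground between
    # biological realism and Hodgkin-Huxley-level computational cost.
    def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, steps=2000):
        v, u = c, b * c          # membrane potential and recovery variable
        spike_times = []
        for t in range(steps):
            v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
            u += dt * a * (b * v - u)
            if v >= 30.0:        # spike peak: record and reset
                spike_times.append(t * dt)
                v, u = c, u + d
        return spike_times

    print(izhikevich(I=10.0)[:10])  # first ten spike times (ms)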
I believe the idea is that if you simulate enough of them, something useful will happen.
All of the subjects you mention at the end of your comment, especially memristors, are indeed the focus of neuromorphic computing. It's a nascent technology field, but there are plenty of papers available on IEEE Xplore or the ACM Digital Library to satisfy curiosity! I'll take a look at the survey paper I wrote a couple summers back -- if it's decent I'll edit this post with a link.