
I agreed with your previous comment, but here I think you're missing the elephant in the room: a brain does not use "RAM" as it's implemented in von Neumann computers. There seems to be long-term storage of some sort, sure, but that's more like hard-drive storage than RAM. And in the brain, memory is integrated into the compute elements themselves. So the obvious way to copy such an architecture is to use analog computing elements that are at the same time memory elements: neural network weights.

When NN algorithm development reaches the point where NNs work well enough for commercial applications (which seems to be rather soon), the next step will be specialized hardware to run those applications, and that's where we can start building analog chips with huge weight arrays, programmed as analog values on transistors/capacitors/memristors/etc. That's when a true departure from von Neumann might take place.
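The core operation such an analog weight array would accelerate is the multiply-accumulate: in a hypothetical memristor crossbar, stored conductances act as the weights and applied voltages as the activations, so by Ohm's and Kirchhoff's laws the output currents are a matrix-vector product computed inside the memory itself. A minimal sketch of what the array computes (the crossbar framing and all names here are illustrative assumptions, not any real device's API):

```python
# Hypothetical analog crossbar, sketched in pure Python.
# G[i][j] = conductance of the device at row i, column j (the stored weight);
# v[j]    = voltage applied to column j (the input activation);
# output current i = sum_j G[i][j] * v[j], a multiply-accumulate
# that happens *inside* the memory array, with no weight fetch from RAM.

def crossbar_mac(G, v):
    """What the analog array computes 'for free': a matrix-vector product."""
    return [sum(g * x for g, x in zip(row, v)) for row in G]

G = [[0.25, 0.75, 0.5],   # conductances = stored NN weights
     [0.5,  0.25, 1.0]]
v = [1.0, -1.0, 0.5]      # input voltages = activations
currents = crossbar_mac(G, v)   # [-0.25, 0.75]
```

The point of the analog version is that `G` never crosses a bus: the weights sit where the computation happens, which is exactly the departure from the von Neumann fetch-execute cycle described above.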




You can't program a brain the way you can program computers. You can't program NNs either, in that sense. They have their place; I work for a company shipping them now with hardware acceleration. Let's ignore the role of RAM here and the fact that NNs are not like real neurons, and let me grant the point that it's not a vNA, programming-model-wise. Most of computing, however, will not be NN-based, because you want to program things precisely, and the brain is terrible at executing commands precisely.


But that's the thing - I don't want to program things precisely! Most people don't. I want to tell my computer what I want in plain English, and I want it to understand me, translate my wish into actions, and perform those actions. Just like a human servant would do. That's the future of programming (or so we should hope, at least).


Well, I would say it sounds like you want a computer that understands you imprecisely so it can account for the idiosyncrasies of human communication, but I think you still want it to carry out the action it's decided you want in a precise manner. Protocol negotiation does not lend itself to imprecise ordering of actions or commands.


Of course. If I hire a human servant, I will expect him to understand me even when I'm not very precise in my commands, and perform the action I wanted precisely. In fact, the best servant is the one who does what you want before you have to ask him, right?

That's exactly what I expect from computers in the future.


I don't know why you're assuming the AI is smart enough to perfectly understand what you want even with incomplete knowledge but is stupid enough to need your help even though it's effectively capable of doing everything by itself.


Because it's a servant, not a master.

It's perfectly possible to have an automaton that's good at predicting needs and inferring outcomes without assuming that it can set independent goals of its own without being prompted.

One is driven by external stimuli in a deterministic way, which makes it a computer like any other.

The other is internally directed - which edges into questions of sentience, free-will, and independent mentation, and is unknown territory for CS.

Siri and Viv are already heading in the former direction. Siri has some nice canned responses which can create a fun illusion of a personality, but not many developers - and even fewer NLP designers - are going to tell you Siri is independently goal seeking.


Human brains are also very inefficient at computation (per unit energy, time, mass, volume, or some combination of the four), unless it's a problem they were tuned for over a really long evolutionary time (e.g. facial recognition, bipedal locomotion, etc).

Sure, if you find a problem that a NN algorithm is really good at, then mapping the hardware to the NN algorithm will get an efficiency speedup, but that's because of the application-specific hardware, not because the vNA is a bad general computation model in terms of what we can physically instantiate.


NN != brain. Not by a long shot. We still have no clue how the brain operates; don't kid yourself. There are processes happening in the brain that aren't supposed to happen, like the massive amount of ions passing through membranes, which has led some to think a time-bending quantum effect is making it happen. We simply have no clue.


You can say the same thing about our computers: they are very inefficient at tasks they were not designed to do. Actually, they are very inefficient even at tasks they were designed to do, but since they do them very fast, we've been ignoring it (at least until Moore's law started to slow down). Just think about all the layers of abstraction present in current computing platforms, and how much computation really happens when you type 2+2 in a Python console.
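Those layers are easy to see in CPython itself: even `2 + 2` passes through tokenizing, parsing, compilation to bytecode, and the interpreter loop before any addition happens. A small illustration using the standard `dis` module:

```python
import dis

# Even the trivial expression 2 + 2 travels through several layers:
# source text -> tokens -> AST -> bytecode -> the CPython interpreter
# loop -> native instructions. dis exposes the bytecode layer:
code = compile("2 + 2", "<console>", "eval")
dis.dis(code)

# CPython's peephole optimizer folds the constant at compile time,
# so the bytecode just loads a precomputed 4; yet all the machinery
# around it still runs on every expression you type into the REPL.
print(eval(code))  # prints 4
```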

Also, there are many more problems a brain solves better than computers do. Software NNs are slowly learning to solve more and more of those, primarily by better copying how the brain does it.

So, in summary, the vNA might not be a "bad" general computational model, but it is surely very inefficient compared to the human brain when it comes to "computing" the vast majority of tasks.


>You can say the same thing about our computers: they are very inefficient at tasks they were not designed to do.

Hold on -- we need to be consistent about what part of them was designed for what purpose. In the context of the discussion, it's the hardware. Computer hardware is only "designed" to be able to maintain a small state while performing a huge number of Boolean circuits (from some large set of possible circuits) per second, writing to external stores.

They were not designed to, e.g., make graphs or host discussion forums, and yet they do those things very well, because we can apply software that converts those problems into "do these Boolean circuits/writes really fast". Considering how handily they beat the alternatives at those tasks, I think it's fair to say they're good at things they weren't designed for!

It's just that there remain a few problems that aren't tractable to this approach so far; but focusing on those problems paints a misleading picture.

>Actually they are very inefficient even at tasks they were designed to do, but since they do them very fast, we've been ignoring it (at least until Moore's law started to slow down)

As above, they were only designed to do those circuits and they're good at that.


I have no idea what you mean by "perform Boolean circuits" in the context of this discussion.

Bottom line: vNA based computers are inefficient, regardless of what they are trying to compute. But they do most of the tasks fast enough for us not to care. When it's not fast enough, people build ASICs.


A Boolean circuit is a circuit that implements a Boolean expression (given input signals set to 1/0, output the 1/0 value corresponding to the result of substituting them into that expression.)
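For instance, a 1-bit full adder is one such circuit: three gate types wired into a fixed topology. A sketch in Python (the function names are mine, chosen just to mirror the hardware structure):

```python
# One Boolean circuit: a 1-bit full adder, built only from
# AND / OR / XOR gates. In silicon this is a fixed arrangement of
# transistors; here each gate is a bitwise operator on 0/1 values.
def full_adder(a, b, carry_in):
    s1 = a ^ b                # XOR gate
    total = s1 ^ carry_in     # XOR gate
    c1 = a & b                # AND gate
    c2 = s1 & carry_in        # AND gate
    return total, c1 | c2     # OR gate combines the carries

# Chaining the same circuit once per bit gives a ripple-carry adder.
# The circuit never changes; "addition" is signals flowing through it.
def add4(x, y):
    carry, result = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result  # 4-bit sum; carry out of the top bit is discarded
```

This is also the distinction picked at later in the thread: the adder is the circuit, addition is the operation it performs.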

Computers at their core are machines designed to implement many such circuits, of many varieties, per second, and send their signals to output devices or their own internal state. When CPU makers design them, they are optimizing for the efficiency (with respect to cost, space, mass, energy consumption, et/vel cetera) with which they perform that task. They do not specifically optimize them for hosting forum discussions; instead, the software or OS designer chooses instructions for the CPU that allow it to accomplish that efficiently.

All of the above was in response to your claim that computers are only (somewhat) efficient at what they're designed to do. But they're designed to implement boolean circuits (and write to peripherals), they're not designed to host discussion forums, et cetera, and yet they're still good at the latter.

What do you disagree with there?

>Bottom line: vNA based computers are inefficient, regardless of what they are trying to compute

If modern computers are "inefficient" at summing numbers (or hosting discussion forums, for that matter), then may I forever be dredged in Soviet bureaucracy!


Computers are circuits. Those circuits don't change every second. They never change, unless there's a programmable logic embedded somewhere. Perhaps you're confusing "circuits" and "operations", e.g. "adder" and "addition"?

Of course CPU designers optimize those circuits for speed/power/etc. of the instructions. No one says the implementation of those instructions is inefficient. What is inefficient is the way high-level tasks are performed, dictated by the instructions available in the ISA, which in turn is dictated by the vNA.

Why would we care about how efficient the instructions are, if the tasks we are interested in, when performed with those instructions, are inefficient?


I'm not sure why you're belaboring the distinction between circuits and ops, if you understood and agreed with the (sub)point I was making there.

Do you disagree with the broader point I was making, that "computers" -- in the sense of computer hardware -- aren't specifically designed for many tasks for which they're used? We can agree that computers aren't designed to host discussion forums, right? And we can agree that the hardware design is focused on efficiency of fast execution of the cir-- sorry, operations, not the discussion forums per se, right? So what were you disputing there?

If you agree that computer hardware architecture isn't optimized for hosting discussion forums, then it sounds like you agree computers aren't designed for that. And if you agree that they're nonetheless good at hosting forums, then it sounds like you're walking back your earlier claim.

>What is inefficient is the way high-level tasks are performed, dictated by the instructions available in the ISA, which in turn is dictated by the vNA.

Again, where are you getting this? In what relevant sense is a vNA computer "inefficient" at sums or hosting discussion forums? It's not only efficient in absolute terms, but more efficient than any other known method. Again, if that's "inefficiency", I don't want to be efficient!

What would be the non-vNA architecture that would do it more efficiently? Why would a neural net be better at hosting discussions? Again, you can definitely hammer them into the shape of one application-specific computation of one size; but having to do that for every application you'd ever want to do does not sound like efficient hardware.


Being good at something and being efficient at something are not, in general, equivalent.

The earliest computers were designed for calculating missile trajectories, and they were good at that task. Not efficient (they required a large room, cost millions, and consumed megawatts of power), but fast enough to produce results faster than any other known method. Later, this design, which was inefficient even for the original task, was used for other tasks, where it was of course even more inefficient. Modern computers have become more efficient at the original task (numeric calculations), but they are still horribly inefficient at hosting forums, because of the limits of the vNA. If you want something more efficient, the answer is easy: build a specialized processor designed to host forums. It won't be able to do anything else, but it will be orders of magnitude more efficient at hosting forums. You have to choose between one computer that is inefficient at many different tasks and many computers that are each efficient at a single task.


I don't see where I ever disputed that specialized hardware is better than general purpose hardware at the task for which it is specialized; in fact, I specifically made note of that. Is there a reason you think it refutes a part of the thesis I was presenting?

The claim under contention is that there's some root deficiency of the vNA at computing in general, one that perhaps could be surpassed by some ingenious FP-inspired model. If the only "limit" or "inefficiency" of the vNA is that it doesn't achieve ASIC-level efficiency on every task, that's not much of a criticism. Even applied "horribly" to hosting discussion forums, the vNA is light-years beyond all other general hardware.

I was under the assumption that a more substantial criticism of vNA was being offered. But the above criticism makes no sense unless it were somehow economical to rearchitect a computer for every distinct problem you plan to work on.


I was disputing your claim that brains are less efficient than vNA-based computers. My point is that for the vast majority of tasks we face on a daily basis, our brains are vastly more efficient than our computers. Note that a brain is general-purpose, non-vNA hardware. My conclusion: we should try to build brain-like hardware out of silicon, as soon as we learn enough about how brains work.


> Just think about all layers of abstraction present in current computing platforms, and how much computation really happens when you type 2+2 in a Python console.

That's not a limitation of von Neumann architecture. That's a consequence of us using the vast and cheap computational resources available to us to make things easy.


Your arguments are compelling. I hope others will embrace them more.


Last I saw, a custom-built computer is many, many orders of magnitude worse than the brain at playing Go per unit energy. And it's not like we evolved to play Go.


Fair enough, my claim was overbroad. I would still say it's true for most computation tasks: a vNA computer on typical hardware is still going to be more efficient as a general-use computer than something built on a neural architecture.

Humans do currently have an advantage on certain tasks that require detecting overall global patterns and symmetries. (But even then, only on the subset of those tasks for which no one has been able to specify a good algorithm for finding that structure.)

Even so, the core point remains: it's not that the vNA is a bad architecture, or that neural computers are more efficient in general; and the latter are probably much harder to reason about.


I don't think von Neumann computing will go away, but I agree that analog computing will massively boost the performance of neural nets in the near future. There's a really good, in-depth talk on the subject from Microsoft Research a couple of years ago; still extremely relevant:

"Will Analog Computing and Neural Nets Power Innovation after Moore's Law?"

https://youtu.be/dkIuIIp6bl0





