Hacker News

I'm aware of that and I've done quite a bit of work on both spiking neural networks and modern deep learning. My point is that those complexities are not required to implement many important functional aspects of the brain: most basically "learning" and more specifically, attention, memory, etc. Consciousness may fall into the list of things we can get functional without all of the incidental complexities that evolution brought along the way. It may also critically depend on complexities like multi-channel chemical receptors but since we don't know we can't say either way.

It's a tired analogy but we can understand quite a lot about flight and even build a plane without first birthing a bird.



> It's a tired analogy but we can understand quite a lot about flight and even build a plane without first birthing a bird.

The problem is we don't know if we're attempting to solve something as "simple" as flight with a rudimentary understanding of airflow and lift, or if we're attempting to achieve stable planetary orbit without fully understanding gravity and with a rudimentary understanding of chemistry.

I think it's still worth trying stuff, because it could be closer to the former, and trying more stuff may help us better understand where it is on that spectrum. And if it's closer to the harder end, what we're doing now is probably so cheap and easy compared to what would eventually be needed that it's a drop in the bucket, even if it adds nothing.


Your analogy is actually quite apt here - the Wright brothers took inspiration from birds but clearly went with a different model of flight, just as the ANN field has. The fundamental concept of the neuron is the same, but that doesn't mean the complexity is similar.

Minimally, whatever the complexity inside a biological neuron may be, one fundamental property we need to obtain is the connection strengths for the entire connectome, which we don't have. Without them we don't actually know the full connectome of even the simplest organisms, and to my knowledge no one has therefore actually studied the kinds of algorithms running in these systems. I would love to be corrected here, of course.
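To make the point concrete, here's a toy sketch (all names, sizes, and parameters are invented for illustration, and a simple rate model stands in for real neural dynamics): the same wiring diagram with different connection strengths produces different dynamics, so an adjacency map alone underdetermines the "algorithm" a network runs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5  # toy network of 5 neurons (a real connectome is vastly larger)

# Same wiring (adjacency matrix), two different sets of connection strengths.
adjacency = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(adjacency, 0.0)
weights_a = adjacency * rng.normal(0.0, 1.0, size=(n, n))
weights_b = adjacency * rng.normal(0.0, 1.0, size=(n, n))

def step(rates, W, dt=0.1, tau=1.0):
    """One Euler step of a leaky rate model: tau * dr/dt = -r + tanh(W @ r)."""
    return rates + (dt / tau) * (-rates + np.tanh(W @ rates))

r_a = np.full(n, 0.5)
r_b = np.full(n, 0.5)
for _ in range(100):
    r_a = step(r_a, weights_a)
    r_b = step(r_b, weights_b)

# Identical wiring diagram, but once the strengths differ, the trajectories
# (and hence whatever computation the network performs) diverge.
```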


Even with connection strengths I still don't think we would really have the full connectome. Such a model would completely miss many of the phenomena related to chemical synapses, which involve signal transduction pathways, which are _astoundingly_ complex. Those complexities are part of the algorithm being run though!

(Of course we might still learn useful things from such a model, I just want to be clear that it wouldn't in any sense be a complete one.)


This. I simply cannot even begin to go into the sheer magnitude of the number of ways the fundamental state of a neural simulator changes once you understand that nothing exists monotonically. It's all about the loops, and the interplay between them. So much of our conscious experience is shaped by the fact that at any one time billions upon billions of neural circuits are firing along shared pathways; each internal action fundamentally coloring each emergent perception through the timbre it contributes to the perceptual integration of external stimuli.

It isn't enough to flip switches on and off, and to recognize weights, or even to take a fully formed brain network and simulate it. You have to understand how it developed, what it reacts to, how body shapes mind shapes body, and so on and so forth.

What we're doing now with NNs is mistaking them for the key to making an artificial consciousness, when all we're really playing with is the ML version of one of those TI calculators with the paper roll that accountants and bookkeepers use. They are subunits that may compose together to represent crystallized functional units of expert-system logic; but they are no closer to a self-guided, self-aware entity than a toaster.


Agreed, though continuously monitoring the propagation of signals in vivo would allow us to at least start forming models of temporal or context-specific modulation of connection strengths (which, in the end, is what determines the algorithms of the nervous system, I presume)
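As an illustrative sketch of what "temporal modulation of connection strengths" could mean, here is a toy short-term-depression rule, loosely in the spirit of Tsodyks-Markram synapse models; the function name and all parameter values are invented. Each presynaptic spike consumes part of a recovering synaptic resource, so the effective weight depends on recent firing history rather than being a fixed number:

```python
import math

def depressing_synapse(spike_times, w_max=1.0, tau_rec=0.5, use_frac=0.3):
    """Toy short-term depression: each presynaptic spike uses a fraction of a
    resource that recovers exponentially, so the effective weight at each
    spike depends on how recently the synapse fired."""
    resource = 1.0
    last_t = 0.0
    effective = []
    for t in spike_times:
        # Resource recovers toward 1.0 during the interval since the last spike.
        resource = 1.0 - (1.0 - resource) * math.exp(-(t - last_t) / tau_rec)
        effective.append(w_max * use_frac * resource)
        resource -= use_frac * resource  # this spike depletes the resource
        last_t = t
    return effective

# A rapid burst sees progressively weaker effective weights than slow firing.
burst = depressing_synapse([0.0, 0.05, 0.10, 0.15])
slow = depressing_synapse([0.0, 1.0, 2.0, 3.0])
```

The point of the toy is just that the "connection strength" a static connectome would record is really a trajectory shaped by activity, which is what continuous in-vivo monitoring could begin to pin down.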


It's easy to see if something flies or not. How would you know if your simulation is conscious?


This is, of course, the key problem.

I mean, I know that I'm conscious. Or at least, that's how it occurs for me.

But there's no way to experience another's consciousness. So behavior is all we have. And that's why we have the Turing test. For other people, though, it's mainly because they resemble us.


^ This. The AGI crowd constantly abuses the bird/plane parable.


> we can understand quite a lot about flight and even build a plane without first birthing a bird

Or without fully understanding fluid dynamics and laminar flow. The Wright brothers certainly didn't fully grok them, at least.



