Mind Emulation Foundation (mindemulation.org)
93 points by gk1 on Sept 1, 2020 | hide | past | favorite | 132 comments



"While we are far from understanding how the mind works, most philosophers and scientists agree that your mind is an emergent property of your body. In particular, your body’s connectome. Your connectome is the comprehensive network of neural connections in your brain and nervous system. Today your connectome is biological. "

This is a pretty speculative thesis. It's not at all clear that everything relevant to the mind is found in the connections rather than in the particular biochemical processes of the brain. It's a very reductionist view that drastically underestimates the biological complexity of even individual cells. There's a good book, Wetware: A Computer in Every Living Cell, by Dennis Bray, that goes into detail on how much functionality and how many physical processes are at work even in the simplest cells, all of which is routinely ignored by these analogies of the brain to a digital computer.

There is this extreme, and I would argue unscientific, bias towards treating the mind as something that's recreatable in a digital system, probably because it enables science-fiction speculation and dreams of immortality, of people living in the cloud.


I’ve posed this claim to dozens of neuroscientists. If you consider the connectome just the static connections, then you might be right. If you include the dynamics of the brain (the biochemical processes) as part of the connectome, then most neuroscientists would agree that is sufficient to produce the emergent property of mind. The honest answer is we don’t know yet. That said, it’s likely not necessary to model every atom’s interaction with every other, so there must be a level of abstraction sufficient to emulate a mind. Our foundation is trying to identify the minimal level of abstraction necessary to emulate a mind.


In support of the requirement for high-fidelity (atom-for-atom) modeling is the notion that an evolved computer would converge toward behaviors that supervene on specifics of the host environment. If porting a binary to another CPU architecture is tough, how easy will it be to port a mind to a simulated simple physics? How many edge cases will it have to get right to even run at all? If brains are hacks designed over millions of generations to surf overlapping fitness functions, it makes sense they'd find implementation-dependent (real-physics) optimizations that compound in ways which fall apart in toy physics. That's not to say we can't add cool peripherals.

ps even with atom-for-atom modeling, how do you know the behavior doesn't depend on relations which are not computable? If physics ranges over the reals, some of those edge cases might be hard to find with a simulator.


I can hit myself in the head and I don't lose my train of thought, instantly lose consciousness, or die. If consciousness relied on the precise positions of individual atoms (as far as that makes sense with moving particles) it would be way more fragile than we've observed it to be. The fact that your brain is resilient to being knocked around a bit is evidence towards the underlying mind being at least slightly higher level than where strong quantum effects live and also fairly redundant.


I agree, but I think there is a case to be made that there is important state separate from just which cells connect to which and how strongly, but is also more coarse grained than single atoms floating around.

The cytoskeleton may be found out to have a role to play. The number and locations of ion pumps. Or epigenetic changes in clusters of brain cells.


If you hit your head hard enough all those things will happen though.


On the other hand, the brain is famously warm and wet. There's a limit to how much local state the brain can practically use to compute, given how messy it is.


>If physics ranges over the reals

I thought this was ruled out by https://en.wikipedia.org/wiki/Bekenstein_bound


That's my understanding as well. Quantum states encode finite information (despite their dynamics lying on the reals; the practical consequence is just that state transitions and information flow are smooth).
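A back-of-envelope calculation makes the finite-information point concrete (my own sketch, not part of the comment; the brain figures are rough assumptions):

```python
import math

# Bekenstein bound: I <= 2*pi*R*E / (hbar * c * ln 2) bits, with E = m*c^2.
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.998e8             # speed of light, m/s

def bekenstein_bits(radius_m: float, mass_kg: float) -> float:
    """Upper bound on the information content of a sphere of given radius and mass."""
    energy_j = mass_kg * C ** 2
    return 2 * math.pi * radius_m * energy_j / (HBAR * C * math.log(2))

# Rough human-brain figures: ~1.5 kg, effective radius ~6.7 cm.
print(f"{bekenstein_bits(0.067, 1.5):.1e} bits")  # on the order of 10^42
```

Huge, but finite, which is the point: nothing in known physics requires infinitely many bits to describe a brain-sized region.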

The formal discussion around this is largely centered on the Church-Turing thesis:

https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis

Essentially it states (under a certain interpretation) that any physical process can be simulated by a Turing machine (with no mention of efficiency). This seems to be the case, although I'm not sure we have a convincing proof from quantum field theory yet (note that quantum processes can be simulated on classical Turing machines, albeit at exponential cost).
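To illustrate that exponential cost (my own sketch, not part of the original comment): a brute-force classical simulator stores one complex amplitude per basis state, so memory doubles with each added qubit.

```python
import numpy as np

# Simulating n qubits classically requires 2**n complex amplitudes --
# the source of the exponential cost of classical simulation.
def state_vector_size(n_qubits: int) -> int:
    """Number of complex amplitudes needed to represent n qubits."""
    return 2 ** n_qubits

def apply_hadamard(state: np.ndarray, target: int) -> np.ndarray:
    """Apply a Hadamard gate to one qubit of a full state vector."""
    n = int(np.log2(state.size))
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    # Reshape so the target qubit is its own axis, contract, restore order.
    state = state.reshape([2] * n)
    state = np.tensordot(h, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)
    return state.reshape(-1)

# Memory grows exponentially: 30 qubits already need ~16 GiB of amplitudes.
for n in (10, 20, 30):
    print(n, state_vector_size(n) * 16, "bytes")
```

At around 50 qubits the state vector alone exceeds any existing machine's memory, which is why "simulable in principle" and "simulable in practice" come apart.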


> If porting a binary to another CPU architecture is tough, how easy will it be to port a mind to a simulated simple physics?

It will be very difficult, but we shouldn’t underestimate what will be possible decades from now. As an analogy, consider when the Nintendo Entertainment System (NES) came out in the 1980s. Did anyone ever imagine it could one day be fully emulated in JavaScript in a browser [1]? Certainly not, since those technologies hadn’t been invented yet.

[1] https://jsnes.org/


What's JavaScript? What's a browser?


Hopefully the brain doesn't rely on undefined behavior.


What if we find that the gut, which has 100 million nerve cells, also plays a part in the emergent property of mind?

https://www.sciencemag.org/news/2018/09/your-gut-directly-co...

"In a petri dish, enteroendocrine cells reached out to vagal neurons and formed synaptic connections with each other. The cells even gushed out glutamate, a neurotransmitter involved in smell and taste, which the vagal neurons picked up on within 100 milliseconds—faster than an eyeblink."


That is one reason why I was very careful to name this the Mind Emulation Foundation and not the Brain Emulation Foundation. I also use the word 'body' instead of 'brain' throughout and define a connectome as: the comprehensive network of neural connections in your brain and nervous system.


If that were true, then quadriplegics would have cognitive issues, as would those who have their vagus nerve severed. Those people don't suffer from impaired cognition or drastic personality changes, so we can be sure that the nerves in the gut are not important for brain emulation.

Also human brains have an average of 86 billion neurons, so emulating an extra 100 million cells (0.1%) would be trivial in comparison.



It’s nice to see someone has consciousness all figured out.

But seriously, do you know of any studies that show no changes to mental state or capacity or personality or memory or any of the other things that compose “the mind” in such people?

I can’t believe anyone has done such studies yet. And just because you don’t see changes in such people, does not mean there haven’t been minuscule but measurable changes.


I feel like it just isn't that interesting if there are "minuscule but measurable changes" beyond a platonic ideal of self. Removing my minor back pain, or if I stopped drinking coffee, or a hundred other things, would have larger-than-minuscule effects on my personality, but it's still "me".


Good point :)


86 billion neurons and ~1 trillion other cells, which interact with those neurons in non-trivial ways.


People who have large parts of their gut removed surgically don't lose their mind.


"once you have a comprehensive human connectome, there is still the challenge of emulating it digitally"

What would you do if Christof Koch is right and consciousness cannot be computed?


His hypothesis, if I understand it correctly, is that consciousness is a core part of existence, so you would need to bring hardware into the world that is not only able to simulate the whole of reality but also to exist in it, as a property of physical reality and its composition. That dude is a physicist who believes in one single Universe, which probably also applies to most neuroscientists.

I'm not completely alien to the thought that someday we will get a digital companion that would be able to build a simulation of you in silico, but that simulation wouldn't be you.

This foundation sounds like complete science fiction. MRI, really? We already have a destructive approach in some countries; it's called euthanasia.


I hear the term 'emergent property' bandied about in relation to the mind as if using it somehow explains anything. It says nothing more than that the mind exists, somehow, yet we have no clue about its nature.

Scientists and philosophers agreeing on something means nothing as they have agreed on utter bunk before. The short of it is that we know little about the mind and have no idea how to even start expanding on the little we know.


> Scientists and philosophers agreeing on something means nothing as they have agreed on utter bunk before.

That is an extremely cynical position to take; one could use it to dismiss anything, even things obviously true. For instance, "I don't believe the Sun is powered by fusion -- nobody has ever gone there to take a sample. Sure, they claim to have all sorts of indirect evidence, and there is 99.9999% consensus on it, but scientists have backed things before that were utter bunk."

> It says nothing more than mind exists,

It makes a much stronger claim than that. There are many people who believe that consciousness exists outside the mind, and the brain is a kind of consciousness receiver, akin to a radio receiver, that picks up the signal and relays it to the body. The claim of emergent behavior is an explicit rejection of that mystical explanation, which is compelling to many people.

> The short of it is that we know little about the mind and have no idea how to even start expanding on the little we know.

This is entirely at odds with reality. Is it your position that brain researchers haven't learned anything over the past 10, 20, 30 years? Clearly they have, so obviously they do have ideas about how to expand on that knowledge.


"Emergent" is a description of its nature. An alternative is "fundamental property". If you study an emergent property, you don't need to find its structure as a fundamental object, and this sets the general direction of research.


I think we may be aligned in thinking that the evolution of the complexity within our cells is a greater marvel than the evolution of complex intelligence building on the base of the eukaryotic cell.

But perhaps we don't need to model cells, we could use real cells for that. If the connectome is the fragile short lived structure, we could model that in software, with the cellular level modelled by a vast array of vivisected tardigrades.

Ok probably not tardigrades, probably small replaceable batteries of human neural tissue.

But I do like that phrase about future humans being 'bootstrapped from tardigrades'.


Your post started like legitimate, well-thought-out criticism based on science (even if a bit cherry-picked, assuming that the author meant only connections between the cells, when they could also have meant the various physical connections between different parts, not only electrical impulses between whole neurons).

But then it started throwing unsubstantiated claims in there along with the legitimate criticisms.

How does

> unscientific bias towards treating the mind as something that's recreatable in a digital system

follow from

> It's not at all clear that everything relevant to the mind is found in the connections rather than the particular biochemical processes of the brain.

Even if consciousness arises in a more profound way from the biochemical processes inside cells than from electrical connections between the cells, what is stopping us from recreating those biochemical processes in a digital system as well? You know there are simulation equations for chemical reactions and complex biochemical systems, right? If this is found to be useful, they will just be emulated the same way the DL neurons are simplified and emulated now.
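To make that concrete, here is a minimal sketch of simulating a biochemical process digitally: mass-action kinetics for A + B -> C, integrated with forward Euler. (My own toy example; real work uses far more sophisticated solvers, but the principle, biochemistry as computable dynamics, is the same.)

```python
# Toy mass-action kinetics: d[C]/dt = k*[A]*[B], with A and B consumed
# at the same rate. Forward Euler integration; all constants illustrative.
def simulate(a: float, b: float, c: float, k: float,
             dt: float = 1e-3, steps: int = 10_000):
    """Integrate the reaction A + B -> C and return final concentrations."""
    for _ in range(steps):
        rate = k * a * b
        a -= rate * dt
        b -= rate * dt
        c += rate * dt
    return a, b, c

a, b, c = simulate(a=1.0, b=1.0, c=0.0, k=1.0)
print(f"[A]={a:.3f} [B]={b:.3f} [C]={c:.3f}")
```

With equal starting concentrations this reduces to da/dt = -k*a^2, whose analytic solution a(t) = a0/(1 + a0*k*t) the numerical result tracks closely; the question is not whether such processes are simulable, but at what fidelity and cost.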


> It's not at all clear that everything relevant to the mind is found in the connections rather than the particular biochemical processes

I wouldn't expect anyone to consider the connectome to exclude the processes inside each of the individual neurons that are connected. I take this to mean just that the mind is an emergent property of the collection working in concert. After all, everything is just connections all the way down, even deep inside individual cells.


> everything is just connections all the way down

Is that true? Could the "biochemical processes" also include relationships between cells?


Correct, in order to emulate a mind we might need to include that as well so that is in scope.


Indeed. We humans largely create devices that function either through calculation or through physical reaction, relying on the underlying rules of the universe to "do the math" of, say, launching a cannonball and having it follow a consistent arc. The brain combines both at almost every level. It may be fundamentally impossible to emulate a human personality equal to a real one without a physics simulation of a human brain and its chemistry.

A dragonfly brain takes the input from thirty thousand visual receptor cells and uses it to track prey movement using only sixteen neurons. Could we do the same using an equal volume of transistors?


No one is saying a neuron is a one to one equivalent with a transistor. That behavior does seem like it's possible to emulate with many transistors, however.
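As a hedged sketch of what such emulation looks like at the lowest level, here is the classic leaky integrate-and-fire model, the standard abstraction used when mapping neurons onto digital logic. All constants are illustrative, not dragonfly measurements:

```python
# Leaky integrate-and-fire neuron: membrane potential leaks toward zero,
# integrates input current, and emits a spike on crossing a threshold.
# Parameters are illustrative placeholders, not biological measurements.
def lif_spikes(inputs, tau=10.0, threshold=1.0, dt=1.0):
    """Return the time steps at which the neuron spikes."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(inputs):
        v += dt * (-v / tau + i_in)   # leak plus input integration
        if v >= threshold:            # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

# Constant drive produces regular, periodic spiking.
print(lif_spikes([0.15] * 50))
```

A model this coarse already reproduces useful spiking behavior; the open question in the thread is how much more detail (ion pumps, cytoskeleton, epigenetics) a faithful emulation would have to add.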


Was just talking about quantum cognition and memristors (in context to GIT) a few days ago: https://news.ycombinator.com/item?id=24317768

Quantum cognition: https://en.wikipedia.org/wiki/Quantum_cognition

Memristor: https://en.wikipedia.org/wiki/Memristor

It may yet be possible to sufficiently functionally emulate the mind with (orders of magnitude more) transistors. Though, is it necessary to emulate e.g. autonomic functions? Do we consider the immune system to be part of the mind (and gut)?

Perhaps there's something like an amplituhedron, or some happenstance correspondence, that will enable more efficient simulation of quantum systems on classical silicon, pending orders-of-magnitude increases in coherence and reductions in error rate in whichever computation medium.

For abstract formalisms (which do incorporate transistors as a computation medium sufficient for certain tasks), is there a more comprehensive set than Constructor Theory?

Constructor theory: https://en.wikipedia.org/wiki/Constructor_theory

Amplituhedron: https://en.wikipedia.org/wiki/Amplituhedron

What is the universe using our brains to compute? Is abstract reasoning even necessary for this job?

Something worth emulating: Critical reasoning. https://en.wikipedia.org/wiki/Critical_reasoning


Nor did I. I asked if we could do the same function with an equal volume. Moore's law is dead. We're not going to scale performance forever. What good is an emulated human brain if it's the size of a building and takes a power plant to operate?


> There is this extreme, and I would argue unscientific bias towards treating the mind as something that's recreatable in a digital system probably because it enables this science-fiction speculation and dreams of immortality of people living in the cloud.

I would argue differently: perhaps it's your point of view that's unscientific (or at least I don't see any scientific path to concluding that mind emulation is impossible!).

Say there are complex processes going on inside each neuronal cell. It should not be impossible to simulate those as well, with arbitrary fidelity.

I think what is practically questionable is mind uploading: we don't know if reading all this information is feasible, and it is even less likely that reading it non-destructively would be possible. If it is feasible, then there's the question of cost.

I don't see why with a large enough computer we couldn't emulate a mind (or any other existing system). It is possible a quantum computer may be necessary, but there's still little evidence of entangled processes in the brain (indeed we strongly expect no large-scale entanglement due to high temperatures).

I think the possibility of mind emulation, and its impact on our society, is quite radical and difficult to grasp. The rights of individuals; all the possibilities, like making copies, backups, modifications, pausing a simulation, running an emulated mind faster than real minds, expanding memory, and much more. This all goes completely outside usual human experience (living out your consciousness, with occasional pauses for sleep and sedation, until you die and it stops). I predict it'll take a very long time until we come to grips with all the consequences.

It should be noted that Agent-AGI (AGI that acts like a person) will almost certainly come before attempts to emulate a human mind exactly. AGI has its own very interesting ethical questions. I think the main point we need to start internalizing is that those beings will be conscious like us, so we need to give them rights, take care with their experience, and make sure they aren't exploited (by, say, simulating many AGI individuals in terrible conditions). Again, there is much outside of human experience, because they will have much more freedom. An AGI doesn't necessarily need to feel pain. Any real system might be able to override pain signals if convenient (something we can't do); but still, if it has motivators, I think it will be able to "suffer" (when things it cares about go badly).

One of the ways we will tackle these questions is through fiction. It's been ongoing: from what I know, at least since William Gibson's Neuromancer there have been simulated minds in fiction (and robots existed long before, although they are not always treated as equals, nor is the possibility of different cognitive natures always explored).


My favorite books about mind emulation:

* "Fall, Or Dodge in Hell" by Neal Stephenson [1]

* "Permutation City" by Greg Egan [2]

[1] https://www.amazon.com/dp/B07PXM4VMD/ref=dp-kindle-redirect?...

[2] https://www.amazon.com/Permutation-City-Greg-Egan-ebook/dp/B...



Thumbs up for "The Age of Em". A nice example of how to assume a technology and work out economic consequences. Hanson goes into some good detail and includes physical constraints, emergent properties like relative time elapsed for simulated minds when transferring between locations and such.


Yes, "Fall" was really great! It was after reading it that I decided I didn't want to emulate my mind anymore. :)


I really loved Permutation City, and loved the beginning of Fall, but man it felt like Fall just turned into a really hard slog after a certain point. I have definitely enjoyed some of Neal Stephenson's other novels too, even slower-paced ones like Anathem, but for some reason Fall just didn't do it for me.


I agree that the middle of the novel was dragging, but after a while, the story was picking up speed again for me.

Stephenson's early novels suffered from weak endings, IMHO. His newer ones (Fall, Seveneves) suffer from bloated midsections.


And for games, SOMA is probably the best example.


(I’m the founder & CEO of the Mind Emulation Foundation)

Flattered to see this hit the front page! This is a project I’ve been passionate about for a while and been keeping it mostly under the radar.

Now that it’s public, I’m happy to answer any questions the HN community has.


My feedback: if I were to donate, I would like to see a clear roadmap for even getting into the ballpark of doing this.

I have donated to SENS for years, and particularly when they started, they didn't really say exactly "donate to cure aging", they said "these are the important known problems that we need to spend time/money on to make progress here."

I believe there are multiple large known problems doing brain emulation. What are they? How will donating to you progress those?


How do you think SENS is doing? Sorry to go off topic.

I like them, and they give me hope, but I am not sure where they are.


Not really a question, more of a remark: both the destructive and non-destructive approaches result in the same thing: a copy of 'you', not actually 'you'. (The same is true when you go to sleep, of course. Whoever wakes up isn't you either)

What you would need is a Ship of Theseus approach - preserving the consciousness stream while neuron after neuron is being replaced by a digital version, slowly, to convince the stream into thinking it's still the same. You can't just take a single scan; you have to keep scanning continuously and keep a running feedback stream between the (decreasing) wetbrain and the (increasing) bitbrain to ensure the illusion of continuity.

(But to be honest, I don't actually believe in consciousness or a Self. Whoever started writing this short comment wasn't 'me', and neither am I, nothing is)


The gradual-replacement approach (as you describe it) is a useful thought experiment that gets people more comfortable with the possibility of mind emulation, but the end state is the same for both approaches. One just intuitively feels more "you" than the other; as you said, though, it's likely all an illusion, so doing it while you "sleep" is probably enough to make you sufficiently happy to be alive when you wake up non-biologically.


This would be like Bicentennial Man in reverse. Call me a skeptic, but it sounds like a good foundation for a new novel and not like a good basis for a Foundation.

https://en.wikipedia.org/wiki/The_Bicentennial_Man


With the gradual replacement scenario it might be easier to detect any fundamental differences between start and end states. Like if the end state is actually a p-zombie. But I haven't really thought this through.


You seem to be conflicted on what you believe on many of these topics. Which is understandable for such a profound question.

There is a very good game called SOMA that explores many of these topics; it may help you explore your own beliefs and subconscious reactions to the questions of instantiation and your own relation to the various modes and levels of existence of consciousness and self. I highly recommend playing it if you have time and haven't already.


Can you be clearer about what’s going on behind the scenes here? Additionally, are you aware of the humanityplus and ##hplusroadmap communities?


Do you have any affiliation or contact with https://www.brainpreservation.org/ ?


> This digital emulation could then interact with the rest of the world using a simple text interface or a more sophisticated robotic avatar with digital representations of taste, sight, touch, smell, and sound.

I found this to be the most intriguing bit of the described process. I can digest the (theoretical) idea of reproducing the mind, mostly agreeing with a materialistic perspective on it. What I got myself imagining was the process of adaptation this mind would have to go through to become capable of interacting with the new kinds of inputs and outputs it would have. Imagine you freeze a computer while it is running a VR game at max resolution settings. You then go on and move the machine to a different setup, with a CRT monitor and a keyboard to interact with it. How can any meaningful interaction happen in such a context? Unless you provide a virtual environment to interact with and allow it to adapt... But then, what would such an environment look like to a copy of a mind? Would you have any insights on that?


I am always amused by this kind of approach to immortality. While the copy of me that is reborn would appreciate my preparedness, that doesn't make this copy, the one writing now, any less unhappy about dying.


When you wake up in the morning billions of your cells have changed from the night before. Are you any less you?

It is possible one day you will go to sleep biologically and wake up non-biologically. It’s just a matter of sufficiently emulating the processes that were present when you were the biological you.


I have meditated enough to be unconvinced that even the me that goes to sleep is the same guy who woke up that morning.

I try to be good to the guy who wakes up with my memories and body the next morning. That makes me no less unhappy about dying.


I'd even go so far as to say that the me from 10 minutes ago isn't the same me as now. Consciousness isn't continuous, it arises and ceases. It's our memory that gives us the illusion of continuity. And this is what makes me absolutely unconcerned about dying. From what I read, this is what the Gautama Buddha meant with Anatta[1].

[1] https://en.wikipedia.org/wiki/Anatta


From an Islamic religious perspective, the soul is taken out of the body when you sleep and then put back into it when you wake up. Much the same way as when you die.

So there you have it :)



That is a refreshing perspective.


What if you and the non-biological copy both wake up? Which one is "you"? I'm obviously going to care more about the one that I appear to actually be, and won't be happy if my copy decides to kill me.

The idea of a continuous self is probably an illusion, but if your body dies that illusion does too. Your copy just lives its own version of that illusion.


Sure, there is plenty of churn with some parts of the body.

But it is not as if the brain destroys and creates brain cells at anywhere close to that rate. The neurons and the connections between them are quite long-lasting. I'm going to assume that's the "me" in here, until proven otherwise.


For many people it's not the body dying that's scary; it's the end of consciousness while a copy of your consciousness lives on. But I believe it's possible to make the transition seamless, without a break from conscious thought.

The process would probably involve disconnecting parts of the brain while simultaneously activating the emulated versions. It might even be possible to do while awake, but not for a long time.


Recent work in philosophy has called into question the notion that the mind/cognition/identity is entirely independent from the body.

https://plato.stanford.edu/entries/embodied-cognition/

Personally I think Western culture in particular has neglected the physical aspects of existence. The idea that our bodies are simply vessels for our minds seems more the result of cultural neglect than anything.


We are not claiming they are entirely independent, quite the contrary. The mind is an emergent property of the body. Just like music is an emergent property of sound waves.


But if your mind and self is formed by and dependent on your body, how can you transfer it to a bodyless existence without losing that self?

The mind as an emergent property of the body is also not an established philosophical or scientific fact, and is quite dependent on positivism, which has plenty of issues.

It seems to me that at best, you’re creating a surface-level copy, but one inherently limited to contemporary scientific knowledge. Not to say that this isn’t interesting or useful, but it’s certainly not the same ‘self.’


The notion of self is an illusion the mind creates. (There are many benefits to doing so, not least of which is self-preservation, which is helpful for producing progeny and is therefore selected for during natural selection.)

To answer your question directly: one can transfer an emergent property of a system to another system by sufficiently transferring the mechanisms that produce that emergent property in the first place. A good analogy would be emulating the hardware of the Nintendo Entertainment System (NES) entirely in software [1].

[1] https://jsnes.org/


The self is far more complex than the simplistic positivist notion of it. And again, this only works if you assume that, at the time of the mind's creation, your knowledge is complete. That seems fairly ignorant considering the history of science, not to mention the inherent limitations of empirical knowledge.

The NES example is not really a good one because it’s a created object and knowledge of it is complete, therefore replicating it is possible.

Even then, assuming all of this didn't matter, I still don't see how the mind maintains itself in a new body. It's not as if a human mind is a static entity; it constantly comes into contact with the world through its embodied form, and this reinforces and extends its notion of identity. Assuming you could emulate it on a computer, it would seem logical that the mind would change to adapt to its new body, thus no longer being the same self.

Ultimately any “transfers” will actually just be the creation of new minds, which IMO is more interesting anyway.


> The NES example is not really a good one because it’s a created object and knowledge of it is complete, therefore replicating it is possible.

One could replicate the NES hardware without any knowledge of how it was built by reverse engineering it.

> Ultimately any “transfers” will actually just be the creation of new minds, which IMO is more interesting anyway.

I actually agree but in the same way that you have a "new mind" when you wake up in the morning. The continuity of self is an illusion. "I" would be very happy to one day wake up having been transferred into a non-biological system the same way that "I" would be very happy to wake up tomorrow.


>One could replicate the NES hardware without any knowledge of how it was built by reverse engineering it

Not sure one could do that if that person is not very familiar with similar projects. Give the NES to a scientist from 1800 and tell me what they could conclude.


Sure but the mind is reinforced by its constant embodiment. If you woke up in another body, or with no body, then that identity would seem difficult to maintain.


"The self" is just another way of saying "the unified stream of experience". It is not an illusion, just a fact. No one has ever experienced two consciousnesses. The term "the self" is simply a way of stating that fact.


Who is the one who is experiencing 'one consciousness'?


That's the whole mystery right there isn't it? No one has ever had a solid explanation for the apparent unity of the stream of consciousness. I suspect it has something to do with the composition of oscillatory patterns, similar to a sound wave being a combination of many individual sounds which get combined through wave interference into a unified pressure wave. But that leaves many questions unanswered. Better to be puzzled by this than assume we should know the answer right now. We just don't.
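The sound-wave analogy can be made literal in a few lines (illustrative frequencies, my own example, not part of the comment):

```python
import math

# Two pure tones superpose pointwise into one pressure waveform --
# the combination-by-interference analogy for a unified stream.
def tone(freq_hz: float, t: float) -> float:
    return math.sin(2 * math.pi * freq_hz * t)

def combined(t: float) -> float:
    # A 440 Hz tone and a 660 Hz tone (a perfect fifth) add linearly.
    return tone(440, t) + tone(660, t)

# Sample the single unified waveform at CD rate: neither component is
# destroyed, yet a microphone receives only one pressure signal.
waveform = [combined(n / 44_100) for n in range(100)]
print(f"peak amplitude: {max(abs(x) for x in waveform):.3f}")
```

Whether oscillatory composition in the brain works anything like linear superposition is, of course, exactly the open question.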


Thank you for saying this. It's always so strange to me that there are people obsessed with prolonging the inevitable, as if immortality itself could be an end-state.


All of life is prolonging the inevitable. This would seem to be an argument for suicide.


> Personally I think Western culture in particular has neglected the physical aspects of existence.

Do you mean eastern?

The western culture is the one that has a lot of focus on the physical aspects and comforts of life. Arguably, many times too much focus on that which has been detrimental for many people's own psychological and spiritual well-being.


By physical I meant bodily, as in, connected with the body. Our predominant mental model seems to be that our mind/head is the real person, while the body is just sort of an add-on. The intellectual is superior to the bodily; developing the mind while neglecting the body is completely acceptable (see: obesity statistics). And so on. This is mostly just a consequence of a society over-reliant on technology, I'd say.


Recent? Buddha taught all this 2500 years ago


Once you become immortal, someone can put you in hell


I think there is a fundamental paradox at the root of consciousness and thought: once we succeed in alleviating suffering by discarding the body, we are already in hell. What is the purpose of a mind without a body? To witness? To what ends? There's a reason that the lower brain is at the seat of the throne. Maybe we should focus more on cooperating and less on becoming a purposeless husk. If the string is too tight it breaks, if it is too loose it will not play.


If you can emulate a brain, you can probably emulate a body.


But to emulate the body, you must emulate the environment that shaped it, so now we're talking microverse creation. You could at best emulate the full set of current functions, but without the environment that model would be subject to incompleteness, entropy. You could perhaps watch for change, using the human corpus as input; all set conclusions seem to lead back to AI as human local intelligence orchestrator, rather than AI as emulation of human thought, maybe the two becoming one over time.


> But to emulate the body, you must emulate the environment that shaped it, so now we're talking microverse creation.

Yes, this is called "virtual reality" and "physics engine". The brain does not need its environments to be 100% indistinguishable from its ancestral environment; if it did, we could never move away from where we were born.


>>"[..]What is the purpose of a mind without a body?[..]"

What's the purpose of a mind with a body?


To help the body survive


Yep, this kind of terrifies me. The human mind has an incredible capacity for suffering. An emulated mind might have many orders of magnitude higher capacity for suffering, on top of being effectively immortal. At least the human mind will die after 80 or 90 years or so.


The best treatment I've seen bridging mind and computation has been by Joscha Bach. I recommend watching his talk on Computational Meta-Psychology [1]. His 3-hour conversation with Lex Fridman is a whole other amazing rabbit hole [2].

[1] https://www.youtube.com/watch?v=WRdJCFEqFTU

[2] https://www.youtube.com/watch?v=P-2P3MSZrBM


I'm not convinced.

1. We still don't know what memory engrams truly are ( https://en.m.wikipedia.org/wiki/Engram_(neuropsychology) ). I once read that, aside from relying on the interconnections of neurons, they also rely on specific proteins created during memory formation, which are then vital for memory recollection.

2. We know that the connections between neurons are important, but we have only recently realized that the support cells (glia) also affect the firing mechanism: they are not only support but a filter as well ( https://en.m.wikipedia.org/wiki/Glia )

3. Inhibitory interneurons provide a way for synchronous firing of neurons to form a learning experience: https://www.nature.com/articles/s41467-019-13170-w

All in all, to replicate the brain's functionality we would need to fully replicate the chemical composition of the brain down to the lowest level (molecules).

I'm not holding my breath.


> They are then vital for memory recollection.

Even if specific proteins are needed for memory creation (which is debatable [1]) it doesn't mean you need to model those proteins to retrieve the stored memories from the structures that they created. You can read data from a hard drive without modeling the CPU or memory bus of the computer that stored the data.

> support cells (glia)

Glia cells are the order of 40-50 microns [2] and can easily be seen with an electron microscope. In fact, they are present in the Lichtman paper [3] linked from the Mind Emulation website.

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2745628/

[2] https://psych.athabascau.ca/html/Psych402/Biotutorials/4/par...

[3] https://www.cell.com/cell/fulltext/S0092-8674(15)00824-7


Glia cells can be seen, sure. But to my knowledge we still don't understand their impact.

Anyway, my point is that even if you are able to recreate the structure you would need to replicate the functionality.

A better analogy would be that even if you have the data on the hard drive, you would need a special program that knows how to interpret it. In this case the data also contains a time component (exactly when the data applies) which is not captured within the structure unless you go down to the molecular level.


Memory engrams are thought to be DNA methylation patterns in a neuron's DNA.

http://learnmem.cshlp.org/content/23/10/587.full

https://www.nature.com/articles/s41467-020-14498-4


Wow, have not read this before. Thanks for sharing!


So the C. elegans connectome was done in 1986, and we still haven't made a fully functional model of the C. elegans brain. I'm not sure that this bottom-up approach (synapses -> model -> behavior) will work better than a top-down approach that has been making a lot of progress in AI (behavior -> model)


OpenWorm [1] has made a lot of progress toward building a fully functional model, and when it simulates a worm's behavior it is almost indistinguishable from the real thing. Here is a video they produced in 2013 of C. elegans moving: https://www.youtube.com/watch?v=SaovWiZJUWY

[1] http://openworm.org/


It's modeling the firing of neurons contracting muscles. C. elegans is capable of learning about its environment, looking for food, finding a mate, and so forth.

It's only 1mm long and has 300 neurons.

You're going to scale to 86 billion neurons by 2084?

I hate to call shenanigans, but the idea that you will have made any real progress on a mammal of any size in 60 years is not realistic. If "Life is Precious" (as stated in your pitch) why not spend the money on things that are possible for humans that are alive?


Focussing on synaptic connections seems rather simplistic. For full "emulation" they would probably need to emulate the electric fields and neurotransmitter concentrations, otherwise just-the-spikes simulation will probably capture only a small percentage of brain dynamics.


You're probably right, at least in terms of neurotransmitter concentration, but I doubt that synaptic connections are only a 'small percentage' of brain dynamics.


This is so far ahead of the curve of reality that it's easy to dismiss. BUT at the least it's possible that it could lead to some interesting basic research being done. Hopefully that, rather than misleading rich marks and separating them from their cash, is the real goal.


Thank you for your optimism. The goal is to fund and conduct basic research.


That plot half way down the page where they fit a straight line through 2 points and predict to be able to map human brains by 2084 made me laugh.


The y-axis is logarithmic, so the straight line actually represents exponential improvement. That's a fair upper-bound assumption, given that the decline in the cost to map a human genome was better than exponential.
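For what it's worth, this is all the two-point extrapolation amounts to: fit a straight line through log10(cost) vs. year and solve for the year at which cost hits a target. A minimal sketch with made-up cost figures (not the actual numbers from the plot):

```python
import math

# Hypothetical (year, cost-in-dollars) data points -- illustrative only,
# not the actual figures from the Mind Emulation plot.
points = [(2000, 1e12), (2020, 1e9)]

# Fit a straight line through log10(cost) vs. year: log10(c) = m*y + b
(y1, c1), (y2, c2) = points
m = (math.log10(c2) - math.log10(c1)) / (y2 - y1)
b = math.log10(c1) - m * y1

# Solve for the year at which cost falls to a target, e.g. $1000
target = 1e3
year = (math.log10(target) - b) / m
print(round(year))  # with these made-up numbers: 2060
```

With only two points the fit is exact by construction, which is exactly why extrapolating it 2x outside the observed range yields such a deceptively precise year.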


I get that. It's just that for an excel-trapolation 2x outside the observed range based on only 2 data points, 2084 is a strangely precise estimate. I find it hard to take this seriously.


Pull requests welcome. :)


> the body will be partitioned such that it could be scanned with an electron microscope

"Partitioning" the brain will destroy some of the nanometer-scale tissue as it is sliced.


This could effectively lead to the creation of heaven, an afterlife that would eliminate the notion of death. I never quite understood why the entire human species, once made aware of the non-zero probability of this working, has not diverted most of its global energy, time, and resources toward this effort. I can only imagine it is because most people have no capacity to really imagine, or outright refuse to ever imagine, death. In a way I do envy them.


How can you be sure that if we could eliminate death, we would also be able to make life so great that our mind would find it worthwhile to keep living century after century? In other words, if we enabled people's minds to continue forever, how do we know it would be heaven and not hell?


I am not at all sure I want my mind copied.

It is very hard to destroy or even contain all copies of information. Thus, I find it very plausible that if a copy of my mind is created, at least one copy will be effectively enslaved or tortured.

Possibly more copies would find themselves in heaven than in hell, but how we weight that tradeoff is subjective, and I'm a bit of a pessimist so I would prefer not to take the risk, thanks. (Especially since as a Christian I believe my original mind is likely to reach God's heaven.)


You're basically arguing a techno-religious Pascal's Wager, so my contra-argument is the same: if the nonzero probability is a 0.1% chance, would you really risk your one and only life dedicated to something that has a 99.9% chance of not happening, whether it be real or techno-heaven?


You're implicitly assuming that accepting the Wager means "losing" your one and only life. OK, it depends on your utility function, but most religious and techno Wagers don't require immediate martyrdom.


Mathematically that makes sense if the life expectancy of an upload is greater than 100 000 years.
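The arithmetic behind this is a plain expected-value calculation; a sketch using the grandparent's hypothetical 0.1% figure:

```python
p_success = 0.001            # the grandparent's 0.1% chance the wager pays off
upload_lifespan = 100_000    # years of life if the upload works
baseline_lifespan = 80       # roughly one biological lifetime

# Expected years gained from taking the wager
expected_gain = p_success * upload_lifespan
print(expected_gain)                       # 100.0
print(expected_gain > baseline_lifespan)   # True: the wager "makes sense"
```

At a 0.1% chance of success, an upload lifespan over 100,000 years is exactly the break-even point against forfeiting a full biological lifetime, which is the parent's point.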


I've been wondering whether this is possible and/or feasible. We don't know whether the “brain in the vat” would retain its original personality, which seems unlikely given that it has to live in a foreign artificial environment. The mind itself could become depressive and unwilling to tick, because why would it? Just so it exists while not really existing physically?


Reading this reminds me of a great Cory Doctorow / Charlie Stross book - "The Rapture of the Nerds" (https://craphound.com/category/rotn/)


My hesitation with mind emulation is not so much with the technical side; I think it’s fairly clear that we’ll get there.

However, the question of responsible stewardship looms large, and is rarely addressed. With whom am I entrusting my mind? How can I be sure that such stewardship won’t be transferred to another party at some point? Who’s to guarantee that my mind won’t be installed into an eternal torment sim?

The stewardship questions have always bothered the hell out of me, and the lack of convincing answers has always led me to avoid buying completely into the preservation of my body for future scanning and uploading into a sim.



How does having the connectome get you the mind? Don't you need the weights in the neurons? I thought I also read that neurons are not discrete units, that there might be weights within the dendrites/axons.


The connectome includes the weights.


Do you have a reference somewhere that says that? The Wikipedia entry for connectomics does not imply that and I've not read of anything that can get the weights out of neurons.


Connectome: How the Brain's Wiring Makes Us Who We Are [1]

[1] https://www.amazon.com/Connectome-How-Brains-Wiring-Makes/dp...


That is because there are no weights to begin with. In actual biological neurons, synapses perform a rather complicated biochemical dance involving vesicles, ion channels, and second messengers whenever an action potential arrives. Some of that can be described by simple ODEs, which might contain a constant you could call a “weight”, but the actual physical synapses know nothing about that.


At this point, I wonder if it would make sense to establish a Time Travel Foundation, to ensure that such dangerous technology does not fall into the wrong hands once it is, inevitably, created.


I'm not as critical of these goals as some, but not even mentioning another dimension of complexity beyond connections (electrophysiological firing patterns) is quite an oversimplification.


With a sufficiently robust representation of the connectome including its spatial orientation one can also emulate the electrophysiology.



Get ready for the pseudo science terms to get investor money


Imagine the torture of being the result of an early "success" in such an experiment. There's a real pickle of an ethical dilemma here.


If you haven't, you should check out the book Fall, or Dodge In Hell by Neal Stephenson. It explores this concept very deeply. (Sometimes far too deeply.)


>a reasonable estimate for the first human brain being mapped would in 2084.

:(


Looks like a bunch of kids have watched too much Westworld.


Any major difference from the Carboncopies Foundation?


is this troll content?



