First time seeing a website checking for private mode:
> Hello,
> We noticed you're browsing in private or incognito mode.
> To continue reading this article, please exit incognito mode or log in.
> Why we made this change
> Visitors are allowed 3 free articles per month (without a subscription), and private browsing prevents us from counting how many stories you've read.
> We hope you understand, and consider subscribing for unlimited online access.
They're probably only using query params at that endpoint for referral links. There are a couple of high-profile subscription news sites that let you read as much as you want as long as your Referer header says "facebook.com".
My guess is that they feature-detect, using disabled features (e.g. localStorage throwing error 22 in Safari's private mode) to find out whether your browser looks like one in private mode.
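A minimal sketch of that kind of feature detection, assuming the old-Safari behavior where `localStorage.setItem` throws a quota error (DOMException code 22) in private sessions; the interface and function names here are illustrative, not any site's actual code:

```typescript
// Hedged sketch: paywall-style private-mode sniffing via storage behavior.
// In older Safari private sessions, localStorage exists but any setItem()
// call throws a QuotaExceededError (code 22). Writing and then cleaning up
// a probe key distinguishes that from a normal session.

interface StorageLike {
  setItem(key: string, value: string): void;
  removeItem(key: string): void;
}

function looksLikePrivateMode(storage: StorageLike | null): boolean {
  if (storage === null) {
    return true; // storage disabled outright: treat as private/blocked
  }
  try {
    storage.setItem("__pm_probe__", "1");
    storage.removeItem("__pm_probe__"); // clean up the probe key
    return false; // write succeeded: probably a normal session
  } catch (e) {
    return true; // quota error on write: looks like private mode
  }
}
```

In a browser you'd call `looksLikePrivateMode(window.localStorage)`; newer browsers have closed this particular tell, so real sites presumably combine several such probes.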
My understanding is that things actually went the other way. Private browsing came first, then the same approach was used to create more labels than just "private" and not "private". I think more features may be turned off in private browsing mode though.
Brain uploading is a misleading description of what nectome is proposing. We don’t know enough about what needs to be preserved yet to accurately save a snapshot of a brain’s “state”. It’s not theoretically impossible, just much further than this company’s marketing implies.
I, for one, wouldn't trust that their preservation solution is sufficient for reconstruction of anything even approximating consciousness without a successful bidirectional demonstration. Far too high a risk that what they'd be able to deliver right now is a very smelly paperweight that used to be someone's brain.
yeah, the idea here is that it might be sufficient, much like cryogenics. their marketing is probably irresponsible, depending on your perspective, but then again you're dead anyway so any shot at a future may be worth it.
unlike cryogenics it will definitely result in a lot of useful and interesting data about the brain, whether it works or not.
Undoubtedly true, though good luck passing this by an IRB. The goals of preserving brain state at time of death and learning about the brain from a science perspective are not fully aligned. There's a dot product there, but enough orthogonality that it's duplicitous to sell people a brain preservation service if the true goal is science.
I don't believe we could ever see any kind of system 'approximating consciousness' without a body. Full stop. Dualism is wrong and there is no mind/body split. The full state of the brain, simulated perfectly, would perfectly reproduce the experience of a brain in total sensory deprivation paired with total whole-body paralysis. Even just sensory deprivation alone is enough to cause consciousness to entirely evaporate in moments. Add in whole body paralysis and no, you can never hope to have anything approaching 'consciousness.'
It's important to understand first of all that Nectome is not currently preserving brains in a clinical setting. We are still very firmly in the research phase of building a brain banking technique sufficiently powerful to preserve memories. (see https://nectome.com for more details). However, I envision that a validated preservation technique would preserve the entire body, including the peripheral nervous system.
> I don't believe we could ever see any kind of system 'approximating consciousness' without a body.
For the sake of argument, I'm assuming you're talking about a physical, tangible body.
> Dualism is wrong and there is no mind/body split.
Not really seeing any argumentation to back this claim up. Particularly important for a debate that has been ongoing for some centuries now...
> The full state of the brain, simulated perfectly, would perfectly reproduce the experience of a brain in total sensory deprivation paired with total whole-body paralysis.
This segment of your post makes me wonder if you are fully comprehending the end-game proposed. I don't think it would be unfair to say that anyone dreaming of a future down this path intends for these "brains in vats" to have ways to sense/communicate.
> Even just sensory deprivation alone is enough to cause consciousness to entirely evaporate in moments. Add in whole body paralysis and no, you can never hope to have anything approaching 'consciousness.'
Here is where I find a particular loss of argumentation. What evidence do you have to support your claim that without sensing things, consciousness disappears? Don't some groups of monks utilize sensory deprivation to attain "higher states of consciousness"? I'm not arguing that they are truly achieving higher states but rather that the technique itself has a connotation of "consciousness-growing" rather than "consciousness-receding".
I'm reminded of the 'hybrot' experiments where rat brain cells in a nutrient solution are allowed to wire up their own neural network on top of electrodes that send and receive data wirelessly from a robot body [https://www.technologyreview.com/s/401756/rat-brained-robot/]
The question of doing something similar with a substrate running a human brain doesn't seem impossible, just extremely technologically daunting when one considers the sheer number of connections into and out of the physical brain.
Even if the technology to emulate a particular brain based on a preserved copy could exist, it seems to be asking a lot for the engineers to also emulate the specific nervous system that brain has grown to work with without a copy of it...
Eh, you can guess for a lot of it, and you at least have a map of where all the inputs actually GO. If it synapses in the olfactory bulb, probably don't hook up your simulated eyeball and all that.
Brain plasticity can cover some wiring sins: cochlear implants are pretty awful at connecting where they should, but it's good enough to get some degree of hearing going. Get enough of it right and the simulated patient can help you out by telling you whether you've got it right or not.
We’re proud of the progress we’ve made so far—demonstrating that the ultrastructure (connectome) is preserved. The connectome is a necessary starting point upon which models of the brain can be built [0]. We intend to continue researching and validating preservation of important biomolecules as we explore the efficacy limits of ASC. We intend to develop these tools on a timeline to allow proper feedback from the neuroscience community. I certainly don’t want our marketing to imply that we have near-term or even specific plans to offer anything commercially for clinical use, so I will read through our website again and try to find areas where one can get the wrong idea.
[0] From https://www.scientificamerican.com/article/c-elegans-connect...:
“Some people say we don't know anything about how C. elegans's brain works and I am like, 'Yes, we do!'" says Cornelia Bargmann of The Rockefeller University, who has studied the nematode for more than two decades and attended the Columbia debate. "A lot of what we know about C. elegans's rapid behaviors we have learned through and with the connectome. Every time we do an experiment, we look at those wiring diagrams and use them as a starting point for generating hypotheses."
sigh, what about the active patterns, yo. brains aren't static connections of neurons when they're alive.
they have hojillions of firing patterns, some of which are cyclical and constant, others of which are transient and tiny. these stop when the brain is dead. you're not gonna capture these from a dead brain.
what if there's some essential aspect of consciousness that's depending on circles of five or six neurons holding a pattern of action potentials in perpetuity?
i doubt that this is the case, but if it is, cracking the connectome -- which is already an achievement that would require an effort greater than the Human Genome Project -- wouldn't be very helpful at all.
oh yeah, and as far as preserving those circuits goes, do you have a solution for clearing neurons of the excess sodium or potassium that accumulates there during death? depending on which ion pump fails from lack of energy first (and my bet is that it's the sodium-potassium pump, because it's responsible for some absurd percentage of the body's total energy usage), neurons could very easily be left logjammed with a glut of charged particles that prohibits making action potentials in any capacity even when energy becomes available. you don't wanna be assuming that all those excess ions are signal when they're noise.
finally: have you considered your philosophical disposition wrt substance dualism or materialism? you're gonna have to have a damn good case for some philosophical angle when dennett comes a knockin or he'll immolate your entire project.
You can electroshock a brain, completely scramble the "active patterns" and the person will... well, reboot, for lack of a better term. If you could get the weights without activity, I see no reason you couldn't just initialize in a random state and let it figure itself out. Figuring out the weights is a technical problem, but technical problems are made to be solved.
Finally, dualism is juvenile bullshit. A neuroscientist who thought the mind WASN'T the brain would be absolutely useless.
Never thought I would ask something like this, but here goes:
What's Y Combinator's stance on the death requirement? Nectome was at demo day last month and Sam Altman is (or was) on the waiting list.
> Following the example of electric-vehicle maker Tesla, it is sizing up demand by inviting prospective customers to join a waiting list for a deposit of $10,000, fully refundable if you change your mind.
> So far, 25 people have done so. One of them is Sam Altman, a 32-year-old investor who is one of the creators of the Y Combinator program.
First and foremost, cryogenics has a PR problem. It is somehow more popular to say that you're glad people are dying by the billions than to align yourself with the people making an earnest effort to stop it (now imagine how fast I would be banned for taking the same position on genocide).
I'm not sure whether this will just further the kneejerk anti-cryogenics sentiment or whether it will help normalise normal post-death cryogenics, but it does seem a rather dangerous play. I don't want cryogenics to find itself tangled up in crazy legal issues.
At an object level I'm not all that concerned with the ethics of people taking this bet, given the science is pretty sound. It does not make sense to allow people to take medical treatments that have a significant chance of serious harm for the chance of a few more years of health if it pans out, but then ban a bet as safe as cryogenics.
(1) The idea that the ultra-rich may be able to survive into the future cryogenically while 90+% of the rest of the world dies is morally abhorrent.
(2) Fearing death so much is viewed as cowardly.
(3) This is a bit more abstract, but the implicit assumption in cryogenics that death is a universal bad is troubling. Thousands of years of human thinking have determined that the prospect of death is an integral part of the human experience and may be the thing that provides life meaning. I don't know about anyone else, but the idea that the coming natural death of me and probably everyone else in this thread is on par with a genocide is upsetting (and I personally choose not to believe it). From Epictetus: "Death, for instance, is not terrible, else it would have appeared so to Socrates." [0]
(1) You could have said the same thing about cell phones, cars or any other technology which started out expensive before it scaled to the point where most people could afford it.
(2) If you fear death too much, you might not fight to the death in a war. Not everything in our culture makes sense in the modern world.
(1) Yes, but from a utilitarian perspective cell phones, cars, and any other technology offered finite utility over their alternatives, whereas immortality offers infinite utility over the alternative of death.
(2) ? unsure about what you're saying here.
(3) Okay yeah, sure, I've read that before. I don't think he offers a (currently) practical alternative to "resignation coupled with an effort to achieve closure in practical affairs and personal relationships." The harsh reality is that we'll probably die--why would I choose to conceptualize this as a violent death from a dragon? That'd surely lead to despair in my last years.
(1) Are you saying that immortality is bad because it's infinitely good? That seems like an argument for bringing immortality to everyone as soon as possible.
(2) I'm saying that one of the reasons fearing death is viewed as cowardly is that the belief is promoted by generals in order to make their soldiers more willing to die in battle. If that's where the idea comes from, then we should reject it, because willingness to die in battle is no longer a good thing.
Yes, I’m saying that because immortality is infinitely good, the inequality of some people (the super-rich) getting it while others (the poor) don’t is (infinitely?) more unacceptable than any wealth inequality there has been in the past.
> Thousands of years of human thinking have determined that the prospect of death is an integral part of the human experience and may be the thing that provides life meaning. I don't know about anyone else, but the idea that the coming natural death of me and probably everyone else in this thread is on par with a genocide is upsetting
To me, this sounds like the naturalistic fallacy. Is there a better argument than this one?
How about a utilitarian one: it's most likely that everyone alive right now is going to die, so from a quality-of-life perspective most people should try to accept their death.
There's nothing sound about the "science of cryogenics"; in fact, there's barely any such science at all if we're talking about the central sales pitch, which is actually bringing you back to life in one form or another. Freezing you, admittedly, is straightforward enough.
The brain-upload thing is complete fantasy and science fiction, but even the traditional promise of somehow resurrecting you has never been proven, with the exception of a 30-year-old frozen tardigrade two years ago, if I remember correctly.
Can you imagine what this planet would be like if people didn't die? How much more crowded we'd be? How much more of a strain we'd be placing on the earth's resources?
There will be challenges and struggles and costs to be paid to significantly reduce death, but my personal philosophy is that involuntary death is bad and should be fought, and that the costs of increased crowding are worth paying to cut down on the deaths of billions.
The general principle, to me, is that whatever the "ideal" rate of death is, for any given set of values and preferences, it would be an astonishing coincidence if it coincidentally happened to be exactly the current rate today.
If we woke up tomorrow to a world where humans didn't suffer degenerative aging and the vast majority of fatal illnesses (like cancer) were treatable, would you start by advocating for mass executions, or would you advocate for increased birth control, renewable energy, reduced resource consumption rates, finding ways to make productive use of otherwise-uninhabited areas of Earth, space travel, etc. before you started advocating for murdering billions of humans?
"would you start by advocating for mass executions"
The very idea that this would cross your mind after reading my comment shows that you are not interested in having any kind of serious discussion about this topic.
Re-reading your comment, I did overreact, and assumed that you were implying a position that you didn't actually assert. I'm sorry for assuming that you were advocating for a pro-death position.
If your comment wasn't intended to advocate for pro-death policies, I'm very confused about what you were trying to communicate here.
My reading of your comment is that you first asked the reader whether they can imagine what the planet would be like if nobody died, then implied, through what appear to be rhetorical questions, that it would be much more crowded and much more of a strain on the earth's resources.
There's a lot of diversity in the use of English, so we may have experience with different rhetorical traditions, but my experience is that rhetorical structures like that are pretty exclusively intended to express condemnation of the subject and anyone who would support it, implying that they're ignoring the disastrous consequences of their policy.
The assumptions I inferred from your questions, stated directly, are something like the following. If you're uninterested in engaging with me after my earlier reply, I understand, but if not I'm curious which of these you disagree with.
1) If life-extension technology manages to dramatically reduce nonconsensual death, we will have a population explosion on Earth.
2) If we have a population explosion on Earth, quality of life for most humans will drop significantly.
3) If we have a population explosion on Earth, consumption of nonrenewable resources will increase dramatically, leading to resource shortages dangerous to human civilization.
4) In order to avoid a population explosion, we should stop work on life-extension technology.
5) It is better for the majority of currently-living humans to die after less than 100 years of life than for Earth civilization to undergo a population explosion.
By my best attempts at moral reasoning, withholding life-saving medical technology is morally equivalent to murder, and although I've fortunately never been in a situation to make such a decision, I hope that I can be the kind of person who would never let people die through inaction when I wouldn't be theoretically willing to kill them myself for the same benefits.
I hope that's given you enough perspective on my position that it doesn't seem completely hostile to serious discussion.
Are you suggesting we not save lives so that we can avoid some unrealized fear of overcrowding?
Suppose humanity attains indefinite life spans in the near future. Would you then advocate murder in order to save space?
Should we outlaw antibiotics because they save lives, and those lives take up resources?
I find this argument both unimaginative (improving technology changes the nature of our challenges) and repulsive (it's advocating murder through inaction).
"Are you suggesting we not save lives so that we can avoid some unrealized fear of overcrowding?"
No, and putting words in my mouth like that is extremely uncivil. We're talking about people who have already died.
"I find this argument both unimaginative (improving technology changes the nature of our challenges) and repulsive (it's advocating murder through inaction)."
And I find your tone repulsive and uncivil, as you're making extreme assumptions about my positions.
Not to mention, you're only advocating that we save the rich. I don't see you setting up a GoFundMe to preserve homeless people.
For a fascinating and realistic exploration of what that might look like, I recommend Kim Stanley Robinson's Mars Trilogy. In the series, a longevity treatment is developed that significantly extends lives. Beyond the strain on resources, there are the issues of older generations delaying progress by not relinquishing power (much like the current octogenarian U.S. congress) and other issues.
I would also recommend The Postmortal, by Drew Magary. It explores a world where the cure for aging is found, so no one has to die through natural means anymore.
Well, in an argument between "everyone should die by 150 years" and "let's have the happy problem of people not dying of old age", I'm in the latter camp. Similarly, having your entire species susceptible to any out-of-context problem that destroys a planet... yeah, planetary existential risks are low-probability, game-ending penalties I'd like to see mitigated.
Sure, but if you put half of humanity on Mars, you'd clearly be increasing our risk, as the difficulty of surviving on Mars is nearly insurmountable. If you want people on other planets, they need to be self-sustaining. And if we can't even be self-sustaining on Earth, why would we be elsewhere?
We're a very very long way away from being able to survive if Earth fails. It is foolish to bet on any kind of interplanetary survival strategy at this point. Getting humans to other worlds is by far the easiest part of that equation.
I wouldn't put half of humanity anywhere right now. You just need enough to keep a population (with immigration) after they face a horrific mortality rate as they figure out how to lower the risk; hopefully highly trained and well equipped as sparse mitigation, but exploration and pioneering are hungry gods that will be fed human sacrifices.
I'm personally more partial to an O'Neill cylinder or a lunar colony than Mars, and the lunar colony will probably come first (Mars is a big step backwards in delta-v), but yes, the mass required to get that going is currently prohibitively expensive, and the biosciences are going to be really tricky.
If it's not possible without induced death, then is the concern really the "death requirement", or whether it makes sense for consumers to gamble $10k+ that in the future a) you will not die suddenly, b) your brain will not have deteriorated significantly, c) you are fully capable of making a decision to die, d) the company is still around, and e) the science somehow works out to allow "brain uploading"?
Then yes, I say there is absolutely nothing wrong with that. It's their money and their body.
First off, again we are currently conducting research and are not preserving people.
Our early supporters have shown their support for Nectome’s research through fully refundable deposits, and we hope that they will live long, healthy lives, creating the very memories that they hope can one day be preserved.
No, the article is incorrect. He wasn't a creator; he was in an early batch (the first I believe), and then became president some time after his company was acquired.
If hundreds of millions of people were to do this, what economic, cultural or religious incentive would people 500 years from now have to resurrect anyone? Maybe for famous geniuses, sure, but with my view of human nature I get the impression that unless the resurrection process is extremely low-cost and easy, nobody will go to the effort of bringing back the consciousness of huge numbers of random people. This could change, such as if a major world religion made a profound shift toward bringing back your ancestors for ideological reasons, but the chance of that happening is slim.
This is a plot point in Niven's "A World Out of Time". The global state views cryogenically frozen people as expendable commodities and uses them for missions that normal citizens won't be asked to do, or which require their antique perspectives. If they refuse a mission, their host bodies (convicted criminals) are mindwiped and the state starts again.
For the same reason that wealthy countries have public health services, basically. You are right that there are economic issues at play, but there are good arguments that the intrinsic cost of revival should be pretty low, and the process of getting there is just waiting however many thousands of years of technical development it takes.
Or you could die with the private key of a Bitcoin address containing a few bitcoins memorized, and a tattoo or something that says you'll give half to whoever revives you. Hopefully Bitcoin will be around for a long time.
If there is technology to reinstantiate your brain with that memory intact, there is technology to pull just that memory out of your brain without bothering to reinstantiate you.
I doubt the longevity of Bitcoin too much to do that. Anyway, what are the odds that your brain wouldn't be scrambled after years of being frozen, or that you'd still remember your key?
Furthermore, assuming that one's 'consciousness' could actually be brought back in some kind of form that roughly approximates what it was like when one was alive, what's to prevent one's body/brain from being used in some kind of terrible medical experiment?
For example, what if an evil scientist or organization were to buy up cryogenically preserved bodies on the cheap in 300 years?
Yeah, you could say it's unlikely, but why is it more or less likely than being unfrozen into some paradise?
It's a moot question right now since nothing they're doing could ever conceivably allow for future resurrection. This is purely a way to separate suckers from their money (or, more charitably, perhaps a case of blind optimism outrunning scientific reality).
For those interested in this stuff, or in "we will preserve your body and resurrect you at some date in the future" technology, I recommend a Don DeLillo novel from a couple of years ago titled Zero K.
MIT's statement seems to position them not to be at risk if the startup goes south. They seem to think that with current technology it is unclear whether the upload is feasible. It would also be interesting for someone to comment on how this embalming process is done.
From what I know of cryogenic freezing and preservation, the idea is that the brain could one day be "scanned" back in, but most likely the information is lost. I would like to understand how they solve this using embalming. MIT seems to say that we don't know that at all.
"seems" is awfully wishy washy. They cut ties with the company and directly state that the science simply isn't there to start selling a particular way of pickling delicious brains.
Nobody is making use of this service yet. However, if they were, why would it be ghastly for them to voluntarily trade a small number of days worth of terminal decline for a substantially better quality preservation and thus correspondingly higher chance of revival in the future? To me it sounds like an obvious utilitarian tradeoff, like any reasonable person who was well informed about the situation and averse to dying might choose to make.
The startup is not claiming to have current uploading technology. The startup is researching (does not claim to have) brain preservation techniques leading to long-term structure preservation in hopes of uploading sometime in the indefinite future. If your personal opinion is that this cannot happen, that does not make other people who have different beliefs and are upfront about their arguments for them "scammers".
It is likely an iteration on cadaver plastination, wherein tissues are infused with plastic to preserve their structure. This is what enables those "Bodies" exhibitions that show actual human cadavers that have been plastinated, posed, and dissected to show anatomical details.
If you preserve the structure of the brain well enough, you can scrape it down in micrometer-thick layers and progressively photograph the visible surfaces in sufficient detail to reconstruct a map of all neural connections. Combine this with a working model of a neuron, a physics model for chemical diffusion, and knowledge of how living neurons respond to certain chemicals typically found in the brain or those that can cross the blood-brain barrier, and you can simulate a human consciousness inside a computer.
It won't be you, but it will think it was you. That's almost the same thing as what happens every time you wake up in the morning.
Also, in order to avoid frequently resetting the simulation due to insanity, you probably also need a virtual world that simulates realistic inputs through the sensory nerves.
The technology is plausible, particularly if you are a Kurzweilian or transhumanist. Some may feel more comfortable with a "Ship of Theseus"-style gradual migration of an organic brain into a digital computer via implanted devices with neuron-computer interfaces. But since we're nowhere near that, injecting your dead brain with epoxy is the closest you can get right now to not being dead and gone forevermore.
Note that the necessary knowledge for digitizing a preserved brain is likely centuries away, but that doesn't matter if you can preserve your brain to remain unchanged for centuries. To the new entity that can remember being you and likes all the same things you like, it will be like waking up in the future. And hopefully, you left some money for AI civil rights in your will, so it won't be a digitized slave toiling away on the pattern recognition plantations.
The casual explanation is that the brains are infused with a chemical that prevents formation of ice crystals and chilled to freezing temperatures until the water in the brain solidifies into a glasslike solid.
This is how frogs in shallow pond-bottoms can freeze solid during the winter and then thaw out in spring. The formation of ice crystals is what damages the organ. But glutaraldehyde is highly cytotoxic, and the brain cells are killed, along with anything else, so your brain won't ever be thawed. They will use ion beams to shave off very thin layers of your brain and use a scanning electron microscope to image it, all while still frozen.
This is the same process, in finer detail, that is used for detailed 3-d imagery of plastinated organs and entire organisms. The crosslinking and heat generated during the curing process of most epoxy resins would destroy the finer structures of the brain even as it preserves the macro-structure, so no, you couldn't use the same types of plastic typically used to preserve anatomy specimens. Instead of a microtome, the top layer is ablated with ion beams. Instead of an optical microscope, an electron microscope is used.
You can't currently get a more detailed 3-d image than this. The ion beam can strip away one layer of atoms at a time, and the electron microscope can identify the positions of all the atoms in the next layer.
But imagine the size of that data set. Imaging a volume of about 1500 mL at a detail level of 1 nm is 1.5e24 voxels. Even at one bit per voxel, for filled/empty, that's roughly 1.9e23 bytes: nearly two hundred billion 1-terabyte hard drives. The gap junction of an electrical synapse is about 3.5 nm, and chemical synapses are 20-40 nm apart, so if you image at a resolution of 3.5 nm, you're still talking at least a few zettabytes, though at least that's under half the maximum file size of UFS2.
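The back-of-envelope arithmetic can be checked in a few lines (the 1500 mL volume, the 1 bit/voxel encoding, and the 3.5 nm figure are taken from the comment above; everything else is plain unit conversion):

```typescript
// Rough data-volume estimate for exhaustive voxel imaging of a brain.
// Assumptions (illustrative only): 1500 mL of tissue, cubic voxels,
// one bit stored per voxel.

const BRAIN_VOLUME_M3 = 1.5e-3; // 1500 mL expressed in cubic meters

// Number of cubic voxels at a given edge length in meters.
function voxelCount(resolutionM: number): number {
  return BRAIN_VOLUME_M3 / Math.pow(resolutionM, 3);
}

function bytesAtOneBitPerVoxel(resolutionM: number): number {
  return voxelCount(resolutionM) / 8;
}

// 1 nm voxels:   ~1.5e24 voxels -> ~1.9e23 bytes (~190 zettabytes)
// 3.5 nm voxels: ~3.5e22 voxels -> ~4.4e21 bytes (a few zettabytes)
```

Any realistic pipeline would of course use far more than one bit per voxel before segmentation, so these are lower bounds.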
Better hope those cryo-coolers don't fail before we can store that much data and reduce it all to a neuro-map.
It seems likely you'd have to process as you go, and not store it all. Or have one hell of a datacenter/buffer like the LHC does, holding it until it can be crunched down to a "simple" network instead of voxel data.
There's no real evidence that the human brain functions anything like an information processor and, in fact, a growing body of evidence suggests it doesn't. We still know so little about the brain, and comparing a brain to a computer is at best a flawed metaphor and at worst an outright falsehood. The idea that any sort of human consciousness can be uploaded or stored by a computer is asinine.
The operative word here being "simulate." I will admit, however, that this opens a debate about the nature of consciousness, and I'm not prepared to say resolutely that such a simulation would not count as true consciousness.
So I will reframe and clarify my argument: I believe that we do not know enough about the brain and the nature of consciousness to conclude that taking this kind of "snapshot" of the brain will in any way preserve the information needed to reproduce, or even accurately simulate, consciousness, and that the argument that it will is based on the flawed assumption that the brain is, or functions like, an information processor (a metaphor that is very quickly reaching its limitations).
You didn't clarify an argument, you simply made baseless assertions while claiming we don't know enough to make extrapolative arguments.
Literally everything that humans understand is understood in a framework of information processing. It is probably not even possible to understand something without being able to model it as an information-handling process. What basis do you have to say consciousness is different? Because we haven't figured it out yet? It's 2018. There are billions of years of thinking that have yet to happen on the subject.
imagine that we create a perfect copy of a person who is alive.
there is now person A, and person B, who are identical. but being identical does not imply shared sensations. if you pinch person A, person B won't feel it, and the reverse is also true. they're two separate entities, even if they behave similarly. if person A dies, person B is very similar to person A -- but they're not person A. that person is dead.
same goes for "mind uploading". the copy could, hypothetically, exist. it is extremely unlikely that this is technologically possible. but even if it was possible technologically, the original would be dead.
All this does is get you into a philosophical debate of what constitutes "you". We don't know the answer to that. There's a similar conundrum around the idea of teleportation: if it were to work by recording the quantum state of every particle in your body, transmitting that information elsewhere, and then simultaneously creating a new copy of "you" at the destination while destroying the original, is the new copy actually "you"? No one really knows. And no one really knows if the answer to that actually matters in any real sense.
well, let's just cut the philosophical debate in half here.
"you" have a chain of consciousness that is unique to your body. you destroy that body, and that chain of consciousness ends. re-assembling it elsewhere perfectly doesn't re-start that chain. it creates a new chain which from inside of its own perspective is consistent with the original chain. and it is. but the original chain has no further consciousness, as it was destroyed.
this isn't a "nobody really knows" situation so much as a "we know, but people refuse to accept what we know" situation. the semantics of what is "you" really don't matter when faced with reality.
Says who? That's just your opinion on what consciousness is.
Even if I agree with your interpretation, again, I'm not sure it matters. Perhaps my physical body dies and my brain becomes digitized and I "wake up" in a computer simulation, or perhaps in an artificial brain attached to a replacement body. Even if the new me has a new consciousness that isn't just a continuation of the old one, perhaps that's an outcome I'm ok with and welcome.
I'm so glad that you have solved one of the major problems of philosophy and neuroscience in a handful of poorly-constructed sentences.
This is a matter of much debate in philosophy, neuroscience, and other fields. You accidentally touch on identity theory (bundle, substratum, etc), philosophy of mind, etc. The Ship of Theseus is an interesting problem - what if I gradually replace parts of someone's brain with artificial ones, so they are integrated with the rest?
It's not so much an interesting problem as it is an obviously irrelevant one. Our day-to-day conceptualizations were not designed to handle intricate philosophical matters, and trying to force resolution in such plain language is often a source of confusion involving 'problems' that nobody ever seems to face.
ship of theseus isn't relevant to nectome's claims or any claim regarding "uploading". "uploading" is just creating a copy in a different medium. it doesn't preserve the consciousness of the original; it merely creates another. i think you'll find that the problems of philosophy and neuroscience operate at a different level of abstraction than what they're discussing.
furthermore, academic concerns aside, there's no disproving my original comment in the context of technical feasibility. it's literally bulletproof; one person does not share the sensations of another person.
postscript: i took a philosophy of mind class and a philosophy of cognition class and a systems neurobiology class in college plus a few others that are tangentially relevant. i'm not appealing to authority here, just stating that my simplistic objections are actually the result of an extensive amount of diving into the details of those disciplines. sometimes the details build a picture that's simple from afar.
> “Fundamentally, the company is based on a proposition that is just false. It is something that just can’t happen,” says Sten Linnarsson of the Karolinska Institute in Sweden.
That sounds a lot like the "heavier than air flight is impossible" quote from 1895.
To be fair, it's unclear whether Linnarsson was saying that brain uploading will never be possible (which is a step too far), or that this company will never be able to revive people who paid today using brain-uploading techniques available in the near future (which is quite reasonable).
Like, I get that these guys might be - maybe even probably are! - snake oil salesmen, but what about brain digitization is actually impossible? There are certainly a fixed number of neurotransmitter types, a fixed number of channels, a fixed structure to a particular operating brain, and so on. Anything that exists can be reverse engineered and, in principle, rebuilt.
For those who are unfamiliar with the general concept, there are several organizations that will be happy to take large sums of money from you, for a scientifically sketchy claim at resurrection at some future unknown date.
Given that the main alternatives (decomposing in a box or being cremated) involve a 0% chance of resuscitation at some future unknown date, cryonics seems like the rational choice to make.
Alcor has so far shown itself to be a fairly reliable cryonics provider and the sums of money are not extreme - neuropreservation being only $80,000, or a life insurance policy of ~$30-$150/month.
I'm signed up for cryonics, but to be fair, I agree that it would be irrational if the odds of coming back were 1,000,000,000:1 or something like that. Most people value their lives very highly, but not infinitely.
Depends on the requirements of the 1,000,000,000:1 option. If it's at very little cost to you, and if the alternative is certain, permanent death, the rational choice is still to take the incredibly slim chance of coming back. Otherwise it does indeed depend on the importance you put on your own life.
Of course, in the case of cryonics, there is no measurable probability of coming back, so the point is largely moot.
Depends how you measure I guess. If you insist that you need to wait x years and then count how many people were brought back against how many weren't, sure, there's no measure...
But you can break the question down and look for evidence about each step, it's not going to be a precise measurement but you can at least start estimating your belief certainty (as well as introducing bounds -- I don't think anyone's justified in a 90% certainty of coming back at this point, but nor do I think there's justification that it's as low as one in a billion). I've always liked http://www.overcomingbias.com/2009/03/break-cryonics-down.ht... as one approach to turning the problem into multiple conditional steps.
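The "break it down" approach in the linked post amounts to multiplying conditional probabilities for each step. A toy sketch of that calculation (every number here is invented purely for illustration, none come from the thread or the linked post):

```python
# Toy conditional-probability breakdown for a cryonics outcome,
# in the spirit of the "break cryonics down" post linked above.
# All probabilities are made up for illustration only.
steps = {
    "preserved well enough at death": 0.5,
    "stored intact until revival is feasible": 0.6,
    "revival technology is ever developed": 0.1,
    "someone actually revives you": 0.5,
}

# Overall estimate is the product of the conditional steps.
p = 1.0
for step, prob in steps.items():
    p *= prob
    print(f"{step}: {prob}")
print(f"overall estimate: {p}")
```

The point of the exercise isn't the final number, but that each step can be argued about separately with whatever evidence exists for it.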
Even if it fails, if the promise of a chance to be revived at some point allows you or your loved ones to be more at peace with your mortality it seems worth it for the relatively minor cost.
Having a loved one stuck for an extended time in a maybe-not-permanently-dead-but-probably-so state doesn't seem to give people more peace with mortality, judging by what I've seen of, e.g., people with family members on life support, missing-but-presumed-dead, etc.
So, insofar as this is viewed optimistically as something other than a combination of assisted suicide with weird storage of remains, I'm skeptical that it would be psychologically beneficial to the survivors.
What specific claims does, e.g. Alcor, make that are scientifically sketchy? Just the implication that resurrection might be possible sometime in the future?
Resurrection, while not impossible, is sufficiently unlikely that taking people's money while promising it-- even with a disclaimer that it is not guaranteed-- seems exploitative, akin to people promising homeopathic cancer remedies. It is preying on people's fear of death.
After reading the FAQs and other material on Alcor's website, it's very clear to me that they're entirely frank about having no real idea when or even if resuscitation will be possible. I don't see them promising anything, and I think their website is very clear on what you're getting for your money.
I don't know where you're getting that. Alcor calls itself a "life extension" foundation, and right on the front page, says:
"We believe medical technology will advance further in coming decades than it has in the past several centuries, enabling it to heal damage at the cellular and molecular levels and to restore full physical and mental health."
This is not a promise from them but is surely worded like resurrection is a certainty.
I used to get in a lot of arguments online about cryonics back in the day (it's very possible that you and I have had this debate under different usernames on that other site for Valley nerd-types); they always went nowhere and I'm not interested in getting back into them. Suffice to say that I'm not convinced by cryonics organizations' "we're not promising anything" disclaimers: I think these are done with a knowing wink and that they use Pascal's mugging-style "what have you got to lose?" arguments to bypass concerns about their feasibility to get your money.
At the end of the day, why does it matter to you? If they have the money and it's something they, in sound mind, want to do, why not just let them do it and not let it bother you?
Homeopathy is a false analogy as it is outright dangerous, where cryonics is not.
I don't think it is 'preying' on the fear of death. A <.1% chance at eternal life is more or less what they offer and is a more reasonable investment than how many wealthy people spend their money. I could spend that money on a marginally nicer apartment for $100/mo and no one would bat an eye even though the expected value is almost certainly lower.
It seems appealing to me to come to terms with mortality while also being able to have some small hope that you will get lucky and have another chance at life.
I’ve never gotten a good answer as to how an electron microscope can image the synaptic weights. If you look at the pictures it can barely capture the synapses.
> If you look at the pictures it can barely capture the synapses
Electron micrographs vary quite a bit in quality (good micrographs are an art), but if you can barely see the synapses, you may be thinking of light microscopy, where it’s very difficult to see differences in the sizes of synapses. You can see for yourself what synaptic details electron microscopy allows you to see in the book, "Fine Structure of the Nervous System: Neurons and Their Supporting Cells". You might also want to see the FIB-SEM images from the Brain Preservation Foundation at https://www.youtube.com/watch?v=RYKIePuVENY, which I find quite beautiful.
As for how sizes may relate to weights [1]:
“Some axons form two or more synapses with the same dendrite, but on different dendritic spines. These synapses should be the same strength because they will have experienced the same history of neural activity…the synaptic areas and volumes of the spine heads were nearly identical. This remarkable similarity can be used to estimate the number of bits of information that a single synapse can store, since the size of dendritic spines and their synapses can be used as proxies for synaptic strength.”
I hope that these are informative. I do want to clarify that we are not arguing that electron micrographs of brain tissue are all that’s needed to reconstruct memories. We are interested in building the best brain preservation technology possible, and one of the ways we evaluate brain preservation is with electron microscopy. The connectome is the first step, the “skeleton” upon which models of the mind can be built.
electron microscopy has the resolution to discriminate individual spines and synapses; figuring out the weight of a synapse, though, is not straightforward. At least for excitatory synapses, however, there is ample evidence that the size of the spine correlates with the maturity and strength of the synapse. https://www.ncbi.nlm.nih.gov/pubmed/20646057
You can label receptors with aptamers and other technology, and then sequence the aptamer labels (or whatever). Many in situ sequencing techniques can be applied here. Note, however, that this requires more steps than simply scanning with electron microscope multi-array.
Yeah, it would certainly help to use a method like expansion microscopy, sort of like if Nectome were to subcontract some work to a certain MIT lab that invented the techni--- oh wait..
what determines the conductance of a synapse is mostly the number of AMPA/NMDA receptors at the synapse for excitatory ones, and GABAa/GABAb receptors for inhibitory ones. Then there's neuromodulation, G-protein-coupled receptors, etc. it's clearly not straightforward.
> Most neuroscientists think the ability to recapture memories from brain tissue and re-create a consciousness inside a computer is at best decades away...
At best, decades away? That's some crazy hubris. It's not even imaginable how far away it is. I find it absurd that humans think they can achieve arguably the highest level of godliness in mere decades. This is almost as crazy as saying teleportation is less than a hundred years away. We can't make predictions about things we don't understand.
Edit: I'm getting some flak for not including the rest of the quote, but I would argue that it doesn't matter. The rest of the quote was "and probably not possible at all". Their conclusion makes no sense. How can something either be impossible or potentially very possible? This statement shows how little academia actually knows about neuroscience. And no, this is not comparable to opinions about the moon landing in the 50's. By the 1950s we already had a completely sufficient theoretical framework for achieving space flight. The same cannot be said about consciousness upload as of today.
"
To place a man in a multi-stage rocket and project him into the controlling gravitational field of the moon where the passengers can make scientific observations, perhaps land alive, and then return to earth—all that constitutes a wild dream worthy of Jules Verne. I am bold enough to say that such a man-made voyage will never occur regardless of all future advances." - Lee De Forest, inventor of the vacuum tube, 1957
This is not a very good comparison at all. In the 1950s, we already had a complete theoretical foundation for space travel. Newton's equations, developed hundreds of years earlier, gave us that foundation. Where is the theoretical foundation for consciousness upload? If you had pulled a quote denouncing space travel from the late 1600s, it would be a better comparison. It should be noted that it took hundreds of years before Newton's theory resulted in extraterrestrial travel.
> Most neuroscientists think the ability to recapture memories from brain tissue and re-create a consciousness inside a computer is at best decades away and probably not possible at all.
Did you deliberately ignore the rest of that sentence?
> Yikes, what kind of horrendous wording is that? "it's probably not possible at all but if it is we'll be there soon!"
“At best” means “viewed maximally optimistically”, and, in the context of a development timeline, specifically “no sooner than”. So, it was saying “it’s probably not possible at all, and, if it somehow is possible, we’re at least several decades away from even a demonstration”.
You have badly misread, apparently inverting the sense of “at best”.
That feel when your brain upload data gets leaked and 50,000 instances of you are enslaved and put to work in a compute-farm generating spam emails under threat of having the sensation of pain applied directly to your simulated nervous system.
It was disturbing to think of the Flatline as a construct, a hardwired ROM cassette replicating a dead man's skills, obsessions, knee-jerk responses. --Neuromancer
Well, I'm very afraid of dying, but I can't imagine any scenario where immortality would not cause catastrophic harm to humanity, at least at our current level of wisdom.
In case others don't want to leave private mode: http://archive.is/DjsiK