Hacker News

> Now go back and ask, if you believe that the only thing we can "see" around us that doesn't have the "physical entity" is the "soul" of humans why do you do so?

I don't think this. I don't believe that anything about the consciousness or agency of humans is exclusive to humans.

> The "consciousness" as a "neurological process by which the animal or the human sees it as special, the center of its attention in order to protect itself" can be seen as a product of evolution.

This is an interesting perspective that definitely made me think, but I believe it ultimately begs the question. If you're talking about "seeing" or "attention," you're already using concepts that presume the existence of consciousness. I'm not sure this gives us a useful framework for knowing when a computational process we have created does indeed have "consciousness," or where the line falls between animals advanced enough to have it and animals (or even plants) that don't.



> If you're talking about "seeing" or "attention," you're already speaking in concepts that presume the existence of consciousness.

Maybe I wasn't clear enough. The world around us (and around animals) is projected in our (and their) neurological systems as a kind of model. The question is simply whether that model favors a representation of the organism's own uniqueness or not. I claim there's an evolutionary advantage in producing a system (hardware and software) in which the organism cares for itself, and in which the model is formed so that "me" is at the center, to the point that "me" is "conscious": processing stimuli enough that a dog understands the paw is "his paw" and that it's dangerous to put it in the fire, and up to the level of you and me considering "us" to be "us" and being able to talk about it. In that respect we're not so different from the rest of the animals.

> I'm not sure this gives us a useful framework to know when any computational process we have created does indeed have "consciousness,"

I'd still like to know how you can define consciousness in a way that doesn't sound religious. If you can't, then of course we can't progress in our discussion.

Let me give you a very "primitive" and "simplified" view of the "free will" subject. I see it as a purely religious construct, based on the following history: initially, gods weren't "almighty and omnipresent." If you've read ancient Greek literature, there are gems like "Zeus was in Egypt at that moment, so he wasn't there when the soldiers he supported lost." If you've read the Bible, the oldest myths (I mean, stories) in it are actually based on that concept of god(s). Then theological "theory" grew more demanding, postulating an "almighty, omnipresent" but also "loving" god, which made the myths (stories) far more absurd than they were as written. How can an almighty, loving god create the world and then see that it's bad, so that it has to send a flood to destroy it? It's either not almighty or not loving, etc.

As a rescue, the priests invented the concept of "free will," as in: "once god creates humans, they have their own free will (which god can't control!?) and they do what they do, so they then deserve to suffer, go to hell and all that nice stuff." Most "modern" theological concepts are constructed as attempts to make the whole "body of work" less absurd. A kind of "justification by obscurity," which obviously works for believers, giving them easy, covertly nonsensical sentences with which to answer the "hard questions" others put to them.

In reality, we just have what user jonsen formulates here as "the purpose of consciousness is to keep its organism viable. It does that by collecting an essence of past and from that predict an essence of the future."
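As a toy illustration of that formulation (and only that — nothing here claims to implement real consciousness), one can sketch an "organism" that compresses its past into a small summary and uses it to predict, and avoid, harmful futures. Every name below (`Organism`, `viability`, `past_essence`) is invented for this sketch:

```python
# Toy sketch of jonsen's formulation: stay viable by collecting an
# "essence of the past" and predicting an "essence of the future" from it.

class Organism:
    def __init__(self):
        self.viability = 10          # the organism "dies" at 0
        self.past_essence = {}       # stimulus -> (times seen, average damage)

    def observe(self, stimulus, damage):
        """Fold a new experience into the compressed essence of the past."""
        seen, avg = self.past_essence.get(stimulus, (0, 0.0))
        self.past_essence[stimulus] = (seen + 1, (avg * seen + damage) / (seen + 1))

    def predict_damage(self, stimulus):
        """Predict an essence of the future from the stored past."""
        return self.past_essence.get(stimulus, (0, 0.0))[1]

    def act(self, stimulus, damage_if_touched):
        # Approach only what past experience predicts is safe.
        if self.predict_damage(stimulus) > 1.0:
            return "avoid"           # the dog keeps its paw out of the fire
        self.viability -= damage_if_touched
        self.observe(stimulus, damage_if_touched)
        return "approach"

dog = Organism()
print(dog.act("fire", 5))   # no past experience yet: approaches, gets burned
print(dog.act("fire", 5))   # predicted damage is now 5.0 > 1.0: avoids
print(dog.viability)        # 5: one burn, then self-protection kicks in
```

The point of the sketch is only that "keeping the organism viable via a compressed past and a predicted future" is a perfectly mechanical loop; it needs no nonphysical ingredient.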

Now you ask about a "framework to know when any computational process we have created does indeed have 'consciousness.'" That depends on the definition of consciousness, but the definition can't involve gods and "nonphysical entities." As soon as that is clear, we can measure the "consciousness we observe in a dog," the "consciousness we observe in a six-month-old child," the "consciousness we observe in a healthy twenty-year-old human," and the "consciousness we observe in an eighty-year-old human whose Alzheimer's disease has progressed this far."

Note that neurologists, as part of their daily routine, have to evaluate the level of consciousness in their patients. It's very instructive to read just a few examples of the cases they work with.

While we're at it: do you think that a human whose Alzheimer's disease has progressed so far that he can't remember anybody from his life, or what he did just 15 seconds ago, has a "soul?" (Meanwhile, a lot of healthy animals remember different things for months, and if you talk with pet owners you'll hear of animals with traumas and many of the psychological symptoms you'd otherwise associate only with humans.) So, back to the patient: do you consider him "conscious?" If not, at which point did he lose the consciousness you equate with a soul? If yes, how is he different from some good Perl script?

There is no "single, unique, unchangeable" consciousness; there are just different levels of functioning of the model I mentioned at the start. Of course, when you die the model completely stops functioning, as it needs both the hardware and the software: the brain and the body as the hardware, and all those nice electrical impulses and chemical reactions in the living organism as the software. There's no "agency of a nonphysical entity" there.




