You've shifted the goalposts and erected strawmen so many times in this brief passage, I hardly know where to start...

> No matter how many Turing Tests they pass, intelligent machines will be no more conscious or self-aware than the Mechanical Turk!

I see. Well, this is just a rephrasing of the "Chinese Room" discussed in the article. Taken to its logical conclusion, I am certainly self-aware, but the rest of you are all just acting out complex behaviors encoded in chemical and electrical gradients, successfully mimicking consciousness.

I think that if any entity exhibits the behaviors associated with conscious thought, it would well behoove us to treat such entities as conscious, or we may very well find ourselves holding the short end of that particular stick sooner than we'd like.

> That's because solving the problem of self-awareness, or consciousness, is a different engineering challenge from solving problems of AI. Consciousness is a more complicated and specialized thing.

Since there is no doubt that ML/AI has a long way to go toward AGI, and that along the way we can expect the discipline to evolve considerably in many unexpected directions, this assertion of yours is close to tautological.

> Were we to build an artificial self-aware machine, we would not expect it to pass a Turing Test.

Why not?

> Instead we might expect different things of it and ask different questions to determine if it is self-aware: can it adapt and survive without human help,

So, anyone severely ill to the point that they cannot survive without assistance is not conscious and self-aware?

> i.e. can it trap and store energy and reproduce itself,

So, a single-celled organism is conscious?

> and what purpose does it find for itself, i.e. what objective does it pursue ...

Ah, this seems a relevant criterion, but keep in mind that humans can be subjected to operant conditioning ("brainwashing") that imposes external goals. For that matter, humans actually require a couple of decades of such conditioning (albeit rather more gradual and haphazard) before being considered competent members of society, yet we don't consider humans to be less conscious or less self-aware on either side of that particular divide.

> it's a different engineering challenge from producing information that is organized to be sensible to a human mind, which is the AI challenge, and the Turing Test.

Given that people have to be specially educated to produce information that is organized to be sensible to a computer, I don't see why an AGI, whatever its capabilities "out of the box", so to speak, shouldn't be expected to be capable of learning to be sensible to humans.




I am not sure we are going to be able to understand each other. I find your thoughts to be missing a foundation that I think is necessary to understand what I'm saying. I don't mean to be rude ...

Yes, of course a single-celled organism is conscious.

Exactly the way an amoeba is self-aware is how a self-conscious intelligent system would need to be to pose any kind of threat: organized to find energy sources and metabolize, replicate, etc.

I'll tell you: a single-celled organism is far more self-aware, and far more functionally complex, than any computer or software - in fact, it's orders of magnitude more complex a machine.

That's my point: solving the problems that make a machine capable of producing intelligence that is sensible to you and me is not solving the problems that make a machine like a single-celled organism, which is to say vertically integrated from the atom upward to be a self-sustaining, self-propagating energy trap.

A self-aware human who is disabled and can't live without the intervention of other humans can't self-sustain, and therefore will not pass the test of being able to self-sustain. But it's a test, and a single failure doesn't invalidate the hypothesis. It can still be a great test even if it fails a percentage of the time.

In general we know that all self-conscious organisms self-sustain, even social, super-organism ones that need each other to survive, so a criterion for a self-aware organism is that it be capable of self-sustaining. We don't even have a good test for that yet. But a test that would fail a perfectly self-aware disabled human wouldn't be a good one.

We could very well administer a Turing Test to an artificial consciousness, but my point is that it wouldn't be a very accurate test. A Turing Test only proves the accuracy of a facsimile of human intelligence. It proves nothing about self-conscious systems. An amoeba would fail it in an instant, as would a parrot or a dolphin - and if you tell me these organisms aren't self-aware and conscious, then we are definitely not on the same page.

I could be wrong, and I'm absolutely interested in anyone who can make a convincing argument otherwise. Until then, however, I'm pretty certain that no conscious machine will emerge by accident. Rather, it would take a Manhattan Project or greater to produce an artificial consciousness on par in sophistication with an amoeba. And we don't have much motive to attempt it either, so I doubt we will do it anytime soon.



