
One thing I've always wondered about Turing tests: wouldn't AIs need to lie a hell of a lot in order to pass?

For example, if I asked someone to tell me the capital city of every country in the world, I'd be very surprised if they could. However, a half-decent AI could do this easily. But if I pushed it further and started asking really complex maths questions (something computers are much better at than humans), it would become clear very quickly that I'm talking to a machine.

Also, humans have holes in their knowledge. For example, given the question "Who is the prime minister of the Netherlands?", the answer for most people is going to be "I don't know". Or what about "Which team won the first ever FA Cup?" Despite not knowing the answer (Wanderers, as it happens), most people would hazard a guess (Manchester United, Liverpool) and be wrong.

Programming an AI to play dumb would be relatively easy. But what use is an AI that lies? Passing the test may well be possible, but what use is artificial intelligence that pretends to be as dumb as humans?
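For what it's worth, the "playing dumb" part really is only a few lines: wrap whatever the machine actually knows in a layer that waits, sometimes guesses, and sometimes gives up. A toy sketch in Python; the function and its parameters (p_know, p_guess) are invented here for illustration, not taken from any real system:

    import random
    import time

    # Toy sketch of "playing dumb": wrap the real answer in a layer that
    # imitates human fallibility. Everything here (the function, p_know,
    # p_guess) is hypothetical, made up for illustration.

    def humanlike_answer(true_answer, plausible_guesses,
                         p_know=0.4, p_guess=0.5):
        time.sleep(random.uniform(1.0, 5.0))        # humans think in seconds
        if random.random() < p_know:
            return true_answer                       # sometimes we just know it
        if random.random() < p_guess:
            return random.choice(plausible_guesses)  # confident but wrong
        return "I don't know"                        # honest gap in knowledge

    print(humanlike_answer("Wanderers", ["Manchester United", "Liverpool"]))

Which is exactly the problem: the hard part of the machine isn't the deception, it's everything behind true_answer.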



You assume that any AI worth the label will already be as capable as current PCs.

But perhaps there is a tradeoff? Maybe becoming "intelligent" in the way we understand it is incompatible with the "dumb calculator/encyclopedia" capabilities of regular computers? Maybe true AI will necessarily lose the ability to look anything up instantly or calculate large columns of numbers?

I don't really believe that, but it is a possibility.


I've sort of had the same idea. I've wondered whether the power required for a robot to process everything it needs to in order to move around and interact with the world, responding to all the different stimuli (optical, audio, kinetic), would leave many CPU cycles for the superhuman things we're used to computers doing.


I'd never thought of this - but it's interesting that one of the foundational things we'd be teaching this AI is to lie to/deceive humans. Seems like a bad starting place.

Of course, the Turing test isn't REALLY some kind of gateway through which a strong AI is likely to develop, but still.


The AI might not need to play dumb. There's no reason I can think of that an AI must be good at math or embed encyclopedias. Asimov had a short story, whose name I can't remember, about an AI that believed it was human.


I think it isn't so much that an AI would have a good reason to be good at maths or embed an encyclopedia as that there's no good reason to build one incapable of it.

Of course, an alternate solution may be just around the corner: imagine talking to a person who has some sort of direct interface to Wolfram|Alpha (maybe Google glasses or something like that). He would be able to answer those questions as easily as a computer, for the same reasons.


This is just a weakness of the original formulation of the Turing Test - for a machine to fool a human into believing that it is, in fact, a human. But we don't really need that, do we? What we need is for an AI to "fool" us into believing it is truly intelligent. For that, I don't care if it knows all the capitals or can do complex maths. In fact I expect it to be able to do that. I already know it's a machine, after all.

Unfortunately, that is a far more subjective test. It's easy to devise an experiment based on the classic Turing Test: put some people in a room with terminals, some wired to computers and some to humans, and have at it (a toy sketch below). But it doesn't tell you much, really. Whether a machine that can convince you it is intelligent is, in fact, intelligent is not a scientific question, and as such doesn't have a scientific (that is to say, satisfying) answer.
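Roughly, one trial of the classic setup looks like this sketch; the judge, human, and machine stand-ins are hypothetical, just enough to make it run:

    import random

    # Toy sketch of the classic experiment, not a real protocol: each trial
    # secretly wires the judge to a human or a machine respondent; the judge
    # chats via respond(question), then returns True for "machine".

    def run_trials(n_trials, judge, human, machine):
        correct = 0
        for _ in range(n_trials):
            is_machine = random.random() < 0.5
            respond = machine if is_machine else human
            correct += (judge(respond) == is_machine)
        return correct / n_trials

    # Hypothetical stand-ins so the sketch runs: a judge who asks one
    # arithmetic question and calls "machine" on an instant, exact answer.
    human = lambda q: "uh... no idea"
    machine = lambda q: str(eval(q))
    judge = lambda respond: respond("1234 * 5678").strip().isdigit()

    print(run_trials(1000, judge, human, machine))  # 1.0: maths gives it away

A machine "passes" when judges score no better than chance (about 0.5) over many such trials, which is a statement about judges as much as about machines.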

What I'm getting at with this is that the important question to ask about AI is the same question you can ask about other humans, and the answer for either is the same. The difference is that solipsism is a lot easier when you're talking about machines.


I feel that one day computers will start trying to convince us they're intelligent, and we won't be listening.


Not for lack of trying. We might not understand them, however.

My fear is one day we're going to realize some machines are intelligent, and then deny them their rights anyway.




