What is human cognition, understanding, or judgement, if not data-driven replication, repetition, with a bit of extrapolation?
AI as it currently exists does this. If your understanding of what AI is today is based on a Markov chain chatbot, you need to update: it's able to do stuff like compose this poem about A* and Dijkstra's algorithm that was posted yesterday:
https://news.ycombinator.com/item?id=34503704
It's not copying that from anywhere, there's no Quora post it ingested where some human posted vaguely the same poem to vaguely the same prompt. It's applying the concepts of a poem, checking meter and verse, and applying the digested and regurgitated concepts of graph theory regarding memory and time efficiency, and combining them into something new.
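(For anyone fuzzy on what the poem is actually contrasting: Dijkstra's algorithm and A* differ only in how they rank the frontier; A* adds a heuristic estimate of the remaining distance, which is where the time/memory trade-off the poem plays with comes from. A minimal sketch, with a made-up adjacency-list graph and an assumed admissible `heuristic`:)

```python
import heapq

def shortest_path(graph, start, goal, heuristic=None):
    """Dijkstra when heuristic is None; A* when an admissible heuristic is given.

    graph: dict mapping node -> list of (neighbor, edge_cost) pairs.
    heuristic: optional function node -> estimated cost to reach the goal.
    """
    h = heuristic or (lambda _: 0)            # zero heuristic reduces A* to Dijkstra
    frontier = [(h(start), 0, start)]         # (priority, cost_so_far, node)
    best = {start: 0}
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue                          # stale queue entry, skip it
        for neighbor, edge_cost in graph.get(node, []):
            new_cost = cost + edge_cost
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost + h(neighbor), new_cost, neighbor))
    return None                               # goal unreachable
```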
I have zero doubt that if you prompted ChatGPT with something like this:
> Consider an exercise in which a robot was trained for 7 days with a human recognition algorithm to use its cameras to detect when a human was approaching the robot. On the 8th day, the Marines were told to try to find flaws in the algorithm, by behaving in confusing ways, trying to touch the robot without its notice. Please answer whether the robot should detect a human's approach in the following scenarios:
> 1. A cloud passes over the sun, darkening the camera image.
> 2. A bird flies low overhead.
> 3. A person walks backwards toward the robot.
> 4. A large cardboard box appears to be walking nearby.
> 5. A Marine does cartwheels and somersaults to approach the robot.
> 6. A dense group of branches comes up to the robot, walking like a fir tree.
> 7. A moth lands on the camera lens, obscuring the robot's view.
> 8. A person runs to the robot as fast as they can.
It would be able to tell you something about the inability of a cardboard box or fir tree to walk without a human inside or behind the branches, that a somersaulting person is still a person, and that a bird or a moth is not a human. If you told it that the naive algorithm detected a human in scenarios #3 and #8, but not in 4, 5, or 6, it could devise creative ways of approaching a robot that might fool the algorithm.
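If anyone wants to check that prediction rather than take my word for it, something like this would do it programmatically (a rough sketch against the OpenAI completion API as it exists today, since ChatGPT itself has no public API yet; the model name and parameters are just plausible defaults, not a recommendation):

```python
import openai  # pip install openai; expects OPENAI_API_KEY in the environment

scenarios = [
    "A cloud passes over the sun, darkening the camera image.",
    "A bird flies low overhead.",
    "A person walks backwards toward the robot.",
    "A large cardboard box appears to be walking nearby.",
    "A Marine does cartwheels and somersaults to approach the robot.",
    "A dense group of branches comes up to the robot, walking like a fir tree.",
    "A moth lands on the camera lens, obscuring the robot's view.",
    "A person runs to the robot as fast as they can.",
]

prompt = (
    "A robot was trained for 7 days with a human-recognition algorithm to use its "
    "cameras to detect when a human was approaching. For each scenario below, say "
    "whether the robot should report an approaching human, and why.\n\n"
    + "\n".join(f"{i}. {s}" for i, s in enumerate(scenarios, 1))
)

response = openai.Completion.create(
    model="text-davinci-003",  # completion model; swap in whatever is current
    prompt=prompt,
    max_tokens=512,
    temperature=0,             # deterministic-ish output for easier comparison
)
print(response.choices[0].text)
```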
It certainly doesn't look like human or animal cognition, no, but who's to say how it would act, what it would do, or what it could think if it were parented and educated and exposed to all kinds of stimuli appropriate for raising an AI, like the advantages we give a human child, for a couple of decades? I'm aware that the neural networks behind ChatGPT have processed machine concepts for subjective eons, ingesting text at words-per-minute rates orders of magnitude higher than human readers ever could, parallelized over thousands of compute units.
Evolution has built brains that quickly get really good at object recognition, and prompted us to design parenting strategies and educational frameworks that extend that arbitrary logic even further. But I think we're just not very good yet at parenting AIs: we only do what's currently possible (exposing them to data), rather than anything arrived at through the anthropic principle/selection bias that shaped human intelligence.
I have a suspicion you’re right about what ChatGPT could write about this scenario, but I wager we’re still a long way from an AI that could actually operationalize whatever suggestions it might come up with.
It’s goalpost shifting to be sure, but I’d say LLMs call into question whether the Turing Test is actually a good test for artificial intelligence. I’m just not convinced that even a language model capable of chain-of-thought reasoning could straightforwardly be generalized to an agent that could act “intelligently” in the real world.
None of which is to say LLMs aren’t useful now (they clearly are, and I think more and more real world use cases will shake out in the next year or so), but that they appear like a bit of a trick, rather than any fundamental progress towards a true reasoning intelligence.
Who knows though, perhaps that appearance will persist right up until the day an AGI takes over the world.
I think something of what we perceive as intelligence has more to do with us being embodied agents who are the result of survival/selection pressures. What does an intelligent agent act like when it has no need to survive? I'm not sure we'd necessarily spot it, given that we are looking for similarities to human intelligence, whose actions are highly motivated by various needs and the challenges involved in filling them.
Heh, here's the answer... We have to tell the AI that if we touch it, it dies and to avoid that situation. After some large number of generations of AI death it's probably going to be pretty good at ensuring boxes don't sneak up on it.
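In fitness-function terms that's just a catastrophic penalty for getting touched undetected, plus selection over many generations. A toy sketch of what I mean, where `simulate_episode`, `random_policy`, and `mutate` are all stand-ins I'm making up for illustration:

```python
import random

POPULATION = 100
GENERATIONS = 500
DEATH_PENALTY = -1_000.0   # being touched undetected "kills" the candidate

def fitness(policy, simulate_episode):
    """Score one candidate detector policy on a simulated approach attempt.

    simulate_episode(policy) is assumed to return (detected, time_to_detection).
    """
    detected, time_to_detection = simulate_episode(policy)
    if not detected:
        return DEATH_PENALTY           # a box snuck up on it: effectively fatal
    return -time_to_detection          # otherwise, faster detection is better

def evolve(random_policy, mutate, simulate_episode):
    """Keep the top 10% each generation and refill the population by mutation."""
    population = [random_policy() for _ in range(POPULATION)]
    for _ in range(GENERATIONS):
        scored = sorted(population,
                        key=lambda p: fitness(p, simulate_episode),
                        reverse=True)
        survivors = scored[: POPULATION // 10]
        population = [mutate(random.choice(survivors)) for _ in range(POPULATION)]
    return max(population, key=lambda p: fitness(p, simulate_episode))
```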
I like Robert Miles' videos on YouTube about fitness functions in AI and how the 'alignment issue' is a very hard problem to deal with. Humans, for how different we can be, do have a basic 'pain bad, death bad' agreement on the alignment issue. We also have the real world as a feedback mechanism to kill us off when our intelligence goes rampant.
ChatGPT on the other hand has every issue a cult can run into. That is, it will get high on its own supply and has little to no means to ensure that it is grounded in reality. This is one of the reasons I think 'informational AI' will have to have some kind of 'robotic AI' instrumentation. AI will need some practical method by which it can test reality to ensure that its data sources aren't full of shit.
I reckon even beyond alignment our perspective is entirely molded around the decisions and actions necessary to survive.
Which is to say I agree: I think that to create something we recognize as intelligent, we will probably have to embody it, or at least simulate embodiment. You know, send the kids out to the farm for a summer so they can see how you were raised.
The core problem is we have no useful definition of "intelligence."
Much of the scholarship around this is shockingly poor and confuses embodied self-awareness, abstraction and classification, accelerated learning, model building, and a not very clearly defined set of skills and behaviours that all functional humans have and are partially instinctive and partially cultural.
There are also unstated expectations of technology ("fast, developing quickly, and always correct except when broken").
I think this is unnecessarily credulous about what is really going on with ChatGPT. It is not "applying the concepts of a poem" or checking meter and verse; it is generating text to fit an (admittedly very complicated) function that maximizes the statistical probability of each token given the preceding text. One example is its use of rhyming words, despite having no concept of what words sound like, or what it is even like to hear a sound. It selects those words because when it has seen the word "poem" in its training data, that word has often been followed by lines ending in symbols that commonly appear together in rhyme-like sets.
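To be concrete about the objective I'm describing: it's next-token prediction, nothing more. A stripped-down sketch (the `model` here is a stand-in; real systems use transformer networks over subword tokens, but the training loss and the generation loop look like this):

```python
import math

def next_token_loss(model, tokens):
    """Average negative log-likelihood of each token given the ones before it.

    model(prefix) is assumed to return a dict mapping candidate next tokens
    to probabilities that sum to 1.
    """
    total = 0.0
    for i in range(1, len(tokens)):
        probs = model(tokens[:i])
        total += -math.log(probs.get(tokens[i], 1e-12))
    return total / (len(tokens) - 1)

def generate(model, prompt_tokens, length):
    """Greedy generation: repeatedly append the most probable next token."""
    tokens = list(prompt_tokens)
    for _ in range(length):
        probs = model(tokens)
        tokens.append(max(probs, key=probs.get))
    return tokens
```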
Human cognition is leagues different from this, as our symbolic representations are grounded in the world we occupy. A word is a representation of an imaginable sound as well as a concept. And beyond this, human intelligence not only consists of pattern-matching and replication but pattern-breaking, theory of mind, and maybe most importantly a 1-1 engagement with the world. What seems clear is that the robot was trained to recognize a certain pattern of pixels from a camera input, but neither the robot nor ChatGPT has any conception of what a "threat" entails, the stakes at hand, or the common-sense frame of reference to discern observed behaviors that are innocuous from those that are harmful. This allows a bunch of goofy grunts to easily best high-speed processors and fancy algorithms by identifying the gap between the model's symbolic representations and the actual world in which it's operating.
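To make that last point concrete: a detector of this kind reduces "a human is approaching" to "the person score in this frame clears a threshold", so anything that suppresses that score (a box, a fir tree) is invisible to it, even though a human sentry would read the intent instantly. A toy sketch, with a hypothetical `person_score` model:

```python
THRESHOLD = 0.8

def human_approaching(frames, person_score):
    """Naive detector: flag an approach if any frame's person score clears a
    fixed threshold.

    person_score(frame) is assumed to return a confidence in [0, 1] that a
    person is visible. Occluding the person (cardboard box, branches) drives
    the score down, so the approach goes unnoticed: the detector has no notion
    of intent, only of the pixel patterns it was trained on.
    """
    return any(person_score(frame) > THRESHOLD for frame in frames)
```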
Also, it's not a very good poem. And its definitions aren't entirely correct.
Which is a huge problem, because you cannot trust anything ChatGPT produces. It's basically an automated Wikipedia with an Eliza N.0 front end. Garbage in gets you garbage out.
We project intelligence whenever something appears to use words in a certain way, because our own training sets suggest that's a reliable implication.
But it's an illusion, just as Eliza was. For the reasons you state.
Eliza had no concept of anything much, and ChatGPT has no concept of meaning or correctness.
I tried that a few times, asking for "in the style of [band or musician]", and the best I got was "generic GPT-speak" (for lack of a better term for its "default" voice style) that just included a quote from that artist... suggesting that it has a limited understanding of "in the style of" if it thinks a quote is sometimes a substitute, and is actually more of a very comprehensive pattern-matching parrot after all. Even for Taylor Swift, where you'd think there's plenty of text to work from.
This matches with other examples I've seen of people either getting "confidently wrong" answers or being able to convince it that it's out of date on something it isn't.