It’s really hard to say. I very much doubt that brains and LLMs operate on the same principle; but that doesn’t mean we aren’t discovering something important.
A few years ago Google claimed to have achieved quantum supremacy by building a quantum processor that simulated itself…by running itself. If that sounds tautological, then you see the problem.
They fed in a program that described a quantum circuit. When they ran the program, the processor used its physical quantum gates to execute the circuit described by the program and sampled from the output distribution of the resulting quantum state. It’s a bit like saying we “simulated” an electrical circuit by physically running the actual circuit. (Their point was that the processor could run many different circuits depending on the input program, which is what makes it a “computer”.)
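To make that concrete, here’s a minimal sketch of the same idea in Cirq, Google’s open-source circuit framework. The toy two-qubit circuit and the software simulator are my stand-ins, not the actual Sycamore hardware or its random circuits, but the shape is the same: a program describes a circuit, the machine executes it, and you sample bitstrings from whatever distribution comes out.

```python
import cirq

# A "program" describing a small circuit: entangle two qubits into a Bell
# state, then measure them.
qubits = cirq.LineQubit.range(2)
circuit = cirq.Circuit(
    cirq.H(qubits[0]),
    cirq.CNOT(qubits[0], qubits[1]),
    cirq.measure(*qubits, key="m"),
)

# "Running the program" is just executing the circuit and sampling bitstrings
# from the resulting quantum state.
result = cirq.Simulator().run(circuit, repetitions=1000)
print(result.histogram(key="m"))  # roughly 500 counts each of 0 (|00>) and 3 (|11>)
```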
Setting the specific task aside, that processor did exactly what an LLM does: it sampled from a very complicated probability distribution. Did the processor “understand” the program? Did it “understand” quantum physics or mathematics? Did it “understand” the quantum state of the internal wave function? It definitely produced samples drawn from the distribution defined by the wave function of the program under test. But it’s hard to argue that the processor was “understanding” anything—it was doing exactly the thing that it does.
If we had enough qubits, then in theory we could approximate the distribution of an LLM like GPT. Would we then say that the processor “understands” what it’s saying? I don’t think the Google processor understands the circuit, so at what point does approximating and sampling from a bigger distribution transition into “understanding”?
Like the quantum device, the LLM is just doing exactly the thing that it does: sampling from a probability distribution. In the case of the LLM it’s a distribution where the samples seem to mean something, so it’s tempting to think that the model _intended_ for the output to have that meaning. But in reality the model can’t do anything else.
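And for comparison, the LLM’s “generation” step really is just that draw. Here’s a toy sketch (the five-word vocabulary and hand-picked logits are made up, standing in for a real model’s output layer): turn scores into a probability distribution, then sample from it.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Draw one token index from the distribution implied by the logits."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)  # the entire "decision" is this draw

# Made-up vocabulary and logits standing in for a model's output layer.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = [2.0, 0.5, 1.0, 0.1, 1.5]
print(vocab[sample_next_token(logits)])
```

A real model computes logits over tens of thousands of tokens, conditioned on the whole context, but the final act is the same draw.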
None of that proves that humans do anything different, but it certainly seems like we (and many other animals) are more complex than that.