> This analogy falls apart even further when you consider LLMs. They also are not Turing machines.
Of course they are, everything that runs on a present day computer is a Turing machine.
> They obviously only reside within computers, and are capable of _some_ human-like intelligence.
They so obviously are not. As Raskin put it, LLMs are essentially a zero-day on the human operating system. You are bamboozled because they are trained to produce plausible sentences. Read Thinking, Fast and Slow to see why this fools you.
> Of course they are, everything that runs on a present day computer is a Turing machine.
A Turing machine is by definition Turing-complete, yet you can run non-Turing-complete systems within a Turing machine. "Runs on a Turing machine" therefore does not imply "is a Turing machine", so your statement contains a contradiction.
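The point about hosting weaker systems can be sketched concretely (my own illustration, not from the thread): a deterministic finite automaton is a provably non-Turing-complete formalism, yet it runs happily inside a Turing-complete host language without inheriting the host's power.

```python
def run_dfa(transitions, start, accepting, s):
    """Run a DFA over string s; return True if it halts in an accepting state."""
    state = start
    for ch in s:
        state = transitions[(state, ch)]
    return state in accepting

# Example DFA: accepts binary strings containing an even number of 1s.
# It can only decide a regular language, no matter what machine hosts it.
table = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

print(run_dfa(table, "even", {"even"}, "1101"))  # three 1s -> False
print(run_dfa(table, "even", {"even"}, "1001"))  # two 1s  -> True
```

The hosted DFA stays strictly weaker than its host: being *simulated by* a Turing machine is not the same as *being* one.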
> They so obviously are not.
I'm well aware of their limitations. But I'm also not blind to their intelligence. Producing unique, coherent, and factually accurate text is human-like intelligence. Powerful models fail practically only on factuality. Humans fail there too, but for different reasons.
It is human-like intelligence because, among the other entities with varying degrees of intelligence, none of them have been able to reason about text and make logical inferences about it, except LLMs.
I know they aren't very reliable at it, but they can do it in many out-of-distribution cases. It's fairly easy to verify.
I read that book some years ago but can't think of what you're referring to. Which chapter/idea is relevant?