Humans obviously don't "think" the same way. GPT needs memory no human could ever have and, more importantly, an unthinkably large training data set to generate the observations it does. If a human (or any other biological system) needed that much training data, nothing would have ever gotten off the ground in the first place; it's completely out of reach. This type of model just doesn't "understand" the same way.
Still, none of this is meant to discount how impressive the technology is, btw. It makes a regular search engine look very quaint by comparison.
> Still, none of this is meant to discount how impressive the technology is, btw. It makes a regular search engine look very quaint by comparison.
I'm not downplaying the capabilities of ChatGPT or LLMs in general either.
They're basically a practical implementation of a Chinese Room, which was unthinkable just a few years ago.
What makes it dangerous is the very notion of comparing it to a search engine - the two are fundamentally different concepts that do very different things. The danger lies in people perceiving these models in exactly that way: as a super-powered search engine that they consciously or subconsciously put their trust in. That last part is both important and dangerous, because unlike a search engine's results, the output of an LLM cannot be trusted. The model has no concept of differentiating between hallucinated results and extracted knowledge or facts.
At the same time, it's capable of presenting its results in a format so convincing that an unsuspecting user cannot easily distinguish made-up output from facts either. This is not an issue with a near-term technical solution; it has to be addressed by making users aware of it.
Unlike the 9000 series, ChatGPT is not the most reliable computer ever made.
ChatGPT often makes mistakes or distorts information. It is - by any practical definition of the words - not foolproof and very capable of error.