
I’m not doing God of the Gaps—it’s simply that in the case of LLMs there’s both nothing we know of doing understanding and also no gaps where it might be. We both know what it’s doing, and that it’s not doing understanding.

Let’s try this:

We could apply an LLM to a made-up language and corpus that does not actually carry meaning, and it would do exactly what it does with real languages.

“Well maybe you accidentally encoded meaning in it. We could always, say, cryptanalyze even an alien language and maybe be able to come up with some good guesses at meaning”

Maybe we could. But now imagine also you have no “knowledge” whatsoever except the trained patterns from that language. Like, no understanding of how to do cryptanalysis, or linguistics, or what a planet is. Or an alien. All you’re doing is guessing at patterns, based on symbols that you aren’t even attempting to understand and have no basis for understanding anyway. That’s an LLM.

I think people are assigning way too much power to language sans… all the rest of what you need to derive meaning from it. None of what’s going into or coming out of an LLM needs to carry any meaning for it to do exactly the same thing it does with languages that do.

To the extent that an LLM has a perspective (this is purely figurative) all languages are gibberish alien languages, while also being all that it “knows”.
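(If it helps make that concrete, here's a minimal sketch in Python, assuming a toy bigram counter stands in for the LLM's next-token machinery; the "gibberish" tokens are made up on the spot. The training and sampling steps are identical whether or not the corpus means anything to anyone:)

    import random
    from collections import Counter, defaultdict

    def train_bigram(tokens):
        # Count which token follows which. The procedure never asks
        # whether the tokens mean anything.
        counts = defaultdict(Counter)
        for a, b in zip(tokens, tokens[1:]):
            counts[a][b] += 1
        return counts

    def generate(counts, start, n=15):
        # Sample each next token in proportion to how often it
        # followed the previous one in training.
        out = [start]
        for _ in range(n):
            followers = counts.get(out[-1])
            if not followers:
                break
            nxt = random.choices(list(followers), weights=list(followers.values()))[0]
            out.append(nxt)
        return " ".join(out)

    english = "the cat sat on the mat and the dog sat on the rug".split()
    gibberish = [random.choice(["zib", "quax", "plom", "vrek"]) for _ in range(200)]

    for corpus in (english, gibberish):
        model = train_bigram(corpus)
        print(generate(model, corpus[0]))

Nothing in the loop ever consults meaning; it only counts co-occurrences and samples from them.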

> We don't comprehend the entire Chinese Room; the instructions that Searle is following are a massive handwave. Does following the instructions require Searle to make human judgements on where to branch? Then it's offloading understanding onto his human brain. Does it not require that but it still outputs coherent responses? Then the instructions must encode intelligence in them in some way - if intelligent behaviour doesn't demonstrate intelligence we're in non-scientific nonsense land.

I remain stubbornly unconvinced that simulating a real process (by hand or otherwise) is the same thing as it actually happening with real matter and energy, even setting aside that the most efficient way to achieve it is to… not simulate it, and use real matter to actually do the things.

It’s why I find the xkcd “what if a guy with infinite time and an infinite beach and infinite rocks moved the rocks around in a way that he had decided simulated a universe?” thing interesting as an example but also trivial to solve: all that happens is he moved some rocks around. The meaning was all his, it doesn’t do anything.



> "thing interesting as an example but also trivial to solve: all that happens is he moved some rocks around. The meaning was all his, it doesn’t do anything."

You opened by saying you aren't doing God of the Gaps, but here you are doing it. Brains move chemicals and electrical signals around. That doesn't do anything, apparently. Matter doesn't do understanding. Energy doesn't do understanding. Mathematical calculations don't do understanding. Neural networks don't do understanding. See how Understanding is retreating into the gaps? Brains must have something else, somewhere else, which does understanding? But what, and where? It's a position that becomes less tenable every decade as brains get mapped in finer detail, leaving smaller gaps, and non-brains get more and better human-like abilities.

> "there’s both nothing we know of doing understanding .. it’s not doing understanding."

It is. The math and the training and the inference are what's doing the understanding. Identifying patterns and being able to apply them is part of what understanding is, and that's what it's doing. [Not human-level understanding.]

> "We could apply an LLM to made-up language and corpus that does not actually carry meaning and it would do exactly what it does with real languages."

We do that with language too; the bouba/kiki effect[1] is humans finding meaning in words where there isn't any. We look at the Moon and see a face in it: Pareidolia[2] is 'the tendency for perception to impose a meaningful interpretation on a nebulous stimulus so that one detects an object, pattern, or meaning where there is none'.

We are only able to see faces in things because we have some understanding of what it means for something to 'look like a human face'. "We see a face where there isn't one" is no evidence that we don't understand faces and so "an LLM would find patterns in gibberish" is no evidence that LLMs don't understand anything.

> "All you’re doing is guessing at patterns, based on symbols that you aren’t even attempting to understand and have no basis for understanding anyway. That’s an LLM."

Trying to build patterns is what "attempting to understand" is! You're staring right at the thing happening, and declaring that it isn't happening. "AI is search" said Peter Norvig. The Hutter Prize[3] says "Being able to compress well is closely related to intelligence as explained below. While intelligence is a slippery concept, file sizes are hard numbers. Wikipedia is an extensive snapshot of Human Knowledge. If you can compress the first 1GB of Wikipedia better than your predecessors, your (de)compressor likely has to be smart(er). The intention of this prize is to encourage development of intelligent compressors/programs as a path to AGI". Compression is about searching for patterns.
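(A rough way to see the compression/pattern link for yourself, using Python's zlib as a stand-in compressor rather than anything Hutter-Prize-grade: data with repeating structure compresses dramatically better than data with none, because a compressor can only shrink what it can predict.)

    import os
    import zlib

    patterned = b"the cat sat on the mat. " * 400   # repetitive structure
    noise = os.urandom(len(patterned))               # no structure to find

    for label, data in (("patterned", patterned), ("random", noise)):
        ratio = len(zlib.compress(data, 9)) / len(data)
        print(f"{label}: compressed to {ratio:.1%} of original size")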

Understanding is either magic, or it functions in some way. Why not this way?

> "all languages are gibberish alien languages, while also being all that it “knows”."

If we gave you some writing in a human language that you don't speak, you could do as much "predict the next word" as you want, take as much time as you need, and put together an output. The input is asking for a reply in formal Swahili which explains yoga in the style of Tolkien with Tourette's, but you don't know that. The chance of you hitting a valid reply out of all possible replies by guessing is absolutely zilch. But you couldn't do it by "predicting the next word" either: how would you predict that the reply should be in Swahili if you can't understand the input? How would you do formal Swahili without understanding the way people use Swahili? Conversely, if you could hit on a good and appropriate reply, it would be because your study of "predicting the next word" had given you some understanding of the input language, and of Swahili and yoga and Tolkien's style and how Tourette's changes things.
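(For a back-of-the-envelope sense of "absolutely zilch", take made-up round numbers: a 50,000-word vocabulary and a 100-word reply.)

    import math

    vocab_size = 50_000   # assumed round number for illustration
    reply_len = 100       # words in the reply, also assumed

    # Number of possible word sequences of that length, expressed as a
    # power of ten since the integer itself has hundreds of digits.
    magnitude = reply_len * math.log10(vocab_size)
    print(f"possible replies: about 10^{magnitude:.0f}")   # roughly 10^470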

> "I remain stubbornly unconvinced that simulating a real process (by hand or otherwise) is the same thing as it actually happening with real matter and energy"

Computers are real matter and energy. When someone has a cochlear implant, do you think they aren't really hearing because a microphone turning movement into modulated electricity is fake matter and fake energy, and an eardrum and bones doing it is real matter and real energy? Yes it's true that you can't get on a simulation of a plane and fly to New York, but if you see the output of an arithmetic calculation there's no way to tell if it was done with a redstone computer in Minecraft or with Python or with brain matter. (Is it possible for arithmetic to be not-simulated?).

[1] https://en.wikipedia.org/wiki/Bouba/kiki_effect

[2] https://en.wikipedia.org/wiki/Pareidolia

[3] http://prize.hutter1.net/


> You opened by saying you aren't doing God of the Gaps, but here you are doing it.

No! There’s a difference between a thing happening, and symbols we decided mean something being manipulated. The assigned meaning isn’t real in the way an actual process is. A flip-book of a person jumping rope isn’t a person jumping rope.


What do you think is the "real" version of understanding which brains do, and where / how do you think brains do it?



