Hacker News

> "thing interesting as an example but also trivial to solve: all that happens is he moved some rocks around. The meaning was all his, it doesn’t do anything."

You opened by saying you aren't doing God of the Gaps, but here you are doing it. Brains move chemicals and electrical signals around. That doesn't do anything, apparently. Matter doesn't do understanding. Energy doesn't do understanding. Mathematical calculations don't do understanding. Neural networks don't do understanding. See how Understanding is retreating into the gaps? Brains must have something else, somewhere else, which does understanding? But what, and where? It's a position that becomes less tenable every decade as brains get mapped in finer detail, leaving smaller gaps, and non-brains gain more and better human-like abilities.

> "there’s both nothing we know of doing understanding .. it’s not doing understanding."

It is. The math and the training and the inference are the thing doing understanding. Identifying patterns and being able to apply them is part of what understanding is, and that's what it's doing. [Not human-level understanding].

> "We could apply an LLM to made-up language and corpus that does not actually carry meaning and it would do exactly what it does with real languages."

We do that with language too; the bouba/kiki effect[1] is humans finding meaning in words where there isn't any. We look at the Moon and see a face in it: Pareidolia[2] is 'the tendency for perception to impose a meaningful interpretation on a nebulous stimulus so that one detects an object, pattern, or meaning where there is none'.

We are only able to see faces in things because we have some understanding of what it means for something to 'look like a human face'. "We see a face where there isn't one" is no evidence that we don't understand faces and so "an LLM would find patterns in gibberish" is no evidence that LLMs don't understand anything.

> "All you’re doing is guessing at patterns, based on symbols that you aren’t even attempting to understand and have no basis for understanding anyway. That’s an LLM."

Trying to find patterns is what "attempting to understand" is! You're staring right at the thing happening, and declaring that it isn't happening. "AI is search" said Peter Norvig. The Hutter Prize[3] says "Being able to compress well is closely related to intelligence as explained below. While intelligence is a slippery concept, file sizes are hard numbers. Wikipedia is an extensive snapshot of Human Knowledge. If you can compress the first 1GB of Wikipedia better than your predecessors, your (de)compressor likely has to be smart(er). The intention of this prize is to encourage development of intelligent compressors/programs as a path to AGI". Compression is about searching for patterns.
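To make the compression/pattern link concrete, here's a toy sketch (nothing to do with actual Hutter Prize entries, just off-the-shelf zlib): a compressor shrinks data exactly to the extent it finds structure in it, and random bytes with no patterns barely compress at all.

```python
# Toy illustration: compression as pattern-finding.
import zlib
import random

patterned = b"the cat sat on the mat. " * 100     # highly repetitive text
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(len(patterned)))  # patternless bytes

c_pat = zlib.compress(patterned)
c_noise = zlib.compress(noise)

# The repetition gets discovered and exploited; the noise doesn't,
# because there is nothing there to discover.
print(len(patterned), len(c_pat), len(c_noise))
```

Same input length both times; the only difference in output size is how much pattern the compressor managed to find.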

Understanding is either magic, or it functions in some way. Why not this way?

> "all languages are gibberish alien languages, while also being all that it “knows”."

If we took some writing in a human language that you don't speak, you could do as much "predict the next word" as you want, take as much time as you need, and put together an output. The input is asking for a reply in formal Swahili which explains yoga in the style of Tolkien with Tourette's, but you don't know that. The chance of you hitting a valid reply out of all possible replies by guessing is absolutely zilch. But you couldn't do it by "predicting the next word" either: how would you predict that the reply should be in Swahili if you can't understand the input? How would you do formal Swahili without understanding the way people use Swahili? Conversely, if you could hit on a good and appropriate reply, it would be because your training at "predicting the next word" had given you some understanding of the input language and Swahili and yoga and Tolkien's style and how Tourette's changes things.
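Even the crudest next-word predictor makes the point: to predict at all, it has to absorb statistics of the language it was shown. A minimal sketch (the corpus and counts here are made up for illustration):

```python
# A bigram next-word predictor: the simplest possible "predict the next word".
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count which word follows which.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

# The prediction after "the" reflects patterns in the corpus itself,
# not meaning assigned from outside it.
prediction = following["the"].most_common(1)[0][0]
print(prediction)
```

Scale the same idea up far enough and the statistics it must absorb start to include grammar, register, style, and topic; the argument above is about whether that accumulation counts as understanding.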

> "I remain stubbornly unconvinced that simulating a real process (by hand or otherwise) is the same thing as it actually happening with real matter and energy"

Computers are real matter and energy. When someone has a cochlear implant, do you think they aren't really hearing because a microphone turning movement into modulated electricity is fake matter and fake energy, and an eardrum and bones doing it is real matter and real energy? Yes it's true that you can't get on a simulation of a plane and fly to New York, but if you see the output of an arithmetic calculation there's no way to tell if it was done with a redstone computer in Minecraft or with Python or with brain matter. (Is it possible for arithmetic to be not-simulated?).
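The arithmetic point can be shown directly: here's a sketch where the same sum is produced by native addition and by a hand-rolled carry-propagation routine that never uses the `+` operator. The two outputs are indistinguishable, so "which substrate really did the addition" has no answer from the result.

```python
def add_bitwise(a, b):
    """Add two non-negative integers using only XOR and AND.

    XOR gives the sum without carries; AND shifted left gives the
    carries; repeat until no carries remain. No '+' operator involved.
    """
    while b:
        a, b = a ^ b, (a & b) << 1
    return a

native = 7 + 5
simulated = add_bitwise(7, 5)
print(native, simulated)  # identical results, different mechanisms
```

If the simulated version is "fake" arithmetic, there is no test you can run on the output that would reveal it.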

[1] https://en.wikipedia.org/wiki/Bouba/kiki_effect

[2] https://en.wikipedia.org/wiki/Pareidolia

[3] http://prize.hutter1.net/



> You opened by saying you aren't doing God of the Gaps, but here you are doing it.

No! There’s a difference between a thing happening, and symbols we decided mean something being manipulated. The assigned meaning isn’t real in the way an actual process is. A flip-book of a person jumping rope isn’t a person jumping rope.


What do you think is the "real" version of understanding which brains do, and where / how do you think brains do it?



