
Small children track in their 'mind's eye' the entire layout of a chess board and where each piece is? Tokenization starts screwing things up if you start printing out a board after each move.

Push out a plugin that sets up a virtual board GPT-4 can read after each move and see if it's any better.
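A minimal sketch of what such a "virtual board" could look like: keep the full game state outside the model and hand it back as plain text after every move, so the model never has to track the board itself. Everything here is hypothetical; this is not a real ChatGPT plugin, just an illustration of the idea.

```python
# Starting position, one string per rank (rank 8 at the top).
START = [
    "rnbqkbnr",
    "pppppppp",
    "........",
    "........",
    "........",
    "........",
    "PPPPPPPP",
    "RNBQKBNR",
]

def apply_move(board, move):
    """Apply a move in long coordinate form, e.g. 'e2e4'.
    No legality checking -- the point is only to track state."""
    files = "abcdefgh"
    fc, fr = files.index(move[0]), 8 - int(move[1])
    tc, tr = files.index(move[2]), 8 - int(move[3])
    rows = [list(r) for r in board]
    rows[tr][tc], rows[fr][fc] = rows[fr][fc], "."
    return ["".join(r) for r in rows]

def render(board):
    """Plain-text view the model would be shown after each move."""
    ranks = "\n".join(f"{8 - i} {row}" for i, row in enumerate(board))
    return ranks + "\n  abcdefgh"

board = apply_move(START, "e2e4")
print(render(board))
```

The point of the sketch is that the board state lives in ordinary code, and the model only ever sees a freshly rendered snapshot, instead of having to reconstruct positions from a long move history in its context window.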



No, they look at the board, though strong chess players can do it blindfolded.

My point is that the hyperbole about "we've cracked AI but they changed the goalposts" is self-evidently not true. You just proved it right there: I need to add more plugins because ChatGPT does not understand what it's doing.

It’s a potentially useful tool but that doesn’t make it intelligent.


I mean, all you've managed to state is that GPT isn't a strong AGI, not that it isn't intelligent. Stop thinking of intelligence as a binary option of 'is or !is' and more as a capability gradient.

Being able to properly use tools is a sign of intelligence in and of itself.


I’m using “intelligent” in the “intelligent life” meaning, not the “how intelligent is this person” meaning. By this meaning, any reasonable person would realise it’s not crossing that threshold.

Where is this threshold? I don’t think anyone really knows yet. That’s why I don’t see a problem with “moving the goalposts,” because doing so is the best way to help us truly understand what it means to be an intelligent life form (artificial or otherwise).


And this is why the word "intelligent" is useless. In your use of the word we're not getting anywhere, then suddenly the Terminator kicks in the door and steps on your head, and then: "Oh, yeah, I guess we reached the intelligent point." This is a piss-poor predictor of capabilities.

And again "reasonable" is a pretty useless metric. Asking a 'reasonable' person about any system that requires expert knowledge to understand is going to derive an unreasonable answer. This is because they'll conflate intelligence with human behavior.


> In your use of the word we're not getting anywhere then suddenly terminator kicks in the door

This is just silly. Also, a machine killing me is still not enough. We have drone targeting systems in development right now but I don’t think you’d call them AI.

> And again "reasonable" is a pretty useless metric.

When you can’t define intelligence, it’s really the starting point.

The thing with LLMs is that they look almost there, but, as many are pointing out, they make inferences by analysing how words fit together, without understanding the meaning. This is why things like ChatGPT confidently spout absolute nonsense about topics they weren't trained on (with much human intervention): the model doesn't know what it's saying, so it doesn't realise it's making stuff up.



