Hacker News

Isn't this just a form of next-token prediction? i.e. you'll keep your options open for a potential rhyme if you select words that have many associated rhyming pairs, and you'll keep your options even more open if you focus on broad topics over niche ones.
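The "keep your options open" idea can be made concrete with a toy sketch: prefer ending a line on a word whose rhyme class is large, so a later line has many possible completions. Everything here (the word lists, the greedy heuristic) is hypothetical illustration, not how any real LLM works.

```python
# Toy rhyme classes (hypothetical data): each key is a rhyme sound,
# mapped to words that share it. A larger class means more ways to
# complete a future rhyme.
RHYME_CLASSES = {
    "-at": ["cat", "hat", "mat", "flat", "chat"],
    "-ight": ["night", "light", "sight", "bright"],
    "-urple": ["purple"],  # a dead end: almost nothing rhymes with it
}

def rhyme_options(word):
    """Count how many other words could later rhyme with `word`."""
    for members in RHYME_CLASSES.values():
        if word in members:
            return len(members) - 1
    return 0

def pick_line_ending(candidates):
    """Greedy heuristic: end the line on the word that leaves
    the most rhyming options open for a later line."""
    return max(candidates, key=rhyme_options)

print(pick_line_ending(["purple", "cat", "night"]))  # → "cat"
```

Under this framing, "planning" and "greedy option-preserving prediction" can produce the same observable choice, which is exactly what the thread is arguing about.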





Assuming the task remains just generating tokens, what sort of reasoning or planning would you say is the threshold before it's no longer "just a form of next-token prediction"?

This is an interesting question, but it seems at least possible that, as long as the fundamental operation is simply "generate tokens", it can't go beyond being just a form of next-token prediction. I don't think people were thinking of human thought as a stream of tokens until LLMs came along. This isn't a very well-formed idea, but we may require an AI for which "generating tokens" is just one subsystem of a larger system, rather than the only form of output and interaction.

But that means any AI that just talks to you can't be AI by definition. No matter how decisively the AI passes the Turing test, it doesn't matter. It could converse with the top expert in any field as an equal, solve any problem you ask it to solve in math or physics, write stunningly original philosophy papers, or gather evidence from a variety of sources, evaluate them, and reach defensible conclusions. It's all just generating tokens.

Historically, a computer with these sorts of capabilities has always been considered true AI, going back to Alan Turing. The same goes for science fiction, from recent movies like Her to older examples like The Moon Is a Harsh Mistress.



I'm not sure this is a meaningful distinction: fundamentally, you can describe the world itself as a "next token predictor". Just treat the world as a simulator with a time step of some quantum of time.

That _probably_ won't capture everything, but for all practical purposes it's indistinguishable from reality (yes, yes, time is not constant everywhere).
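The analogy above can be sketched directly: any simulator that maps state_t to state_{t+1} in fixed time steps is, formally, predicting the "next token" (the next state) from the sequence so far. The dynamics below are an arbitrary made-up example, not a claim about physics or about LLMs.

```python
def step(state):
    # Hypothetical dynamics: position/velocity update with one
    # fixed time quantum per step and a constant pull of -10.
    pos, vel = state
    return (pos + vel, vel - 10)

def rollout(state, n):
    """Generate the next n 'tokens' (states) autoregressively,
    each one predicted from the history so far."""
    history = [state]
    for _ in range(n):
        history.append(step(history[-1]))
    return history

print(rollout((100, 0), 3))  # → [(100, 0), (100, -10), (90, -20), (70, -30)]
```

The point of the sketch: "next-thing prediction" is a framing that fits any discrete-time dynamical system, which is why some argue it doesn't, by itself, rule anything in or out.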


That doesn't really explain it, because then you'd expect lots of nonsensical lines as the model tries to make a sentence that fits the theme and rhymes at the same time.

recursive predestination. LLM's algorithms imply 'self-sabotage' in order to 'learn the strings' of 'the' origin.

In the same way that human brains are just predicting the next muscle contraction.

Except that's not how it works...

To be fair, we don't actually know how the human mind works.

The surest things we know are that it is a physical system, and that it does feel like something to be one of these systems.






