Hacker News

Yes yes yes, we're all aware that these are word predictors and don't actually know anything or reason. But these random dice somehow give seemingly well-educated answers a majority of the time, and the fact that these programs don't technically know anything isn't going to slow the train down any.




I just don't get why people say they don't reason. It's crazy talk. The KV cache is effectively a unidirectional Turing machine's tape, so it should be possible to encode "reasoning" in there. And evidence shows that LLMs occasionally do some light reasoning. Just because they're not great at it (hard to train for, I suppose) doesn't mean they do it zero.
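To make the "unidirectional tape" analogy concrete, here's a minimal toy sketch (not any real library's API): each decoding step appends one key/value pair to the cache and attends over everything written so far. Past entries are readable but never rewritten, which is the one-way property the comment is pointing at. All names here are illustrative.

```python
import math

def attend(query, keys, values):
    # Dot-product scores, softmax, then a weighted sum of values.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

class KVCache:
    """Append-only store: writes only move forward, reads see the whole past."""
    def __init__(self):
        self.keys, self.values = [], []

    def step(self, query, key, value):
        self.keys.append(key)      # append-only, like a one-way tape
        self.values.append(value)
        return attend(query, self.keys, self.values)

cache = KVCache()
out1 = cache.step([1.0, 0.0], [1.0, 0.0], [2.0, 0.0])  # attends to 1 entry
out2 = cache.step([1.0, 0.0], [0.0, 1.0], [0.0, 3.0])  # mixes both entries
```

The first step can only see the first cache entry, so it returns that value exactly; the second step attends over both, weighting the earlier entry more because its key matches the query better.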

Would I be crazy to say that the difference between reasoning and computation is sentience? This is an impulse with no justification, but it rings true to me.

Taking a pragmatic approach, I would say that if the AI accomplishes something that, for humans, requires reasoning, then we should say that the AI is reasoning. That way we can have rational discussions about what the AI can actually do, without diverting into endless discussions about philosophy.

Eh...

Suppose A solves a problem and writes the solution down. B reads the answer and repeats it. Is B reasoning when asked the same question? What about a question that sounds similar?


If a human does A, that required reasoning. Same for AI.

If a human does B, that didn't require reasoning. Same for AI.

Believe it or not, people do make an effort to test their AIs on problems that they could not have seen in their training data.


The crux of the problem is "what is reasoning?" Of course, it's easy enough to call the outputs "equivalent enough" and then use that to say the processes are therefore also "equivalent enough."

I'm not saying it's enough for the outputs to be "equivalent enough."

I am saying that if the outputs and inputs are equivalent, then that's enough to call it the same thing. It might be different internally, but that doesn't really matter for practical purposes.


Could it be said that reasoning requires intent?

Fine. Prove to me LLMs aren't sentient. Your proof can't just be "vibes."

See: "This is an impulse with no justification." In that sense yes my justification absolutely can be vibes, and it is! Suck it!

I see we are in agreement.

"Majority" may be a bit generous, and would highly depend on the context and application.


