Who’s to say that a large language model is fundamentally incapable of learning some kind of ability to reason or apply logic?
Fundamentally, our brains are not so different, in the sense that we don’t run some kind of automated theorem prover directly either. We get logic as an emergent behavior of a low-level system of impulses and chemical channels. Look at kids: they may understand simple cause and effect, but they gradually learn things like proof by contradiction (“I can’t have had the candy because I was in the basement”). No child is born able to apply logic in a way that is impressive to adults, and many adults never learn to apply it well either.
I don’t think LLMs are going to automatically become super-human logicians capable of both complex mathematical proofs and composing logically consistent Homeric epics, but I see no reason they could not learn some kind of basic logic, if only because it helps them better model what their output should be.