Here’s a Quora thread from 4 years ago:

https://www.quora.com/Once-Queen-Elizabeth-dies-will-Prince-...

There are loads of articles and discussions online speculating about what “will” happen when Queen Elizabeth dies.

When you have a very, very, very large corpus to sample from, it can look a lot like reasoning.

I see what you mean, and it's indeed quite likely that texts containing such hypothetical scenarios were included in the training data. Nonetheless, the implication is that the model was able to extract the conditional represented in those texts, recognize when that condition was in fact met (or at least asserted: "The queen died."), and then apply the entailed truth. To me that demonstrates reasoning capabilities, even if, for example, it memorized/encoded entire Quora threads in its weights (which seems unlikely). If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.
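
To make that inference pattern concrete (as a toy sketch only, not a claim about how the model computes anything internally), here is a minimal Python illustration of modus ponens: a conditional is stored, a new assertion is checked against its antecedent, and the consequent is derived. The rule and assertion strings are hypothetical examples.

    # Toy illustration of the pattern: hold a conditional, notice
    # when its antecedent is asserted, and apply the consequent.
    # This says nothing about the mechanism inside an LLM.

    # Hypothetical conditional, as it might be gleaned from a corpus:
    # (antecedent, consequent)
    rules = [
        ("the queen died", "charles is now king"),
    ]

    def apply_rules(assertion, rules):
        """Return every consequent whose antecedent matches the assertion."""
        normalized = assertion.strip().lower().rstrip(".")
        return [conseq for antecedent, conseq in rules if antecedent == normalized]

    print(apply_rules("The queen died.", rules))  # ['charles is now king']

The open question in this thread is whether the model does something functionally equivalent to this lookup-and-apply step, or merely reproduces text where the step was already performed.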


Yes, this.

There's clearly an internal representation of the relationships that is being updated.

If you follow my Twitter thread, it shows some temporal reasoning capabilities too. It's hard to argue that was just copied from the training data: https://twitter.com/nlothian/status/1646699218290225154