
This is it. They're language models which predict next tokens probabilistically, and a sampler picks one according to the desired "temperature". Any generalization outside their data set is an artifact of random sampling: happenstance and circumstance, not genuine substance.
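For concreteness, temperature sampling just rescales the model's logits before the softmax: low temperature sharpens the distribution toward greedy argmax, high temperature flattens it toward uniform. A minimal Python sketch of that step (the function name and the example logits are mine, purely illustrative):

    import numpy as np

    def sample_token(logits, temperature=1.0, rng=None):
        # Divide logits by temperature, then softmax and sample.
        # temperature -> 0 approaches greedy argmax decoding;
        # higher temperatures flatten the distribution.
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
        scaled -= scaled.max()            # subtract max for numerical stability
        probs = np.exp(scaled)
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    # Three candidate tokens, logits favoring index 0:
    logits = [2.0, 1.0, 0.1]
    print(sample_token(logits, temperature=0.2))  # almost always picks 0
    print(sample_token(logits, temperature=2.0))  # picks vary much more

At temperature 0.2 the sampler is nearly deterministic; at 2.0 the "happenstance" the parent comment describes dominates.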




However: do humans have that genuine substance? Is human invention and ingenuity more than trial and error, more than adaptation and application of existing knowledge? Can humans generalize outside their data set?

A "yes" answer here implies belief in some sort of gnostic method of knowledge acquisition. Certainly that comes with a high burden of proof!


Yes. Humans can perform abduction, extrapolating from given information to new information. LLMs cannot; they can only interpolate new data from existing data.
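To make the interpolation/extrapolation distinction concrete (this is only an illustration of the statistical terms, not a claim about LLM internals): a curve fit answers queries inside its training range well but fails badly outside it. A minimal NumPy sketch:

    import numpy as np

    # Fit a degree-5 polynomial to sin(x) sampled only from [0, 3].
    rng = np.random.default_rng(0)
    x_train = rng.uniform(0, 3, 200)
    coeffs = np.polyfit(x_train, np.sin(x_train), deg=5)

    # Interpolation: a query inside the training range is answered well.
    print(np.polyval(coeffs, 1.5), np.sin(1.5))  # close agreement

    # Extrapolation: a query outside the training range goes badly wrong.
    print(np.polyval(coeffs, 8.0), np.sin(8.0))  # wildly off

Whether LLM generalization is "merely" this kind of interpolation is exactly the point under dispute in this thread.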


Can you elaborate on what you mean by that, and prove it?

https://journals.sagepub.com/doi/10.1177/09637214251336212


The proof is that humans do it all the time and that you do it inside your head as well. People need to stop with this absurd level of rampant skepticism that makes them doubt their own basic functions.

The concept is too nebulous to "prove", but the fact that I'm operating a machine (relatively) skillfully to write to you shows we are in fact able to generalise. This wasn't planned; we came up with it. Same with cars, etc. We're quite good at the whole "tool use" thing.


