Hacker News

The LLM doesn't 'know' more than us - it has compressed more patterns from text than any human could process, and that's not the same as knowledge. And yes, training and decoding deliberately skew the output distribution toward high-probability continuations to keep the output coherent - without that bias toward seen patterns, it would generate nonsense. That's precisely why it can't be creative far outside its training distribution: the system is tuned to suppress novel combinations that deviate too much from learned patterns. Coherence and genuine creativity are in tension here.
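One concrete place this trade-off shows up is temperature scaling at decode time. A minimal sketch (the logits and vocabulary here are made up for illustration, not from any real model): lowering the temperature concentrates probability mass on the highest-scoring tokens, which is exactly the bias toward seen patterns described above.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn logits into probabilities, dividing by temperature first.
    Lower temperature sharpens the distribution toward the top token;
    higher temperature flattens it, allowing less likely continuations."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits for a tiny four-word vocabulary.
logits = [4.0, 2.0, 1.0, 0.5]

low_t = softmax_with_temperature(logits, 0.5)   # conservative, coherent
high_t = softmax_with_temperature(logits, 2.0)  # flatter, more "creative"

# The top token's share grows as temperature drops: coherence vs. novelty.
assert low_t[0] > high_t[0]
```

Dialing temperature toward zero makes generation nearly deterministic and repetitive; dialing it up admits rarer tokens but eventually degrades into incoherence, which is the tension the comment points at.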

