Exactly. LLMs are closer to a Plinko game than consciousness. And I'd bet they quickly start returning gibberish when you run them in a feedback loop, sort of like an op-amp saturating at a power rail in a badly designed circuit. A rough sketch of what I mean by "feedback loop" is below.
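
For what it's worth, here's a minimal sketch of the experiment I have in mind, assuming the Hugging Face transformers library and the gpt2 model purely for illustration (not any particular setup from the article): feed each step's output back in as the next prompt and watch whether it degenerates.

  # Hypothetical feedback-loop sketch: output at step n becomes the prompt at step n+1.
  from transformers import pipeline

  generator = pipeline("text-generation", model="gpt2")

  text = "The quick brown fox jumps over the lazy dog."
  for step in range(10):
      # generated_text includes the prompt, so slice off the old input
      # and keep only the newly generated continuation as the next prompt.
      out = generator(text, max_new_tokens=50, do_sample=True)[0]["generated_text"]
      text = out[len(text):].strip() or text
      print(f"--- step {step} ---\n{text}\n")

With a small model you tend to see repetition or drift within a handful of iterations, which is the kind of runaway behavior the op-amp analogy is pointing at.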
