
Funnily enough, even two different LLMs, when put in conversation with each other, can end up completing each other's sentences. I suspect it has something to do with the next-token (sequence prediction) training objective.
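
A minimal sketch of that setup, purely for illustration: two small Hugging Face causal LMs (gpt2 and distilgpt2 chosen arbitrarily) take turns extending one shared transcript. Since each is trained only to predict the next token, neither has any notion of "whose turn" it is, so the most likely move is often to just finish the sentence the other model left hanging.

    # Illustrative sketch, not anyone's actual setup: two next-token-
    # prediction models alternately extending one shared transcript.
    # Model names are placeholders; any causal LMs behave similarly.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def load(name):
        tok = AutoTokenizer.from_pretrained(name)
        model = AutoModelForCausalLM.from_pretrained(name)
        return tok, model

    speakers = [load("gpt2"), load("distilgpt2")]

    # Start with a deliberately unfinished sentence.
    transcript = "The strangest thing about language models is that they"

    for turn in range(4):
        tok, model = speakers[turn % 2]
        inputs = tok(transcript, return_tensors="pt")
        out = model.generate(
            **inputs,
            max_new_tokens=20,
            do_sample=True,
            pad_token_id=tok.eos_token_id,
        )
        # Each model only sees the running text, not who wrote it,
        # so it tends to continue mid-sentence wherever the text stops.
        transcript = tok.decode(out[0], skip_special_tokens=True)
        print(f"--- model {turn % 2} ---\n{transcript}\n")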


And this regularly happens with humans too



