
> These LLMs are not about helping anyone, their goals are engagement and mining data for that engagement.

Wow, this is a really interesting idea! A sneaky play for LLM providers is to be helpful enough to still be used, but also sufficiently unhelpful that your users give you additional training data.



This is obvious in retrospect: instead of making LLMs work better, their handlers invented techniques to make LLMs look like they work better; summarization is one example. Next-gen LLMs then get trained on that data.

Now, instead of getting an answer right away, the user has to engage in a discussion, which increases the sunk cost of working with the LLM.



