Hacker News

I fear an LLM that is trained to provide ad-based responses in ways that don’t clearly disclose to users that the answer is an ad. YouTube review video culture already has a huge problem with this and it’s not even AI-driven.

Once companies start training models to respond based on advertising inputs (and you know they will eventually), it’s gonna be even harder to trust anything it says.



Even the most basic regulation would solve this, which means Europe will block this behavior but America won't.


The simple solution is to run your own LLM.
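For what it's worth, "run your own LLM" is close to a one-liner these days. A minimal sketch, assuming Ollama is installed (the model name is illustrative; llama.cpp or similar runners work the same way in principle):

```shell
# Download open model weights once, then run inference entirely locally.
ollama pull llama3
ollama run llama3 "Summarize the tradeoffs of local vs hosted LLMs."
```

Everything stays on your machine, so there's no serving-side ad injection — though, as the sibling comment notes, that says nothing about what went into the weights themselves.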


And hope the ads aren't trained into the models on Hugging Face.


For the average person, that is not a simple solution.



