I fear an LLM that is trained to provide ad-based responses in ways that don’t clearly disclose to users that the answer is an ad. YouTube review video culture already has a huge problem with this and it’s not even AI-driven.
Once companies start training models to respond based on advertising inputs (and you know they eventually will), it's gonna be even harder to trust anything these models say.