
I think they get way more "engagement" from people who use it as their friend, and the end goal of subverting social media and creating the most powerful (read: profitable) influence engine on earth makes a lot of sense if you are a soulless ghoul.


It would be pretty dystopian if we got to the point where ChatGPT pushed unannounced advertisements to those people (the ones forming a parasocial relationship with it). Imagine someone complaining they're depressed and ChatGPT proposing doing XYZ activity, which is actually a disguised ad.

Short of such scenarios, that "engagement" would be useless, and would actually cost them more money than it makes.


Do you have reason to believe they are not doing this already?


No, otherwise Sam Altman wouldn't have had an outburst about revenue. They know that they have this amazing system, but they haven't quite figured out how to monetize it yet.


Yes, I've heard no reports of poorly fitting branded recommendations from AI models. The PR risk for the labs would be huge, and the propensity to leak would be high, given the selection effects that pull people into these roles.


I've not heard of it, either.

But I suspect that we're no more than one buyout away from that kind of thing.

The labs do appear to avoid paid advertising today. But today's actions should not be taken as an indicator that the next owner(s) won't behave in a completely soulless manner in their effort to maximize profit at every possible expense.

On a long enough timeline, it seems inevitable to me that advertising via LLM bots will become a real issue.

(I mean: I remember having an Internet experience that was basically devoid of advertising. It changed, and it will never change back.)


Not really, but with the amount of money they're bleeding, it's bound to get worse if they're already doing it.



