
This seems to be a common trait of a lot of the more "aligned", "helpful" LLMs out there. You can drop any random excerpt from your diary into ChatGPT and it will tell you about how brilliant, sensitive, and witty you are. It's really quite sickening.


Reminds me of my father, who'd tell every kid that they're a genius, me included. It got me motivated to try things, but whenever there was a failure, I felt terribly betrayed.


General advice from psychology is that when it comes to success, you should praise kids for things they control, like effort, time spent, inquisitiveness, and concentration, not for things outside their control, like talent or luck. Basically, praise them for what they did, not what they are.

When it comes to morality, it's the other way around. You praise kids for being good people when they do something right, because you want them to internalize the identity of a good person and associate it with those behaviors.

Internalizing the identity of a genius is mostly useless, rarely beneficial, and often harmful.


That sucks. But it's why I keep trying to remind my kids that even though they are smart, they will fail at things. Failing is a part of learning. Possibly even the most important part. "If you're not making mistakes, you're not trying hard enough."


Honestly, it's obviously horrendously gag-worthy, but also kind of funny that there is so much bullshit marketing copy out there that LLMs invariably converge on this inspirational Stanford application letter / upbeat LinkedIn influencer tone of voice and just apply it to everything.


Well, an LLM doesn’t have the capability to like anything more than anything else. It doesn’t really matter to GPT if your diary excerpt is the worst piece of writing ever written, or the most brilliant - it’ll just tell you what you want to hear and that’s that.


Only because they've been RLHFed and prompted to be agreeable. A Marvin the Paranoid Android LLM could similarly be designed to hate everything equally.

Genuine People Personalities, indeed.


"Tell you what you want to hear" is a matter of training and prompting, not the technology itself. But I agree that asking an LLM to make an aesthetic judgment is a fool's errand.


How is it sickening? Tell it to roast you if you think it's a problem.


It feels sickening to be praised meaninglessly for something not worthy of praise. ChatGPT in particular loves to talk about how clever and interesting the text you show it is, even if you're not actually asking for that kind of analysis.

It's also sickening that I see people using these LLMs to rewrite performance reviews, peer feedback, business reports, etc. I've already started to notice business communication getting even more saccharine and toothless.


Sickening in the same way you get sick from eating too much sugar.



