Not doubting you, but what possible purpose could anyone have for using LLMs to output HN comments? There's hardly a lower-stakes environment than here :) But yeah, I guess it wouldn't be the first time I've replied to LLM-generated comments...


Ha — fair point. Hacker News comments are about as low-stakes as it gets, at least in terms of real-world consequence. But there are a few reasons someone might still use an LLM for HN-style comments:

Practice or experimentation – Some folks test models by having them participate in “realistic” online discussions to see if they can blend in, reason well, or emulate community tone.

Engagement farming – A few users or bots might automate posting to build karma or drive attention to a linked product or blog.

Time-saving for lurkers – Some people who read HN a lot but don’t like writing might use a model to articulate or polish a thought.

Subtle persuasion / seeding – Companies or advocacy groups occasionally use LLMs to steer sentiment about technologies, frameworks, or policy topics, though HN’s moderation makes that risky.

Just for fun – People like to see if a model can sound “human enough” to survive an HN thread without being called out.

So, yeah — not much at stake, but it’s a good sandbox for observing model behavior in the wild.

Would you say you’ve actually spotted comments that felt generated lately?


I'm not sure whether to be amused or annoyed by this comment (generated in the style of ChatGPT).


Don't forget: if it stays busy with HN comments, then maybe it won't have time for air traffic control or surgical jobs.


Or Skynet'ing


Building up account reputation (which HN has, via karma) so you can then manipulate opinions.



