
I don't understand why people are saying it's LLM.

To me it's more of a stream of consciousness style of writing.



I'm fascinated by all these comments I see on HN and elsewhere where people deny that a blatantly LLM-written article was LLM-written, including cases where people praise it for not being LLM-written (e.g. https://news.ycombinator.com/item?id=44384138 ). Leave aside the issue of whether it's a good or bad thing (I've been doing generative text NNs since 2015, so I'm mostly for it, when done well); I'm just interested in the inability to notice.

Skimming your comments, you, for example, do not seem to be illiterate or a bad writer at all despite being ESL (although you overuse the double-sentence structure in your comments), yet you describe this as 'stream of consciousness' (it is not even close to that; look at an actual example like Joyce) and seem to think it is fine.

So I'm puzzled as to how. Why isn't it obvious to you that the style is so mode-collapsed ( https://gwern.net/doc/reinforcement-learning/preference-lear... )? Do you also not notice how all the ChatGPT images are cat-urine yellow? (I've been asking people in person whether they have noticed this in the Bay Area, and I'd say <20% of enthusiastic generative AI users have noticed.) What are you thinking when you read OP? Does it all just round off to 'content', and you don't notice the repetition because you treat it all as a single author? Are you just skimming and not reading it?



