
Yeah, I think a primary challenge with LLMs is to somehow identify them. If we can't, the net will become swamped by fake humans bent on distorting human reality in whatever ways serve their masters' whims.

Frankly, I think identification of LLM personas vs humans will be impossible. It seems increasingly likely, then, that the rise of LLMs will bring about the decline of the internet as a way to connect with other humans. Soon nobody will want to take an internet entity seriously unless it can prove it's human. And I don't see any way to do that.

So maybe LLM fake entities will be the straw that breaks the Net's back as a social medium and propels us back into the physical world? I hope so.



It is true that the rise of language models has the potential to introduce fake personas that could manipulate and distort human reality. However, I believe that identifying LLM personas versus humans is not impossible. There are techniques that can be used to detect synthetic content, such as detecting patterns in the language used or analyzing response times. Additionally, researchers and companies that have been discussed on HN itself are already working on tools to combat deepfakes and synthetic content.
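For instance, a toy heuristic (just an illustration, not a serious detector) might look at crude stylometric signals like sentence-length variance and vocabulary diversity, on the theory that very "flat" writing scores low on both:

    # Toy illustration only: crude "flatness" signals, not a real classifier.
    # Thresholds and interpretation are left entirely to the reader.
    import re
    from statistics import pvariance

    def flatness_signals(text):
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        lengths = [len(s.split()) for s in sentences]
        words = re.findall(r"[a-z']+", text.lower())
        return {
            "sentence_length_variance": pvariance(lengths) if len(lengths) > 1 else 0.0,
            "type_token_ratio": len(set(words)) / len(words) if words else 0.0,
        }

    print(flatness_signals("It is important to note that... Overall, I believe that..."))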

While it's important to remain vigilant about the potential negative impact of LLMs, it's also important to remember the positive ways in which they can be used. Language models have the potential to revolutionize industries such as healthcare and education by analyzing vast amounts of data and providing insights that can improve people's lives.

Furthermore, I don't think that the rise of LLMs will necessarily lead to the decline of the internet as a social medium. Rather, it will likely lead to a greater emphasis on trust and transparency in online interactions. We may see the development of new technologies that enable people to verify the authenticity of online identities, or the rise of platforms that prioritize human-to-human interaction.
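As a sketch of what identity verification could look like (purely hypothetical, using the third-party cryptography package; it proves only that posts come from the same keyholder, not that the keyholder is human):

    # Hypothetical sketch: sign each comment with a persistent keypair so
    # readers can verify it came from the same account. Key distribution
    # and the harder problem of proving humanity are out of scope.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    comment = b"Posted by a specific, persistent identity."
    signature = private_key.sign(comment)

    try:
        public_key.verify(signature, comment)  # raises if the comment was tampered with
        print("signature valid")
    except InvalidSignature:
        print("signature invalid")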

Overall, I believe that while there are challenges associated with the rise of LLMs, there are also opportunities for positive change. It's important to continue to monitor the development of these technologies and take proactive steps to ensure that they are used in a responsible and ethical manner.


@dang, should boring, ChatGPT-generated comments like these be banned?


That's fair (but please don't ban me for that, @dang). I genuinely just wanted to see if anyone would figure out it was ChatGPT-generated. What tipped you off?

Because I believe it's going to get harder and harder to figure out what's human and what's not.


It's very typical of ChatGPT: the neutral, regular phrasing and the general, anodyne statements. ChatGPT's averaging of words produces very flat writing.


Sorry, my comment about banning was a bit harsh.

Re what tipped me off: gonna stay security-through-obscurity on that for now :)


Yeah, I figured it out too. It sounds too formal in its structure, like a high school paper.


No, the OP just forgot to give Chatty some context. Human-enhanced AI, if you will.

Tell it to talk like a typical HN'er, adopt a more robust and aggressive stance, and include five grammatical and spelling errors.
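Something like this, say (a rough sketch assuming the current OpenAI Python client; the model name and prompt wording are just placeholders):

    # Rough sketch of the suggestion above, assuming the OpenAI Python client.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "Reply like a typical Hacker News commenter: blunt, opinionated, "
                "a bit contrarian. Include a few grammatical and spelling errors."
            )},
            {"role": "user", "content": "Will LLM personas ruin online discussion?"},
        ],
    )
    print(resp.choices[0].message.content)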



