
What I find frightening is how many people are willing to take LLM output at face value. An argument is won or lost not on its merits, but by whether the LLM says so. It was bad enough when people took whatever was written on Wikipedia at face value; trusting an LLM that may have hardcoded biases and is munging whatever data it comes across is so much worse.



I’d take the Wikipedia answer any day. Millions of eyes on each article vs. a black box with no eyes on the outputs.


Even Wikipedia is a problem though. There are so many pages now that self-reference is almost impossible to detect. Meaning: a statement on Wikipedia cites an outside article as its reference, but that outside article was originally written using that very Wikipedia article as its own source.

It's all about trust. Trust the expert, or the crowd, or the machine.

They can all be gamed.


False equivalence. "Nothing is perfectly unreliable, therefore everything is (broadly) unreliable, therefore everything is equally unreliable." No, some sources are substantially more reliable than others.


*perfectly reliable, but yes.


> "Millions of eyes on each article"

Only a minority of users contribute regularly (126,301 have edited in the last 30 days):

https://en.wikipedia.org/wiki/Wikipedia:Wikipedians#Number_o...

And there are 6,952,556 articles in the English Wikipedia, so if each active editor touches roughly one article a month, an average article is corrected every 55 months (more than 4 years).

It's hardly "Millions of eyes on each article".
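
Back-of-envelope sketch of that rate (a rough model, not a measurement: it assumes each active editor touches about one distinct article per 30-day window, which the reply below pushes back on):

    # Rough model: one edit per active editor per 30 days,
    # spread evenly across all articles. The counts are the
    # figures cited above; the one-edit-per-month rate is an
    # assumption.
    articles = 6_952_556       # English Wikipedia articles
    active_editors = 126_301   # editors active in the last 30 days

    months_between_edits = articles / active_editors
    print(f"~{months_between_edits:.0f} months "
          f"(~{months_between_edits / 12:.1f} years) between edits")
    # ~55 months (~4.6 years) between edits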


But of those 126,301 people who have edited in the last 30 days, some have edited more than one article. In fact, some have made millions of edits over their lifetimes, which disproportionately increases the total. At least 5,000 people have edited more than 24,000 times each, which works out to over 120 million edits from that group alone.

https://en.wikipedia.org/wiki/Wikipedia:List_of_Wikipedians_...

(And also: each editor has (approximately) 2 eyes :) )
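
A quick sketch of that counterpoint, using the 24,000-edit threshold from the linked list (the even per-article spread is an assumption; in practice heavy editors cluster on popular pages):

    # Lower bound on lifetime edits from the most active editors alone.
    top_editors = 5_000
    min_edits_each = 24_000
    articles = 6_952_556

    total_edits = top_editors * min_edits_each   # 120,000,000
    print(f"{total_edits / articles:.0f}+ edits per article, if spread evenly")
    # 17+ edits per article, if spread evenly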


This is what people said about the internet too. Remember the whole "do not ever use Wikipedia as a source"? I mean sure, technically correct, but human beings are generally imprecise, and having the correct info 95% of the time is fine. You learn to live with the 5% error.


A buddy won a bet with me by editing the relevant Wikipedia article to agree with his side of the wager.


I think it brings forward all the low-performers and people who think they are smarter than they really are. In the past, many would just have stayed silent unless they recently read an article or saw something on the news by chance. Now, you will get a myriad of ideas and plans with fatal flaws and a 100% score on LLM checkers :)


People take texts full of unverifiable ghost stories written thousands of years ago at face value to the point that they base their entire lives on them.


I've seen someone use an LLM to summarize a paper and post the summary on reddit for people who hadn't read it.

Papers have abstracts...


Sounds fun, if only to compare it to the abstract.


You know, these days I think the abstracts are generated by LLMs too. And the paper. Or at least it uses something like Grammarly. If things keep going this way, typos are going to be a sign of academic integrity.


A proper LLM will include realistic rates of typos eventually. ;)


Amen, Eliza wins.

That humans respond so irrationally still boggles my mind.


Darn.


> frightening

Don't be scared of "the many"; they're just people, not unlike you.



