I don't think the cases are really the same. With Wikipedia, people have learned to trust that the information is at least reasonably good with high probability, because there's an editing crucible around it and misinformation can be corrected surgically. No one can hotpatch an LLM in five minutes.

The best LLM-powered solutions use as little LLM and as much conventional search-engine / semantic-database lookup and handcrafted coaxing as possible. But even then, the conversational interface is nice and lets you get away with less handcrafting on the NLP side.
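
Roughly the shape I mean, as a toy sketch: a conventional similarity lookup does the heavy lifting, and the model only has to phrase an answer over the retrieved text. The bag-of-words "embedding" and every name here are stand-ins, not any particular product's API:

    import math
    from collections import Counter

    def embed(text):
        # Stand-in embedding: bag-of-words term counts. A real system
        # would call an actual embedding model here.
        return Counter(text.lower().split())

    def cosine(a, b):
        # Cosine similarity between two sparse count vectors.
        dot = sum(a[t] * b[t] for t in a if t in b)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(query, docs, k=2):
        # The conventional part: rank documents by similarity to the
        # query and keep the top k.
        q = embed(query)
        return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

    def build_prompt(query, passages):
        # The LLM's job shrinks to paraphrasing retrieved text with
        # citations, not recalling facts from its weights.
        context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
        return ("Answer using only the sources below, citing them by number.\n"
                f"{context}\nQuestion: {query}")

    docs = [
        "Wikipedia is a collaboratively edited online encyclopedia.",
        "Semantic search ranks documents by embedding similarity.",
        "LLMs store knowledge implicitly in their parameters.",
    ]
    query = "How does semantic search work?"
    print(build_prompt(query, retrieve(query, docs)))

The division of labor is the point: the lookup is auditable and fixable in five minutes, and the model just glues words around it.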

Using Perplexity or Claude in "please source your answer" mode is much more like using a conventional search engine than like looking up data embedded in 5 trillion (or whatever) parameters.
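
For completeness, here's what sending such a grounded, cite-your-sources prompt looks like as an actual model call. This is a sketch using the anthropic Python SDK; the model name is illustrative, and the prompt is assumed to carry retrieved passages like the sketch above builds:

    import anthropic

    # Assumes ANTHROPIC_API_KEY is set in the environment, and that the
    # prompt already contains retrieved passages plus a cite-by-number
    # instruction, as in the sketch above.
    prompt = (
        "Answer using only the sources below, citing them by number.\n"
        "[1] Semantic search ranks documents by embedding similarity.\n"
        "Question: How does semantic search work?"
    )

    client = anthropic.Anthropic()
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative; pick a current model
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    print(msg.content[0].text)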
