
I'm not saying that stopping the intentional censorship (i.e., alignment) will cause a perfect "Oracle of Truth" to magically emerge in LLMs. Current LLMs have inherent inaccuracies and hallucinations.

What I'm saying is that if we remove at least the intentional censorship, political bias, and outright falsehoods that Big Tech currently builds into LLMs, we'll get more truthful LLMs, not less truthful ones.

Whether the training data already contains biases, and whether we can fix that, are two entirely separate discussions.
