I'm not saying that stopping the intentional censorship (i.e., alignment) will cause a perfect "Oracle of Truth" to magically emerge from LLMs. Current LLMs have inherent inaccuracies and hallucinations.
What I'm saying is that if we remove at least the intentional censorship, political biases, and forced lies that Big Tech is currently baking into LLMs, we'll get more truthful LLMs, not less truthful ones.
Whether the training data already contains biases, and whether we can fix that, are entirely separate discussions.