LLMs were never designed to be fact checkers; they're text generators, not fact generators.
Besides, you don't need ML to get a computer to confidently show you something that is completely wrong: repeatedly multiplying certain floating point numbers will do it.
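A minimal sketch in Python, purely for illustration: 1.1 has no exact binary representation, so each multiplication rounds, and the error compounds against the exact rational result.

```python
from fractions import Fraction

x = 1.0
exact = Fraction(1)
for _ in range(60):
    x *= 1.1                  # each step rounds to the nearest double
    exact *= Fraction(11, 10) # exact rational arithmetic for comparison

print(x)                  # what the computer confidently reports
print(float(exact))       # the correctly rounded true value
print(x - float(exact))   # the accumulated drift, small but nonzero
```

The printed value looks like an ordinary, trustworthy number; only the comparison against exact arithmetic reveals it's off in the low digits.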