> the conmen who are selling ChatGPT and the like are extremely irresponsible for the way they sell LLMs as magical AI that arrives at factually correct answers
ChatGPT has a pop-up on first use, a warning at the top of each chat, a warning below the chat bar, and a section in the FAQ explaining that it can generate nonsense and can't verify facts, provide references, or complete lookups.
There is probably more OpenAI could do, like detecting attempts to generate fake references and adding a red warning to that chat message, since some people evidently still take its hallucinations as fact (though with hundreds of millions of users, that may be only a tiny fraction). But I don't think this is a fair characterization.