Is it accurate to say they are true coincidentally? That phrasing kind of suggests randomly true. I understand the AI doesn't really comprehend whether something is true or false, but my understanding is that the results are more than random, maybe something closer to a weighted opinion.
What it returns is based on what it was trained on. If it was trained on a corpus containing untruths and prejudice, you can get untruths and prejudice out. You can't draw conclusions about which beliefs are widely held from what it generates in response to specific prompts.
If you ask it "who controls the banks", the texts containing that phrase are primarily antisemitic ones; it doesn't occur in general-audience writing about the banking industry. In any other context the phrase makes no sense, because it presupposes a global controlling class that doesn't exist, so it never appears in other writing. What you get back from that prompt will therefore be based on the writings of the prejudiced, not on some representative global snapshot. Taking that as evidence of "weighted opinion" doesn't make sense.
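To make that concrete, here's a toy sketch (the corpus, counts, and phrases are all made up for illustration) of why a phrase that only ever occurs in a small fringe sub-corpus yields only fringe continuations, no matter how small that sub-corpus is relative to the whole:

```python
from collections import Counter, defaultdict

# Hypothetical corpus: most documents are ordinary writing about
# banking; a tiny minority are conspiracist texts. The trigger phrase
# appears ONLY in the minority documents.
general = ["banks set interest rates", "banks issue loans"] * 50
fringe = ["who controls the banks secretly"] * 2
corpus = general + fringe

# Build a simple continuation table: next word conditioned on the
# preceding (up to) three-word prefix.
continuations = defaultdict(Counter)
for doc in corpus:
    words = doc.split()
    for i in range(len(words) - 1):
        prefix = " ".join(words[max(0, i - 2):i + 1])
        continuations[prefix][words[i + 1]] += 1

# The fringe texts are ~2% of the corpus, yet every continuation of
# the prompt phrase comes from them, because the general texts never
# contain that prefix at all.
print(continuations["controls the banks"])  # Counter({'secretly': 2})
```

The point isn't that real models are n-gram counters; it's that generation is conditioned on the prompt, so a prompt that only ever co-occurs with one kind of writing draws its continuations from that writing alone, not from the corpus as a whole.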