I fail to see how that matters. You're implying that all reality is cultural, but that seems irrelevant. The same thing would apply to scientific facts, but whales not having a word for science doesn't make it not real.
If we had somehow discovered LLMs right after Newton published his theory of gravity, and Einstein only later discovered General Relativity, then GR would not be in the training set of the neural net. That doesn't make GR any less of a description of reality! You also can't convert General Relativity into whalesong!
But you CAN explain General Relativity in English, or in Chinese to a person in China. The fact that we can create a mapping from the concept of General Relativity in the neural network of the brain of a human in the USA using English, to someone in China using Chinese, to an ML model, is what makes it a shared statistical model of reality.
You also can't convert General Relativity into the language of "infant babble"; does that make General Relativity any less real?
Fan death in South Korea, where people believe that a fan running while you sleep can kill you.
The book "Pure, White and Deadly", whose author and findings we discredited, then spent decades blaming fat while packing on the pounds with high-fructose corn syrup.
An LLM isn't going to find some intrinsic truth in its data set that we are ignoring. An LLM isn't going to resolve the reproducibility/replication crisis. I have not seen one discredit a scientific paper with its own findings.
To be clear, LLMs can be amazing tools, but garbage in, garbage out still applies.
You are describing the state of LLMs from two years ago, when they were basically just pre-trained on the internet and then fine-tuned to follow a particular instruction format. Current models still use this as a first step, but are then trained extensively with reinforcement learning, which has given them much better reasoning and logic skills than human-tainted data ever could. See how Grok 4, for example, still eagerly dismisses all those right-wing hoaxes, despite being massively tuned to favour right-wingers by its creators' careful selection of pre-training data.
I suggest you read something like the DeepSeek R1 paper, because you and everybody else here seem to have no clue how it works (which is not surprising, tbh).
> That doesn't make GR any less of a description of reality!
When you layer the concept of awareness into the mix, it does alter reality for an individual or an LLM. Awareness creates interesting blind spots in our statistical models of reality.