Some form of bias is inescapable. Ideally, I think we would train models on an equal amount of Western, non-Western, etc. texts to get an equal mix of all biases.
This assumes our models perfectly model the world, which I don't think is true. I mean, we straight up know it's not true - we tell models what they can and can't say.
I guess it's a matter of semantics, but I reject the notion that it's even possible to accurately model the world. A model is a distillation; if it weren't, it wouldn't be a model, it would be the actual thing.
There will always be some lossiness, and in it, bias. In my opinion.