I don't think that's the case. It's true that AI models are trained to mimic human speech, but that's not all there is to it. The people making the models have discretion over what goes into the training set and what doesn't. Furthermore they will do some alignment step afterwards to make the AI have the desired opinions. This means that you cannot count on the AI to be representative of what people in the same position would do.
It could be more biased or less biased. In all likelihood it differs from model to model.
> Furthermore they will do some alignment step afterwards to make the AI have the desired opinions.
This requires more clarification. It isn't really alignment work being done at that point, or anywhere in the process, because we haven't figured out how to align models to human desires. We haven't even figured out how to align humans with each other.
At that step they are fine-tuning the various controls used during inference until they are happy with the outputs produced for specific inputs.
The model is still a black box: they're making somewhat educated guesses about how to adjust those knobs, but they don't really know what changes internally, and they definitely don't know intent (if the LLM has developed intent at all).
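For concreteness, here is a minimal sketch of that tune-the-knobs-by-watching-the-outputs loop. Everything in it is a stand-in I made up for illustration (a toy logit vector over three canned answers instead of a real model, plain cross-entropy instead of RLHF/DPO-style preference training), but the feedback shape is similar: compare the output against a preferred answer and nudge parameters until the output looks right, without ever seeing why the model produced it.

    import torch

    # Hypothetical toy "model": a single logit vector over three canned answers.
    answers = ["answer A", "answer B", "answer C"]
    logits = torch.zeros(3, requires_grad=True)

    # Raters (or the lab's guidelines) mark answer B as the preferred output.
    preferred = torch.tensor(1)

    optimizer = torch.optim.SGD([logits], lr=0.5)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(100):
        optimizer.zero_grad()
        # The tuner only sees the output distribution, not why the model
        # favors anything -- the black-box situation described above.
        loss = loss_fn(logits.unsqueeze(0), preferred.unsqueeze(0))
        loss.backward()
        optimizer.step()

    print(answers[int(torch.argmax(logits))])  # prints "answer B" after tuning

After the loop the toy "model" reliably emits the desired answer, but nothing in the loop ever looked inside it, which is the point: the adjustment is judged entirely by outputs.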
These models, as we understand them, also don't have opinions and can't themselves be biased. Bias is something we recognize, but again it's based only on the output, since we don't know why any specific output was generated. An LLM may output something most people would read as racist, for example, but that says nothing about why the output was generated, or whether the model even understands race as a concept or cared about it at all when answering.