Fundamentally, the choice of training data set, and the biases that went into its collection.
Also, in the case of statistical models, the hand-crafting of the features the model is trained on.
Actually, this is relevant for neural networks too, even though they learn their own features: some amount of "framing" of the raw data usually takes place to focus the network on the portion of the input the trainer considers relevant. That removes noise, but it also removes context.
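To make that concrete, here is a minimal sketch (the dataset, column names, and feature list are all hypothetical) of how a routine feature-selection step bakes both the collection bias and the framing bias into whatever model gets fitted afterwards:

```python
import pandas as pd

# Hypothetical raw data: each row is a historical loan decision.
# "branch" records *where* the application was taken -- context that
# also encodes which populations the collection process reached.
raw = pd.DataFrame({
    "income":   [42_000, 55_000, 31_000, 78_000],
    "debt":     [12_000,  5_000, 14_000,  9_000],
    "branch":   ["downtown", "downtown", "suburb", "suburb"],
    "approved": [0, 1, 0, 1],
})

# The "framing" step: the trainer keeps only the features they judge
# relevant. This removes noise, but it also removes the context needed
# to notice that the sample itself is skewed (e.g. one branch's
# customers may be systematically under-represented).
FEATURES = ["income", "debt"]   # a human choice, not a neutral one
X = raw[FEATURES]
y = raw["approved"]

# Any model fitted on X now inherits both the collection bias and the
# framing bias, and neither is visible in the modelling code itself.
print(X.head())
```

Nothing downstream of `X` would reveal either choice, which is exactly why these biases are so easy to overlook.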
You asked about the biases of the people building the model, which is what I answered.
You didn't ask about the biases introduced at the requirements-specification stage, or those introduced when the trained model is deployed operationally.
Those are just as important as - and arguably more important than - the choice of training data and the technical implementation.
The responsibility for the ethics of using ML neither begins nor ends with the ML engineer who builds the machine, and the application of ML in certain domains raises serious questions that cannot be addressed simply by "better training data".