That doesn't mean the model is broken or prejudiced. It _does_ what it has learned to do.
If you want to fix the model by accounting for the imbalanced data, that's one thing - but the dataset itself remains the same.
Prejudice or bias would be building a dataset by improperly cherry-picking your data, or through other sampling errors.
You don't need a Ph.D. to see or understand this.