Hacker News

It's not a political standard though. There is actual diversity in this world. Why wouldn't you want that in your product?


Fix the data input side, not the data output side. The data input side is slowly being fixed in real time as the rest of the world gets online and learns these methods.


In a sane world we would be able to tack on a disclaimer saying "This model was trained on data with a majority representation of Caucasian males from Western English-speaking countries, so results may skew in that direction." People would read it and think "well, duh" and "hey, let's train some more models with more data from around the world," instead of opining about systemic racism and sexism on the internet.


That wouldn't necessarily fix the issue, or do anything at all. A model isn't a perfect average of all the data you throw into its training set. You have to actually try these things and see whether they work.
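A toy sketch of that point (my own illustration, not anything from the thread; the class names, means, and 90/10 split are all made up): even a well-calibrated classifier trained on skewed data doesn't reproduce the skew proportionally. Because the decision rule folds in the class prior, the minority class typically ends up predicted even *less* often than it appears in the training data, so "add a disclaimer about the mix" doesn't tell you how the outputs will actually behave.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D data: 90% class A ~ N(0, 1), 10% class B ~ N(1.5, 1)
n_a, n_b = 9000, 1000
xs_a = rng.normal(0.0, 1.0, n_a)
xs_b = rng.normal(1.5, 1.0, n_b)

# "Train": estimate each class mean and prior from the data
mu_a, mu_b = xs_a.mean(), xs_b.mean()
log_prior_a = np.log(n_a / (n_a + n_b))
log_prior_b = np.log(n_b / (n_a + n_b))

def predict(x):
    # Gaussian log-likelihood (unit variance) plus log prior,
    # i.e. a naive Bayes decision rule
    score_a = -0.5 * (x - mu_a) ** 2 + log_prior_a
    score_b = -0.5 * (x - mu_b) ** 2 + log_prior_b
    return np.where(score_b > score_a, "B", "A")

# Evaluate on fresh data drawn with the same 90/10 mix
test = np.concatenate([rng.normal(0.0, 1.0, 9000),
                       rng.normal(1.5, 1.0, 1000)])
minority_share = float(np.mean(predict(test) == "B"))
print(f"B is 10% of the data but {minority_share:.1%} of predictions")
```

Here B is 10% of the data but only a few percent of the predictions: the output skew is amplified, not mirrored, which is exactly why you have to measure the trained model rather than reason from the training mix alone.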



