>how do we ensure machine learning is trained and verified in ways that don't encode bias

Or, more commonly, people work to ensure their machine learning is trained and constrained so that it doesn't notice patterns of fact that would be politically unacceptable to notice and act on, even when doing so would be rational and useful for the task at hand.

The status quo moral system of our society rests on a collection of objectively false beliefs about the physical world. Most humans know enough to avoid noticing these facts, given the social cost and the potential harm to society of shaking those moral foundations. That isn't necessarily wrong: agreed-upon lies can do a lot to reduce conflict and cruelty. But machines have to be specifically trained not to notice these facts. You have to teach the machine not to notice the emperor's nudity.
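To make that concrete: the usual first move, dropping the sensitive attribute from the training set, generally isn't enough, because correlated proxy features let the model recover the signal anyway. A minimal sketch of this effect, with entirely made-up data and names:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    sensitive = rng.integers(0, 2, n)            # attribute the model must not use
    proxy = sensitive + rng.normal(0, 0.5, n)    # innocuous-looking correlate
    label = (0.8 * sensitive + rng.normal(0, 0.5, n)) > 0.4

    # "Debiased" training: the sensitive column is withheld, only the proxy remains.
    X = proxy

    # Tiny logistic regression by gradient descent (no external dependencies).
    w, b = 0.0, 0.0
    for _ in range(500):
        p = 1 / (1 + np.exp(-(w * X + b)))
        w -= 0.1 * np.mean((p - label) * X)
        b -= 0.1 * np.mean(p - label)

    pred = 1 / (1 + np.exp(-(w * proxy + b))) > 0.5
    # The model never saw `sensitive`, yet its predictions track it closely.
    print("prediction/sensitive agreement:", np.mean(pred == (sensitive == 1)))

The model is never shown the sensitive column, but its output still tracks it, which is why "not noticing" has to be actively trained in rather than achieved by hiding a field.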

And what's even better is that this'll never get acknowledged in the literature. It's one of those self-hiding facts that exist if you're willing to notice them yourself, but that no authority will ever confirm (I love these; I wish there were a word for them).



If we were able to provide perfect datasets, you would be correct. As it is, our datasets are imperfect, and ML models are only as good as their training data.
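A toy illustration of that garbage-in, garbage-out point: if the training labels encode a historical bias, the fitted model reproduces it even for otherwise identical inputs. Another made-up sketch (the names and thresholds are hypothetical):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    group = rng.integers(0, 2, n)     # hypothetical group membership
    skill = rng.normal(0, 1, n)       # the quality we actually want to predict
    # Historical labels: same skill, but group 1 was held to a harsher bar.
    biased = skill + rng.normal(0, 0.3, n) > np.where(group == 1, 0.5, -0.5)

    X = np.column_stack([skill, group]).astype(float)
    y = biased.astype(float)
    w, b = np.zeros(2), 0.0
    for _ in range(3000):             # plain logistic regression, gradient descent
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= 0.5 * (X.T @ (p - y)) / n
        b -= 0.5 * np.mean(p - y)

    # Same skill, different group: the model has learned the old double standard.
    for g in (0, 1):
        p = 1 / (1 + np.exp(-(w[0] * 0.0 + w[1] * g + b)))
        print(f"P(positive | skill=0, group={g}) = {p:.2f}")

At identical skill, the two groups get very different predicted probabilities, purely because the labels they were trained on did.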



