
Well, if you're literally researching ways to impose a bias on an estimator, then sure, your approach will be more susceptible to bias. OK then, LeCun should amend his statement to also tell engineers not to impose a racist prior if they use some kind of engineered regularization term. Actually, isn't that what fairness researchers are developing themselves: ways to attack ML systems in order to bias their behavior? Honest question.
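
To make the "engineered regularization term" point concrete, here's a toy sketch of my own (not anything from the fairness literature specifically, and every name and the lam knob are made up): an ordinary logistic-regression loss with an added penalty on the gap in average predicted score between two groups. Retarget that gap instead of shrinking it and the exact same machinery imposes a bias rather than removing one, which is the dual-use point.

  # Toy sketch: cross-entropy loss plus a group-gap regularizer.
  # lam controls how hard the prior on the gap is imposed.
  import numpy as np

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  def loss(w, X, y, group, lam):
      p = sigmoid(X @ w)
      ce = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
      gap = p[group == 0].mean() - p[group == 1].mean()  # demographic-parity gap
      return ce + lam * gap**2

  # Toy data: 200 samples, 3 features, a binary group label.
  rng = np.random.default_rng(0)
  X = rng.normal(size=(200, 3))
  y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(float)
  group = rng.integers(0, 2, size=200)

  # The penalty changes which weights look "best":
  w = rng.normal(size=3)
  print(loss(w, X, y, group, lam=0.0), loss(w, X, y, group, lam=10.0))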

I am personally of the opinion that it is fundamentally impossible to advance technology in a one-sided way. Anything with the power to do good can do evil too; power itself is the danger. There might be a logical proof of this somewhere. Step very far back and try to describe what a technology like an ML algorithm provides to society: software that can perform tasks as well as a person? Discriminate between similar things using noisy observations? Extract information that is obscured? The technology that accomplishes this can always be used both ways.


