Hacker News

Yes, ML research and engineering is my job. I actually lean more into the research side of our R&D function, so this isn't coming from a place of disdain for researchers.

I'm not advocating that purely theoretical work of every form should be focused on algorithmic bias. Certain research areas (adversarial and robust learning, deployable model theory, computer vision, and NLP) lend themselves much more directly to work on societally relevant bias than others (algorithms and complexity theory, hardware research, performance research, pure statistical learning theory). I work on faster graph neural networks, so I'm actually in the latter group.

So? I just want everyone to be on the same page about the importance of work on algorithmic bias. I've seen people on Reddit dismiss Gebru's work as "pure rhetoric" - that is cause for concern! Acknowledging the validity and importance of such work is especially important for people who are leaders in the field and have influence over priorities. Don't people on HN complain about FB's algorithms literally every day? Shall we forget the Myanmar incident?

Let me put it this way: in biology, Watson and Crick were researchers who participated in discovering the structure of DNA. For most of their careers, they were not practitioners. Yet James Watson had a hugely negative impact on the field by advocating against women in science and for eugenics (while completely ignoring Rosalind Franklin). The aggressive, toxic tone he helped set in genetics had consequences by the time of the Asilomar conference (1975), where reporters and scientists who were critical of big-name organizers got de-badged and escorted out of a conference on ethics.

Our leaders and researchers matter - let's not make the same mistakes. The message should be: "I might no longer tool in TensorFlow, but I care, and so should you." It was easy for Jeff Dean to do that, which I thought was awesome. No senseless purity testing of engineer versus scientist. I think small steps like the new NeurIPS broader impact statement are heading in the right direction.

Edit: I just realized this reads like I'm equating LeCun with Watson... that's not my intention. That would be incredibly insulting, my apologies. I just needed an example of leadership having ripple effects throughout a field. Mea culpa.



Well, if you're literally researching ways to impose a bias on an estimator, then sure, your approach will be more susceptible to bias. Okay then, LeCun should amend his statement to also tell the engineers not to impose a racist prior if they use some kind of engineered regularization term. Actually, isn't that what fairness researchers are developing themselves? Ways to attack ML systems in order to bias their behavior? Honest question.
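For concreteness, a fairness-oriented regularization term is usually just an extra penalty added to the ordinary training loss. This is an illustrative sketch only, not any specific method from the fairness literature: the demographic-parity-gap penalty, the function names, the `lam` weight, and the use of plain NumPy are all my assumptions.

```python
import numpy as np

def demographic_parity_penalty(scores, group):
    """Squared gap between the mean predicted scores of the two groups.

    `group` is a 0/1 array marking group membership per example.
    """
    gap = scores[group == 0].mean() - scores[group == 1].mean()
    return gap ** 2

def fair_loss(scores, labels, group, lam=1.0):
    """Binary cross-entropy plus a fairness regularization term.

    `lam` controls how strongly the group gap is penalized.
    """
    probs = 1.0 / (1.0 + np.exp(-scores))
    eps = 1e-12  # avoid log(0)
    bce = -np.mean(labels * np.log(probs + eps)
                   + (1 - labels) * np.log(1 - probs + eps))
    return bce + lam * demographic_parity_penalty(probs, group)
```

Minimizing `fair_loss` trades accuracy against the between-group gap via `lam`, which also illustrates the point above: the same machinery that pulls predictions toward parity could just as easily be tuned to push them the other way.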

I am personally of the opinion that it is fundamentally impossible to advance technology in a one-sided way. Anything with the power to do good can do evil too. Power itself is the danger. There might be a logical proof of this somewhere. Step very far back, and try to describe what a technology like a ML algorithm provides to society: software that can perform tasks as well as a person? discriminate between similar things using noisy observations? extract information that is obscured? The technology which accomplishes this can always be used both ways.


Do you have a citation for reporters and scientists being de-badged and escorted out of the Asilomar conference? (I'm always curious about the history of molecular biology, and this is new to me.)

BTW, Watson didn't ignore Franklin: her name is listed in the W&C Nature paper as providing data. What he said in his book was much worse than ignoring her.


Yea, I'll try to find a citation. I recall it from a class I took on Bioethics two years ago with Robin Scheffler.


Very possibly this: https://books.google.com/books?id=TLHGLtwbazAC&pg=PA65&lpg=P...

which adds a ton of color, and also kind of illustrates the problem of overly enthusiastic science reporters publishing things irresponsibly early (covid reporting is a good example).



