
I guess the way I think about it is this: Yann has a long history in ML. He's probably had to deal with bias problems for decades and is probably pretty experienced with it. Now he's heading up some Facebook ML stuff, and on a daily basis he watches hundreds to thousands of engineers work on systems that process and learn from billions of users. I feel like after you do that for a while, you gain enough wisdom and experience to deserve to be engaged with respect and thoughtfulness. She has repeatedly engaged with bad faith, misleading interpretations of intent, and is just sort of really "attacky". Sure, he's a bit condescending (I've seen the same thing and it annoyed me for a bit, but then I read about what he's done and realized: he's got tons of experience and data about this and works with it at scale constantly).


My understanding is that this is not the first time they've engaged on this type of topic, and that Yann has a history of ignoring other people who brought up similar criticisms to him (at conferences, etc.).

At some point you lose the assumption of good faith, and deserve to be called out for refusing to learn.

For what it's worth, I'm well aware of who Yann is, and was at the time as well. That doesn't make him immune to being wrong. (Nor, by the way, do I see any bad faith in her initial tweet. I see exasperation, but not bad faith).


I lost the assumption of good faith already in her first tweet, in particular

>You can’t just reduce harms caused by ML to dataset bias.

She's already attacking a strawman right there. Yann did not deny any harms caused by ML.


Nor did Timnit claim that he did. Her disagreement was about the causes of harms, not the existence of resulting harms.


There was no disagreement. Yann didn't say anything about harms, nor did the guy he was replying to (who talked of dangers). In particular, Yann did not suggest in any way a) that there are no harms or b) that harms are only related to biased training sets. Yann was commenting on the outcome of a particular research project and how its use of a biased training set resulted in the outcome that was observed.

Timnit brought up harms first, then pretended Yann had marginalized such harms and attributed them solely to a biased training set. And then viciously attacked that strawman. That's a bad faith argument.

I can appreciate that she might indeed have been generally sick and tired, as she writes, and I can appreciate that sick and tired people will not always manage to be nice, overcome their own biases, and assume good faith from the other party all the time; we're all human after all. But that doesn't change anything about her argument being made in bad faith.


> Yann didn't say anything about harms, nor did the guy he was replying to (who talked of dangers)

This feels like unreasonable semantics. The dangers are precisely dangers of causing harm; the harms are the concrete results of those theoretical dangers manifesting. They aren't different things.

> In particular Yann did not suggest in any way a) there are no harms

I agree, and I've said as much.

> b) that harms are only related to biased training sets

He did, insofar as he suggested that the dangers were due solely to bias in the training set, which is implied when he says that if you train the same model on different data, everyone looks African. Like yes, that is true, but it doesn't reduce the harms (or dangers, if you want to be precise). It just creates a different set of biases with a different set of potential harms (which again, are "dangers").
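
To make that concrete with a deliberately crude toy sketch (my own illustration, not anything from the thread or from the actual model being discussed): imagine a "model" that just learns the majority label of whatever training set you hand it. Re-skewing the data doesn't remove the bias; it just points it at a different group. All the names and numbers below are made up for illustration.

    # Toy illustration: a constant predictor that learns the majority
    # training label. Swapping in a differently skewed dataset doesn't
    # eliminate the bias, it just changes who bears the resulting errors.
    from collections import Counter

    def train_majority_model(labels):
        """Return a predictor that always outputs the most common training label."""
        majority = Counter(labels).most_common(1)[0][0]
        return lambda _x: majority

    # Hypothetical datasets: 90% of examples labelled group A vs. 90% group B.
    dataset_mostly_a = ["A"] * 90 + ["B"] * 10
    dataset_mostly_b = ["B"] * 90 + ["A"] * 10

    model_a = train_majority_model(dataset_mostly_a)
    model_b = train_majority_model(dataset_mostly_b)

    test_set = ["A", "B", "A", "B"]  # a balanced test population
    print([model_a(x) for x in test_set])  # always "A": errors fall on group B
    print([model_b(x) for x in test_set])  # always "B": errors fall on group A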

I'm not seeing a strawperson.


and I believe that Yann knows 100X more about the causes of harms than Timnit does, so lecturing him is just wasting everybody's time.


You've done a PhD. Why on earth would you believe that someone, even an expert in the broad field, would know more about a particular topic than someone who specializes in that area of research?

Do you think Yann knows 100X more about every area of ML than everyone else, or is it just fairness, accountability, and transparency that he happens to be more knowledgeable in than arguably a founder of the subfield?


It's pretty simple: he made ML work before almost anybody else did, kept working on it during the deep network explosion, and is now running Facebook AI, which has to deal with these sorts of problems with practical solutions on a daily basis, with billions of users. That sort of daily experience counts for so much that I would place him in the "knows 100X more about every area of ML" category (excluding rare subfields).

It's rare, but I have encountered people outside my field who knew more about it than I did, because of their daily experience over decades or their raw intelligence. Yann seems to have both.


So are you suggesting that Yann has more expertise in what you're working on now (which I know, and would consider to be an ML subfield in the same vein as Timnit's), and that you would therefore defer to his expertise when he says things that show nothing more than an undergraduate-level understanding of the topic?

Because I'm only a dilettante in the AI ethics space (and admittedly ML as a whole), and I can describe the flaws in Yann's reasoning. Blind deference of that level isn't rational.


I've argued with Yann about my field (we share a friend on facebook) and in that case, he did turn out to be technically correct.

I don't agree with your assessment of his "Reasoning".



