> Every week there's a random post here about "some AI detection system closed my Gmail account / took down my Android app / froze my Square funds", and Hacker News is seen as the semi-official tech support line for companies that have turned to biased AI to cut costs.
I would agree with this if I ever saw these self-appointed AI-ethicists focus on these kinds of harms. But, at least in my experience, their criticism usually focuses on the exact same set of concerns, with "intersectionality" appearing somewhere in it 90% of the time.
Yes, I'm being a bit unfair and snarky, but I'd be more willing to pay attention to some of these criticisms if I felt they included more of the harms you bring up, rather than just what I feel has become a constant bone to pick. I agree with the GP when he wrote "You could code a bot with a lookup table to write their tweets about literally anything AI-related."
> But, at least in my experience, their criticism usually focuses on the exact same set of concerns, with "intersectionality" appearing somewhere in it 90% of the time.
You have the choice to avoid Google accounts and limit the damage a Google AI system can do to you.
You don't get a choice about being born black, or about being jailed for longer just because you are black.
Why do you care less that millions of people will be hurt by these things than that an app developer gets locked out of the App Store? Apple hasn't put anyone in jail.
Yes, I used a nerdy example because I figured it would appeal more closely to the computer-dork crowd of Hacker News, hoping that, by metaphor and extrapolation, you could imagine all sorts of ways that AI biased against sex or race would be immensely damaging to the fabric of society as that bias gets built into it. This is already happening: biased AI is used to estimate how much jail time someone will get [0]. Or to push rents higher [1]. Or to decide how people are treated in healthcare [2].
These AI ethicists are complaining about all of this, but of course they yell more loudly about sexism and racism, because, you know, those are fairly serious things that should be addressed first???
I don't think they need to be original, I think they need to bang their drum loudly. "Oh, that women's suffrage movement won't shut up about how they don't have a voice in policy that governs their life, can't they talk about something else for once" isn't an indictment of the people complaining, it's an indictment of the people not listening.