
Ever since I started writing on this issue, I’ve had a number of folks contact me and say they were unwittingly flagged by Google Photos’ ML-based CSAM detection algorithm. The most recent of these told me the photo was sent as a (pretty awful) prank in a WhatsApp group, and their phone auto-saved it to the camera roll. All of these folks lost all access to Google services with minimal options for appeal, and have had their lives disrupted in various ways I can’t imagine.

I have no way of verifying what these people are telling me, and I can’t help them even if I did. And even more than this: my first reaction is to wonder if they’re all actually pedophiles who are just lying to me, and/or if they’re going to text CSAM at me. (Because that’s the power of this accusation — nobody will trust you once they hear it.)

Some of these folks have had their lives ruined and some have told me they were hospitalized and considered suicide, even though they weren’t actually prosecuted. I can certainly imagine being in their situation due to the mistake of having an asshole friend-of-a-friend in a group chat and some bad settings in my phone. (I have neither, thankfully.)

If none of this stuff convinces you, here is a much more carefully-documented version of a similar story in which a dad took a photo of their toddler’s rash and was subsequently investigated for CSAM. This one has the “happy” ending that the accused is entirely cleared by the police, but never gets his Google account back and has to live with whatever other trauma and shame comes from the investigation: https://www.nytimes.com/2022/08/21/technology/google-surveil...



It’s even worse than this because once the idea of mass CSAM-scanning is normalized, a false accusation is immediately credible (they would know, after all…)


That’s an utter nightmare of a situation I’d never considered or prepared to defend against.

I hope the “prankster” ends up caught and in prison.


> I can certainly imagine being in their situation due to the mistake of having an asshole friend-of-a-friend in a group chat and some bad settings in my phone. (I have neither, thankfully.)

This always makes me think how much worse middle school will get: kids are guaranteed to push boundaries, they all have some anger, justified or not, towards each other or the adults they know, and some fraction will learn how nastily they can weaponize the mandatory reporter system. Even if the truth comes out quickly, there’s so much room for strife first.


I understand how the system can lead to people losing access to their Google account, but GP suggested people could lose access to their children. The NYT article is the opposite of what GP claimed - the case was closed before he even knew it had been opened.


Imagine you are in the middle of a custody battle when this news comes down, or that you are jailed before being cleared. Imagine the cops and social workers in your city aren't diligent, or that you already have a criminal record. And further imagine you don’t have the luxury of a New York Times reporter investigating your case and writing publicly about how you’re innocent. The NYT example is a best case outcome for an innocent person who triggers these algorithms.


I guess my point is I don't see how your "jailed before being cleared" example could ever reasonably happen. The NYT story is a best case, but it's also the average case, and in fact any other outcome would be absurd.


I think your understanding of the common vs. uncommon cases here is out of whack. The common case is that parents who have done nothing wrong lose their children for days or weeks (at best) before things are resolved in their favor.

And then there's of course the (less-common, but happens often enough to be troubling) situation where parents lose their children and never get them back, even though they're innocent of causing any harm.

Note that I'm talking about the broader effect of Child Protective Services when they investigate any kind of reported abuse, not just CSAM possession/distribution.


The system isn't reasonable, so your denial based on what you think can "reasonably" happen is just your privilege talking.


That might be the case, but there would still be examples of it happening to other people, wouldn't there?


Re: unreasonable system, would you accept as evidence the 1115 children who were removed from their parents because the Dutch IRS labeled the parents as fraudsters (due to a biased and discriminatory AI system, of course)?

https://www.bnnvara.nl/artikelen/hoe-konden-1115-kinderen-va...



