
The idea is to combat it on children's devices so that they are not able to produce CSAM without their parents knowing about it. The proposed solution doesn't address it on the recipient side.



I may be misunderstanding, but isn’t the technique you describe—using general classifiers to interdict new CSAM and adult content on the child’s device, with reporting to parents—already widely deployed on iOS and largely uncontroversial, since parents can easily adjudicate false positives and edge cases? [0]
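
(To make that concrete, here is a rough sketch of the flow I mean: a client-side classifier gating outgoing images and reporting events, not images, to a parent. Every name and value below, nudity_score, notify_parent, the threshold, is a made-up placeholder for illustration, not Apple's actual API.)

    import random

    NUDITY_THRESHOLD = 0.9  # assumed confidence cutoff, not a real value

    def nudity_score(image_bytes):
        # Placeholder for an on-device ML classifier; returns a confidence in [0, 1].
        return random.random()

    def notify_parent(event):
        # Placeholder for the parental-reporting hook; only metadata leaves the device.
        print("parent notified:", event)

    def screen_outgoing_image(image_bytes):
        """Return True if the image may be sent without intervention."""
        score = nudity_score(image_bytes)
        if score < NUDITY_THRESHOLD:
            return True
        # Warn the child and report the event (not the image itself) to the parent.
        notify_parent({"kind": "sensitive_image_blocked", "score": round(score, 2)})
        return False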

I thought the technologies at issue involved either breaking encryption to allow authorities direct visibility into all digital communications, or various flavors of centrally-maintained blacklists of known images, with the idea that the device or the service automatically vets all new user data against the blacklist (e.g. [1]). All of which is impossibly tempting to extend to other types of content that the powers that be find antisocial, inconvenient, or embarrassing.
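
(Stripped of the cryptographic blinding described in [1], the blacklist-vetting step boils down to something like the toy sketch below; the hashes, the Hamming-distance tolerance, and the function names are all invented for illustration.)

    # Toy version of vetting new uploads against a centrally maintained blacklist
    # of perceptual hashes. Real systems (PhotoDNA, the NeuralHash design in [1])
    # use far more robust hashes plus cryptographic protocols; these values are fake.

    BLACKLIST = {0b1011_0110_1100_0011, 0b0001_1111_0000_1010}  # fictional 16-bit hashes
    MATCH_DISTANCE = 2  # assumed Hamming-distance tolerance

    def hamming(a, b):
        return bin(a ^ b).count("1")

    def is_flagged(upload_hash):
        """True if the upload's hash is 'close enough' to a known image's hash."""
        return any(hamming(upload_hash, h) <= MATCH_DISTANCE for h in BLACKLIST)

    print(is_flagged(0b1011_0110_1100_0001))  # near-duplicate of a listed image -> True
    print(is_flagged(0b0110_1001_0011_1100))  # unrelated image -> False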

For what it’s worth, in the US at least, there are jurisdictions that prosecute children as child pornographers for sending images of themselves to their romantic partners (e.g. [2]). I can see how, if I were a parent in that situation, I’d want to know about and deal with that behavior without the fear that my child might spend the rest of their adolescence in prison and the rest of their lives on a pedophile blacklist.

[0] https://www.apple.com/child-safety/

[1] [PDF] https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...

[2] https://www.aclu.org/news/juvenile-justice/minnesota-prosecu...


great pdf! thanks for sharing!


I thought the idea of CSAM detection was more about matching an image against a list of fingerprints/perceptual hashes of known CP, locally on the device. That match is then sent to Apple/Google/etc., who notify the local authorities that you're a pedo. I also was under the impression that Apple already did that (at one point at least).
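
(For the curious, a "perceptual hash" in this sense is roughly a tiny difference-hash along the lines below; PhotoDNA and Apple's NeuralHash are far more sophisticated, and this function is just an illustrative stand-in, not any vendor's actual algorithm. Pillow is only used for grayscale conversion and resizing.)

    # Minimal difference-hash (dHash), standing in for the "perceptual hashes" above.
    from PIL import Image

    def dhash(path, hash_size=8):
        # Shrink to (hash_size+1) x hash_size and compare horizontally adjacent pixels.
        img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
        pixels = list(img.getdata())
        bits = 0
        for row in range(hash_size):
            for col in range(hash_size):
                left = pixels[row * (hash_size + 1) + col]
                right = pixels[row * (hash_size + 1) + col + 1]
                bits = (bits << 1) | (1 if left > right else 0)
        return bits  # 64-bit fingerprint; near-duplicate images differ in only a few bits

The point of a hash like this is that recompressing or resizing a photo barely changes its bits, which is what makes Hamming-distance matching against a blacklist of known images workable.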


> I also was under the impression that Apple already did that (at one point at least).

Apple backed down on their on-device CSAM scanning. It was never implemented.

However, cloud image hosts, including iCloud, do scan uploaded photos (PhotoDNA etc.) and have done so for years.



