Let’s not confuse the two parts that Apple is implementing:
- one is parental controls: it’s opt-in, it uses ML, and it’s “local” (that is, alerts are sent to the parent)
- the other is plain hash matching, which does not “save the children” so much as “catch the viewer”, and only that. It has no impact on the abuse of the CSAM subjects because it only matches already-known content.
I don’t know why NCMEC is excited, since stopping the viewer does not stop the abuse; it does not affect the victims.
Conspiratorially speaking, it almost feels like things aren’t the way they’re described and Apple will in fact use ML to detect new content and report that.
The threshold thing doesn’t even make sense otherwise. One known CSAM picture should be enough to trigger the report, but it sounds like they want better accuracy for ML detection.
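(For concreteness, here is a minimal sketch of the mechanism being debated: hash each image, check it against a database of already-known material, and report only once a match threshold is crossed. This is just an illustration of the concept, not Apple’s actual NeuralHash/private-set-intersection design; the hash function, names, and threshold value below are assumptions for the example.)

    import hashlib
    from typing import Iterable, Set

    def image_hash(data: bytes) -> str:
        # Stand-in for a perceptual hash; a real system must tolerate
        # resizing and re-encoding, which a cryptographic hash does not.
        return hashlib.sha256(data).hexdigest()

    def should_report(images: Iterable[bytes], known_hashes: Set[str],
                      threshold: int = 30) -> bool:
        # Only membership in the known-hash database counts, so brand-new
        # (never-hashed) material produces zero matches by construction.
        matches = sum(1 for img in images if image_hash(img) in known_hashes)
        return matches >= threshold

In a scheme like this, the threshold exists mainly to absorb false positives from an imperfect perceptual or ML-derived hash; with exact matching a threshold of one would do, which is the point made above.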
So you’re technically correct but operationally incorrect. When CSAM is detected, there is also the possibility of new, unhashed CSAM being found among the suspect’s other files.
'the other is plain hash matching, which does not “save the children” so much as “catch the viewer”, and only that. It has no impact on the abuse of the CSAM subjects because it only matches already-known content.'
Are you claiming victims of child sexual abuse wouldn't care if images of the abuse are circulating freely and being viewed with impunity by pedophiles?
So stop the circulating. This is like trying to stop people from using drugs by searching EVERYONE’S homes and reporting suspicious-looking substances.
"So stop the circulating." Good idea. Any ideas how we could do that? Perhaps one thing that would help is to implement a system that hashes circulating images to ensure they aren't known images of child sexual abuse. Just a thought.
A lot of unwanted kids out there end up in broken homes or on the street. There are also a lot of kids born into abusive families.
Good ways to stop the circulating would be to increase birth control education and access, increase foster care funding and access, implement common-sense adoption policies, and increase funding for Child Protective Services.
Those measures prevent CSAM from existing in the first place. Just as it would be ridiculous to search everyone's houses to stop drug use, it is far more effective to address the causes of drug dependency.