I think the biggest thing you are missing (from my perspective) is that making the kinds of policy changes at issue in the San Bernardino case would require a large investment from company employees and would be externally visible.
A change to the policy of what kinds of images are scanned is opaque by law, since none of the Apple employees involved can even have access to the database of hashes they are using. There is also no realistic way for the consumer to learn the true false positive rate, and no way for a third-party organization to distinguish false positives from true positives among the non-CSAM images leaving the device.
Additionally, those are just the problems in the US. Other governments can and will mandate the use of this tool for other kinds of media they find objectionable within their borders.
The large investment in this system is almost certainly the infrastructure to get it onto phones, report the results, and run it in scenarios that minimize battery impact. Which photos on the device it runs against does not strike me as a technical challenge once the tool is built, only a question with policy implications. And the easy answer to that will just be to check some flag if the phone is in a country that requires all pictures to be scanned.
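To make that concrete, here is a minimal sketch of what the "check some flag" step could look like. Every name in it is hypothetical; this is not Apple's implementation, just an illustration of how thin the per-country policy layer is once the scanning engine exists.

```swift
// Hypothetical sketch, not Apple's code: the shape of a per-country policy check.
struct ScanPolicy {
    let countryCode: String
    let scanAllOnDevicePhotos: Bool   // hypothetical per-jurisdiction switch
    let hashDatabaseID: String        // which (opaque) hash list the device loads
}

/// Returns the photo identifiers the scanner would run against.
func photosToScan(allPhotos: [String],
                  cloudSyncedPhotos: Set<String>,
                  policy: ScanPolicy) -> [String] {
    // Widening scope from "photos headed to iCloud" to "everything on the
    // device" is a policy decision expressed as a flag, not a rebuild.
    if policy.scanAllOnDevicePhotos {
        return allPhotos
    }
    return allPhotos.filter { cloudSyncedPhotos.contains($0) }
}
```

Everything interesting lives in whoever gets to set scanAllOnDevicePhotos and hashDatabaseID for a given region.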
> making the kinds of policy changes at issue in the San Bernardino case would require a large investment from company employees and would be externally visible.
To that point - I generally think that engineers at ostensibly privacy-minded companies like Apple are competent, well-intentioned, and good canaries. If I were to open Twitter and see a lot of people from Apple's security team "seeking new opportunities" and unable to give their reasons? It's very possible that a backdoor was built contrary to public statements, and they could not condone the discrepancy.
But here, not only is the list of hashes editable with merely a configuration change, it is a list that is designed to be secret and non-auditable, supplied through a non-auditable supply chain. In fact, the proponents of this program would argue "don't give the Apple engineers and product managers access to the hash list, nor access to whether test images are matched by the hash list, because it could be used for nefarious purposes if they themselves are perpetrators."
So at any time a photograph commonly used to criticize a regime or commemorate a specific event could be added to the list, and there would be literally no way a well-intentioned engineer inside Apple could even know about it. This isn't just a technology that could be applied with technical effort to make a backdoor; it's a deployed backdoor that opens up all our devices to supply chain attacks, plain and simple. A state-level actor would simply need to convince someone at NCMEC to insert something into the un-auditable hash list (whose source images are, by design, never to be looked at in totality), then compromise any person or computer in the law-enforcement-side reporting pipeline to exfiltrate the identity of anyone holding the images in question. That's absurdly dangerous.
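To put the non-auditability point in concrete terms, here is a deliberately simplified sketch. As I understand the published design, the shipped system layers blinding and private set intersection on top of this, which only makes the list less inspectable; the names below are mine, not Apple's.

```swift
import Foundation

// Simplified model: an entry in the hash list is just opaque bytes. Nothing in
// the data itself says whether it was derived from CSAM or from a photograph a
// government finds inconvenient.
struct HashListEntry: Hashable {
    let value: Data
}

/// What an "audit" of the list can actually answer: membership of hashes you
/// already hold. It cannot enumerate or reconstruct the source images.
func isMatch(_ imageHash: Data, in list: Set<HashListEntry>) -> Bool {
    return list.contains(HashListEntry(value: imageHash))
}
```

So even a well-placed engineer who can read this code end to end learns nothing about what the list actually contains.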
Matches aren’t automatically reported to authorities. They first go to Apple, where a match on an obviously non-CSAM picture will be noticed. If it passes that review, it is reported as a tip to the NCMEC, which also evaluates the pictures; only then can it be forwarded to the government. (A rough sketch of that flow is below.)
This is many steps removed from the current situation, which is that the feature rolls out in the US and photo hashes are provided by the NCMEC. Please describe how you think the system would work in China.
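Roughly, the flow I described above looks like this. The stage names are mine and the details are assumptions for illustration, not Apple's published design.

```swift
// Assumed review flow, for illustration only.
enum ReviewOutcome {
    case dismissedByApple         // Apple's human reviewer calls it a false positive
    case dismissedByNCMEC         // NCMEC evaluates the tip and drops it
    case forwardedToAuthorities   // reached only after both human steps
}

func review(appleReviewerSaysCSAM: Bool, ncmecConfirms: Bool) -> ReviewOutcome {
    guard appleReviewerSaysCSAM else { return .dismissedByApple }
    guard ncmecConfirms else { return .dismissedByNCMEC }
    return .forwardedToAuthorities
}
```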
A change in policy regarding which pictures are scanned is a change of NCMEC policy, which would play out as the same kind of high-level court dispute as San Bernardino.
I’m not knowledgeable enough to comment about other countries’ policies, but the fact remains that no one has a law enforcement iBoot, as far as we know; and if one exists, CSAM filtering was never the opportunity that repressive regimes were waiting for.
My understanding is that existing NCMEC policy includes non-CSAM photos that were found alongside CSAM. I’m not talking about evil government attacks surreptitiously criminalizing your meme images in the US; I’m worried about a boring dystopia where, over time, the false positive rate is much higher than anyone will admit and Apple employees end up reviewing tens of millions of false-positive photos a year, pulled at random from people’s iPhones (rough math below).
And a more direct dystopia outside the US, where they have already criminalized your memes. What stops China from saying "we have our own database of images objectionable to the state that you must scan devices for"?
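Here is the back-of-envelope version of the reviewer-load worry. Every number is an assumption made up for illustration, and it ignores the per-account match threshold, which changes what actually surfaces for human review.

```swift
// All figures below are illustrative assumptions, not measured rates.
let accounts = 1.0e9                     // order-of-magnitude guess at iCloud Photos accounts
let photosPerAccountPerYear = 2_000.0    // assumed upload volume
let falsePositiveRatePerImage = 1.0e-6   // assumed one-in-a-million per image

let expectedFalsePositivesPerYear = accounts * photosPerAccountPerYear * falsePositiveRatePerImage
print(expectedFalsePositivesPerYear)     // 2,000,000; at a 1e-5 rate it is already 20,000,000
```

The specific numbers are not the point; the point is that even tiny per-image error rates multiply out to a serious review burden, and nobody outside Apple can check the real rate.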
This type of persistent, continuous, opaque scanning for "objectionable material" makes the prior discussion about whether law enforcement should have the ability to boot into a criminal's phone seem almost quaint by comparison.
I hope that there will be a transparency report on how many reports were made and how many were found to be false positives.
Regarding China, the answer is probably “nothing”, but the system as it exists is not amenable to what I imagine their goals would be: it only monitors the photo library (not messages) and requires a certain threshold of matches. The question, I think, is what today prevents China from requiring devices to attempt to match a database of pictures with a mandated matching algorithm and a mandated reporting server. I believe (admittedly without any knowledge of the matter) that the same standard would apply to both.
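For what it's worth, the threshold piece is the simplest part to picture. Apple has talked about a threshold on the order of 30 matches; the value and names below are illustrative.

```swift
// Illustrative threshold check (the exact threshold value is not the point).
func accountCrossesThreshold(matchCount: Int, threshold: Int = 30) -> Bool {
    // Nothing is surfaced for human review until an account accumulates this
    // many matches, so a single matched photo triggers nothing on its own.
    return matchCount >= threshold
}
```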
And there will be bugs that expose people to the results of those other governments' demands, like the Taiwan flag emoji crashing case. https://www.wired.com/story/apple-china-censorship-bug-iphon...