
> One of the most powerful arguments in Apple’s favor in the 2016 San Bernardino case is that the company didn’t even have the means to break into the iPhone in question, and that to build the capability would open the company up to a multitude of requests that were far less pressing in nature, and weaken the company’s ability to stand up to foreign governments.

I think a crucial missing piece is the FBI's argument: Apple is fully capable of developing and signing a "law enforcement iBoot". IIRC, the FBI was even willing to have someone else develop the software and only ask Apple to sign it, which Apple definitely has the capability to do; only a policy of not signing other people's software stood in the way.
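To make that concrete, here's a rough sketch (in Swift, with a stand-in Curve25519 key and made-up names; not Apple's actual secure-boot code) of why the signing step is the entire gate:

    import CryptoKit
    import Foundation

    // Stand-in key pair playing the role of Apple's signing key and the
    // public key burned into the device's boot ROM.
    let appleSigningKey = Curve25519.Signing.PrivateKey()
    let bootROMPublicKey = appleSigningKey.publicKey

    // The device only runs an image whose signature verifies against the
    // burned-in public key; the image contents are otherwise opaque to it.
    func deviceWillBoot(image: Data, signature: Data) -> Bool {
        bootROMPublicKey.isValidSignature(signature, for: image)
    }

    // The FBI's ask reduces to one signing operation: someone holding
    // Apple's key signs a modified image, and the boot check passes.
    let modifiedIBoot = Data("iBoot with the passcode retry limits removed".utf8)
    let signature = try! appleSigningKey.signature(for: modifiedIBoot)
    assert(deviceWillBoot(image: modifiedIBoot, signature: signature))

The hard part was never technical; it was Apple's policy of refusing to put its signature on such an image.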

If we agree that Apple was right in 2016, it stands to reason that Apple cannot be compelled to modify its CSAM filter to capture arbitrary content, report at lower thresholds, or expand beyond iCloud Photos (say, to Messages itself). The amount of work they would have to do for that seems even higher: whole pieces of infrastructure simply don't exist. What am I missing?



I think the biggest thing you are missing (from my perspective) is that complying in the San Bernardino case would have required a large investment from company employees and would have been externally visible.

A change to the policy of what kinds of images are scanned is opaque by law, since none of the Apple employees involved are even allowed access to the database of hashes being matched against. There is also no realistic way for consumers to learn the true false positive rate, and no way for a third-party organization to distinguish false positives from true positives among the non-CSAM images leaving the device.

Additionally, these are just the problems within the US. Other governments can and will mandate the use of this tool for other kinds of media they find objectionable within their borders.

And there will be bugs that expose people to the results of these other governments, like the Taiwan flag emoji crashing case. https://www.wired.com/story/apple-china-censorship-bug-iphon...

The large investment in this system is almost certainly the infrastructure to get it onto phones, report the results, and run it in a way that minimizes battery impact. Which on-device photos it runs against does not strike me as a technical challenge once the tool is built, only a policy question. And the easy answer to that will be to check some flag when the phone is in a country that requires all pictures to be scanned.
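Roughly what I mean, with entirely made-up names (this is not Apple's code, just an illustration of how small the scope change becomes once the matcher ships):

    // Hypothetical per-jurisdiction scan policy; once the hashing,
    // matching, and reporting infrastructure exists, widening the scope
    // is a data change, not an engineering project.
    struct ScanPolicy: Decodable {
        let hashDatabaseID: String   // which opaque hash set to match against
        let scanAllPhotos: Bool      // not just photos queued for iCloud upload
        let scanMessages: Bool       // expand beyond the photo library
        let reportThreshold: Int     // matches required before anything is reported
    }

    // A server-supplied table keyed by region would be all it takes;
    // nothing about the on-device matcher itself has to change.
    func policy(for regionCode: String,
                table: [String: ScanPolicy],
                fallback: ScanPolicy) -> ScanPolicy {
        table[regionCode] ?? fallback
    }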


> complying in the San Bernardino case would have required a large investment from company employees and would have been externally visible

To that point - I generally think that engineers at ostensibly privacy-minded companies like Apple are competent, well-intentioned, and good canaries. If I opened Twitter and saw a lot of people from Apple's security team "seeking new opportunities" without being able to give their reasons, it would be very possible that a backdoor had been built contrary to public statements and that they could not condone the discrepancy.

But here, not only is the list of hashes editable with a mere configuration change, it is fundamentally a list of hashes designed to be secret, non-auditable, and supplied through a non-auditable supply chain. In fact, the proponents of this program would argue that Apple's engineers and product managers should not have access to the hash list, nor to whether test images match it, because that access could be used for nefarious purposes if they themselves were perpetrators.

So at any time, a photograph commonly used to criticize a regime or commemorate a specific event could be added to the list, and there would be literally no way for a well-intentioned engineer, even inside Apple, to know about it. This isn't just a technology that could be turned into a backdoor with technical effort; it's a deployed backdoor that opens all our devices to supply chain attacks, plain and simple. A state-level actor would simply need to convince someone at NCMEC to insert something into the un-auditable hash list (whose source images are, by design, never viewed in totality), then compromise any person or computer in the law enforcement-side reporting pipeline to exfiltrate the identity of anyone holding the images in question. That's absurdly dangerous.
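In code terms, the trust boundary looks something like this (ordinary SHA-256 stands in for the perceptual NeuralHash, and the names are made up; this is a sketch of where the trust sits, not Apple's implementation):

    import CryptoKit
    import Foundation

    // The device only ever holds opaque digests, so nothing on the client
    // can tell a CSAM hash from the hash of a protest photo that was
    // slipped into the list.
    typealias OpaqueHash = Data

    func opaqueHash(of imageBytes: Data) -> OpaqueHash {
        Data(SHA256.hash(data: imageBytes))
    }

    func matches(_ imageBytes: Data, against blockedHashes: Set<OpaqueHash>) -> Bool {
        // All of the trust lives in whoever assembled `blockedHashes`;
        // this code, and any engineer reading it, cannot audit the set.
        blockedHashes.contains(opaqueHash(of: imageBytes))
    }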


Matches aren’t automatically reported to authorities. They first go to Apple, and a match on an absurdly non-CSAM picture will be noticed. If it passes through, it will be reported as a tip to the NCMEC, which will also evaluate the pictures; and only then can it be forwarded to the government.
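As a sketch of that flow (hypothetical types and stages, not Apple's implementation; the actual threshold value and review criteria are Apple's to set):

    // Models how far a flagged account can get, given the outcome of
    // each review stage that sits between a match and the government.
    enum Outcome {
        case noAction                  // below threshold: nothing is reviewable
        case stopsAtAppleReview        // Apple reviewer sees an obvious false positive
        case stopsAtNCMEC              // NCMEC evaluates and declines to forward
        case referredToLawEnforcement  // only after both reviews confirm
    }

    func howFarDoesItGo(matchedPhotoCount: Int,
                        threshold: Int,
                        appleReviewerConfirms: Bool,
                        ncmecConfirms: Bool) -> Outcome {
        if matchedPhotoCount < threshold { return .noAction }
        if !appleReviewerConfirms { return .stopsAtAppleReview }
        if !ncmecConfirms { return .stopsAtNCMEC }
        return .referredToLawEnforcement
    }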


…and the Apple employee in China will risk her career by flagging Winnie the Pooh as unobjectionable?


This is many steps removed from the current situation, which is that the feature rolls out in the US and photo hashes are provided by the NCMEC. Please describe how you think the system would work in China.


A change in policy regarding which pictures are scanned is a change in NCMEC policy, which would trigger the same kind of high-level court dispute.

I’m not knowledgeable enough to comment about other countries’ policies, but the fact remains that no one has a law enforcement iBoot, as far as we know; and if one exists, CSAM filtering was never the opportunity that repressive regimes were waiting for.


My understanding is that existing NCMEC policy includes non-CSAM photos that were found alongside CSAM. I'm not talking about an evil government surreptitiously criminalizing your meme images in the US; I'm worried about the boring dystopia where, over time, the false positive rate turns out to be much higher than anyone will admit and Apple employees end up reviewing tens of millions of false positive photos pulled at random from people's iPhones each year.
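Back-of-envelope, with made-up numbers since the real per-image false positive rate isn't public:

    // Purely illustrative; every input here is an assumption, and the
    // result scales linearly with whatever the true rate turns out to be.
    let iPhonesInTheUS         = 100_000_000.0  // rough order of magnitude
    let photosPerPhonePerYear  = 1_000.0        // assumption
    let falsePositivesPerImage = 1e-6           // assumption: one in a million

    let falselyMatchedPhotosPerYear =
        iPhonesInTheUS * photosPerPhonePerYear * falsePositivesPerImage
    // = 100,000 falsely matched photos a year under these made-up numbers.
    // How many accounts cross the reporting threshold depends on how the
    // errors cluster, but the human review burden scales with this product.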

And a more direct dystopia outside the US, where they have already criminalized your memes. What stops China from saying, "We have our own database of images objectionable to the state, and you must scan devices for it"?


This type of persistent, continuous, opaque scanning for "objectionable material" makes the earlier debate about whether law enforcement should be able to boot into a criminal's phone seem almost quaint by comparison.


I hope that there will be a transparency report for how many reports happened and how many were found to be false positives.

Regarding China, the answer is probably "nothing", but the system as it exists is not amenable to what I imagine their goals would be: it only monitors the photo library (not Messages) and requires a certain threshold of matches. The question, I think, is what today prevents China from requiring devices to match against a database of pictures with a mandated matching algorithm and a mandated reporting server. I believe (admittedly without any knowledge of the matter) that the same standard would apply to both.


> If we agree that Apple was right in 2016, it stands to reason that Apple cannot be compelled to modify its CSAM filter to capture arbitrary content

The government, or whoever supplies the list, will regularly add hashes to it. Those hashes can be for anything, and it's not like Apple has any way to verify what they are for. If I understand correctly, Apple doesn't really have oversight of what they're doing; all the trust is in whoever creates the list of hashes.


It's not hard to imagine that authoritarian regimes all over the world will soon have their own special list of hashes of "illegal" images. Seems like a great way to bury any photographic evidence of government abuse.


The government will have to overcome the same issues compelling NCMEC to add pictures to its data set that it would have to overcome with Apple.

Additionally, by compelling the NCMEC, the government cannot increase the scope of searching or lower the match threshold.


It's a private nonprofit that appears to be wholly funded by the government. It's reasonable to expect that if they don't "play nice" with the DOJ, their funding could be affected. Incentives are important.

https://en.m.wikipedia.org/wiki/National_Center_for_Missing_...


But it’s not something that the NCMEC can do in secret. Apple gets the CSAM reports and decides whether something is worth sending as a tip to NCMEC.


I wouldn't think the government shares the images with NCMEC; wouldn't they just say, "We captured some hard drives, add these hashes to the list"?


My understanding is that the NCMEC actually has the pictures, and they’re basically the only people in the US who can legally hold onto them.



