Did you learn the same thing—that E2E encryption doesn’t need to be broken for this to work?
The only event in which Apple can gain access to your content is if you happen to have multiple CSAM matches; even then they can access only the matching content, and action is taken only if a human reviewer manually confirms it to be CSAM.
The real issue is whether this type of matching gets used for purposes other than CSAM; unfortunately, they gave themselves legal permission to do so back in 2019. That's what we should object to, not CSAM reporting.
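To make the gating order concrete, here is a minimal sketch in Python. The threshold value, names (MatchVoucher, REVIEW_THRESHOLD, human_confirms_csam), and data structures are all invented for illustration; it only shows the sequence described above (threshold reached, then only matching content accessible, then human confirmation before any action), not Apple's actual protocol.

```python
# Illustrative sketch only; names and threshold are made up, not Apple's.
from dataclasses import dataclass, field

REVIEW_THRESHOLD = 30  # hypothetical number of matches before anything is reviewable


@dataclass
class MatchVoucher:
    """One encrypted 'safety voucher' produced for a matching photo."""
    photo_id: str
    encrypted_derivative: bytes  # only decryptable once the threshold is crossed


@dataclass
class Account:
    account_id: str
    vouchers: list[MatchVoucher] = field(default_factory=list)

    def reviewable_content(self) -> list[MatchVoucher]:
        # Below the threshold, nothing is accessible at all.
        if len(self.vouchers) < REVIEW_THRESHOLD:
            return []
        # Above it, only the matching vouchers become accessible,
        # not the rest of the photo library.
        return self.vouchers


def process_account(account: Account, human_confirms_csam) -> str:
    content = account.reviewable_content()
    if not content:
        return "no action: threshold not reached"
    # Even after decryption, action is taken only if a human reviewer
    # confirms the matches are actually CSAM.
    if human_confirms_csam(content):
        return "report filed"
    return "no action: human review did not confirm"
```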
I didn't say anything about breaking E2E encryption. Anything a human in the middle can review in any event isn't E2E encrypted. Call it something else.
The issue is that the hash algorithm is secret. The decryption threshold is secret. The database of forbidden content can't be audited. People claim it includes entirely legal images. And it's only a small step from there to scanning local-only files.
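For what it's worth, here's a toy sketch of why the database can't be audited from the outside: what ships is just a list of opaque hashes, which reveal nothing about the images they were derived from. The hash function below is a stand-in I made up (a real perceptual hash like NeuralHash is robust to resizing and re-encoding, which a cryptographic hash is not), and the hash values are meaningless placeholders.

```python
# Toy illustration only; not the real algorithm, threshold, or database.
import hashlib


def toy_perceptual_hash(image_bytes: bytes) -> str:
    # Stand-in for the secret perceptual hash; sha256 is NOT perceptual,
    # it's just here so the example runs end to end.
    return hashlib.sha256(image_bytes).hexdigest()[:16]


# Opaque database shipped to the device: just hash values. Nothing about
# them tells you whether the source image was illegal, legal, or political.
FORBIDDEN_HASHES = {
    "0d9b1a2c3e4f5678",  # placeholder value, meaningless by construction
    "deadbeef00112233",  # placeholder value
}


def is_flagged(image_bytes: bytes) -> bool:
    return toy_perceptual_hash(image_bytes) in FORBIDDEN_HASHES


if __name__ == "__main__":
    print(is_flagged(b"some local photo bytes"))  # False for arbitrary input
```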
> as long as it's strictly for CSAM check purposes
And that's precisely the leaky part of this setup. Nothing about this system's design prevents that from changing on a mere whim or government request.
Next year they could add new back-end checks targeting political dissidents, Uyghur Muslims, ... and we'd be none the wiser.