
It was more misunderstood than stupid.

Ironically when Apple introduced their solution it was actually better than what we have now. It was interesting to watch people lose their minds because they didn't understand how the current or proposed system worked.

Under the current system, everything can be decrypted in the cloud and is scanned for CSAM by all ISPs/service providers.

Apple wanted the device to scan for CSAM; if a file got flagged, it could be decrypted in the cloud for a human to check (again, what happens now).

If it didn't get flagged, it stayed encrypted in the cloud and no one could look at it. Not only was this better protection for your data, it also meant a massive reduction in server costs.

The CSAM database is also just a list of hashes for some of the worst CP videos/images out there. It doesn't read anything; it's pure hash matching.

The chance of a mismatch is so incredibly small as to be almost non-existent.
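
For illustration, plain hash matching is nothing more than a set lookup. A minimal sketch (the digest list and file handling here are made up; real systems use perceptual hashes such as PhotoDNA rather than SHA-256):

    import hashlib

    # Hypothetical list of known-bad digests. The real database holds
    # millions of entries and is not public.
    KNOWN_HASHES = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def sha256_of(path):
        """Hash the raw bytes; the content itself is never interpreted."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def is_flagged(path):
        """A file is flagged only if its digest is already on the list."""
        return sha256_of(path) in KNOWN_HASHES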

Even so, the current CSAM guidelines require a human to review the results and require multiple hits before you are even flagged. Again, this is what is happening now.

Personally I'm against giving any agency the ability to read private messages, while at the same time I fully agree with what CSAM scanning is trying to do.

Realistically if countries want to read encrypted messages, they can already do so. Some do too. The fact that the EU is debating it is a good thing.



So what would stop the list of hashes from being extended with hashes of copyrighted media, evidence of corruption (labelled slander or an invasion of the perpetrator's privacy) or evidence of the preceding abuses of the system themselves?

Once you have an established mechanism for "fighting crime", "don't use it to fight that type of crime" is not a position that has any chance of prevailing in the political landscape - see also all the cases of national security wiretaps being used against petty druggies.


Hashes don't really work that way. They don't actually give any high-level view of a photo's contents. You can't ask a hash to find all photos of a certain document or a meeting or anything like that. They really only detect exact copies, which makes them somewhat useful only for the most basic copyright infringement (i.e. proving someone has a copy).


As far as I remember, Apple's proposal was to use https://en.m.wikipedia.org/wiki/Perceptual_hashing which is meant to sidestep this exact limitation. And either way, your objection applies equally to CSAM: there is no mechanism that works better for it than for copyright enforcement.
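
To make "perceptual hashing" concrete, here is a toy difference hash (dHash). This is an illustration only, not Apple's NeuralHash, and it assumes Pillow is installed. The point is that resizing or recompressing an image changes only a few bits, so near-duplicates still match where a cryptographic hash would not:

    from PIL import Image  # pip install Pillow

    def dhash(path, size=8):
        """Shrink to grayscale, then record whether each pixel is
        brighter than its right-hand neighbour (64 bits total)."""
        img = Image.open(path).convert("L").resize((size + 1, size))
        pixels = list(img.getdata())
        bits = 0
        for row in range(size):
            for col in range(size):
                left = pixels[row * (size + 1) + col]
                right = pixels[row * (size + 1) + col + 1]
                bits = (bits << 1) | int(left > right)
        return bits

    def hamming(a, b):
        """Differing bits; a small distance means 'probably the same image'."""
        return bin(a ^ b).count("1")

    # Hypothetical usage: a re-encoded copy usually lands within a few bits.
    # hamming(dhash("original.jpg"), dhash("recompressed.jpg")) <= 5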


> So what would stop the list of hashes from being extended with hashes of copyrighted media, evidence of corruption (labelled slander or an invasion of the perpetrator's privacy) or evidence of the preceding abuses of the system themselves?

Absolutely nothing. But everything is already being scanned against hash lists now.


The problem with the CSAM detection is that it can be used for adversarial purposes as well. For example, if someone decides an image is politically inconvenient and pushes for it to be blocked by hash, Apple may have to comply or remove themselves from an entire market. Building the mechanism to do that is not acceptable in a civilised society.

And of course, does this really solve the real problem of child exploitation? No, it doesn't. It allows performative folk working for NGOs to feel like they've done something while children are still being abused and it is covered up or not even investigated, as is so common today.

Improving policing and investigatory standards is where this should stop. We already have RIPA.

All this does is create the expectation that a surveillance dragnet is acceptable. It is not.


> Building the mechanism to do that is not acceptable in a civilised society.

This mechanism has been in production for many years at all service providers.

Examples:

- Microsoft: since at least 2009.

- Google: since at least 2013.


> Realistically if countries want to read encrypted messages, they can already do so.

How? Are you implying asymmetric and symmetric encryption are broken? Because last time I checked, since Snowden our encryption is basically the one thing in the whole concept of the internet that has been done very right, with forward secrecy and long-term security in mind. AFAIK there are no signs that anyone or anything has been able to break it.

Also, the solutions you present imply that someone already has the private key to decrypt. Sure, they'll say they'll only decrypt if you're a bad person, but the definition of a bad person changes from government to government (see the USA), and from CEO to CEO. Encryption should be, and mostly is, built on zero trust, and it only works with zero trust. Scanning, and risking the privacy of billions and billions of messages by holding a key to read them because there have been some bad actors, is fighting a fly with a bazooka. Which sounds comically overkill, but, fun fact, it also just doesn't work. It destroys a lot and gains nothing.

I don't have a better solution for the problem. But this solution is definitely the wrong one.


> How? Are you implying asymmetric and symmetric encryption are broken?

Not at all.

You make encryption a crime. You ban certain apps. It won't stop people using encryption, but that doesn't matter: just the act of using it makes you a dissident who can be dealt with.

That is currently the process in Iran and Egypt, for example.

Even if they can't read the message and it's not illegal, you can still be guilty by association. The act of sending a message can be tracked.

There have been countless situations like that, even outside the realm of instant messaging.


>How?

A couple of guys with $5 wrenches can be pretty effective at extracting cryptographic secrets.


Not at scale, though. Plus, this leaves quite visible traces and leads to backlash.

That is like saying that Guantanamo can defeat religious terrorism. In individual cases, yes, on the whole, absolutely not.


I mean yeah, why break crypto when you can break kneecaps?


> The CSAM database is also just a list of hashes for some of the worst CP videos/images out there. It doesn't read anything; it's pure hash matching.

The list presumably contains CSAM hashes. However, it could also include hashes for other types of content.

AFAIK the specific scope at any point in time is not something that can be fully evaluated by independent third parties, and there is no obvious reason why this list could not be extended to cover different types of content in the future.

Once it is in place, why not search for documents that are known to facilitate terrorism? What about human trafficking? Drug trafficking? Antisemitic memes spring to mind. Or maybe memes critical of some government, a war, etc.

That's because, despite the CSAM framing, it is essentially censorship/surveillance infrastructure, one that is neutral with regard to content.


CSAM scanning has been around for at least 15 years. All service providers are required to do it by law.

You are absolutely correct with your "what-ifs" and this underlines the need for more oversight and transparency.

The process (my knowledge is a few years old) is that service providers or law enforcement agencies from different countries can submit files for the CSAM database.

The database is owned by the National Center for Missing & Exploited Children (NCMEC).

Once they receive the files, they review them and confirm that the files meet the standard for the database, document the entry, create a hash, and add that to the database. After that the file is destroyed.

This whole process requires multiple approvals, and numerous humans review the files before the hash goes into the database.

Also, every hash has a chain of custody, so in the event of an investigation they know exactly who was involved in putting that hash into the database.

So it's possible to submit an image that is not what the database is intended for, but the chances of it even remotely getting in are next to nothing. On top of that, service providers can be sued for submitting invalid files.
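
To make the chain-of-custody idea concrete, here is a hypothetical sketch of what each entry might track. The field names are mine, not NCMEC's:

    from dataclasses import dataclass, field

    @dataclass
    class HashEntry:
        """Illustrative shape of one vetted database entry."""
        digest: str          # the hash itself; the source file is destroyed
        description: str     # the documented record of what the file was
        submitted_by: str    # the provider or agency that sent the file
        reviewers: list = field(default_factory=list)  # everyone who signed off
        approved: bool = False

        def approve(self, reviewer, required=3):
            """An entry only goes live after multiple independent sign-offs."""
            self.reviewers.append(reviewer)
            self.approved = len(self.reviewers) >= required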


> CSAM scanning has been around for at least 15 years. All service providers are required to do it by law.

That is true for scanning in the cloud, but it's important not to conflate this with client-side scanning. The distinction between cloud and local processing is foundational. Collapsing that boundary would mark a serious shift in how surveillance infrastructure operates.

> Once they receive the files, they review them and confirm that the files meet the standard for the database, document the entry, create a hash, and add that to the database. After that the file is destroyed.

That is already a structural problem: If the original is destroyed, how can independent parties verify that database entries still correspond to the intended legal and ethical scope? This makes meaningful oversight functionally impossible.

Even if centralizing control in a state-funded NGO were considered acceptable (which is already questionable), locating that NGO in the US (subject to US law and politics!) is a serious issue. Why should, say, the local devices of German citizens be scanned against a hash list maintained under US jurisdiction?

> So it's possible to submit an image that is not what the database is intended for, but the chances of it even remotely getting in are next to nothing. On top of that, service providers can be sued for submitting invalid files.

Procedural safeguards are good, but they don't solve the underlying problem: the entire system hinges on policy decisions that can change. A single legislative change is all it takes to expand the list’s scope. The current process may seem narrow today, but it offers no guarantees about tomorrow.

We’ve seen this pattern countless times: surveillance powers are introduced under the pretext of targeting only the most heinous crimes, but once established, they’re gradually repurposed for a wide range of far less serious offenses. It is the default playbook.


> That is true for scanning in the cloud, but it's important not to conflate this with client-side scanning.

From what you say, it's clear you never read Apple's paper on this.

The client puts a flag on a match. It is only verified on the server, both by another scan and by law enforcement.

If the client doesn't flag a file, it can never be decrypted on the server by anyone except the device owner.

The current system just checks everything. In both scenarios, if your device never talks to the cloud, nothing happens.
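
As a loose sketch of that asymmetry (all names here are mine, and the real design used threshold secret sharing rather than a plain boolean):

    KNOWN_BAD = {"deadbeef"}  # stand-in for the on-device perceptual-hash list

    def client_upload(photo_hash, blob):
        """Client side: only a local match marks the blob server-decryptable."""
        return {"flagged": photo_hash in KNOWN_BAD, "blob": blob}

    def server_side(upload, human_confirms):
        """Server side: unflagged uploads stay sealed; flagged ones get review."""
        if not upload["flagged"]:
            return "sealed"  # only the owner's device can ever decrypt this
        return "reported" if human_confirms else "dismissed"

    # An unflagged photo stays sealed no matter what the server wants:
    assert server_side(client_upload("cafebabe", b"..."), True) == "sealed"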

> That is already a structural problem:

You seem to have an oversimplified view of how it all works. They don't just throw hashes in.

They can verify it by the chain of custody and documentation that is stored about that hash.

> the local devices of German citizens be scanned against a hash list maintained under US jurisdiction?

CSAM is covered by a UN protocol that 176 countries have signed, including Germany.

Many countries also have their own independent department that works with the CSAM system. In Germany, the federal police (BKA) fill that role; they work with NCMEC to ensure the CSAM hashes are correct. Germany is also one of the strictest countries in relation to CSAM.

> the entire system hinges on policy decisions that can change.

Again, it's an oversimplification. If the US government did do that:

- It would first be challenged in the courts.

- They would not be able to hide the fact they have changed it.

- This would lead to service providers not assisting with the corrupted CSAM database.

- As this is a worldwide initiative, the rest of the world could just disconnect the US from the CSAM database until what is put in is confirmed.

> It is the default playbook.

If they wanted to do that, the CSAM database would be the worst way to do it.

I'd recommend you read up on all of it a bit more. Most of your claims about CSAM are unfounded.


> From what you say, it's clear you never read Apple's paper on this. ... You seem to have an oversimplified view of how it all works. They don't just throw hashes in. ... Again, it's an oversimplification. If the US government did do that: ... I'd recommend you read up on all of it a bit more. Most of your claims about CSAM are unfounded.

The posturing about supposed expertise adds nothing. If you want to make an argument, make it. Vague appeals to technical depth are just noise.

> The client puts a flag on a match. It is [...] verified on the server [...] The current system just checks everything.

Sure, that’s how the flagging process works. It’s also beside the point. Listing technical details doesn’t change the core issue: this system performs scanning on the user device, which is what makes it problematic.

> If the client doesn't flag a file, it can never be decrypted on the server by anyone except the device owner. [...] In both scenarios, if your device never talks to the cloud, nothing happens.

Correct, but not relevant here. No one is arguing that airgapped devices leak information. The issue is what happens when devices are online.

> [On the structural problem of inability of independent oversight] They can verify it by the chain of custody and documentation that is stored about that hash.

What specific documentation would allow actual evaluation? And who can access it? The process is opaque by design: The list of neural hashes is private, matching and flagging happen silently, and escalation logic like threshold levels or safety voucher generation is not open to inspection. Whatever theoretical accountability might exist, it’s irrelevant in a system of systematic secrecy that cannot be independently observed or audited.

> CSAM is covered by a UN protocol [...] countries [...] work with NCMEC to ensure the CSAM hashes are correct. Germany is also one of the strictest countries in relation to CSAM.

Yes, Germany has police and of course works to fight CSAM. That doesn’t change the concern: the system design is extensible and unverifiable. If a U.S. administration wanted to expand the scope (say, for terrorism, extremism, drugs, or IP enforcement), who exactly stops them? Not a German agency. Certainly not NCMEC.

> [On the obvious loophole of policy change] If the US government did do that: - It would first be challenged in the courts.

That is... optimistic. What legal mechanism exactly would allow a challenge to (as an example) a classified National Security Letter expanding the hash set? What court even has standing to hear that? What precedent makes you believe such a challenge would surface in time?

> - They would not be able to hide the fact they have changed it.

Why not? The hashes are not reversible. The list is not public. The matches are not auditable. Gag orders are legal. What in this system ensures visibility or accountability?

> - This would lead to service providers not assisting with the corrupted CSAM database. - As this is a worldwide initiative, the rest of the world could just disconnect the US from the CSAM database until what is put in is confirmed.

The Apple proposal is not a worldwide initiative, but a US-driven proposal involving a handful of US orgs. EU involvement in the whole issue has been comparatively lacking and is often dependent on US lobbying and funding. The idea that the world could or would opt out assumes a degree of transparency and technical independence that simply does not exist on this planet right now.

If you want to argue that the system is technically robust against political misuse, then please do. If there are decent guardrails in place, I'd truly like to know about them. But so far, it mostly reads like a wish list.


> Realistically if countries want to read encrypted messages, they can already do so. Some do too. The fact that the EU is debating it is a good thing.

I agree that the discussion evolves the bill each time, and there is always a good amount of feedback and comments.

It’s a bit annoying when tech websites don’t keep up with the latest changes; just labelling it ChatControl doesn’t mean it’s the same policy that was discussed 5 years ago. It makes for good clickbait titles, but the technical nuances are missing.

For example, it would be interesting to read a comparison between the “privacy” of a tool matching photos against a database of signatures vs., say, Apple’s performative privacy in the Photos app or the iCloud + ChatGPT/Apple Intelligence mix.


> Ironically when Apple introduced their solution it was actually better than what we have now. It was interesting to watch people lose their minds because they didn't understand how the current or proposed system worked.

What, the cloud scanning of user photos was a good idea for you? The private company deciding what is good or bad idea? The automated surveillance that could lead to people wrongfully accused idea?

> If it didn't get flagged, it stayed encrypted in the cloud and no one could look at it.

If Apple can decrypt your data when they find a match, they can decrypt ALL your data. Who says it will be used for good? Do you trust a private company this much?


> What, the cloud scanning of user photos was a good idea for you?

That is what was happening before Apple's suggestion, and it is still happening.

> The automated surveillance that could lead to people wrongfully accused idea?

A hash scan is perfectly fine. It can tell you nothing about what is in your file except whether it matches another file that they know is CP.

Even then, a flagged item has to be reviewed by law enforcement in case of a mistake, and a single file is normally not enough to convict.

The chance of a mismatch is very slim. Facebook, for example, reports a 1 in 50 billion chance of a mismatch.

To put that in context: that is roughly 1 false match every 10 years across all users of Facebook (approx. 3 billion active users).
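
A quick back-of-envelope check on what that rate implies (the 1-in-50-billion figure is from above; the daily upload volume falls out of the arithmetic rather than being an official number):

    # One expected mismatch per decade at a 1-in-50-billion per-photo
    # rate implies roughly 50 billion photos scanned per decade:
    p_false_match = 1 / 50_000_000_000
    photos_per_decade = 1 / p_false_match            # 5e10 photos
    photos_per_day = photos_per_decade / (10 * 365)
    print(f"{photos_per_day:,.0f} photos/day")       # ~13.7 million per day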

> If Apple can decrypt your data when they find a match, they can decrypt ALL your data.

Again. This is what is happening now for ALL service providers.

Apple's suggestion was that if a file wasn't flagged, it could only be decrypted by the owner's device and nothing else. Not even Apple.


Are you OK with private companies basically playing the police with your data?


Let me give you a better answer to your question.

Yes I am OK with how CSAM works.

1. It is not owned by a private company.

2. Hash checking requires a 1:1 match to be flagged.

3. Any match is reviewed by law enforcement to confirm it matches what is recorded in the database. This means checking your file against a descriptive record of what the file is.

4. The chance of a mismatch is so remote that it's not even an issue for me. Even if you do get a mismatch, a human reviews it.

5. Submitting a file requires a lengthy, detailed process in which multiple humans review and approve it before the hash is created.

6. Every hash has a chain of custody. So in the unlikely event that something else is put into the database, you can see all the people who interacted with the system to put that hash in.

7. Service providers can be sued for content they submit, so they have an incentive to ensure what goes in is valid.

This process has been in place for 15 years or so.



