
Dropbox checks for CSAM too.



Checking for CSAM isn't the issue. The issue is that Apple's system design easily extends to checking all sorts of other things - political dissidents, journalists, etc.

Not only that, Apple commits a felony when the system transmits found CSAM to a party other than NCMEC (i.e. to Apple itself).


Scanning all your photos in the cloud easily extends to checking all sorts of other things as well.


It does. However, people have a greater expectation of privacy on a device they own than on a cloud-based solution where they have voluntarily uploaded content.


Good thing the device only does this as part of voluntarily uploading the photo to a cloud-based solution.


The scanning only takes place if you enable iCloud Photo Library.

Your phone and Photos app ask for this, and you have to accept it before it is enabled. Sounds voluntary to me.


> For the conspiracy to work, it'd need Apple, NCMEC and DOJ working together to pull it off voluntarily and it to never leak. If that's your threat model, OK, but that's a huge conspiracy with enormous risk to all participants.

Worth reading: https://pingthread.com/thread/1424873629003702273


My threat model right now is to trust no one except (1) people I know personally, (2) people they know and trust personally, (3) people who have proved their reputation for integrity publicly, and (4) well-designed systems built by either (1), (2), or (3).

I know someone at Apple who knows their head of privacy... so #2 may be in question, given the design of this system and its capability of further compromising the privacy of millions of Chinese citizens on the Chinese government's whim (and any other strongly-authoritarian government).


A few years ago, the hysterical nerd privacy crowd was clutching pearls and waving hands because Condoleezza Rice was going to turn your Dropbox files over to the NSA.


Apple’s system design does not easily extend to checking vague subject matter. Every step of the process is tied to hashes of specific photographs.

And do you seriously think they didn’t check the legality of what they built? Really?


> specific photographs

That's the catch. Nothing in the system design prevents them from adding hashes of other types of photographs to that database.
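
To make that concrete, here is a minimal sketch of the hash-matching step (hypothetical names, with an exact digest standing in for whatever perceptual hash Apple actually uses). The code only checks membership in an opaque database, so it flags whatever that database happens to contain:

    import hashlib

    def fingerprint(image_bytes: bytes) -> str:
        # Stand-in for a perceptual hash; a real system would use something
        # robust to resizing and re-encoding rather than an exact digest.
        return hashlib.sha256(image_bytes).hexdigest()

    def scan_photo(image_bytes: bytes, hash_database: set[str]) -> bool:
        # Flags purely on membership in whatever database was supplied;
        # the code has no idea what the entries depict.
        return fingerprint(image_bytes) in hash_database

Hand the same function a database of hashes of protest flyers and it "works" just as well; the only safeguard is who controls the database.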

> And do you seriously think they didn’t check the legality of what they built?

IANAL, but the law clearly states that transmitting CSAM to any party other than NCMEC is a felony. Apple != NCMEC. More info: https://www.hackerfactor.com/blog/index.php?/archives/929-On...


“IANAL” is used to avoid liability for offering legal advice in settings where it’s not permitted. It’s not used to hand wave away that you don’t know what you’re talking about.

The very same law you’re citing describes, in detail, the good faith diligence process that requires the service provider to verify the suspected material before transmission to NCMEC. But no, some random blog and you, you two have a handle on legal analysis that the most litigiously sensitive entity on Earth must have missed while designing one of the most litigiously sensitive systems ever fielded by humans. How’d they miss felony criminal liability, right? It’s just too easy to overlook while designing a system whose sole purpose is to gather legally actionable evidence against other people.

As someone who’s built these systems for over a decade, it’s remarkable how one Apple press release can make everyone so hopelessly uninformed and confident that they know the score. Nobody used the acronym CSAM until a week ago except for people like me, those of us haunted by (actual) nightmares of this shit while HN distantly pontificates on the apparent sacrilege of MD5ing your photos of a family vacation to Barbados to see if you happen to be sharing images of children being raped.

Nobody commenting on this has ever seen child pornography. I’d take that to the bank. Did you know the organized outfits design well-polished sites like PornHub, complete with React and a design palette? 35 thumbnails of different seven-year-olds right on the front page, filterable by sexual situation. Filterable by how many adults are involved. With a comment section, even! And God help you if you even begin to imagine what is said there. You’re right, though, let’s think about your privacy and the criminal liability for Apple for taking action on something that clearly doesn’t matter to anyone except those stuck with dealing with it.

Get real. Sometimes the lack of perspective among otherwise smart people really worries me, and this conversation about Apple’s motives for the last week or so has worried me the most yet.


> “IANAL” is used to avoid liability for offering legal advice in settings where it’s not permitted. It’s not used to hand wave away that you don’t know what you’re talking about.

It's exactly and only the latter, actually. Consider how useful an obscure-to-normies Web-forum acronym consisting mostly of "anal" is going to be at deflecting liability—should such even be possible—if it comes up in court. Not a bit, right? So how could it be intended for that? If that were the purpose, people would write out the words.


How does client scanning vs server scanning help with any of that?


There’s nothing in the design, sure, because new photo hashes do need to be added. But they will be CSAM images, as explained here: https://pingthread.com/thread/1424873629003702273

People should really stop using this “conspiracy theory” as a reason for Apple to not scan for CSAM in a privacy-preserving fashion. There are way too many “hot takes” that don’t take into account any legal ramifications of their “what if” scenarios.


I appreciate Apple’s design here, and I think that there’s an overreaction to it.

This is probably the best design we have so far for something that everyone else is already doing, and I give Apple credit for going to greater lengths to preserve privacy.

But the “just trust us, we only want to do good things and we will be ruined if you ever catch us doing bad things” rationale doesn’t help. In fact it sends me right back into protest every time I’ve seen it posted.

There will not be legal ramifications for “what if” scenarios. Not enough to prevent abuse.

Especially if these would be the same (weak) legal ramifications that prevent people from being wrongfully accused of murder, arrested for peaceful protest, or having their bank accounts frozen on baseless suspicion of fraud or terrorism.

From the same government that has treated legitimate political beliefs and entire religions as terrorism.

If the core defense is that I should just trust that NCMEC exists for a single purpose, will never be manipulated or expand outside that purpose, and is completely uninterested in carrying out any other agenda, then that defense has already lost.

Because that exact scenario has already occurred with other government agencies.

And suggesting that NCMEC is somehow at such a disadvantage in power that Apple has a choice to say “no” (and that Apple will do so at even the slightest hint of impropriety) and that alone will bring NCMEC and all of the good work they do crashing down?

I bought this reasoning fully with the Patriot Act and warrantless wiretaps. I had no doubt these things were being used to do good and prevent deaths, and I’m sure they have.

But I have also since seen enough to know that short-term goodwill was paid for by the long-term ruining of innocent lives, and racial profiling that continues to this day.

I’m not interested in supporting that again.

I’m good with “Apple is trying to steer this in a more privacy-respecting direction” and “this introduces a new avenue in which NCMEC introduces more checks and balances by having third parties double-check their work”.

I’m saying this as someone who is torn about this issue.

But the “separate government organization that will absolutely not bend to pressure and will suffer legal consequences if they do, JUST TRUST US KTHX” reasoning already has such strong precedent of being proven false that it only works against your case.


And the manual review would catch other types of photographs.

And the law clearly states that that doesn’t apply if an immediate effort is made to involve law enforcement. Besides, Apple is not transmitting it to Apple, the user is.


That is not correct. They are algorithms that detect photos that resemble specific photographs.


“Resemble”, in the same way that a cropped photo resembles the uncropped original.


And in the same way a human resembles a butterfly. Didn't you learn from the case where algorithms matched white noise with copyrighted material? There are many iPhones and many images on each iPhone, so there will be false matches. Add on top of that that the database of hashes is secret, that it can be updated in secret, that the algorithm is secret, and that the "threshold" is also secret, and you have a lot of suspicion from people.
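
To illustrate why "resemble" worries people, here is a toy comparison of perceptual hashes by Hamming distance (this is not Apple's NeuralHash, and the cutoff is made up):

    def hamming_distance(a: int, b: int) -> int:
        # Number of differing bits between two hash values.
        return bin(a ^ b).count("1")

    def is_match(photo_hash: int, db_hash: int, cutoff: int = 4) -> bool:
        # A small distance cutoff tolerates crops and re-encodes, but across
        # billions of photos it also admits the occasional unrelated collision.
        return hamming_distance(photo_hash, db_hash) <= cutoff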


No, not in the way a human resembles a butterfly. But for the sake of argument, fine: some pictures of you look exactly like pictures of a butterfly, enough to trigger the flagging threshold. These false positives would easily be caught during the review and nothing would happen.
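
Roughly, the gate I'm describing works like this (a sketch only; the threshold value is invented, since the real number isn't public):

    FLAG_THRESHOLD = 30  # invented for illustration; the real value isn't public

    def should_escalate(match_count: int) -> bool:
        # Individual matches do nothing; only crossing the threshold
        # surfaces an account for human review.
        return match_count >= FLAG_THRESHOLD

    def handle_account(match_count: int, review_confirms_csam: bool) -> str:
        if not should_escalate(match_count):
            return "no action"
        # Even above the threshold, a reviewer must confirm the matches
        # before anything is reported.
        return "report" if review_confirms_csam else "dismiss as false positive"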


I do not trust the review people. They are not even Apple employees; they are cheap contractors that hire cheap labor. These people are treated like crap, so I can see them making mistakes. It's not like you've never heard of one Apple reviewer deciding X and another reviewer, on checking, deciding !X.



