Apple’s Mistake (stratechery.com)
1273 points by feross on Aug 9, 2021 | 790 comments


This invasive capability at the device level is a massive intrusion on everyone's privacy, and there will be no limit to governments expanding its reach once it is implemented. The scope will always broaden.

From the article:

Apple’s choices in this case, though, go in the opposite direction: instead of adding CSAM-scanning to iCloud Photos in the cloud that they own-and-operate, Apple is compromising the phone that you and I own-and-operate, without any of us having a say in the matter. Yes, you can turn off iCloud Photos to disable Apple’s scanning, but that is a policy decision; the capability to reach into a user’s phone now exists, and there is nothing an iPhone user can do to get rid of it.

A far better solution to the “Flickr problem” I started with is to recognize that the proper point of comparison is not the iPhone and Facebook, but rather Facebook and iCloud. One’s device ought be one’s property, with all of the expectations of ownership and privacy that entails; cloud services, meanwhile, are the property of their owners as well, with all of the expectations of societal responsibility and law-abiding which that entails. It’s truly disappointing that Apple got so hung up on its particular vision of privacy that it ended up betraying the fulcrum of user control: being able to trust that your device is truly yours.


> It’s truly disappointing that Apple got so hung up on its particular vision of privacy that it ended up betraying the fulcrum of user control: being able to trust that your device is truly yours.

Well said.

It's very easy to understand that content on a physical disk I own will not be scanned by some government-friendly surveillance program, and content I upload to somebody else's servers will. I am comfortable with [subsets of] my files existing on both sides of this distinction, but only because I understand it.

Now Apple is letting the surveillance apparatus reach into my own physical hardware, blurring the lines with some overwrought, proprietary crypto-gobbledygook solution. Given the details they've provided, I get why they might have thought this was in line with their general privacy ethos of keeping everything on-device, but the fact is that it is too complicated to build a reliable understanding of in my mental model of my own data privacy--and that's for somebody (me) who is quite technical and has even dabbled in crypto.

Now that I no longer understand Apple's practices as a steward of my private data, I can no longer trust them with it. It's a real shame.


I feel like there's a third option here, though: Apple is very likely trying to get ahead of requests from the government that they won't be able to refuse. This already happens to every hosting and storage provider, and it's entirely plausible that politicians will start publicly wringing their hands to justify an actual invasion of privacy in which "the government" will be able to scan devices (and even require it from manufacturers) in the interest of "saving the children". This (and I realize it assumes good intentions on behalf of the actors involved) seems to be an attempt to get ahead of that kind of situation by saying "we'll scan things if we're 99% sure they're breaking the law, but we don't want to violate people's privacy". The iCloud vs. Facebook comparison is great and noble and all, but how long do we honestly think it will stand up before the politicians move past that argument and on to the next one?

When it stops being about the children, it'll be about the terrorists. When it stops being about the terrorists, it'll be about whatever the next excuse is. This seems like a compromise between those unfortunate scenarios - upholding privacy (we don't see the content) while still attempting to solve for the problems that privacy inherently allows for (we can still identify CP).


> Apple is very likely trying to get ahead of what's likely to be requests from the government that they won't be able to just not abide by

No. Don't make excuses for them. If it were true, it would be a shocking revelation that might result in thousands of cases being overturned.

If the NCMEC searches by providers are being done as a result of government coercion, then the fourth amendment requires a search warrant. Since there isn't one, these searches would all be unlawful if there were coercion.

Thus far, tech companies like Google and AOL have gone out of their way to testify that there is no coercion (I haven't found a case involving Apple).

If there were coercion, it would be a shocking conspiracy. I wouldn't argue that it's impossible: the USG has done some shockingly unethical and illegal things in the past. But it fails Occam's razor.

Moreover, the excuse doesn't actually make Apple look better: instead of being a corporation compromising user privacy and security for commercial gain, they'd be an active participant in a conspiracy to violate the US constitution.

Apple is doing this because they believe they'll make more money this way, and because they believe the complaints raised here about people rejecting devices betraying their trust are fundamentally inconsequential.

Show them otherwise; don't make excuses. Don't blame the government. The government may well be to blame too, but Apple's actions are Apple's fault.


>> If there were coercion it would be a shocking conspiracy.

This comment reads as if the 2001-2010 decade never happened. Secret wiretaps. Secret evidence. Overseas torture facilities. Imprisonment without charges. What makes you think that checks and balances would miraculously reappear after having disappeared for over a decade?

NOTE: this is a USA-centric viewpoint


> NOTE: this is a USA-centric viewpoint

Those are USA-centric examples.

I'm sure the viewpoint is just as valid across five eyes and even worse in places like Saudi Arabia, Turkey, China et al.

I'm from Australia, allegedly a first-world democracy, and secret wiretaps, secret evidence (and even secret trials!), overseas torture facilities, and imprisonment without charges are all public policy, covered by the media, and enshrined in laws passed by both parties of the two-party political system.


>Australia

Don't forget raiding journalists' homes and public broadcasters (using illegal warrants!) for reporting on proposed secret spying and hacking laws.

Nice "democracy".


I watched "Secret City" and was aghast at the secret trials, evidence, etc. Is all that really happening today in Australia?


"Covered by the media" is a stretch: Murdoch has been hard at work ignoring all of it.


I would not argue that the conspiracy is impossible, but if it exists then it makes Apple's actions less excusable instead of more excusable.

If the conspiracy exists Apple is an active participant in secretly and illegally undermining the constitutional rights of well over a hundred million Americans. If the conspiracy does not exist, then they're simply lawfully invading people's privacy in order to improve their profits.


Those are examples of shocking conspiracies that got uncovered. Illegal wiretapping and Guantanamo Bay ended up in the news and we're still talking about them now.


But nobody apologized for them or went to prison. There is little reason to believe they don't continue.


Snowden did. (Well, effectively did. He'd totally go to prison if he stepped foot in a country that'd extradite him to the US.)

But you're right: none of the people who broke all the laws Snowden showed us were being broken have suffered any consequences.


> If the NCMEC searches by providers are being done as a result of government coercion then the fourth amendment requires there be a search warrant. Since there isn't one, these searches would all be unlawful if there was coercion.

Not a US citizen, so I’m not familiar with this argument. As I understand it the government isn’t searching anything themselves so would the requirement for a search warrant apply?

I think what the US government is requiring is that if a service provider comes across CSAM, they must report it. Since I don’t see why searching your own service would have anything to do with search warrants either, is it constitutional to make such a requirement for other reasons?

Finally even if Apple could avoid this requirement, which they seem to have largely been doing until now, was it viable for them to continue like that? Even without a legal mandate to actively search for CSAM, knowing there must be millions of unreported CSAM images on their service can’t have been a comfortable position to be in.


If the government requires or otherwise coerces someone to perform a search, then the searching party is acting as an agent of the government and is equally subject to the fourth amendment. Similarly, the government can't create a privileged entity like NCMEC, allow it to search without a warrant, and pretend it's not the government's doing.

What is permitted is that if a private entity, of its own free will and self-interest, searches and finds something, the government can require them to make a report when they find something. Just as you say.

But the government cannot require them to search in the first place, not without extending the fourth amendment protection against search.

Apple can decide that the commercial value of reducing the risk that they're accused of helping perverts is greater to them than the commercial value of respecting the customer's privacy. Indeed. But let's call that what it is, and not make excuses that they're forced to search: they're not. They are choosing to search.


Take your argument one step further - the government knows this and starts issuing warrants for exactly these situations. Now, Apple is in a position where they have to violate their user’s privacy and share content in order to comply with the search warrants. The new system allows them to comply with warrants without ever knowing or needing to know what the contents of users’ phones are and only in cases where it’s known that illegal content exists. Apple can’t divulge any other content from the users’ devices because it doesn’t have it or know what it is in the first place.


A warrant must name the specific persons/places/things to be searched. An automatic dragnet search can't be authorized by a warrant.

> it’s known that illegal content exists

A point of clarification: The system divulges the encryption keys to content when the content's "perceptual hash" matches one in the database the specific user is being tested against. Apple claims that the hashes are provided by NCMEC "and other parties". It is already known that the NCMEC database has at least some hits for non-illegal content. Who knows what these databases might contain in the future. There is certainly no structural requirement in the system that the content is illegal or some specific kind of illegal.

For users in Thailand, the Apple database could just as well be full of hashes of cartoons insulting to the monarch (which are illegal there, as I understand it), and no one would be able to tell, at least not before Apple users started being rounded up and summarily executed.
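The opacity the comment above describes can be sketched with a toy example (this uses a trivial "average hash" as a stand-in for Apple's actual NeuralHash, and the blocklist contents are made up): the client only ever sees opaque hash values, so nothing in the matching step reveals what content the database actually targets.

```python
# Toy sketch, NOT Apple's real system: a trivial perceptual hash
# (average hash over an 8x8 grayscale grid) checked against an
# opaque blocklist of integers. Hypothetical names throughout.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values 0-255 -> 64-bit int hash."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

# The blocklist is just numbers; the client can't tell what images
# (illegal or otherwise) produced them.
blocklist = {average_hash([[200] * 8] * 4 + [[10] * 8] * 4)}

def matches(pixels, db):
    return average_hash(pixels) in db

suspect = [[200] * 8] * 4 + [[10] * 8] * 4   # same image -> same hash
benign = [[128] * 8] * 8                      # different image

print(matches(suspect, blocklist))  # True
print(matches(benign, blocklist))   # False
```

The point is structural: whether the hashes came from NCMEC, a foreign government, or anywhere else is invisible at this layer.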


Yes but the criteria for a warrant can be nothing more than reasonable suspicion. So a warrant could be issued for access to someone’s entire device regardless of whether objectionable content exists on it or not. Theoretically, a government could issue a warrant for every single person’s phone and get what they want. This prevents anything even close to that from happening because users aren’t required to provide the keys and Apple won’t have them unless that content matches what’s on the list.

On your second point, yes… that’s a very real possibility and one that I’m sure Apple has already considered and has responded to.


> Theoretically, a government could issue a warrant for every single person’s phone and get what they want.

For that you'd need a sham court full of rubber-stamping judges handing out bogus search warrants for every questionable and illegal request that comes to them. That seems a stretch of the imagination. What judge would do _that?_

We could call that the FISA Court, right?


No, you need just one: "every phone connected to at least two of these N base stations over this 48-hour window".

Which is quite a modern take on a historically reviled phenomenon, so you should call them for what they are: general warrants.


Glad I’m not the only one.


> Not a US citizen, so I’m not familiar with this argument. As I understand it the government isn’t searching anything themselves so would the requirement for a search warrant apply?

There's a difference between cops paying someone to break into a suspect's house to steal evidence without a warrant, and a 3rd party that, while doing business, comes across material that might be criminal evidence.

The former case, in the US, falls under the doctrine of the "fruit of the poisonous tree"[1]. Evidence collected illicitly won't be allowed in court, and any evidence or argument based on the tainted evidence can be thrown out, as well.

I think the line is blurred with the system in the OP, where it seems like it kind of fits the latter case. It's fuzzy to me when parties are working in tandem with law enforcement, even going as far as running law enforcement's own systems voluntarily, or collecting or giving up information without a warrant when simply asked. To me, that seems a lot more like the former case, where the government persuades someone to collect evidence on their behalf without warrants or subpoenas.

[1] https://en.wikipedia.org/wiki/Fruit_of_the_poisonous_tree


> voluntarily

In the US, that is currently where we've drawn the line.

They can cooperate, but it has to be voluntarily and the private party can't be government-by-another-name (e.g. not NCMEC which has a special legislative permission to handle child porn and is overwhelmingly funded by the government), but an actual private party.


Hasn't this already been tested in a US court? I can't remember what it was called, but since you're giving away your privacy to a third party, you've given up your right to privacy, period.



Thank you. Third-party doctrine was what I was thinking of. So I'm guessing putting things in your phone will soon be seen as crossing into third-party territory, if it hasn't already.


It's not hard to imagine that this kind of automated scanning might be used as an argument for that: "Clearly you had no expectation of privacy for information on your phone..."


Bingo. If you choose devices that aren't open, you're choosing to give away your freedoms. I made the jump to LineageOS earlier this year because I could see the signs that Apple was getting too comfortable with government. Now I've got confirmation that my hunch was right.


I’m not making excuses. The government can mandate that Apple monitor content uploaded to its services. All the other major hosting providers do this already. The only difference here is that Apple is trying to maintain encryption while also giving an out. They can’t give more information to the government if they don’t have it.


> The government can mandate that Apple monitor content uploaded to its services.

In the US, if the Government were to mandate Apple search the users content, then Apple would be acting as an agent of the government and the searches would require a warrant per the fourth amendment. This is the unambiguous case law.

If Apple is being forced they should say so explicitly rather than secretly participating in an unlawful conspiracy to violate their customer's fourth amendment rights.

Other providers, such as Google, have been unambiguous in their testimony in court that they are not being coerced, for example quoting US v. Miller (6th Cir. 2020):

> Companies like Google have business reasons to make these efforts to remove child pornography from their systems. As a Google representative noted, “[i]f our product is associated with being a haven for abusive content and conduct, users will stop using our services.” McGoff Decl., R.33-1, PageID#161.

> Did Google act under compulsion? Even if a private party does not perform a public function, the party’s action might qualify as a government act if the government “has exercised coercive power or has provided such significant encouragement, either overt or covert, that the choice must in law be deemed to be that of the” government. [...] Miller has not shown that Google’s hash-value matching falls on the “compulsion” side of this line. He cites no law that compels or encourages Google to operate its “product abuse detection system” to scan for hash-value matches. Federal law disclaims such a mandate. It says that providers need not “monitor the content of any [customer] communication” or “affirmatively search, screen, or scan” files. 18 U.S.C. § 2258A(f). Nor does Miller identify anything like the government “encouragement” that the Court found sufficient to turn a railroad’s drug and alcohol testing into “government” testing. See Skinner, 489 U.S. at 615. [...] Federal law requires “electronic communication service providers” like Google to notify NCMEC when they become aware of child pornography. 18 U.S.C. § 2258A(a). But this mandate compels providers only to report child pornography that they know of; it does not compel them to search for child pornography of which they are unaware.


I think you’re missing the point. If the only thing stopping the government is a warrant (which the government itself can authorize) then there’s nothing stopping the government from just issuing those warrants. It may take slightly longer to do so but they will get it and all they need is reasonable suspicion to make that happen. This system makes it impossible for Apple to provide anything to the government outside of anything that is already criminal. If the government entity gets a warrant, they can simply get access to the whole device. This, in theory, prevents that situation because everything is encrypted and only things that are confirmed as being CSAM are provided. They literally can’t provide something they don’t have.


To be equivalent, they'd have to get courts to issue warrants for hundreds of millions of US Apple product users, people who have given no reason to suspect they've engaged in any wrongdoing. I suspect the courts would take issue with that.

Besides, if they could get the warrants they could get access to all the data in any case (e.g. by getting access to the users backups to steal the users credentials and then accessing the icloud accounts).



Yes but, effectively, the end result is the same without violating users' privacy. If none of what's on their device matches the hashes, then nothing will have been done and Apple would be able to encrypt those backups also which would invalidate the whole 2nd part of your complaint.


Or they could just not invade the users privacy, encrypt the backups, and have no access to the user's data.


They're not invading the users' privacy. They have to comply with a lawful government order. This is allowing them to do that without violating users' privacy.


Apple has not been ordered by the government to scan user's images.

Because the scanning is happening without a warrant, if the government compelled Apple to perform the search, the search would be illegal. It is only legal for Apple to perform this search because it is in no way compelled by the government.

The government cannot obtain a blanket warrant against everyone. This kind of dragnet scanning has to be voluntarily performed by a private party for it to be lawful.

By all means, please prove to us that the government has secretly ordered Apple and other companies to scan users' private data. If you do, it will result in overturning tons of convictions due to the unlawful searches, which were concealed by perjury on the part of the government and the tech companies, who have consistently claimed that the scanning is completely voluntary and in their own self-interest.


They could make themselves unable to access any data and still comply with lawful orders. There's no rule that you have to preemptively design systems to make data extraction easier.

This planned system means less violation of privacy in cases where a warrant exists, and more violation of privacy where there is no warrant. That's not exactly an amazing tradeoff.


I don’t follow. This system is exactly what you’re describing. Apple is not able to access any of your data. The most they get access to is the signature of the file (which would already be matched with known CSAM content) and a potential thumbnail of the CSAM content.


I feel like it should be obvious that I'm describing a system where they don't add this code.

And depending on what's in the database, signatures and thumbnails can be really bad.


>Thus far, tech companies like Google and AOL have gone out of their way to testify that there is no coercion (I haven't found a case involving Apple). If there were coercion it would be a shocking conspiracy.

Isn't Google a company with billion dollar government contracts? No coercion required...


If Google is being forced to search their users' private data for fear of losing government contracts, then they've perjured themselves in court by claiming otherwise.

If that were the case and Google honestly testified that they performed the searches based on the belief that the government would withhold business from them, it is likely that the warrantless searches would be disallowed. Courts have found that even covert encouragement can apply. What matters is that if it's functionally the government's choice whether the searching happens, then it's subject to the fourth amendment.


>If Google is being forced to search their users' private data for fear of losing government contracts, then they've perjured themselves in court by claiming otherwise.

Doesn't need to be fear. Doesn't even need anybody to insinuate something threatening.

The conflict of interest is enough to make them willingly do anything they're asked and play the good boy out of greed, to keep or get more contracts.


This seems to imply "the government" is some single minded entity. Anyone who has had to deal with "the government" knows that it is rare that the same department can stick to a purchasing strategy.


> Apple is doing this because they believe they'll make more money this way

And possibly, as mentioned in an earlier post on HN, as a defense against Anti-trust litigation.


Yeah - my read of this is Apple wants to increase their e2e adoption while still handling CSAM in a way that gets them an okay from the regulators.

Ben Thompson's suggested approach for them in this article is to just do the unencrypted iCloud model with normal CSAM reporting like FB does and make that the tradeoff.

Apple actually wants things to be more secure by encrypting everything e2e with this CSAM hash approach to still fulfill that obligation (and protect themselves from future regulatory requirements that are more broad).

Reasonable people can disagree on these approaches to policy, but it's not obvious that Apple's move here is strictly worse for end users - if they do make moves to e2e then it's arguably better for user privacy to do it the way they're doing it.


It's been said elsewhere, but it bears repeating: E2EE is useless if the ends are compromised.

Furthermore, since the end is compromised, it's no longer a true end. The true end is the user and a system compromised in this way doesn't offer any guarantees to the user.


I don’t agree with Apple's approach. But it’s worth being clear that the hash scanning they propose doesn’t fundamentally break e2e, except for files which match the hash. Yes, it might be possible to adversarially create collisions, but that is an edge case.

What is actually bad about their plan is the potential for scope creep and much wider scanning in the future. Your own handwritten messages and personal photos can be secure even while this destroys many freedoms and lives.


Why are you assuming the end is compromised? What makes you say it is in the first place?


The "end" will happily send information to Apple such that under certain circumstances (that Apple and various organizations get to decide) low-resolution versions of the "end-to-end encrypted" messages can be obtained. That seems "compromised" by any reasonable definition.

Edit: thanks dpkonofa, I made the description vague in the hope that it's more correct.


That's not what happens. Read the white paper. The only thing sent is the hash and signature (which, at worst, essentially amounts to a thumbnail of CSAM). No part of the encrypted content is ever sent to Apple.


So Apple might end up with thumbnails of the "end-to-end encrypted" content I send, and you think that it counts as end-to-end encryption even if (a low-resolution version of) my messages can end up in a third party's hands. Did I get that right?


I'm really curious if the potential for the wrong files to be exfiltrated is going to force corporations and law firms to prohibit the use of iPhones and Macs for work.

Personally, I've been dying for the next gen of M1 Macs to come out. I also wonder if this tech has some magical way of getting around a hosts file and Little Snitch. If I can't gap my data from Apple, it would raise ethical issues with my storing client secrets on the machine.


Yes. The only way they would get the signature and low-res version is if it’s already been identified as CSAM (with a 1-in-a-trillion chance of a false collision). If they’re not getting your content, it can still be E2E encrypted.


> with a 1-in-a-trillion chance of a false collision

I don't think that's exactly where the "one in a trillion" claim comes from. Rather, it's that a single matching hash isn't enough to trigger the reporting; there needs to be multiple matches, and when there are enough of them to cross an unspecified threshold, then the reporting is triggered. There's theoretically only a one in a trillion chance of that threshold being crossed without having actual CSAM matches.

If I understand the white paper correctly, this even goes a step farther than that; they can't decrypt the signatures of the images corresponding to the matched hashes until the threshold is passed, because those images form a kind of decryption key together.

On a technical level, I'm actually pretty impressed. They absolutely could set up E2E encryption and still implement this system, and it largely assuages my worries about false matches of innocent photos (with the extremely big caveat that a false match has a very high potential of ruining someone's life). As the linked article points out, though, the real privacy concern here comes from having this matching capability on-device at all, because once it's there, limiting the data set to just this one provided by NCMEC becomes a matter of company policy. If an agency of any government demands Apple add their data set, they can no longer say, "we can't do that without drastically compromising the way our devices and services work," because it will be public knowledge that this is in fact how their devices and services work already.
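The threshold behavior described above can be illustrated with a simple binomial-tail calculation. All numbers below are assumed for illustration; Apple has not published its actual per-image false-match rate or threshold. The idea is just that requiring many matches before reporting makes an all-false-positive trigger astronomically unlikely, even if any single false match is plausible.

```python
# Sketch: probability that an innocent library of n photos crosses a
# reporting threshold of t matches, given a per-image false-match
# probability p. Uses log-gamma to avoid overflow in the binomial terms.
# n, t, and p are illustrative assumptions, not Apple's parameters.
from math import exp, lgamma, log

def log_pmf(n, k, p):
    """log of the Binomial(n, p) pmf at k."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def prob_at_least(n, t, p):
    """P(X >= t) for X ~ Binomial(n, p)."""
    return sum(exp(log_pmf(n, k, p)) for k in range(t, n + 1))

# Assumed: 10,000 photos, threshold of 30 matches,
# one-in-a-million per-image false-match rate.
n, t, p = 10_000, 30, 1e-6

print(prob_at_least(n, t, p))  # astronomically small, far below 1e-12
```

Under these made-up numbers the expected number of false matches is only 0.01, so needing 30 simultaneous ones pushes the tail probability far past "one in a trillion". The real risk, as the comment notes, is in what goes into the hash database, not in the arithmetic.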


The 1-in-a-trillion claim was debunked, https://www.hackerfactor.com/blog/index.php?/archives/929-On...:

> Facebook is one of the biggest social media services. Back in 2013, they were receiving 350 million pictures per day. However, Facebook hasn't released any more recent numbers, so I can only try to estimate. In 2020, FotoForensics received 931,466 pictures and submitted 523 reports to NCMEC; that's 0.056%. During the same year, Facebook submitted 20,307,216 reports to NCMEC. If we assume that Facebook is reporting at the same rate as me, then that means Facebook received about 36 billion pictures in 2020. At that rate, it would take them about 30 years to receive 1 trillion pictures.
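The estimate in the quote is easy to check arithmetically, using only the figures the blog post gives (none independently verified):

```python
# Reproducing the quoted back-of-envelope estimate with the post's figures.
fotoforensics_pics = 931_466       # pictures received in 2020
fotoforensics_reports = 523        # NCMEC reports submitted in 2020
report_rate = fotoforensics_reports / fotoforensics_pics
print(f"{report_rate:.3%}")        # ~0.056%, matching the post

facebook_reports_2020 = 20_307_216
# Assume Facebook reports at the same rate to estimate pictures received.
est_facebook_pics = facebook_reports_2020 / report_rate
print(f"{est_facebook_pics:.2e}")  # ~3.6e10, i.e. about 36 billion/year

years_to_a_trillion = 1e12 / est_facebook_pics
print(round(years_to_a_trillion))  # ~28, which the post rounds to "about 30"
```

So the quoted numbers are internally consistent; whether FotoForensics' report rate generalizes to Facebook is, of course, the weakest assumption in the chain.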


//No part of the encrypted content is ever sent to Apple.// Except all the data before it is encrypted, if it matches the data Apple wants to see. And don't anyone dare say this is a backdoor. It just scans for arbitrary secret files a 3rd party is interested in, and sends out info whenever it finds something interesting. That is perfectly fine because shiny Apple said so.


This is wrong. Apple never gets the contents of the file. Read the white paper.


Yeah, good cop, bad cop, game.

Bad cop: I want access to all your information.

Good cop: Oh, these guys are after you. Let me see if I can intervene for you. I can implement a selective, limited search that makes these bureaucrats happy and at the same time will protect you. I am on your side; I hate them too. This is the best practical solution.


> problems that privacy inherently allows for

I wonder how we ever survived with the level of privacy we had before the advent of the internet. It must've been a living hell. /s

Privacy is not a problem, it is necessary in order for freedom from tyranny to be possible. Even if all the governments of the world mandate that all communication devices must be compromised like this, criminals will still be able to have private communication channels, but lawful citizens won't.


I don't claim to have an ideal solution here, but to your point: There weren't highly-effective global distribution networks for CSAM "before the advent of the internet" either. This capability has created a market for these people to produce/obtain this material, and thus creates financial incentives for people to abuse children in this way more than in pre-internet times. There needs to be some way of combating this that can keep up with this growth curve, and that answer can't simply be to throw up our hands and say "the criminals will always win, so there's no solution worth doing".


We should tackle this from the other end though. That there are children so abused and isolated whilst ignored by society is shameful. We need solutions that treat children as humans rather than a product or contraband.


This makes it sound like there aren’t entities out there who are interested in such solutions. If you have one that fulfills that need, I’m sure Apple and others are open to hearing it.


This is a great response and covers what I think the crux of the issue is. Technology has exacerbated a problem and, rather than simply bow to that, they tried to come up with a solution. Whether that solution addresses the negatives of the problem without introducing other, worse negatives is to be seen. On paper, it feels like they have addressed the problem in a reasonable way if you trust that they want to make money while absolving themselves of liability. They can’t violate users’ privacy if they don’t have the ability to do so.


//they tried to come up with a solution.// ... they decided to spy on us. ^There, I fixed it for you. Making me total dictator would be a 'solution' too; I mean, something has to be done, nothing else is working or could possibly work. Me for dictator: I promise to be benevolent, who can argue with that? At worst it is a thing reasonable people can disagree about. /s


But I fail to see how icloud backups are a good place to tackle it.

And I doubt this system, which looks for known files, is going to do very much to impact production.


I never said privacy was the problem, so your argument is irrelevant. What I said is that privacy (true privacy) introduces problems that we may want to solve for. You can simultaneously be an advocate for user privacy while agreeing that CSAM is bad and should be minimized. The question is how do you maintain privacy while still being able to act (especially if mandated by a government) on known illegal actions. In the past, that wasn’t possible. Apple is suggesting that this is possible but with the giant caveat that you have to be willing to put trust in the way the system is developed.


I don't necessarily disagree with the "getting ahead of it" thing, and I do see the benefits of this strategy wrt. keeping data private in the face of escalating govt overreach, but I'm simply not comfortable with the surveillance happening on my own physical device. It means trusting proprietary software to work exactly the way it claims to, which in my own experience is often not the case.


I share that concern and completely understand. My only response to that concern, though, is that what you’re asking for is not possible without submitting the content in question to Apple and I would rather they not have it at all and therefore not even have the ability to go down that route.

It’s like a double airlock. If your content never leaves your device, there’s no way for them to provide it to someone upon request. It’s definitely a complicated situation but I have yet to see someone provide a solution for how to achieve what Apple has while never transferring your data to them in the first place.


> what you’re asking for is not possible without submitting the content in question to Apple

This unfortunately appears to be true, and part of me is impressed by [the idea of] the solution they've come up with. But, like I said in my first comment, it means in practice that I have an opaque and unaccountable surveillance program running on my personal physical hardware, which is undeniably an escalation. Whatever they claim it's doing is both completely unverifiable and probably complicated enough, and subject enough to change, that it's practically impossible to form a complete and persistently correct understanding of the attack surface it exposes—even if it were open-sourced and exhaustively documented, which it isn't and presumably never will be.

I am much more comfortable simply knowing that anything I upload to somebody else's servers is subject to snooping, and anything I keep on my hardware is not. That's how the other cloud storage providers do it, and I may not like it, but it's at least easy to account for when I'm thinking about my own data security and privacy. It's admirable that Apple tried to come up with something better, but from certain important perspectives it's arguably worse.

Software is a chronic disease. Once you let it in, it never goes away, and changes beyond the scope of "fixes" seem to infallibly make it more intrusive and/or harmful. We are like frogs with many bodies, each being boiled in a different pot by a different chef. I have no interest in yet another unaccountable daemon inhabiting my private person.


While I agree, in principle, I think that cat's been out of the bag for a while. Governments will keep mandating things that undermine end-to-end encryption (they're already trying to pass bills to make it illegal), and this seems like the most reasonable solution I've seen thus far that still allows for end-to-end encryption while addressing the concerns of governments around the world. If you want what's on your device to stay on your device, I think you have to live in a place that doesn't exist - namely, a world without governments or the internet.


> If you want what's on your device to stay on your device, I think you have to live in a place that doesn't exist - namely, a world without governments or the internet.

This isn't true quite yet. It's pretty easy for me to keep some of my data truly private (e.g. on a secure Linux machine with an encrypted disk, if I'm really paranoid) while still participating in modern society, using the internet, and so on.

Apple has simply removed itself from the set of vendors whose products don't preclude uncompromised local data security.


//It’s like a double airlock. If your content never leaves your device, there’s no way for them to provide it to someone upon request.// Except they just built themselves a way to get at our data on our devices, which very much does transfer our data.


No, they didn’t. Apple never gets any of the content on your device. They get signatures and, at best, a visual representation like a thumbnail.
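For what it's worth, here's a toy sketch of what signature-based matching looks like in general. This is not Apple's actual system (which uses NeuralHash plus a private set intersection protocol); the hash values below are made up, and the point is only that matching compares compact hashes rather than the content itself:

```python
# Illustrative only: NOT Apple's implementation. Real perceptual hashes
# are derived from image features (Apple's is NeuralHash); here we just
# treat them as opaque 64-bit integers and match with a Hamming-distance
# threshold, since perceptual hashes should survive small edits.

def hamming_distance(a: int, b: int) -> int:
    """Count of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical database of known 64-bit hashes (made-up values).
KNOWN_HASHES = {0xDEADBEEFCAFEF00D, 0x0123456789ABCDEF}

def matches_database(photo_hash: int, threshold: int = 4) -> bool:
    """A photo 'matches' if its hash is within `threshold` bits of
    any database entry."""
    return any(hamming_distance(photo_hash, h) <= threshold
               for h in KNOWN_HASHES)

print(matches_database(0xDEADBEEFCAFEF00F))  # 1 bit off a DB entry: True
print(matches_database(0x0000000000000000))  # unrelated hash: False
```

Note that a scheme like this can only flag near-copies of files already in the database; it says nothing about novel content.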


Whether scanning happens on the iPhone or in iCloud does not counter the problem that the dragnet nature of this makes it easy to weaponise.

Getting CSAM pictures in the iCloud photo library of someone you don't like does not even require physical access sometimes: For example, WhatsApp has a "Save to camera roll" option that's enabled by default: just send a bunch of bad pictures via WhatsApp, those will get synced to iCloud after a little while, and now that person is in big trouble.


>It's very easy to understand that content on a physical disk I own will not be scanned by some government-friendly surveillance program, and content I upload to somebody else's servers will. I am comfortable with [subsets of] my files existing on both sides of this distinction, but only because I understand it.

According to Apple, the only content being scanned are the images you are storing in iCloud. So how is this breaking your rule?


> According to Apple, the only content being scanned are the images you are storing in iCloud

That's the whole point of this debate. TODAY (if you trust that Apple is honest) it's only stuff being stored in iCloud. But there is no technical reason for this to be true - just a "policy" reason. The scanner is already deployed to devices; we already let in the trojan horse. Tomorrow, the rules could change (and we might not know).
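To make the "policy, not capability" distinction concrete, here's a deliberately oversimplified sketch (hypothetical names, nothing like Apple's real code): the capability ships on every device, and the only thing scoping it to iCloud uploads is a single runtime condition that any future update could broaden.

```python
# Hypothetical sketch, not Apple's code: the scanning capability is
# present on-device regardless of settings; the scope is just one branch.

class Scanner:
    def __init__(self):
        self.scanned = []

    def scan(self, photo):
        self.scanned.append(photo)

def on_new_photo(photo, settings, scanner):
    # Today's stated policy: scan only photos queued for iCloud upload.
    # Nothing technical prevents this condition from being changed.
    if settings.get("icloud_photos_enabled", False):
        scanner.scan(photo)

s = Scanner()
on_new_photo("IMG_0001.HEIC", {"icloud_photos_enabled": False}, s)
print(s.scanned)  # [] today -- but the Scanner shipped either way
```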


This is breaking my "rule" because the detection is being moved device-side as part of a complicated scheme I don't believe it's realistically possible for an outsider to have a complete and persistently correct understanding of. Thus, I do not trust myself to fully understand the attack surface I'm exposing, and that is something I'm not personally comfortable with.

I thought this was fairly clear from the later parts of my comment.


"The USA PATRIOT act will only be used against terrorist threats." 12 years later, Edward Snowden proved otherwise.

The world is a much different place than when I grew up in the 80s. I can't imagine what it would look like in another 40 years. I hope I'm dead by then, because it doesn't seem like it's going to a place I want to live in.


I guess we all now just realize the truth: you don't control your iPhone. Apple does. You merely use it.

In a way, nothing changed. Apple could've done on-device scanning forever. In another way, everything changed.

I'm not sure what I'm going to do about it. Problem is that the devices are really really good.


Buying an iPhone (and increasingly Mac) is like going to Disneyland: You pay a lot of money to be allowed in so you can pay them even more money. The experience is highly curated and controlled. At the end of the day, you are just a guest there and what you are or aren't allowed to do is determined by the company.

EDIT: And they have CCTV and undercover security officers everywhere. More than you can possibly imagine.


I really don't think Apple hardware is a lot of money. I buy the cheapest MacBook Airs with a RAM upgrade and they last me around ~6 years. Same with the phone I buy (iPhone SE). Outside of the battery needing to be replaced, they last me a long, long time.

Everything else I agree with. The ecosystem of all these companies is never-ending (cars, food, cities, etc.). It's like those sci-fi movies where one corporation owns everything. The only difference is it's 5 or 6 companies, because they haven't yet started to merge.


You can buy an Android phone with better specs (except the camera, where the iPhone is objectively the best) for $200, while an iPhone is minimum $500 (and for that you get an SE 2, which is an old iPhone 8 with a newer SoC).

For the same money that you spend on a MacBook Air you can buy a computer with better specs and hardware that you can upgrade. Apple tends to stop updating its older computers, leaving you with an insecure OS where you can't run the latest software. I have my battle ThinkPad that, with Linux, runs better than most MacBooks, and we are talking about a PC that is more than 10 years old and that I bought used on eBay for $100.

Really, Apple is not that great. I once used to buy their products thinking they were superior, and maybe once they were; nowadays they are just the same crap made in China as everything else, but sold at (at least) twice the price because they have the Apple logo on it.


> You can buy an Android phone with better specs for $200

> For a computer you can buy for the same money that you spend for a MacBook Air a computer with better specs and hardware that you can upgrade

If you compare two phones at $200, sure. But it is not possible to buy a phone with a better CPU than the $500 iPhone SE 2. Impossible, no CPU can beat it. So no Android phone has better specs.

For a laptop, if the case is considered a spec like it should, then you have a much clearer picture. Apple has perfected its aluminium processes to provide a rigid and shock-resistant casing, and inevitably when people provide legitimate examples of PC laptops to buy, they never mention that the laptop you'll get will have a shitty plastic case with a heavy steel frame. The worst of both worlds, and many friends have come to me saying they need to change their laptop when theirs is fairly recent. The culprit? their case broke or the display hinge did.


I doubt it; maybe the Apple CPU is better in benchmarks (I'm not even sure), but who cares? I care about day-to-day usage, and the CPU on a $200 Android is more than capable of doing everything a normal person does with a phone.

> So no Android phone has better specs

As if it's all a matter of CPU. To me internal storage is more important: with $200 you nowadays get a phone with 256GB of storage plus a microSD slot. Or RAM is important: you get 8GB of RAM (the same amount as my desktop computer), meaning you can keep all your apps in the background without issues. You get a fingerprint reader, which is more secure and usable than the stupid Face ID. You get a USB-C connector, which nowadays is everywhere, not the proprietary Lightning port. You get a bigger battery that you don't have to charge every day. You get a bigger screen with a higher resolution. You get a headphone jack. You get an FM radio (a stupid thing, but I don't get why the iPhone doesn't include one).

> Apple has perfected its aluminium processes to provide a rigid and shock-resistant casing, and inevitably when people provide legitimate examples of PC laptops to buy, they never mention that the laptop you'll get will have a shitty plastic case with a heavy steel frame.

I don't care about the cool case of the MacBook. I care about having a solid case, even if it's ugly plastic. My ThinkPad T440p, which I bought for $150 on eBay, I can carry around without worrying: I can spill a coffee on the keyboard and it will not break, I can drop it from a ladder and it will not break.

Also, if something breaks on a MacBook, the only option is to throw it in the bin and buy a new one. How do you replace the LCD panel on a MacBook? You can't, because it's all glued together and you have to replace the entire lid of the computer, which costs more than the computer itself. On my computer? Pop off the display frame with your hands, remove 4 screws and the LCD comes out: a 10-minute job. Want to access the internal components? Two screws on the bottom and you have access to the SSD, RAM, Wi-Fi card and CPU.

Want to change the battery? You will have to do so sooner or later; well, on the MacBook not only do you have to disassemble the entire computer, but the battery is also glued in with a strong adhesive. It's not only difficult to remove but also dangerous, since it's easy to damage the battery, and since it's a lithium battery it can even explode. On my laptop, just unlock the mechanism and the battery comes out; put a new one in and the job is done, 15 seconds.


Don't forget the garbage trackpads. And screens also tend to be inferior.


> For a computer you can buy for the same money that you spend for a MacBook Air a computer with better specs and hardware that you can upgrade.

Which specs? When I last checked, the latest M1 Airs were dominating the entire laptop market on CPU performance, battery life and not making noise. These are important considerations. I mean, sure, for the same price you can probably find better displays, more RAM, a bigger SSD, a better port selection, and all sorts of other things, but there are always tradeoffs, it's not like any laptop beats an Air wholesale on all specs.

If someone doesn't particularly care about upgradability (most people don't), the Air actually seems a pretty good deal. Perhaps they were overpriced before, but I don't think they are anymore.


Any recommendations for phones or laptops?


> Buying an iPhone (and increasingly Mac) is like going to Disneyland: You pay a lot of money to be allowed in so you can pay them even more money.

Bought the latest SE over a year ago at considerably less than my last comparable Android device. Have spent under $100 in the App Store since.

Phone does everything that I want it to do including some fairly complex multi-app automations 15 minutes before I leave, when I get an SMS, etc...


That's just like going to Disneyland, but packing your own lunch and not going on any of the rides. :)


More like going on the rides, but not paying for the Fast Pass. Taking the Fast Pass is like paying for Apple services. I took the shitty Fast Pass for Apple Music and iCloud Drive, but I'm not paying for bloody Ted Lasso!


A Mac is very different from an iPhone. I am writing this in Firefox, I have development tools and commandline scripts installed.


Are you sure? I have a Mac too, and earlier this year all of my non Apple-approved software stopped working because some random Apple server went down.


Yes, and the price these companies can charge is determined by the market. If what they are doing no longer serves a large market, their business will shrink. And if their actions expose a new opportunity for someone else, that company will form and business will grow.


And Disney is one of the most well-known global brands - at least when people think about entertainment and vacationing.


RMS has been saying this stuff for decades, but people ignore him.


This sentiment gets repeated all the time and it drives me crazy.

I've read a lot of stuff by RMS and am certainly aware of his concerns.

I have an iPhone and Mac so I guess you would consider that I "ignored" him. This is a ridiculous conclusion. I considered the risks and trade-offs of non-FOSS software, and regrettably accepted them, because the FOSS alternatives had trade-offs that cumulatively (for me) were much worse. Unlike RMS, I am not a one-dimensional person that only evaluates tech on the single dimension of freedom. Tech is not a religion for me; it's just a tool.

RMS pontificating about FOSS does not magically bring iPhone-comparable FOSS software and hardware into existence. If the FOSS community wants consumer adoption, the ratio of pontificating to building consumer products needs to be much lower.


This is a very solipsistic way to look at the world. FOSS has no duty to try to appeal to you personally, and if you accept the end of your privacy over UI niggles, that's your choice to make. The reason Apple can spend so much on UI is because of your willingness to eat whatever they serve you.

After emancipation, there were plenty of slaves who wouldn't leave the plantation, and became slaves-by-another-name in the sharecropping system. I don't think any of them were so up their own asses that they blamed the slaves who left for not making it easier to leave.

Also, RMS didn't talk about "FOSS." He talked about the inevitable future if we continued our trajectory, and suggested Free Software as the solution.

I'm also not going to make lowering your carbon emissions just as easy and convenient as using all of the carbon you want. Take responsibility for yourself, or at least don't actively denigrate people for being right.


But I never asked FOSS to make me anything.

All I said is if the FOSS community wants consumer adoption, the path forward is making things consumers like, NOT blaming us for "ignoring" RMS, because no one ever did that.

Basically everything your post implies (that I think I am entitled to something, that I think everything should be easy, etc...) is not true.

As I keep pointing out again and again, all I'm saying is that people don't use FOSS, but it's not because they don't value privacy, not because they don't value freedom, and for the love of biscuits not because of anything to do with RMS or ignoring him.

The options are use an iPhone (not freedom), use an Android phone (not freedom), or use something like LineageOS. Using something like LineageOS is major commitment that most people don't have room in their lives for, because they have different priorities, which might also be very good.

EDIT: Here's a metaphor, it'd be like if you asked your friend who owns a car with an ICE why he keeps ignoring Al Gore's warnings about the climate. It's like, Al Gore can be right, your friend can believe in Climate change, and he can still decide he needs an ICE car because he's barely making ends meet and is trying to support a family. You can claim that your friend has made bad choices, but there is no argument that he "ignored" Al Gore.


I guess consumerism is at the heart of the problem. And juicy complexity successfully keeps consumers remaining such.

I think humans - having been nomads for most of their existence - just aren't equipped to care for their habitat (say: ecosystems). It's only rather recently that this became important: <8k years, compared to the 250k before.


> The reason Apple can spend so much on UI is because of your willingness to eat whatever they serve you.

No, it's because FOSS has no economic model.


You have hit on one of the problems permeating modern society: no one wants to take personal responsibility for anything.

It's always someone else who is to blame: the rich, the evil corporations, the right wing, the left wing, China... etc., etc.


This is easy to say, but it’s clear from decades of cognitive science that people find it extremely hard to avoid the affordances and incentives in their environment. Blaming the victims of a concerted corporate and political effort to constrain individual liberties seems churlish. This is a tragedy of the commons and the only solution is cooperation and dialogue to set common standards and norms.


> I've read a lot of stuff by RMS and am certainly aware of his concerns.

> RMS pontificating [...]

Uh, no. RMS was right all along and you're still not getting it.

You bought devices you can't upgrade, you can't run software on, that oppress indie devs, that tax 30% of the market, that sell out freedom fighters in oppressive regimes, and that now spy on you. And now you realize the problem.

The only way out of this madness is to get the sensible representatives in Congress to move forward with an antitrust case. Breaking up the power monopoly will perhaps get us back to a place where we have rights. The way it is now, Apple is a central point of failure.

Again, everything RMS spoke was truth, and it's here again rearing its ugly head.


It's weird that people seem to trust Congress with this. This is the same Congress that is considering banning end-to-end encryption, remember?


Congress is trash, but they're the institution that runs your life, and you at least get a nominal say about who is in it. If you're arguing for revolution, I won't disagree, but what it really sounds like is the traditional "we can't do this one thing until we do everything else first."


Congress isn't wholly incompetent, despite the fun we have poking at it. The US has a functioning democracy and leads the world in so many good and important metrics.

Our government isn't one person or one party, and that's a strength.

One of my local reps is leading the big tech breakup movement, and I'm incredibly proud to have her representing me.


>leads the world in so many good and important metrics.

Can you help me with a few? Maybe 5? I'm pretty low on the US government at the moment. I can name quite a few things we lead in that are horrible.

>One of my local reps is leading the big tech breakup movement, and I'm incredibly proud to have her representing me.

I think that's great but I doubt that will ever be allowed to happen. Too much money will be thrown at congress and they will either do nothing, or do nothing in a way that makes them look like they did.


Innovation, science, technology, art (music, film, game, etc.), athletics (Olympics, etc.), space exploration, economy (by multiple metrics), energy production, military power, property value, ...

I might agree with parts of your list, but I think that people are way too down on the US and its accomplishments. I wouldn't want to live elsewhere.


That's the crux of it. There are a few edge cases, but quality of life across the whole spectrum of wealth is worlds ahead almost everywhere else. Things aren't perfect, but they are pretty damn good compared to almost anywhere else. People get too hung up on first world problems to appreciate what they've got.


>The US has a functioning democracy and leads the world in so many good and important metrics.

25th most democratic country ("flawed democracy") [1]

Which metrics? Defense spending? Healthcare cost?

[1] https://en.wikipedia.org/wiki/Democracy_Index#By_country


The funny thing is that you are failing to recognize the value of devices that can render web pages, give navigation instructions that won't get me lost, run games, have long battery life, great screen quality, and a bunch of other things.

I have yet to see a phone with the build quality and value add that I need a mobile phone to have from Free Software endeavors. These are the table stakes for the majority of people. Not the list you just rattled off. You aren't even in the running to compete on the other value adds if you can't do the above.


The real-world alternative to Apple is Microsoft, Google, and Blackberry, not GNU. Because almost no one and no money cares about GNU.


You're right. Free Software isn't there yet. So I just took some of my money and donated to a few projects.. Drop in the ocean, but a drop nonetheless.


If anyone is going the purist route, it means no Dropbox, Facebook, OneDrive, etc. since they’ve been scanning for CSAM content way before Apple.


But not on your own device.


so what?


The real world alternative is healthy competition.


Which is definitely absent in mobile phone space.


> The only way out of this madness is to get the sensible representatives in Congress to move forward with an antitrust case

What are you looking for out of this anti-trust case? Apple is not a monopoly. There are numerous privacy-focused phones and Android distros available. I think there just isn't a huge market for a privacy-focused phone (no, 1000 people on a HN and twitter is not a huge market.)


> If the FOSS community wants consumer adoption, ratio of pontificating about to building consumer products needs to be much lower.

Actually, it's not like the FOSS community is fighting for your adoption, you know? It's you who should do something in order to make good things happen.


He did. He bought Apple products. That's the whole point.


They didn't buy Apple products for any ethical reason, and didn't claim to have done. They bought Apple products because RMS didn't personally make the UI of Free Software polished enough for them.


> Tech is not a religion for me; it's just a tool.

Ok, it is a "tool", but it's like an axe controlled by another person, not you. Maybe you trust that person with such a tool; I don't.


You are doing the exact thing I am complaining about. I DON'T trust Apple and never said I did; I quite explicitly elaborated on this above.

Not trusting Apple doesn't magically bring a trustworthy iPhone competitor into existence.


Trust in this discussion isn't a binary 1/0. You do trust Apple in some regards, else it'd be impossible to use their devices for anything. There's a tradeoff being made, that Apple does not do 'certain evil things'. This one's crossing the boundary for some, opening the floodgate for others. That's the issue at hand.

For example, take the US government. I trust the US government not to kill me (Dutch citizen, not a terrorist, not a criminal, etc.). Does that mean I trust the US government with my personal data? No, that's a different discussion. And if I were to trust them with my personal data, does that mean I believe the US government has integrity? No. I've been very disappointed with the US government with regard to a recent war (I'll refrain from details as it's irrelevant), so they need to earn back credibility. In my language there's a saying, 'vertrouwen komt te voet, maar vertrekt per paard', which translates to something akin to 'trust arrives on foot and leaves on horseback'.


> magically bring a trustworthy iPhone competitor into existence

Well, purchase a Pixel 5, install CalyxOS or Graphene, lock bootloader again. Job done. Probably you'll lose some convenience, but you know, you may always ask some FOSS dev to bake a feature for you and pay for it.


> Probably you'll lose some convenience, but you know, you may always ask some FOSS dev to bake a feature for you and pay for it.

The amount of money needed to bring CalyxOS to the iPhone level of polish and to get the same sets of popular apps on there is probably in the millions.


Freedom and privacy are not free.


> Not trusting Apple doesn't magically bring a trustworthy iPhone competitor into existence.

But spending your money on Apple products certainly won't either.


> Not trusting Apple doesn't magically bring a trustworthy iPhone competitor into existence.

I'm afraid that'll never happen, at least as long as most people don't care about their privacy and are willing to accept one-sided tradeoffs with phone vendors. BUT, there are alternatives like LineageOS that do a pretty good job. They only need more acceptance by the public to be able to compete at the level of a trillion-dollar company like AAPL.


Right. And I think projects like LineageOS are super interesting.

My beef is just with statements like "people ignore RMS," as I have yet to see any evidence of this. Everything I have observed is that anyone who has heard of RMS is fully aware of his concerns, but most of those people just have different priorities.


That much was known upon purchasing the axe - maybe it cuts trees 50% quicker and thus was worth the tradeoff, but now you can re-evaluate your options and purchase a different axe.


TL;DR: You understood the trade-off, and chose to become a product for the convenience. The majority of us here have done the same, because becoming a product is part of the modern life. Nowadays choosing digital purity is like becoming a digital Amish.


Bingo. RMS is not being ignored. People are just making informed decisions that FOSS purists don't like.


Informed decision is a high bar.

I'm pretty sure that most go with habit and convenience. I mean.... Apple could lock MacOSX down tomorrow and charge you $100 for terminal access and "informed decision" people would still argue how good MacBooks are (they aren't that good)


MacOS looks nice to me and functions in a logical way, so I consider MacBooks good. It's a personal choice, made while weighing negatives like the fact that MacOS is not 'free software' that anyone can fork, change, and sell on their own.


I disagree that it functions in a logical way.

I literally have to use an app to make my Bluetooth mouse scroll in the opposite direction from the trackpad, because there is a "Scroll Direction" checkbox in both the mouse and trackpad settings... but it's actually one config option. And with every update I hope they don't break that app. OSX has less and less of "less is more".


No one takes you seriously when you describe an interdependent, mutually beneficial but imperfect relationship as "becoming a product".


I think people don't like to think themselves as a product, but that's what we are. We're a potential source of income, and companies will want to know how's your health, where do you go, what do you buy, how you use your time, etc., in order to realize as much of that income as possible.

It's a mutually-beneficial trade-off, it's just that one party gains individually, and the other gains by quantity. Advertisers wouldn't care a lot what restaurants you like, but they care a lot about what restaurants people in your city like.


Well your sentiment drives me crazy, especially for techies. It is up to us whether a non-feudal digital society will exist or not. It's true that FOSS will never be as good as the monopolists' who are able to pour billions of dollars into polish while tightening their grip over our lives. But if we are unwilling to make sacrifices in this fight then the battle is already lost.


Agreed and a great example of why his message on free software is so important. I would never have been as cognizant of the topic if not for his laser-focused and committed work.



Every year RMS sounds less crazy.


They don't ignore him, they just don't have the same priorities. Having privacy with a phone means giving up a lot of other things which people value.


>Having privacy with a phone means giving up ...

Well it would mean giving up the phone entirely.


who is RMS?




[flagged]


He probably would. He's his own worst enemy.

Doesn't make him wrong.


[flagged]


"Hypocrisy is the practice of engaging in the same behavior or activity for which one criticizes another or the practice of claiming to have moral standards or beliefs to which one's own behavior does not conform."

RMS is as close to the opposite of hypocrite as it gets. He definitely practices what he preaches to the degree that he used a slow Loongson when he couldn't get an Intel machine with free BIOS, for example.

He might not act diplomatically, might be way outside social norms, and might not people-please, but that does not make him a hypocrite.


Everything has software. If he followed his ideology that everything he uses must be open source, then he couldn't do anything except live completely off the grid, without anything that has a semiconductor in it.

No electric meter at his house, no internet (even if his router is running open source, the ISP he's connected to certainly isn't), no transportation of any kind, no entertainment of any kind, no shopping of any kind (inventory management, cash registers, logistics). The equipment used to manufacture all the pieces of his precious decade-old laptop was running proprietary software (not to mention all the small firmware blobs like the Wi-Fi and Ethernet controllers).

You cannot escape it.

So, we should be pragmatic about it. We should push for it, but realize that it's impossible to live in society without interacting with closed source software.


Richard Stallman's positions are actually more pragmatic than that. Here is how he describes them:

> I firmly refuse to install non-free software or tolerate its installed presence on my computer or on computers set up for me.

> However, if I am visiting somewhere and the machines available nearby happen to contain non-free software, through no doing of mine, I don't refuse to touch them. I will use them briefly for tasks such as browsing. This limited usage doesn't give my assent to the software's license, or make me responsible for its being present in the computer, or make me the possessor of a copy of it, so I don't see an ethical obligation to refrain from this. Of course, I explain to the local people why they should migrate the machines to free software, but I don't push them hard, because annoying them is not the way to convince them.

> Likewise, I don't need to worry about what software is in a kiosk, pay phone, or ATM that I am using. I hope their owners migrate them to free software, for their sake, but there's no need for me to refuse to touch them until then. (I do consider what those machines and their owners might do with my personal data, but that's a different issue, which would arise just the same even if they did use free software. My response to that issue is to minimize those activities which give them any data about me.)

> That's my policy about using a machine once in a while. If I were to use it for an hour every day, that would no longer be "once in a while" — it would be regular use. At that point, I would start to feel the heavy hand of any nonfree software in that computer, and feel the duty to arrange to use a liberated computer instead.

> As for microwave ovens and other appliances, if updating software is not a normal part of use of the device, then it is not a computer. In that case, I think the user need not take cognizance of whether the device contains a processor and software, or is built some other way. However, if it has an "update firmware" button, that means installing different software is a normal part of use, so it is a computer.

https://stallman.org/stallman-computing.html


>no transportation of any kind

How do you figure? Cars did not always have computers in them, and you can still find older cars available.

>The equipment used to manufacture all the pieces of

Who actually cares about this? The consumer should only care about the thing produced.


People who can't formulate values from first principles.


This isn't a children-sewing-soccer-balls-with-their-teeth type of principle. This is a machine controlled by code the end user never gets access to, building a product the user actually buys that itself contains no closed code. Who cares what software controlled the machines that built the product? That software affects the end user in no way other than producing the thing. It's in an entirely different universe from running code on your own computing device, where you should get to decide the who/what/when/where questions. Telling a manufacturing company what to do with the software in their own processes is out of this world. It is their software, after all; the freedom to run your stuff your way should also reciprocate to them doing what they will with theirs.


He never talks about running open source CPU microcode. That's the ultimate attack vector.


With all due respect, I don't believe you are familiar with RMS's positions. I'd argue most people who have not followed him closely and haven't carefully listened to what his PoV is have a very hand-wavy understanding of the nuances of what he stands for. The fact that you talk about open source is the tell. The term "open source" was deliberately invented to de-emphasize what RMS cares about in favor of a business-friendly justification for why it benefits businesses to share source code with others. Open Source philosophy is not about emphasizing users' rights and ownership over their computers.

And yes, he is concerned about microcode--he still uses a Core 2 Duo laptop that can work without the microcode blobs in firmware, as opposed to more modern processors, for that exact reason.


He borrows people’s mobile phones and uses them.

Like the Amish.

Both condemn certain tech, but then use it. Both are not being honest, but hypocrites.


I really don't see Amish as hypocritical when they hire a car or take the train. They are using the absolute minimum of tech that lets them get on with their life, I suppose Stallman is doing the same when he borrows a phone, otherwise avoiding them to the best of his ability.


To portray RMS as "against tech" is a very misleading characterization of him (he is an OS developer after all). To judge someone as a hypocrite, you have to first understand (1) what they say, (2) how they act, and then (3) compare the two. I don't think he ever said "don't use tech". You are vastly mischaracterizing what his position is.


> To portray RMS as "against tech" is a very misleading characterization of him

It's also a poor characterization of the anabaptists, for that matter - they are not in general against technology but against the impacts of particular technology (nothing to do with RMS's concerns, theirs are about community).

For example, some anabaptist communities disallow cars, some allow them in limited ways, some allow them completely - but this is decided based not on a doctrine but on their communities evaluation on the impact of peoples housing spreading apart...


Both condemn certain tech.

You are vastly mischaracterizing the few sentences I wrote.

In the case of RMS it is non-free software tech. In the case of the Amish it is many kinds of tech and depends (as the child comment mentions) on the community. One community I lived among they weren’t allowed to own phones, but they would borrow them all the time.

To call on everybody to stop using non-free phones while depending on those same people to own them and let you borrow them in order to live is hypocritical. Sorry if that bothers you.


He refuses to own a cell phone, because they're all closed-down, closed-source systems and that goes against his principles, but he's totally fine with using one when he needs to make a phone call. This comes across as startlingly performative.

I thought I remembered other instances, but none of them come to mind, and obviously googling has turned up a ton of opinion pieces without a lot of actual substantive points made, so I'll leave it at that; not enough to argue that he is, by his nature, a hypocrite, but also not 100% pure to his principles in every circumstance.


My point is you have to understand his principles first before calling him a hypocrite: in your cell phone example, the phone call he is making with someone else's phone is not depriving him of his freedom/privacy, but its owner's. He would likely advise and nudge the owner not to carry a cellphone for the owner's own benefit, in his capacity as an advocate, but if they have made the choice to carry one anyway, his use of someone else's cellphone for making a call is no different to him than utilizing a landline phone in his capacity as a user.

In fact, he has stated his principles/policies publicly and clearly, and someone posted it here as well. He never says "You are not supposed to use someone else's cellphone, but it's okay when I use it." That would make him a hypocrite.

Thus your example is a strawman here, with the implicit assumption that he has said "you ought not to use any cellphone" which he never said.

I think most misunderstandings of RMS come from the fact that people project on him what they think he would say and how he would act, not what he actually says and how he actually acts; while assuming he is merely some sort of activist engaging in civil disobedience and boycott, just trying to make a point and show the world a pure example of how to live. The way he should actually be thought of is he is fighting a war against people building proprietary software that, based on his theory, inevitably will lead to unjust power over users' lives through control over their devices. He is very pragmatic in that sense and willing to take any strategic advantage he can get in that fight, not deprive himself of that just to make a point. My understanding is at the beginning he used proprietary UNIX compilers to build GCC, for example.


> the phone call he is making with someone else's phone is not depriving him of his freedom/privacy, but its owner's.

Maybe he's not being hypocritical, but that position -- I will pragmatically use others' phones for my convenience, but I will not retain one myself -- just seems shitty. Not sure why, maybe because it's even less of a universally applicable tenet than "phones are bad, mmkay," it's just "I'll use the systems as convenient for me, no matter how others may suffer under them."


He played Age of Empires II at a friend's house once. Is that enough?

I think he makes a pretty big tradeoff that nobody else would, without completely shooting himself in the foot. Even if he had a fully FOSS phone, the people he called probably wouldn't. He's drawn the line far enough out that I don't doubt his sincerity.


Saying that it's problematic to take RMS seriously because of the things you mentioned is in my opinion one of the problems that led us to the hellish things we're exposed to today.

It's the well dressed, well socially adjusted people with their silver tongues that convinced the masses that giving up privacy is in their interest. People like that are the people responsible for the shitshow we're now observing.

RMS is strongly opinionated, and without that kind of strong conviction, we would be far worse off. I do agree he's taking an extreme position, but when you're standing against giants with money and power, being extreme is sometimes the only way to stand a chance.


> RMS is a hypocrite though

For whatever failing RMS has, hypocrite is obviously not one. There are few people on the world who are so consistent in their message and actions as RMS.


HN loves throwing around accusations of "ad hominem", but dismissing someone's ideas on IP because he picks his toes is a quintessential ad hominem attack.


You make a grave accusation without even backing it up.

Are you sure you fully understand the definition of hypocrisy?


Apple's selling point is that they decide what should be running on your phone. Judging from the various App Store monopolization threads, people like that Apple controls your iPhone, and wouldn't buy one if the owner controlled it.


On the other hand, I'm not buying an iPhone due to control issues. If I were able to fully control it, I'd probably buy one.


Why would you choose apple over anyone else at that point?


> people like that apple controls your iphone, and wouldn't buy one if the owner controlled it.

My guess is that this applies to about as many people as the opposite “wants to control their device” stance. Which is to say approximately nobody. Most users probably don’t think about these things much if at all. They want a smartphone that runs the expected smartphone apps and has the expected smartphone features, and do not care whether or not some nerd can install and run some apps they’ve never heard of on his own device.

I imagine, though, that most people would have an opinion one way or the other about phones snitching on their owners.


The iPhone is a status symbol. That's why most people buy it.


Exactly, it's easy with phones - you can have the same phone your favorite celebrity has! Not so with almost anything else. People have just the weirdest reasons. It's also funny to me because I will never ask anything about a person's phone, laptop, or other tech gadget if I recognize it from a distance (for example Apple products), but I will ask and converse about brands/devices that I have not seen much of. Not sure why people think Apple products give them any credit whatsoever.


Android is a low-quality platform with limited security updates, multiple app stores, preinstalled crapware, and other debris. I chose an iPhone so I wouldn't have to deal with these issues.


I bought an iphone because I want a phone that'll last five years and get security updates for at least as long. I also bought an iphone because I have a lot of confidence it won't have design flaws that break its gps (like the motorola I had) or have its ecosystem EOL'd (like the Windows phone I had, which was an incredible piece of hardware, and I loved it!)

I wish the android hardware ecosystem had half the longevity you get with the cheapest iphone.


Steve Jobs received a standing ovation when he showed how the new iOS would finally let you change your wallpaper. This broke the RDF for me.


Why is that wrong? I also applauded (internally) when notifications were added because it was a feature that I wanted and I didn't care that Android had it before because I wasn't using an Android phone. I bought all of the Google-branded Android phones when they came out and I still preferred iOS and the iPhone so having that feature was worthy of applause for me. I don't see how the wallpaper thing is any different. As much as I would like to have had the ability to change wallpaper before then, that wasn't as important as all the other things I cared about more.


I've never been to one of Apple's events, but as a viewer of their videos some of those applause breaks have me convinced there are giant "Applause Now" signs. I imagine them being on more than they are off by how over the top their dialog is written. Or they fill the air in the room with some sort of gas that makes the attendees very susceptible to the RDF.

Those are much more fun explanations than just realizing they probably have plants to start the cheering in a room full of fanboys


Life existed before the iPhone. Life goes on even if you have to use other devices, and you'll be able to do more stuff than before iPhones were invented. People are very inventive and adaptable. Give Linux a try. Mint is a pretty good noob-friendly version. Ubuntu is a very popular one.


There’s not much you can do about it if you want a decent smartphone. On Android Google also can do on-device scanning since forever.


The problem is _not_ on-device scanning, it's on-device scanning and then sending incriminating results to the device maker.


The final result is better than before, because scanning applies only to content that is headed to the cloud. Content was stored in plaintext in the cloud before (not counting server-side encryption).

With the current change, they have "partial" E2E encryption, and Apple can decrypt photos only if the CSAM threshold is reached. Setting speculation aside, this is a real improvement for privacy in the CSAM context.

You can opt-out from scanning by not using iCloud.
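Apple's published design reportedly enforces that threshold with threshold secret sharing: each match releases one share, and the server can reconstruct the decryption secret only once enough shares accumulate. A toy Shamir-style sketch of that idea (illustrative only, nothing like Apple's actual implementation; the prime field and parameters here are arbitrary choices):

```python
import random

P = 2**61 - 1  # arbitrary Mersenne prime defining the field

def make_shares(secret, threshold, n):
    """Split secret into n shares; any `threshold` of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the polynomial at x=0 to get the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

With threshold 10, nine "matching" photos reveal nothing useful; the tenth makes the secret (and hence the flagged content) reconstructible.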


On Android they can do this?

Source?

The distinction still needs to be made between AOSP and whatever ships on your phone or is included in play services. Especially on HN of all places.


Fair enough, but Android includes Play Services in all notable cases. Yes a handful of people run it without, which does solve that issue, at the cost of convenience and usability.

That's why I specified "decent smartphone". I wouldn't consider the UX of Android without Play Services to be "decent".


Phones are just phones. At this point multiple companies, and potentially governments, are monitoring them or have the capability to.

But computers are another story. So far we still maintain some privacy there thanks to Linux.


One of the most pernicious sins of the smartphone revolution was to rebrand the "computer". We'd evolved expectations as to what we were entitled to on our own machines - yes, even those with a proprietary OS - and then with one weird trick, it all went in the trash. Advertising, surveillance, inability to install whatever you want - before smartphones, this sort of thing used to be unequivocally malware.

The worst part is that now the dam's broken, it's all bleeding back into "computers".


You're blaming smartphones for a problem that exists because of the Internet. If smartphones didn't render computers irrelevant to most people, all this "bad app" stuff would have happened on computers.

And the computers that don't have "bad apps" are hacked and encrypted by ransomware. Is that better?

Spyware is older than smartphones.


Spyware is older than smartphones, but back then we called it "spyware", not "ooh, the latest Android!".


Apple is literally going to spy on you... and then report you to the authorities.

I'd rather give all of my DNA to an advertising company.


Fine, "the latest iPhone" then. If you thought that was some sort of Android-vs-Apple remark then you have missed the point.


> The worst part is that now the dam's broken, it's all bleeding back into "computers".

Back in the day, before cellular phones, mainframe and minicomputer vendors tried their best to restrict the software that could be run on “their” hardware. They realised that they could squeeze the most out of their customers that way.


Android is also Linux. Yet some manufacturers don't allow any kind of boot unlocking or root access.

Open source doesn't guarantee we have any type of control over it if the hardware is locked down. The reason is more that the industry hasn't bothered selling locked-down computers yet by default. The tech is there, Secure Boot + TPM. They just give us the options to override in the BIOS. For now.


> Open source doesn't guarantee we have any type of control over it if the hardware is locked down

It can though. See: GPLv3.

IMO the greatest failure of Linux was not upgrading from GPLv2.


Linux has many more stakeholders than you. It's a failure for you, but extremely important to many others (including me). I'd go further and allege you are in the minority among users of Linux, because think about what a user of Linux is. Linus made the absolutely correct call and Linux would be on the decline if he hadn't; the money in Linux is in embedding (and SaaS), not people dabbling with free software who by definition don't pay for it, and until the FSF and its adherents conquer the concept of "an economy" they're still fighting an uphill, losing battle against the same economic forces they despise.

More than half of YC’s hardware startups, not to mention Android and probably Teslas in their current form, probably would have never happened were they not able to embed Linux in a controlled manner without having to invest in catering to the four total users who will want to build a system image and reflash the firmware on their whatever. (Android has some means to do so and an audience much more interested in doing so but the point stands.)

I’m also interested in the legal framework around a software license that’s able to dictate the architecture and design of components around the software, and I suspect it will not survive if challenged, particularly in Europe.

The greatest failure of free software, to me, is thinking in absolutes and not studying how the world actually uses computers as time goes on. The concepts, ideas, and demands are stuck in 1991 and are basically “man shakes fist at capitalism,” while writing capitalist exceptions into the very Tivoization clause in question under pressure.


> not people dabbling with free software who by definition don’t pay for it

You and I don't seem to share the same definition of Free Software. I personally pay for, and know others who pay for Free Software. Free is about freedom, not price.

> not to mention Android and probably Teslas in their current form, probably would have never happened were they not able to embed Linux in a controlled manner without having to invest in catering to the four total users who will want to build a system image and reflash the firmware on their whatever.

All they have to do is not go out of their way to lock down the ability to flash the firmware. It does not require extra effort to support this. It requires extra effort to block this.


> without having to invest in catering to the four total users who will want to build a system image

They have to release that code anyway. All they have to “invest” in is not implementing measures to prevent people using it.


This is a case of lack of enforcement, not a case of lax licensing. You do have this right under GPLv2 according to Conservancy: https://news.ycombinator.com/item?id=27937877.


Open source also doesn't guarantee any type of control if it's SaaS or anchored to a closed source SaaS system. There's a ton of "but it's open source!" SaaS companies that I shall not name that leverage open source but in reality are lock-in walled gardens. The code is open but the data and network effect are not.


Unfortunately, phones are most people’s primary computer.


> If people could understand what computing was about, the iPhone would not be a bad thing. But because people don’t understand what computing is about, they think they have it in the iPhone, and that illusion is as bad as the illusion that Guitar Hero is the same as a real guitar.

-- Alan Kay, https://www.fastcompany.com/40435064/what-alan-kay-thinks-ab...


Nice theory. And yet smartphones have delivered far more real-world computing to users than "real computing" ever did.


There is no formal definition of computing which is furthered by iPhones.

I have an iPhone, and pythonista is the only potential thing which fulfils the criteria of "computing", everything else is convenience.

Smartphones have done an incredible amount at bringing the consumption of the internet, audio, video and rich communication via social media.

But they have not brought "real world computing", because "real world computing" is any goal-oriented activity requiring, benefiting from, or creating computing machinery. It includes the study and experimentation of algorithmic processes and development of both hardware and software. It has scientific, engineering, mathematical, technological and social aspects.

You might want to read the wikipedia page.

https://en.wikipedia.org/wiki/Computing


ACM tried to define it:

>"In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers. Thus, computing includes designing and building hardware and software systems for a wide range of purposes; processing, structuring, and managing various kinds of information; doing scientific studies using computers; making computer systems behave intelligently; creating and using communications and entertainment media; finding and gathering information relevant to any particular purpose, and so on. The list is virtually endless, and the possibilities are vast."

I basically would half agree that iPhone brought more computing. Based on practical applications, iPhone has not brought radically more computing. iPhone replaced some stationary computing with ultra-mobile computing.

Cheap feature phones, that allowed people in Africa to make cashless payments, brought more computing to ordinary people... than expensive iPhone - that is owned by people who have/had other computers.

People who formally require computing - engineers(all kinds, incl software and structural), data collectors, music professionals and so on - still rely on other forms of computing. Some have shifted to iPad, which made computing more fun. But then all of the heavier forms of computing - they are still done on a "computer", not a phone or iPad.


Phones have delivered social networks, cat pictures and mobile games to the masses, that's about it, really.


> phone's are just phones

> But computers are another story.

Librem 5 and Pinephone smartphones are computers running GNU/Linux.


Yeah I think the distinction is more with OS than the form factor.

With a closed garden and closed-source apps, there is basically nothing you can do (to make sure you're not being monitored). You're completely at their whim. That's true for regular computers too, but a lot more choices exist for the latter group.


Unfortunately that is accurate for all proprietary software, not just this. You could easily make the case that any software you don't personally maintain, including open source, has the same problem.


I think what you mean is that you don't control the iCloud Photos client, Apple does. Your photos are scanned by the iCloud Photos "client" (which is built into the OS) before uploading them to the cloud.

Would it be any different if Dropbox was scanning files before it uploaded them rather than afterwards? It already computes hashes of your files on your device so that it doesn't have to re-upload existing content, so I honestly don't understand the objection to doing image scanning on your device before uploading your content.
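Client-side dedup hashing is mundane: the client hashes locally and only uploads blocks the server doesn't already have. A minimal sketch in the style of Dropbox's documented content_hash (SHA-256 per 4 MiB block, then SHA-256 over the concatenated block digests; treat the details as an approximation):

```python
import hashlib

BLOCK = 4 * 1024 * 1024  # 4 MiB blocks, as in Dropbox's content_hash scheme

def content_hash(data: bytes) -> str:
    """Hash each block, then hash the concatenation of the block digests."""
    block_digests = b"".join(
        hashlib.sha256(data[i:i + BLOCK]).digest()
        for i in range(0, len(data), BLOCK)
    )
    return hashlib.sha256(block_digests).hexdigest()
```

The client sends this hash first; if the server already has matching content, the upload is skipped entirely. The privacy point is that this same "compute something about the file before upload" machinery is what the scanning debate is about.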


The point is that if the capability for "the iCloud Photos client" to scan your photos before uploading them to the cloud exists, it's only Apple's policy that stops it from scanning all of your photos.


That capability obviously already exists independent from this CSAM-scanning system.


Yes, they were already doing facial recognition of my photos, which I didn’t ask for and can’t turn off.


You can uninstall Dropbox. You cannot uninstall Photos.


While there is certainly truth to this, this is also a defeatist position. Of course Apple could do anything, with or without telling us. And they do some fairly indefensible things, though I would argue those have mostly to do with the App Store.

But this is not the time to throw our hands up and say there was nothing we could do all along. That the situation was never ideal does not preclude there being something worth fighting for.


> I'm not sure what I'm going to do about it. Problem is that the devices are really really good.

The devices are good, but still Apple has a weak value proposition.

Also, handing them your money probably isn't going to make things better in terms of alternatives.


> Problem is that the devices are really really good

their devices are really good, but they come with so much crippling for the sake of the walled garden that their goodness is wasted in some fronts.

There's a large list of things they are (or can be with minimal effort) perfectly capable for, but are forbidden for reasons, like reverse wireless charge of other iphones / airpods.


Meh. Dropbox and Google Drive can run arbitrary queries over your files stored there. iCloud (assuming they finish the e2ee transition) will have to push the same hashes to everyone. It's not transparent and we don't have a way to inspect what exactly they are searching for, but at least there's a way in principle to reverse engineer the algorithm and to monitor how often the hash database gets updated.

In my book that's a step in the direction of privacy, compared to old status quo.


> Meh. Dropbox and Google Drive can run arbitrary queries over your files stored there.

However, unlike with Apple's invasive on-device scanning, you can encrypt files before storing them at Dropbox or Google Drive. There are even simple turnkey solutions like Sookasa:

"Sookasa acts as a transparent layer over Google Drive to encrypt your sensitive files on the cloud and across connected devices..." https://www.sookasa.com/GD

"Sookasa protects data both on devices and in the cloud, and decouples the data from the encryption keys, meaning your data stays secure no matter where it goes." https://www.sookasa.com/dropbox-security/
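The "encrypt locally, upload only ciphertext" pattern is simple to sketch. This is not Sookasa's actual mechanism; it's a toy stream cipher built from a SHA-256 keystream, for illustration only (real tools use vetted AEAD ciphers such as AES-GCM):

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom byte stream from key + nonce + counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)  # fresh nonce per file, stored with the ciphertext
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))
```

Since only `encrypt(key, file)` ever reaches the provider's servers, server-side scanning sees nothing useful; the point of contention with on-device scanning is that it runs before any such layer can apply.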


Cryptomator is another encryption software which can encrypt files before they're uploaded to the cloud, and it's open source.

https://github.com/cryptomator/cryptomator


How do you know Sookasa isn’t doing the same thing? It doesn’t appear to be open source.


A free and open source alternative to Sookasa is Cryptomator.

https://cryptomator.org

https://github.com/cryptomator


Open source software like rclone can do something similar.


You can irreversibly transform the image data of photos added to the Photos app as well.


Rclone and gocryptfs are other options for cloud encryption.


Huh? You can choose not to use any cloud service if you want. If you buy an iPhone, you can't... choose not to use the iPhone you just bought.

This isn't a step towards privacy and it sounds (to me at least) like quite the logical contortion to argue that it is.


Not for people who didn't trust the cloud and didn't put their lives in there in the first place.


> iCloud [...] will have to push the same hashes to everyone

Why?

What's to prevent them from pushing specific hashes to a specific phone?


There are three primary sources stating that the database will be embedded in iOS. Remote hash updates would introduce privacy risk.


This entire scheme introduces privacy risk.

There is no technical reason why they could not push individual hashes to individual phones, only policy ones.


In an alternative universe where they built a remotely updated database, yes. But that’s not this universe.

I’m going to go further and say that people are doing a very bad job articulating why the incremental privacy risk of the scheme is significant, over the always-existent privacy risk of a proprietary vendor updating software they entirely control to scan data uploaded to a cloud service which guarantees no protection from vendor access. A later software update to include more hashes or whatever could always regress privacy.


One is a proprietary third-party, optional service acting against you, the other is your own device acting against you. That's the difference and it should be pretty easy to understand.


So if you could delete the Photos app, you’d believe there is no longer any fundamental privacy risk?


They also could embed the whole database into iOS and activate certain hashes only for certain iCloud accounts. No one would know because the database is encrypted multiple times.


They could do a lot of things. They’ve told us what they do. It’s not this. The FAQ released yesterday specifically says that users cannot be targeted.


They've told us what they do today.


> The same set of hashes is stored in the operating system of every iPhone and iPad user, so targeted attacks against only specific individuals are not possible under our design.

The problem with this sentence is that Apple assumes that they can't target specific individuals because every iPhone and iPad user has the exact same database in their iOS device.

But what if they have a hash in the database where they know that only one person has this exact image on their device? This way you could single out one individual with the same database.
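The matching in such systems is perceptual rather than cryptographic: visually similar images are supposed to produce nearby hashes, which is exactly what makes a "unique image" a usable tracking beacon. A toy average-hash sketch of the general flavor (Apple's NeuralHash is far more sophisticated; this is only illustrative):

```python
def average_hash(pixels):
    """pixels: an 8x8 grid of grayscale values (0-255). Returns a 64-bit hash."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (p > avg)  # one bit per pixel: above/below average
    return bits

def hamming(a, b):
    """Number of differing bits; small distance means 'perceptually similar'."""
    return bin(a ^ b).count("1")
```

A slightly edited copy of an image lands within a small Hamming distance of the original, so a hash of a photo known to exist on only one person's device would, by design, match that device and essentially no others.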


This is a better way to frame the discussion, IMHO.

The conversation is around Apple, which is critical, but we need to compare them to the rest of the industry, and discuss the government/citizen tradeoffs in that light. I.e., holistically, not per company.


True. The constant migration of everything to the cloud has lots of consequences just like this. If false positives are as common as the FotoForensics guy states, this could also become a new weapon for corporate warfare. A small competitor in a market Apple wants to control happens to have assets stored in Apple's cloud? Guess whose offices are getting raided today?


That is indeed what the article does, does it not? It makes the case that storing online with a decryption key that can be used with a search warrant is probably the right trade off, and the way other companies sometimes implement this.

Then you get to choose whether to push your data to the third party or not, given the risks involved.

The author even notes they were opposed to Facebook's end-to-end encryption previously, I assume because as for defaults it sets a precedent and makes it unsearchable, but I'm not sure the specific tradeoffs they weigh and points they consider since it's behind a paywall (and I'm not sure I agree).

> discuss the government/citizen tradeoffs in that light. I.e., holistically, not per company.

Right now the differences are essentially per-company, since we've let our experiences be controlled almost entirely by a small subset of companies. To abstract the implementation from the primary implementer is to obfuscate some of the cause and effect here. We should discuss this as a societal tradeoff, as you note, but we should not ignore that this was spurred by a company running out in front of what was required of it and implementing this system which many see as at the expense of their users privacy.


> Then you get to choose whether to push your data to the third party or not, given the risks involved.

You get to choose whether to push your data to Apple and trigger the scanning with their solution too.

The key escrow option is strictly worse.


> You get to choose whether to push your data to Apple and trigger the scanning with their solution too.

That's purely an implementation detail, and subject to change at any time. That's why people are upset.

One solution is limited to you actually pushing your data off your private device, the other is limited to a list of items you say you want to push off your device, but actually happens on your device.

That's the difference between someone searching a large warehouse where you and many others have stored belongings, and someone coming into your house and searching through your items freely, as long as they're on the list.

Beyond the difference in privacy that such a search fundamentally entails, people are very worried that the list itself is limited only by policy; in truth, the search of items on that list has full access to your private details but for the grace of those performing the search and controlling the list.

The key escrow option is strictly worse than the current implementation, but it is also naturally constrained and the exposure is entirely user controlled. If you do not put data online in that situation, there is no way for them to process it without first exfiltrating it, which we already have laws and systems in place to hamper.


> That's purely an implementation detail, and subject to change at any time.

That’s an evergreen complaint. If they want to introduce a general purpose scanning mechanism they can do so at any time. This is not that.

> That's why people are upset.

I don’t think so. I think they are upset because they don’t like the fact that Apple has any power over them, and this reminds them of that even though it is not in fact an abuse.

I actually agree with this, but I don’t think that claiming Apple’s implementation to be something it is not is helpful.

The key escrow solution is strictly worse in any future. If key escrow becomes established as a norm between cloud providers and law enforcement, then no free alternative will ever be possible.


> The key escrow solution is strictly worse in any future. If key escrow becomes established as a norm between cloud providers and law enforcement, then no free alternative will ever be possible.

I don't think that's true. Systems and programs to encrypt locally before pushing up to a shared platform are possible and currently in use. Those that want that additional security have recourse to get it. Alternatively, people could run their own cloud sync instances (also already available in some forms). This puts the control in the user's hands (don't sync to the cloud, pre-encrypt to a shared cloud, or run a personal sync service), while also setting a clear precedent of what is acceptable on users' personal devices.

The problem here is that this implementation really has nothing to do with cloud sync. Apple has currently linked it to whether you're pushing that data to iCloud, but that's an arbitrary distinction. In the world without iCloud, they could make it scan any media that was sent across the network. The iCloud distinction is entirely arbitrary, which is why people are not satisfied with it. There is nothing beyond promises to keep it that way, and promises are less binding than laws and national security letters.


> The problem here is that this implementation really has nothing to do with cloud sync.

It is built into the photo uploading mechanism and only scans photos in a very narrow way that can’t be twisted into generic scanning.

> they could make it scan any media that was sent across the network.

Definitely false. It cannot match anything except photos in this very narrow way.

> There is nothing beyond promises to keep it that way,

Not true. The mechanism cannot be used as a general purpose media scanner.

What is true is that Apple could add a general purpose scanner in future, but it wouldn’t leverage this mechanism, and their potential to add arbitrary spyware has always been there and is not changed by this.


I don’t keep child porn on my phone (or anywhere else). I also don’t keep smallpox in my freezer or nuclear weapons in my basement. Noninvasive scans for these things probably make the world a better place in some ways.

The part I object to is having my life disrupted by having my personal accounts unilaterally deleted by a fucking buggy bot. Our phones are too important to us to just have them shut off without notice. That’s bullshit.


>Noninvasive scans

Scans on a local device feel quite invasive to me. Phones have become an extension of our person, let's not kid ourselves.


Is a picture of a naked kid going to trigger this algorithm?

In a few places taking pictures of your child naked while swimming is considered child pornography. Other places having children run around naked on the beach is the norm. (I have a few pictures of me, at 3y/o, naked on the beach)

In the world of remote medicine, can a parent take pictures of their naked child to send to a doctor?

How are they going to fit their cultural specifics to the world?

Knowing how Facebook dealt with it - they are going to apply the strictest rules, so no naked child photos are allowed on your iPhone anymore. For no reason.


Apple has said multiple times that accounts are not shut off with any automated system, that portion of the process is handled by humans.


Who do I call (and how do I call them) if the person agrees with the automated false positive and disables my account/phone and reports me to the police? Just being accused of having CSAM is ruinous. What's my recourse if there's an innocent bug in their system that reports me to the police?

I was at Apple for more than a decade and a half. I saw an untold number of Michael Bolton bugs [0]. It's one thing when a bug causes a dropped frame in a video or a menu takes a tenth of a second too long to appear. It's another when it ruins your life and bankrupts you defending yourself.

[0] https://www.youtube.com/watch?v=NnPBSy5FsOc&t=02m24s


Good! Because I trust those even less. If possible, once banned, no one to answer on any of the communication channels, Google style.


So... They will have humans looking at pictures to determine if it's child pornography? Well... That's a dream job for a pedo


It's hard to argue they did this for a vision of privacy.

They did this because they want YOUR phone to be a liability not THEIR servers.


I've heard it argued that this allows Apple to move iCloud toward end-to-end encryption (which would be a good thing for privacy, right?). It seems like the current US government position is "we'll let tech giants use end-to-end encryption for user data as long as they put measures in place to catch CSAM."

by "US government position" I'm including negotiations and proposed/threatened legislation, not just the current laws on the books.


If you really believe that, then I have a bridge to sell you. The exact same safety argument can and will be applied to ALL data whether on your device or Apple's servers.

Why limit to iMessage and photos? Why not Signal on your device?

Why not your backups on Apple servers? Oh wait, already happens.


I think Apple would rather put this CSAM-scanning system in place (which allows them to implement end-to-end encryption for iCloud in the future) than deal with the EARN-IT Act or similar becoming law, which could effectively make all e2e-encrypted services illegal (require a government backdoor).

>The bill also crafts two addition changes to Section 230(c)(2)'s liability, allows any state to bring a lawsuit to service providers if they fail to deal with child sexual abuse material on their service, or if they allow end-to-end encryption on their service and do not provide means to enforcement officials to decrypt the material.

https://en.wikipedia.org/wiki/EARN_IT_Act_of_2020


If that were the case, wouldn't this CSAM scanning system be insufficient to meet those EARN-IT requirements?

You have other Apple services and third-party apps that host material on Apple's servers.

For example, if a user turns on iCloud backups, then every third-party app's Documents directory is backed up to iCloud. Would it be a violation to not CSAM scan that content? What if the contents are encrypted? Would they be required to be decrypted so that they are CSAM-scanned?

iCloud drive is another Apple service that backs up to Apple's servers. Wouldn't its absence from the list be a violation? What if a user hosts encrypted files on iCloud drive? Would the user be required to decrypt them so that Apple can scan them?

It seems that the real intention is to eliminate end-to-end encryption.


>wouldn't this CSAM scanning system be insufficient to meet those EARN-IT requirements?

Yes. My point is that there's an ongoing dance between the tech companies and the government, and through their negotiations and government connections Apple probably views this CSAM-scanning move as making an EARN-IT-like law less likely to be passed. It's overall the less-invasive option. The US federal government is putting pressure on tech companies not to host CSAM, and if tech giants didn't agree to do stuff like this the government could respond by passing stricter laws to effectively make unbackdoored e2e encryption illegal.

Apple has a lot of influence but at the end of the day they're a US-based company that has to follow US laws. Voluntarily implementing CSAM-scanning is in their own interest as a "pro-privacy" company if it prevents more draconian anti-encryption laws from being passed that could effectively outlaw e2e encryption.

I don't view this as Apple singlehandedly trying to eliminate end-to-end encryption; that seems like a pretty radical view of the situation to me but of course you're free to hold that opinion.


I don’t hold the view that Apple is trying to eliminate end to end encryption. I view this as a push by governments to do so and the increasing willingness of the tech industry to work with them.

This is more like Apple giving way gradually and the government happy since in the long run they get everything they want.

Examples: we don’t unlock phones for the government… but we give them all the data if you back up your phone… but you have so much privacy!

We don’t read your messages, oh wait now we do, but only for child abuse, oh wait, we don’t control what it looks for but let’s not talk about that because it hurts our marketing


Hard to believe that a law limiting that would stand up in the supreme court, and Apple has previously indicated they would be willing to pay whatever legal costs are necessary to defend themselves from that sort of attack.

It is weird that Apple would do this in the first place though, it certainly doesn't make me want to use their products.


It moves iCloud to end-to-end encryption by compromising the ends. Not really a reasonable outcome.


> move iCloud toward end-to-end encryption (which would be a good thing for privacy, right?).

"End-to-end" encryption is nothing to strive for if you're destroying the ends. With this change, the "ends" become the users themselves. By embedding an agent acting on behalf of Apple/government, the device in front of end users is no longer a tool but rather an adversary. This is computational disenfranchisement.


Hey if the ends justify the means.

I’ll show myself out.


Feels like nobody here has bothered to read the actual technical specifications.[1]

This already adds a new level of encryption to iCloud-stored images. They have essentially created an E2EE system with a specific access mechanism (or backdoor), while preventing the use of that backdoor for purposes other than CSAM (so nobody can ask them to decrypt something else at random). They can only decrypt images when a user's account reaches the threshold of CSAM hash matches:

> Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images.

While this is not a perfect end-to-end encryption solution, it is better than server-side encryption alone. Now there are two levels of encryption. If someone breaches Apple's servers and also has access to the server-side private keys, they still need matching NeuralHash values to decrypt the images.

[1]:https://www.apple.com/child-safety/pdf/Expanded_Protections_...
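The "threshold" mechanism the paper describes is, at its core, threshold secret sharing: the server receives one share of a decryption key per matching image, and below the threshold the key is mathematically unrecoverable. A toy Shamir-style sketch (illustrative only; all names are mine, and Apple's actual scheme layers this with private set intersection and per-image keys):

```python
# Minimal Shamir secret sharing over a prime field: any t shares recover
# the secret, and fewer than t reveal nothing about it.
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is over this finite field

def make_shares(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the polynomial's constant term."""
    secret = 0
    for xj, yj in shares:
        num = den = 1
        for xm, _ in shares:
            if xm != xj:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

key = 0xC0FFEE                          # stand-in for a per-account decryption key
shares = make_shares(key, t=5, n=30)    # imagine one share accompanies each upload
assert reconstruct(shares[:5]) == key   # threshold reached: key recoverable
assert reconstruct(shares[3:8]) == key  # any 5 distinct shares work
```

With 4 or fewer shares, every candidate key is equally consistent with what the server holds, which is why "below threshold, Apple learns nothing" is a cryptographic property rather than a policy promise.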


I think the argument here is that (1) the model is going to have false positives (e.g. revealing pictures of you and your spouse, your beach photos, etc.) that will permit access for non-CSAM (or at the very least, mark your account as suspicious in the eyes of authorities), and (2) the model itself can be updated/replaced for any reason and potentially at any government's demand, so the limits on scope are effectively moot


For argument (1), they only look for matches against the existing database of hashes that NCMEC provides. They are not developing a general AI to identify new pictures; they are only trying to stop redistribution of known files. Because of that, their claim of a 1-in-1-trillion false-positive rate might actually be close to correct, since it is easily validated during development. Also, there is human verification before law enforcement is involved.
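The distinction between database matching and general classification can be made concrete. A hypothetical sketch (names and hash values are illustrative, not Apple's API): matching compares a perceptual hash against a fixed list of known hashes, so an image that isn't a near-duplicate of a database entry simply cannot match.

```python
# Toy known-hash matching: an image only flags if its perceptual hash is
# (near-)identical to an entry in a fixed database. Nothing here can
# decide that a *new* image is abusive.
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

KNOWN_HASHES = {0b1011_0110, 0b0100_1101}  # toy stand-ins for NCMEC entries

def is_known_match(photo_hash: int, max_dist: int = 1) -> bool:
    """True only if the hash is within max_dist bits of a database entry."""
    return any(hamming(photo_hash, h) <= max_dist for h in KNOWN_HASHES)

assert is_known_match(0b1011_0110)      # exact match
assert is_known_match(0b1011_0111)      # one bit off still matches
assert not is_known_match(0b0000_0000)  # unrelated image: no match
```

Real perceptual hashes are much longer and the comparison happens under encryption, but the shape of the check is the same: a lookup, not a judgment.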

For argument (2), this might be valid, but yet again, all we can do is trust Apple, as we do all the time by using their closed-source system. The model can be changed, but it is still a better option than storing everything unencrypted (in case you mean forging hashes to decrypt content).

On the surveillance front, it is not a strong argument because, again, the system is closed and we know only what they tell us. Creating such a model is trivial, and nothing stops a government from demanding it if Apple were willing to allow that. The system would be analogous to the antivirus engines that have existed since the 1980s.

This is such a PR failure for Apple, because all their incoming features improve privacy in the CSAM area; everything negative comes from speculation about things that were equally possible already.


Not to mention, this neural hashing can't be a cheap operation. Better to use the processors paid for by your customers than ones you have to pay for in the cloud!


I honestly wonder if the privacy/intrusion backlash wasn't even considered because they were focused on this from the perspective of "isn't this great that these phones now have enough power to do this computation on-device", the same way they were proud to be able to announce that Siri processing would begin to happen on-device.


I thought about that as well. They rushed to publish some unpolished PR pieces and FAQs right after the announcement and at very unusual times (3am in Cupertino) trying to damage control so my guess would be as well that they didn't even consider the backlash. Maybe they were really proud about the technology they implemented. It's a little bit sad if you think about it.


If you think about it, this is really a great innovation that has been overshadowed by bad PR. There was a leak about their system before release (which got a lot of publicity and was heavily misleading), so that might be a reason for their hurry.

I have been browsing their technical papers for a couple of days now, and they have managed to devise a system which has "kind of" end-to-end encryption, but enables access to specific (and only this kind of) content. They have managed to make an encrypted system where they can lock themselves out, and can't answer, for example, an FBI demand to show all images of a specific user.

In the context of CSAM, this is an improvement for privacy, but understanding why requires a good grasp of several different technologies.


I would like to see tests wrt this before I assume it--Apple puts a pretty hefty premium on battery life for customer devices, and while that would probably not win out when placed against this kind of liability minimization, they've got a lot of sharp folks there who crunch this stuff in their sleep.


For years now, I have been noticing increased background activity on my phone after it reaches 100% during charging.

Pretty sure they can infer your normal sleep schedule and, after your phone has fully charged (1h to 3h after you fall asleep), they can freely use it as a P2P computing device.

Nothing new really. And they can still keep their promise of a good battery life -- which is quite justified by the way, I am using iPhones for 4 years now and they've consistently won over any of the 13 Android devices that I've used in the years before (and sometimes during).

There are ways to achieve both -- good observable battery life plus ability to use your phone for P2P computation -- and it's IMO obvious that Apple has succeeded in that for several years now.


> which is quite justified by the way, I am using iPhones for 4 years now and they've consistently won over any of the 13 Android devices that I've used in the years before (and sometimes during).

I find this a bit shocking. Were all of these Android low battery devices?

I don't have a good point of comparison, as I've only used Android on phones, but there are a lot of Android devices with really large batteries and very long battery life. It'd be pretty amazing if iOS were beating them in the real world.


There's no magic in iPhone's batteries, they are basically identical to all others.

The secret sauce ingredient is the idle battery life. The Android devices I used -- all 13 of them, without exception -- routinely lost anywhere from 15% to 40% battery when I was out and about without me picking them up even once. We're talking anywhere from 2 to 5 hours with them being in my pocket or a backpack, in the middle of a big city, with excellent 4G coverage, so it should not be a constant radio antenna activity (although if their antennas were of lower quality that might just be it).

I installed all sorts of root apps to try and pinpoint the culprit and inevitably hit a brick wall when all the apps can do is point at a system service without any way to drill down further.

In contrast, the iPhones that I've used -- 6S Plus, 8 Plus and 12 Pro Max -- are all extremely frugal on using battery when the device is idle. I've grown used to picking the phone from my bed at 100% battery in the morning on days where I had to do a lot of stuff outside, be out and about for 4-5 hours and only pick up the phone once or twice, and then get back home and find the phone at 98% battery.


As someone with an iPhone 12 Pro and a Galaxy S21 Ultra, swapping my SIM between each every ~two months; this situation has absolutely changed sometime in the past couple years.

Android's idle power draw, at least on the S21 Ultra, is fantastic. I'll sometimes leave this phone on, with no SIM in it but connected to WiFi, sitting on my desk, using it every once in a while for Tik Tok or whatever, and it'll last a week or more. The iPhone is pretty similar.

But display-on time, apples to apples: the iPhone has fallen behind. With the 12 Pro, most days I'll end at 5-10%. At least once every couple weeks, the day will end with the iPhone having already shut down. In comparison, on the S21 Ultra: I give it ~20-40 minutes of charge every day while I'm working, just unplug my USB-C laptop and plug it into the phone, and that's it. I don't charge it at night. Its battery life is ungodly; by the time I go to bed, it's at 50-60%. Wake up, plug it in while I'm showering and making coffee, good to go.


I think you are aware that 4G radio can draw a lot of energy so that might be an unfair comparison with the iPhone.

A more objective test would be for both to have SIM, or both to not have a SIM. Then again, iPhones have objectively weaker batteries compared to a lot of Androids (as in, less mAh).

I am not claiming that there are no better Android devices nowadays btw; not at all. I'd be happy if the OEMs finally caught up with this nasty problem! I am only saying that back in 2017 -- when I finally lost patience with Android -- things were looking pretty bad for it and for me that was the tipping point that made go for iPhones.


Both the iPhone 12 Pro and S21 Ultra have 5G radios (though their usage here is probably 80% on wifi, 20% on non-UWB 5G most days).

My point is comparing their battery lives when they have the SIM in them. So, not which one lasted the longest yesterday, one with the SIM and one without, but rather comparing across time, how did the S21 (my current phone) fair today, versus how did the iPhone generally fair a few weeks ago when the SIM was in it. Not exactly scientific; just what I've observed and felt.

And the unscientific conclusion I've drawn is that the idle time is pretty similar between the iPhone 12 Pro and S21 Ultra, but the "active" radios-on SIM-in screen-being-used-all-day time definitely favors the S21 Ultra.

> I am only saying that back in 2017 -- when I finally lost patience with Android -- things were looking pretty bad for it and for me that was the tipping point that made go for iPhones.

You're 100% right; Android's track record has been pretty darn bad when it comes to standby time. But I think it's actually gotten a lot better. There are some reports that Samsung's Android flavors, specifically, are very aggressive when it comes to background app killing; that may be it (and, frankly, I don't notice any negative side effects from it; if it's happening, it's transparent). Or maybe it was something more general in a later version of Android. Or maybe just huge batteries. But something has changed.


Thank you, those are very valuable anecdotes to keep in mind for the future!


> I am only saying that back in 2017 -- when I finally lost patience with Android -- things were looking pretty bad for it and for me that was the tipping point that made go for iPhones.

I'll respond to your earlier post too, but this is actually the answer that I was looking for. The Android world was a very different place back in 2017. The best battery life that I got out of an Android back then was an LG G2. I think that it was competitive with the iPhone, but many other Android devices of the era were not.


> I've grown used to picking the phone from my bed at 100% battery in the morning on days where I had to do a lot of stuff outside, be out and about for 4-5 hours and only pick up the phone once or twice, and then get back home and find the phone at 98% battery.

Is this true when the device is a year or two old? I've always found iPhone battery life impressive... but only until its about a year old or so. My iPhone 11 plus now requires nightly charging, or I will inevitably hit a Low Battery situation


I held on to my 8 Plus for little over two years and it remained excellent most of the time -- although I had to replace the battery before reselling it because it was at 83% capacity.

I also always charge my phone overnight even if it's at 70% (which it routinely is) because I've been bitten by going out and about with 60% charge and having to spend 12h outside and my phone died. So I just started conservatively charging it each night no matter what I expected for the next day.

But yes, for most people an iPhone can easily last two full days and still be at 15-20% when you finally plug it in on the second day's end.


iPhone batteries are only rated for 500 cycles of 100% -> 0%. Or, to look at it differently: you can charge 1 percent of your battery 50,000 times before Apple no longer guarantees under warranty that your battery will hold 80% of its original capacity.
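A quick back-of-envelope on the cycle arithmetic above (a minimal sketch; it assumes, per Apple's stated convention, that a "cycle" is any combination of charging summing to 100% of capacity, so partial top-ups accumulate fractionally):

```python
# Sanity-check the 500-cycle claim and the one-year wear estimate.
rated_cycles = 500                        # cycles until 80% capacity is no longer guaranteed
one_percent_charges = rated_cycles * 100  # each 1% top-up is 1/100 of a cycle
assert one_percent_charges == 50_000      # the "50,000 times" figure

# At one full 0 -> 100% charge per day:
fraction_of_rating_used = 365 / rated_cycles
assert round(fraction_of_rating_used, 2) == 0.73  # "over 50% of the way through" after a year
```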

Caveat: every percent is not equivalent. Charging above 90% is harder on your battery than charging from 40-50%. Charging and even just using your battery while it is very hot (90 degrees or above) or very cold (below freezing) is also very hard on it. Apple defines operating temperatures as:

> Use iOS devices where the ambient temperature is between 0º and 35º C (32º to 95º F)

and claims that using the phone outside those temperature ranges can permanently shorten battery life. Bit silly when you live someplace like I do where 60% or more of days are outside of that operating range, but I guess Apple is really just designing for Cupertino temperatures.

Basically, after just 365 days of phone ownership, you're probably already over 50% of your way through your battery rating, and capacity might be reduced by over 10% (up to 19% is still OK under warranty). I've had a 2016 iPhone SE for over 5 years now, and I've replaced the battery twice in that time. It seems to be essentially required once every two years.


Yes, I agree with that. The two iPhones I had before the 12 Pro Max -- the 6s Plus and 8 Plus -- seemed to need a battery replacement at about the 20-month mark because the capacity was at ~85% by that point. And you begin to notice. So yes, doing a battery replacement anywhere between the 1.5- and 2-year mark for iPhones is quite expected IMO.


I've used my (original) SE for over 5 years, running it down to the 20% region (often way lower) and back up overnight at least 2000 times. Battery still seems reasonable - health on 79%.


May I ask where in the world you live? I wonder if part of the cause of your incredible battery longevity is a climate that stays safely within Apple's design specs of low humidity, moderate temperature. I've lived in areas that get very hot, very cold, and very humid regularly in the time I've owned my SE, which could contribute to battery degradation.

And above it all, batteries are very much a lottery. Sometimes you get a really great one, sometimes your battery falls apart within a year. I guess you got lucky.


Before I had an 11 I had an SE, and by year 2 or 3 the battery was so weak that I had to carry around an external charging pack for it. This got really awkward (and a little scary) for traveling and was part of the reason I upgraded to a newer model.

Right before I sold that SE to my friend I checked the battery health, and it was in the high 80s. Yet I could not get a full day's use on a single charge. This led me to conclude that that Battery Health measurement was bullshit.


I haven't looked at it before.

Apparently if I enable analytics I will get a report saying how many cycles my battery has lasted. Unless I use my phone for several hours of playing Spotify and YouTube, I don't notice the battery dying during the day.


> The secret sauce ingredient is the idle battery life. The Android devices I used -- all 13 of them, without exception -- routinely lost anywhere from 15% to 40% battery when I was out and about without me picking them up even once.

I think Android and newer chips have mostly fixed this. The Pixel 5 on my desk right now has gone from 100% to 81% in the last 7.5 hours with 1.25 hours of screen on time during that period. Last week, I had a session of 18 hours with screen on of 4.5 hours that went from 100% to 34%. IMO, idle draw is likely still a little higher, but then again I run a lot of little background apps (eg, tasker, the pebble app, wear os connectivity and others).

Most of my previous Android devices wouldn't have fared nearly so well, even back when I cared enough to spend time trying to find wakelock culprits. Thankfully, I haven't needed to do that in years either. :)


It's really good to hear that wakelock hunting is no longer an expected practice. ^_^

I'm super interested in getting a Xiaomi (and modifying it enough so I can be reasonably sure that it doesn't stream all noise around it 24/7) at some point so hearing about how Android got better in this regard is exciting.


Yeah, this. I’ll leave my iPad on my bed for 10+ hours and when I come back it’s lost maybe a percent or two of battery life. My Android tablet had to be on the charger constantly. My phone’s battery lasts about a day with all my apps on it, whereas with my old Android I had to charge midday.


The ha-ha-only-serious joke when I was at a place developing for iOS and Android, so had tons of test tablets for both, was that the Android tablets would be completely dead after a long weekend (or sometimes just a normal weekend...) in a drawer, and the iPads would still have enough charge to do something useful, and still wake up effectively instantly, if you forgot about them for a month.

As for the phones, I find I get 2-day charge out of a new iOS device. After 2-3 years it's about as bad as a new Android. 10% low-battery warning, and put in low-power mode right at bed time? It'll still have enough juice for the morning alarm and some morning HN reading. I mean, I try not to rely on that, but it does work. Difference has got to mostly be the software & firmware.


> The Android devices I used -- all 13 of them, without exception -- routinely lost anywhere from 15% to 40% battery when I was out and about without me picking them up even once.

That hasn't been my experience.

My current (very crappy) Mi A1 still lasts 3-4 days with moderate use.


I agree Xiaomi's devices are better. I used one for a few months before I switched to Apple and liked them best of all Androids before.


> In contrast, the iPhones that I've used -- 6S Plus, 8 Plus and 12 Pro Max -- are all extremely frugal on using battery when the device is idle.

As a iPhone user (that uses an android device for work) myself, I can't confirm this at all.


I'm sorry, this is ridiculous. I don't even lose 15% of my battery life in 2 hours on my galaxy S3 without using it, and it is nearly a decade old at this point.


There are likely other factors that I can't account for because I was only using those phones in a single city -- with rare exceptions going to a rural area for 1-2 days.

Still, your anecdotal evidence does not nullify mine.


No, but from your multiple posts on this article, I don't know that I can take your android posts in good faith.


You might be biased? I have no axe to grind with any company. I am looking for what serves me best.

We don't have a perfect option, sadly. I evaluated Android quite fairly, for about 5 years, and found it lacking. I find the iPhone lacking as well, but in departments that I care less about.

Do with my posts what you will, I'll only say once that I have no bias either way. IMO both sides of the duopoly suck... but I do need a smartphone. I picked what served me better at the time.


I don't think its fair to say your are speaking in bad faith either. I had similar experiences with some older devices (Nexus One, Galaxy S2, etc). There was a long period of time when Android devices were really bad on battery, even with relatively large (for the time) batteries on board.


I'm currently using an S3, have for nearly a decade. It has never dropped 15% with no use in 2 hours.


Does the expression "anecdotal evidence" ring a bell to you?

I am glad that you got lucky. I had both a Galaxy S4 and Note 4 and both were completely awful on idle battery life; both barely lasted a day in the office with 30 minute commutes in both directions, and no more than 2h of active screen time, if even that. And I had like Facebook, Twitter and Tumblr installed at the time. And Gmail. That was literally all I had installed on top of the stock ROM.

I don't get why are you coming here insisting that your experience somehow nullifies mine. That is what I'd call not arguing in good faith. I didn't do anything more except share my bad experience with Android (which is now severely outdated because it ended in 2017) and never claimed anything more than that.

What's your goal here?


I'm not even talking with you any more.

> Does the expression "anecdotal evidence" ring a bell to you?

Read the guidelines.

> Be kind. Don't be snarky. Have curious conversation; don't cross-examine. Please don't fulminate. Please don't sneer, including at the rest of the community.


I've got an older Moto X4 and it consumes 2% per hour when in my backpack. I guess the only remarkable thing about the configuration is no social media apps.


Same answer for me. No social media apps.


    Pretty sure they can infer your normal sleep schedule and, after your phone has fully charged (1h to 3h after you fall asleep) they can freely use it as a P2P computing device.
They already do; the feature is called “Optimized Battery Charging”. It charges the battery to 80% and only does the full charge a bit before you usually start using the phone. You’ll find the setting in the battery health section.


Optimized Battery Charging isn’t the same as being a node in a bot net.


> they can freely use it as a P2P computing device

Do you have any evidence of this, or are you just making up something that sounds good to you?

More likely it's doing the background on-device work that Apple has actively advertised for a long time, like analyzing photos to attach metadata to them. (There's a lot more to this than is obvious at first glance - I can search "penguin" or "sunset" in my Photos.app library and get a set of matching pictures back, for example.)


You reckon I would know of a way to gather exact evidence unless I work for Apple? (In which case I'd likely be convicted and thrown in jail as well.)

Obviously I can't know for a fact. I also thought they likely analyze photos and that's my chief suspect even now. But critical thinking demands to keep an open mind -- and that's how I suspect they might use fully-charged iDevices (that are expected to sit on the charger for several more hours) for unsolicited and non-advertised P2P computations.

I don't claim it, I merely suspect it.


> You reckon I would know of a way to gather an exact evidence unless I work for Apple?

Monitor your network traffic.

Lots of people do this already for various reasons and haven't noticed anything like this going on.


Yeah, sure, I can brute force encrypted traffic by eyeballing it in my router's dashboard UI. Come on, dude.

You seem to have an axe to grind here and I refuse to participate. I even objectively admitted that I can't know and that I only suspect yet you (a) offer a non-solution and (b) likely downvote a comment because you don't like it.

Sheesh. Fine. You do you.


I have owned nearly every premium phone on the market and none of them come close to the Sony Xperia for battery life. I try new iPhones every other model or so...battery life has never been outstanding.


Do you disable Background App Refresh? First thing that I do on every iDevice. That feature being on is hugely overrated, you absolutely don't need it because you enter an app and manually refresh it; what's the big deal with that, are people that lazy?... Gods!

(The only useful application of that feature is email apps, I suppose; you do want to have your messages downloaded when you unlock your phone because Apple Mail takes ages to sync with Gmail... and it only gets worse with time.)

With that option set to "on", the idle battery life might quickly take a nosedive if you have stuff like Facebook / TikTok / Snapchat et al. installed.


To make it even worse they extended the functionality to Mac OSX. WTF???


Do you have references for this? I must have missed this part.


https://www.apple.com/child-safety/, at the bottom lists macOS as target platform.


It's unclear exactly what this means to be honest. Which is not less scary but more.

This sentence:

> Next, iOS and iPadOS will use new applications of cryptography to help limit the spread of CSAM online

seems to suggest that the hash-matching happens only on iOS and iPadOS.

The rest of the text OTOH suggests many things happen across all types of devices and OSs.

Regardless, this seems very bad.


My guess is that they're going to put it into the macOS Photos app. Those images are already scanned for people, animals and objects so it should be easy for them to implement the child protection database there.


Won't implementing this on macOS make it too easy to reverse engineer? Think the hashes are supposed to be secret.

ETA: Apple quote: "unreadable set of hashes that is securely stored on users’ devices". Unreadable, unleakable, no doubt uninterpretable... unless someone reverses them to get a blurry 'visual proxy' collection.


How is this worse? Any Apple device you thought you owned and controlled was never under your control. You were wrong.

Perhaps your understanding of the situation has changed so that you better understand how bad it is. But the situation isn't worse.


That’s an interesting point. Do you know if this is the case? That is, by putting this in the phone, Apple’s liability is lessened?


Apple's just really weird when it comes to the control they exert over devices.

They do it with 'privacy' in mind, and what they come up with is usually better than the worst case, but it can still be pretty iffy.

e.g. a few months back, when their notarization/entitlement verification system was being discussed.

It's all just their vision of computing (which has some merits): hyper-controlled, locked down, and "safe". It's not going to change, and if you're not comfortable with it you really shouldn't be using their products.


Hate Apple's philosophy personally but I recommend my mom a Mac every single time.

Not everyone needs freedom, she's not downloading fitgirl repacks.


This is a weird mindset. Either you care about privacy or you don’t: what a person does on their device shouldn’t matter. If you only care about privacy when doing illegal things you don’t actually care about privacy you’re just a criminal not wanting to be caught.


>This invasive capability on the device level...

As the article points out, the capability to scan things on devices is already deployed on huge swathes of devices in the form of virus scanners and content search indexing. All the photos on iPhones and many Android devices get scanned for text recognition now anyway. I'm sorry, but "OMG, scanning things" is a genie that got out of the bottle decades ago. So no, scanning content is not a dangerous new capability, it's an extension of existing capabilities to a new use case.

Now arguably this is a further slip down that slope of on-device scanning, but it is not some new watershed moment where scanning is happening for the first time. Also we can see that the trend is not to repurpose existing scanning systems for new purposes. Virus scanning, search indexing, text recognition and now this are all separate mechanisms implemented and operating independently in task specific ways. The prediction that these mechanisms will be subverted for other purposes has not panned out.

The real issue is what is being scanned and why, and the implementation of this system. That's the topic we need to focus on.


> As the article points out, the capability to scan things on devices is already deployed on huge swathes of devices in the form of virus scanners and content search indexing.

That's a ridiculous counterpoint. 1) No iPhone has AV. 2) AV and content indexing don't secretly contact the feds.

adding a new "background service that sometimes contacts the US government" is a big privacy regression.


>cloud services, meanwhile, are the property of their owners as well, with all of the expectations of societal responsibility and law-abiding which that entails.

This is already a shifted/normalized perspective.

If I rent an apartment or a house, it comes with an expectation of privacy. I don't know where to draw the line exactly, but a dumb storage-only cloud service should carry the same expectations. Things get more complex if a service is actually doing something with the data.

But the broader point is that with cloud services, we've already given up some established legal expectations of privacy without much awareness or realization, let alone resistance.


> It’s truly disappointing that Apple got so hung up on its particular vision of privacy that it ended up betraying the fulcrum of user control: being able to trust that your device is truly yours.

Disappointing, sure, but not surprising.

Since the introduction of the iPhone, Apple has made it clear that one of their core beliefs is that they alone know better than their users. They have repeatedly demonstrated that they are happy to remove user control wherever possible. Their decision here fits that pattern perfectly.


The one thing that occurred to me is that this almost seems like a CYA, Section 230 protection in disguise. There have been more discussions about Big Tech and 230, and this is one way to say "Look, we are compliant on our platform. Don't remove our protections or break us up, we are your friend!" It also shouldn't be too surprising given how Apple has behaved in China. They will only push back against the government up until the point it starts to affect profits.

I'm also curious and I don't mean this in a judging way, but are all the people who are planning to sell their iDevices also going to divest all their retirement holdings from Apple? Stock price is the thing that actually applies pressure. Everyone's comp is based on it. What would happen if enough people divested from these rent-seeking companies? We can't expect the government (though they should) or companies to behave in a way that is pro-consumer, but if we can align incentives we have a chance.


> Yes, you can turn off iCloud Photos to disable Apple’s scanning

To me, it is such a weird implementation. The feature is about scanning on-device data, not data in iCloud, but for some reason disabling iCloud is the way to opt out? Did I miss any technical details?

Apple has always been pretty OK with handing over iCloud data to authorities. Not only have they already stored Chinese users' iCloud data with a state-controlled entity, this was their whole argument in the San Bernardino case -- on-device data is the user's sacred privacy, but if the shooter's phone uploaded data to iCloud, they were willing to send a copy to the FBI in a heartbeat.

This makes me wonder, is iCloud the underlying technical boundary for privacy?


The feature exists because it’s illegal for Apple to host/have/possess CSAM (in iCloud).

But Apple doesn’t want to know what they host for you on iCloud.

So they need to be sure that before something is uploaded to iCloud, it’s not CSAM.

So if you don’t intend to upload to iCloud (i.e. you disable iCloud on your phone), then Apple doesn’t care anymore if you have CSAM or not.


Apple's approach opens the door to get full end-to-end encryption for other assets that are currently stored on iCloud, that currently are not.

It is not a guarantee, but it leaves that door open, which currently would face very strong headwinds to get rolled out.


That's mentioned in TFA: Apple relented to the FBI's request that iCloud backups not be encrypted. I can't see that changing, since scanning for CP won't help the FBI in the vast majority of their investigations (terrorism, drug trafficking, fraud).


> now exists

Is this true? I've always assumed apple can always do that (if they somehow needed to).


They always had the potential, but now the code is there and running.


It’s like your spouse’s communication has broken down, they’re doing things that don’t make sense, and they just randomly went out and bought a gun without telling you.

Sure, they haven’t killed you in your sleep yet… but they sure are acting weird and recently acquired the means to do so


> being able to trust that your device is truly yours

Yeah, this is essentially impossible today. Even personal computers have processors with potentially malicious firmware. In order to get a trustworthy processor to run free software on, we need to limit ourselves to hardware from decades ago.

It's gotten to the point where I don't think we can trust any computer we haven't fabricated ourselves. Anyone can download a compiler and make their own software, but hardware requires billions of dollars worth of equipment and personnel. Software freedom is doomed unless we somehow develop a way to fabricate chips at home.


It's an unfortunate choice because my understanding is that it's possible to match encrypted files in the cloud with CSAM, assuming that Apple has the public key, and this has been done before in a court case as evidence leading to a conviction. Maybe it's computationally prohibitive?


The files that iCloud Photos stores are encrypted to your account and Apple can't access them. Apple scans the images on your phone so that this can continue to be the case, but they're doing a perceptual match - i.e. determining if this image looks like a modified version of the original image (e.g. if it's just been resized or re-encoded or something).

This way Apple can scan for CSAM without having to actually violate privacy by scanning a user's images themselves.
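The "perceptual match" idea can be illustrated with a toy average hash. To be clear, this is not NeuralHash (Apple's actual algorithm is a neural-network-based hash and is not public); it is a much simpler stand-in that only shows the principle: small global changes to an image leave the hash bits unchanged, and comparison is a cheap bit count:

```python
# Toy "average hash" illustrating perceptual matching.
# Not NeuralHash -- just the simplest possible stand-in.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string:
    one bit per pixel, set when the pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes;
    a small distance means 'perceptually the same image'."""
    return sum(a != b for a, b in zip(h1, h2))

# A tiny 4x4 "image" and a uniformly brightened copy of it.
original = [[ 10,  20, 200, 210],
            [ 15,  25, 205, 215],
            [220, 230,  30,  40],
            [225, 235,  35,  45]]
brightened = [[p + 5 for p in row] for row in original]

# Uniform brightening shifts the mean too, so the hash is identical,
# even though the raw bytes of the two images differ everywhere.
assert hamming(average_hash(original), average_hash(brightened)) == 0
```

A cryptographic hash, by contrast, would change completely after the same +5 brightening, which is exactly why perceptual hashes are used for this kind of matching.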


> The files that iCloud Photos stores are encrypted to your account and Apple can't access them.

This is false[1]. Apple can decrypt them, and they can decrypt them and send them to law enforcement should they get subpoenas. They're encrypted at rest with keys that Apple has.

[1] https://www.bbc.com/news/technology-51207744


This article is about iCloud backups. It doesn't say either way whether or not iCloud Photos are encrypted.


Apple’s website clearly states what components are e2e encrypted. iCloud photos is not one of them:

https://support.apple.com/en-us/HT202303


I recall a court case where a police officer was convicted of possessing CSAM on his computer. The files were encrypted, so the prosecution was not able to view them, but they were able to demonstrate that the encrypted files contained known images, I believe by encrypting a known image with the same public key and demonstrating that the result was binary identical. Obviously this wouldn't work on anything but unmodified files; any cropping, scaling, or metadata edit would break the match.
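For the matching described here to work, the encryption would have to be deterministic -- e.g. "textbook" RSA with no padding. Modern schemes (RSA-OAEP and the like) randomize every ciphertext precisely to prevent this kind of known-plaintext confirmation. A deliberately insecure toy sketch of the deterministic case:

```python
# Toy "textbook" RSA (no padding, insecurely tiny key) showing why a
# deterministic public-key scheme lets anyone holding only the public
# key test whether a ciphertext corresponds to a known plaintext.

p, q = 61, 53
n, e = p * q, 17            # public key: n = 3233, e = 17

def encrypt(m):
    # No randomness: the same input always yields the same output.
    return pow(m, e, n)

known_image = 42            # stand-in for the bytes of a known file
suspect_ciphertext = encrypt(42)

# The prosecution's trick: re-encrypt the known file and compare.
assert encrypt(known_image) == suspect_ciphertext
```

With a randomized padding scheme, `encrypt(known_image)` would differ on every call and the comparison would always fail, which is why this only works against deterministic encryption of bit-identical files.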


Interesting. Can you link to the case?


> This invasive capability on the device level is a massive intrusion on everyone's privacy

I am a user uploading a photo to the cloud.

Google's servers scan photos after they are uploaded to the cloud, and have done so for the past decade.

>a man [was] arrested on child pornography charges, after Google tipped off authorities about illegal images found in the Houston suspect's Gmail account

https://techcrunch.com/2014/08/06/why-the-gmail-scan-that-le...

Apple wants the device to scan the photo before it is uploaded to the cloud.

Both are equally invasive and both can only be avoided by refusing to use the cloud.


But only Apple can, at least for now, scan content on your phone. Google cannot. Yet. Small, but crucial, difference.


Frankly, it's more of a problem for my data to be flagged after Google has it than it would be for it to never leave my device at all.

People have certainly run afoul of the law when the data Google has about them is misused.

>Innocent man, 23, sues Arizona police for $1.5million after being arrested for murder and jailed for six days when Google's GPS tracker wrongly placed him at the scene of the 2018 crime

https://www.dailymail.co.uk/news/article-7897319/Police-arre...

We are only talking about what happens when you try to send the data to the cloud, after all. Not about all the data on the device.


You are so close to understanding the issue. When Google, or anyone else, is flagging it and sharing the result with, e.g., law enforcement, you can have a problem. Apple will not just simply refuse to upload stuff that would be flagged in the cloud; it is flagging it already, before upload, on your device. And sharing the results with authorities. That for now they claim to only do so with stuff about to be uploaded is of minor concern; the tech is there to scan all the pictures on an iPhone now. No reason to believe it will stop with intended uploads, or pictures.


> Apple will not just simply refuse to upload stuff that would be flagged in the cloud; it is flagging it already, before upload, on your device. And sharing the results with authorities.

None of this is true. A sibling comment describes the process:

>1. Only if you're uploading files are the files matched. 2. Only if the matches are very close are they considered matches. 3. Only if you have multiple very close matches is Apple able to decrypt the low-res versions of the images themselves. 4. Only if a human reviewer discovers any of the decrypted low-res images to be illegal content is any of your information shared with anyone else.

https://news.ycombinator.com/item?id=28120598


Really? Apple and Google have been able to scan content on your phone for as long as they have sold phones. They control the OS, therefore they have access to the content. The ability has always been there. The only difference is that Apple has now openly announced what they are going to do and how. If governments wanted to pressure Apple (or Google) to scan content, they could have done that at any time in the last few years. If Apple and Google resisted that before, why is now different?


From a legal perspective, there is a significant chasm between "you are hereby ordered to develop a whole new surveillance feature for devices you build" and "you are hereby ordered to add these new (potentially unrelated to CSAM) signatures to your existing surveillance network".


Under what legal basis are certain corporations immune to government demands while others are not?

Google reports annually on how many National Security Letters requesting information about its users it responds to.

Why doesn't it just ignore them?


> Google reports annually on how many National Security Letters requesting information about its users it responds to.

So does Apple[1]. Apple gave data on over 31,000 users/accounts based on National Security Letters in the first half of 2020 alone[1].

> Why doesn't it just ignore them?

In the first half of 2020, Apple provided data to the US government (non-FISA or NSL) about 9,000 times, and responded to requests for data with the data about ~85% of the time, and 92% in cases of "emergencies"[1].

[1] https://www.apple.com/legal/transparency/us.html


I don't understand your question. Google, Apple, Facebook et al comply with their legal obligations.

Apple does not have a legal obligation to install spyware on your phone (at least not in the US, not yet).


I think so too but from a different angle. It sounds like the federal government is strong arming Apple to spy on their users. Apple would rather use on-device hashing to preserve some amount of privacy rather than give unfettered access to the feds to their cloud.


On the other hand, Google can run arbitrary queries over your data (published to their cloud) while Apple uses the same algorithm and hash database for everyone (which probably will get reverse engineered and audited by next BlackHat).


What makes you think that Apple will use the same database for everyone?


You can't realistically reverse engineer a hash to source image.


1. Previous submission (http://www.hackerfactor.com/blog/) claims that PhotoDNA is reversible to 26x26 grayscale

2. What’s more important is that you can reverse engineer the algorithm that’s used locally to compare these hashes. Unlike the cloud-based searches other providers run, this puts significant limitations on what Apple or LEA can do on your phone. E.g. they can’t extract strings from a PDF, or grep through image metadata, or request all images of certain users without knowing what those images would depict in advance. The frequency of hash database updates, even if opaque, will still provide more insight into what is being searched for compared to the current status quo, where the public has zero insight because all search / aggregation queries are run on the server.


My understanding is they only scan for content if it's being uploaded to iCloud. Scanning on the phone, yes, but only if the image is to be sent over the network.


And they seem to be doing that because they both 1) want to enable e2e encryption for iCloud images, soonish, and 2) don't want to become the world's most popular child porn storage and sharing platform.

I get the arguments for why the fix for #2 is too dangerous to be worth it, but also get why they'd be inclined to choose it.


Apple can already find photos of my cat, of ice cream, and of the sky directly on my phone using AI


A webpage does authentication by downloading a list of usernames and passwords and comparing your inputs in JavaScript in your browser. Another sends a hash out to an endpoint in the cloud.

To the user - using your line of reasoning - they're both the same, right?

This isn't supposed to be a technical comparison, I just want to point out that the implementation matters as much (or more) than the customers' experience.


I don't use the cloud. I don't even sign in an email address on my android phone. That makes all the difference to me.


>Both are equally invasive and both can only be avoided by refusing to use the cloud.

Technically Apple can do a scan regardless of whether or not you posted it to iCloud if they so desired to, or were ordered to.


Technically, Google can sell the entire trove of data they have collected about you to anyone willing to write them a check.

Do you think it's likely they will do so?


The difference is that if you don't put your data in Google's cloud, they can't sell it. With this Apple change on your local device, you don't have any recourse other than getting a different phone.

Also, Google sells your data already don't they? Probably not a great counterpoint.

https://www.tampabay.com/news/2021/05/07/google-selling-user...


Again, the only thing different here is that when you upload a photo to the cloud, Apple scans it on device before the file is uploaded, while Google scans it on their server after it is uploaded.

Nothing is happening to files you don't attempt to upload to the cloud.

Also, if Google is selling everyone's data to third party data brokers, it's news to me.


>Nothing is happening to files you don't attempt to upload to the cloud.

I am aware, but because the scanner now exists on the device, they have the capability of scanning images on your device regardless of whether or not you upload it to the cloud if they so choose to OR ARE ORDERED TO. Do you not see the difference in capability?


“Google has the capability to upload images from your device and could choose to upload every image IF ORDERED TO.”


By that logic, Google is every bit as vulnerable to being ordered to scan all files on Android devices.


So you are saying Google can access your device at any time and pull all data from it, even if you didn't upload it to their servers!? No thanks on either. I'm not sure "other companies can do it too," is a great counterpoint either.


This is classic what-aboutism, yes they are both equally bad scenarios. Apple is just one-step closer with capability already deployed.


There are many photos one could take without uploading or saving them in the cloud. The question is whether I can safely store things locally on my device without a multi-national corporation being able to set the policy of what's allowed there. The article cites examples of memes/images banned by the Chinese government.

There is also the risk of algorithm mistakes. Without the trust + safety team or any cloud verification, does this just get immediately forwarded to the FBI?

What is the process to be found not guilty if the Apple algorithm thinks you have something illegal and you don't, and there is no human in the loop before it's forwarded to authorities?

At least if you upload to a service, someone on the trust + safety team can verify the algo worked properly. I would be very nervous building this kind of system because of the risk of false positives! Maybe even so nervous as to introduce many more false negatives -- which is in itself terrible in the case of something like CSAM.


Here's how it works, in short:

1. Images which are going to be uploaded to iCloud are matched against known hashes. A match has to be very close in order to actually be flagged as a match. The results are uploaded to iCloud Photos along with your image.

2. If your account hits a sufficient threshold of matches (i.e. you have to have multiple/many images matched), then Apple is alerted and they are able to decrypt low-resolution versions of the relevant stored images.

3. Apple's team performs a manual review of the images to ensure that they are in fact inappropriate images and not false positives.

4. If any of the matched images are actually illegal, then Apple reports you to the authorities.

In other words, there are several safeguards in this process:

1. Only if you're uploading files are the files matched.

2. Only if the matches are very close are they considered matches.

3. Only if you have multiple very close matches is Apple able to decrypt the low-res versions of the images themselves.

4. Only if a human reviewer discovers any of the decrypted low-res images to be illegal content is any of your information shared with anyone else.

In my opinion, this is a vastly better system than literally any other cloud provider at the moment; it allows Apple to find and report even more CSAM with effectively no risk of reporting false-positives to law enforcement, while also preserving your privacy through encryption in transit and at rest.
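The counting step in that process can be sketched roughly as follows. The threshold value here is hypothetical (Apple has not published the real number), and the actual system uses threshold secret sharing so that the server learns nothing at all below the threshold; this only illustrates the flagging logic, not the cryptography:

```python
# Rough sketch of threshold-based flagging. THRESHOLD is a made-up
# value; Apple's real scheme hides sub-threshold matches cryptographically.

THRESHOLD = 3  # hypothetical -- the real value is not public

def matches(upload_hashes, known_hashes):
    """Hashes of uploaded images that appear in the known-CSAM set."""
    return [h for h in upload_hashes if h in known_hashes]

def account_flagged(upload_hashes, known_hashes, threshold=THRESHOLD):
    """Only once the match count crosses the threshold does human
    review of low-res derivatives become possible at all."""
    return len(matches(upload_hashes, known_hashes)) >= threshold

known = {"hashA", "hashB", "hashC"}
assert not account_flagged(["hashA", "hashX"], known)       # below threshold
assert account_flagged(["hashA", "hashB", "hashC"], known)  # at threshold
```

The point of the threshold is that a single false-positive perceptual match on one image is never enough, by itself, to expose anything to a reviewer.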


> The question is can I safely store things locally on my device without a multi-national corporation being able to set the policy of what's allowed there.

Didn't Microsoft just start removing people's torrent clients from their computer without permission and announce that you wouldn't be allowed to use your own PC running Windows 11 Home without using an online Microsoft account to log in?


I think a lot of people are either stuck with images of Apple under Steve Jobs or Apple under Tim Cook's marketing.

It is at still nice to see people are finally asking these sort of questions.


The thing no one seems to be discussing here as this topic arises again and again is that the problem being addressed is Apple's problem, not the problem of anyone who purchases an Apple computer. Apple's "solution" is to create a new problem for all Apple purchasers.

The majority of Apple hardware purchasers are probably not engaged in illegal activity. Yet all Apple purchasers are being forced to pay the price for Apple's mistake. Apple's hands are not clean.

No Apple purchaser ever intentionally requested that Apple copy their files to servers in Apple datacenters, or that Apple perform encryption intsead of the user, store and forward the user's personal messages through servers in Apple datacenters. Those were business decisions that Apple made to benefit Apple.

Technology allows, all computer users are quite capable of storing their files on their own portable media, and many are capable of encrypting and then sending messages directly to others, peer-to-peer, without leaving a copy with a third party. Technology allows, Apple has choices. No purchaser ever demanded that device storage capacity be reduced or ports for removable media be removed. Nor did any purchaser ever demand that her messages be stored with Apple. Those are choices made by Apple, not users. Users should have choices too. Instead they are corralled into a user-hostile vision of personal computing that props up the world's wealthiest companies, who hoard money offshore while their domicile country falls into chaotic decline.

As the parent comment states, "One's device ought to be one's property..." Apple has actively and aggressively sought to overcome this basic tenet of common sense. As people online argue about "misinformation" please stop and consider the "lies" that Apple and the companies they compete with are telling the public about what exactly it is they do. Apple has taken Jobs' idea of the "Reality Distortion Field" too far.

Computer users need quality hardware. Apple can design and assemble it and be the best at what they do. Users do not need to be manipulated by a hardware vendor after purchase. Where did that idea come from. The sad fact is the "new" Apple is not competing with other hardware vendors. No other vendor even comes close. Apple has set its sights on competing with "tech" (middleman) companies that do nefarious things and cover up the truth. Apple wants user data after purchase. Why. To compete with "tech" (middleman) companies. Bad decision. Users pay the price.

The easiest way Apple could have fixed this problem with storing illegal files would be to stop storing user files and messages! Be a hardware company. Let purchasers use portable storage or arrange their own "cloud storage" with a more "competent" provider (let another company deal with the inherent flaws of the "cloud" concept^1). Let users discover how to encrypt their own data and use peer-to-peer transmission. Let them use the Apple computer to write programs without needing "permission" from Apple. Apple has only tried to hide these basic capabilities and competencies from the majority of new computer users, strengthening the "Reality Distortion Field".

A long line of bad decisions from a hardware company we once admired. They just keep getting worse. And all for what. Greed. Immense untaxed cash reserves held offshore. Why not be a hardware company and stop trying to compete with "tech" (middleman) company surveillance. Why keep dumbing down and locking down what a former CEO once called a "bicycle for the mind".

1. "Cloud" is a synonym for "someone else's computer." "Cloud storage" is another way of saying "storing files on someone else's computer". Perhaps "Apple" is also a synonym for "someone else's computer". If Apple can exercise control over a computer after they sell it, how is it not still theirs? Purchasers are just "users", not owners.


Here's an idea, make devices which make it easy for people encrypt their own stuff


Funny you mention that I had the same idea; why is there never any discussion of this problem

Let's say the user wants to encrypt a file on her "smartphone" (portable computer bundled with a cellular phone)

Using her own choice of open source encryption program and her own key(s)

Are there even any "apps" that do this; there's an app for that; not this time

AFAICT it's easier to encrypt the file using a different computer running a UNIX-like OS and then transfer the file (back) to the phone

From there it might be copied to someone else's computer, i.e., a "tech" (middleman) company's data center; this could happen, e.g., when sending "private messages" or when "syncing" with "cloud storage"

When the user encrypts herself with no tech (middleman) company assistance, only the user has the keys and only the user can decrypt

"But users will lose their keys!" - Big Tech industry enthusiast

Tech companies never, ever lose keys, right


I wouldn’t mind making one, but I know that I’d probably get in trouble with law enforcement agencies immediately.


Google could’ve announced the same solution for Google Photos as a 3rd party app. I don’t understand what privileged position Apple as OS maker has used to build this solution. It’s a feature of their Photos application when used with their cloud photo service.


It's more accurate to think of it as a feature of the iCloud Photos client software (which is built into the OS). If you don't use iCloud Photos, you don't use the client software, so nothing changes.


To be installed by default and not removable on one of the most used devices in the world.


It’s a pretty weak argument to say that a clearly communicated change in behavior of a single application compromises trust in the entire operating system. You’d basically need to argue that Apple should not be allowed to install any default applications or services.

The Photos application can’t be removed, but this behavior can be turned off.


It does because they control the entire os, ecosystem, now and in the future, and have demonstrated with PRISM and this they don't play for our team.


> they don't play for our team

I'm no apple apologist but apple must follow the laws of the countries they operate in. They can't advocate for us because we are not the ones setting the rules.


Sure. If the law tells them to locate all gays so that they can be jailed in the Emirates, would you agree with that?

I could do worse, but Godwin called.


Of course not. I even said "i am not an apple apologist" so these lame "gotcha" questions are kinda silly


The lame gotcha question is to underline that, for many, the immorality of the choice they make overrides the legality of it.

But the previous comment was more fun.


I think the bar is a little higher here on HN. save it for facebook


You can remove the Photos app and disable iCloud Photos.

At no point are you forced to use either.


Are you suggesting that people who want to opt out of this should stop taking photos on their iPhones?


The way to opt out of CSAM scanning on an iPhone is identical to opting out of it on Android: disable the platform's cloud service to synchronise your photo galleries.


You can take photos on your phone without using the Photos app.

You can use Photos without using iCloud.


And you can use iCloud without using iCloud Photo Library.


This is an interesting comment. If you had an Apple device it was never truly yours to begin with.


If you own a device with built-in software that synchronizes part of the data on it to the cloud, then it was never truly yours to begin with**

Fixed it for you. I hope you are not implying that Google, whose bread and butter has been selling personal information, has resisted the temptation for all these years?

I use Apple and it's an informed choice. I am pretty realistic about their practices. And realistically we don't have a choice. Either you use a smartphone and it's a Swiss cheese of security vulnerabilities -- worst of which are embedded in the device on purpose! -- or you don't use a smartphone at all. This is the era we're living in.

I hope that custom ROM Android communities can win and give us back some freedom but I am skeptical. Still, I am open to try some of the de-bloated Xiaomi ROMs at one point. Or maybe put LineageOS / PostmarketOS on them.


I didn’t mention Google at all, not sure why you brought it up


Because I dislike the implication that only Apple is doing it. No. Everybody is doing it. Apple was just the first to publicly admit it. Doesn't make it better, sure, but let's not delude ourselves at least. No company can resist using all that valuable personal data on the smartphones.


I didn’t imply only Apple is doing it either


Let's not argue over this all day. :)

The fact that you only mentioned Apple makes it a safe assumption that you only meant Apple. I chimed in to specify that I believe everyone is doing it, and let's stop at that. There's no prize to fight over here.


It’s an article about Apple so I only mentioned Apple lol. While we’re at it we should mention Microsoft probably does this too. Probably Snap as well.


Pretty sure they do, yeah. :)


> there will be no limits for governments to expand it's reach once implemented. The scope will always broaden

Speculating that it will is a slippery slope argument. https://wikipedia.org/wiki/Slippery_slope


As the article says:

"The strength of such an argument depends on whether the small step really is likely to lead to the effect."

> It is true that a computer, for example, can be used for good or evil. It is true that a helicopter can be used as a gunship and it can also be used to rescue people from a mountain pass. And if the question arises of how a specific device is going to be used, in what I call an abstract ideal society, then one might very well say one cannot know. But we live in a concrete society, [and] with concrete social and historical circumstances and political realities in this society, it is perfectly obvious that when something like a computer is invented, then [the way] it is going to be adopted will be for military purposes. It follows from the concrete realities in which we live, it does not follow from pure logic. But we're not living in an abstract society, we're living in the society in which we in fact live. If you look at the enormous fruits of human genius that mankind has developed in the last 50 years, atomic energy and rocketry and flying to the moon and coherent light, and it goes on and on and on -- and then it turns out that every one of these triumphs is used primarily in military terms. So it is not reasonable for a scientist or technologist to insist that he or she does not know -- or cannot know -- how it is going to be used.

-- Joseph Weizenbaum, http://tech.mit.edu/V105/N16/weisen.16n.html

To pretend it's inherently unknowable right up to the point until it's "too late" is kinda getting old.


From Wikipedia[1]:

> Argument from fallacy is the formal fallacy of analyzing an argument and inferring that, since it contains a fallacy, its conclusion must be false.

[1] https://en.wikipedia.org/wiki/Argument_from_fallacy


I've made no prediction as to whether it will happen (contrary to the parent comment). I think I got downvoted because people interpreted this as support of these changes (it's not).


It's a very real slippery slope, however.

What if the Chinese government could search every device for images of yellow umbrellas, as part of the HK revolution?


I’ve been thinking carefully about this situation.

I just don’t see the intrusion.

Firstly, you are syncing your photos to iCloud. It’s equivalent to developing your photos at an old timey film shop.

iCloud happens to have lots of security guarantees, much like the film shop.

But the moment you develop CP, all bets are off. I think most of us lead pampered, selfish lives. Perhaps this is a “think of the children” cry, and perhaps this has been used for many evils over time.

If Apple were to get their lists of hashes from China (a situation they’ve opened themselves up to in the future), then sure, that’s awful. Or if they were doing these checks and alerting law enforcement without iCloud sync, then that’s awful too.

But this doesn’t seem to be a slippery slope, yet. They are targeting a specific case: you uploading your photos, and your photos being of known child sexual abuse.

Yes, there are false positive photos in the database. Maybe there’s even a flower in there. But if you got flagged as a false positive, the moment law enforcement looks at your photos (which, remember, you’ve surrendered to iCloud), you sexually abusing a flower won’t cause any problems.

Or will it? Please, my mind is open. I’d love to change it.


>But this doesn’t seem to be a slippery slope, either. Any photos you’re worried about can trivially circumvent the hashing system: adjust their brightness. Boom, done.

I've been briefed about it. Apple says explicitly that this is not the case. They mention that cropping, transforming, or even desaturating an image won't change the hash enough to break the match; the similarity will remain high enough to detect the photo anyway. They don't seem eager to explain in detail how that actually works, though.
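The general idea behind transformation-robust perceptual hashing can be sketched with a toy example. To be clear, this is not Apple's NeuralHash (whose details are undisclosed) but a classic "average hash": each pixel is compared against the image's mean brightness, so uniformly brightening the picture shifts the mean along with the pixels and leaves the hash bits untouched.

```python
# Toy illustration of why a simple brightness tweak doesn't defeat
# perceptual hashing. Uses a classic "average hash" (aHash), not
# Apple's undisclosed NeuralHash -- purely to show the principle.

def average_hash(pixels):
    """Hash a grayscale image (list of rows) by thresholding each pixel at the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

image = [
    [10, 20, 200, 210],
    [15, 25, 205, 215],
    [12, 22, 202, 212],
    [18, 28, 208, 218],
]

# Brightening every pixel shifts the mean by (roughly) the same amount,
# so the above/below-mean pattern -- and thus the hash -- is unchanged.
brighter = [[min(p + 40, 255) for p in row] for row in image]

print(hamming(average_hash(image), average_hash(brighter)))  # -> 0
```

Real systems like PhotoDNA or NeuralHash are far more sophisticated (resampling, frequency transforms, learned features), but they share this property: the hash tracks the image's structure, not its raw bytes.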


The reason they don't explain how it works is that the CSAM hash matching is an algorithm from NCMEC, which was apparently originally designed by Microsoft.

If you want to learn about the algorithm/hashes, you have to, among other things, sign an NDA about it, so it's likely that Apple, legally, cannot go into detail about how it actually works.

Microsoft has a page on it here if you're interested: https://www.microsoft.com/en-us/photodna

For obvious reasons, they don't go into a whole lot of detail.

I've learned a lot about how the process works from this blog post: https://hackerfactor.com/blog/index.php?/archives/929-One-Ba...

He gets a few facts about Apple's approach wrong, but it's an incredibly interesting read.


Apple designed a new system, which is different than PhotoDNA - they call it neural hash, and the specifics are not disclosed.


Cross-checking hashes is only possible if they use the "same" implementation for the hashes. It would be safe to assume that the implementation/algorithm is the same regardless of whether it's called PhotoDNA or NeuralHash.


Sorry, I edited that part out one minute after commenting. I realized it was a weakness in the argument.

Thank you for pointing that out though. I had no idea.


This is about Apple scanning the pictures on your phone, not the pictures that have been synced to iCloud.


A good point to keep in mind: there is no "scanning". There is hashing, then a check of the hashes - not the pictures - and if a hash matches the on-device NCMEC hash list, a "security voucher" is generated. Said voucher will be uploaded to iCloud Photos along with the picture it refers to, without anyone in the chain being able to know that it happened.


Do they do device local scanning for this purpose? Apple’s original announcement said that it was to give law enforcement answers about photos synced to iCloud.

If so, then I’ll immediately change my mind and join the protest. Device local scanning is equivalent to someone checking your personal possessions at your home.


Yes, you can find more information here: https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...

From the overview:

> Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child-safety organizations. Apple further transforms this database into an unreadable set of hashes, which is securely stored on users’ devices.


Yes, but my understanding was that they do this for iCloud-synced photos.

If iCloud is off, do they still do this? Your quote actually doesn’t contradict that, which is my main hangup. If you turn on iCloud, you forfeit certain expectations.

I’ll read through it carefully now.

EDIT: It was the very first sentence of the intro:

> CSAM Detection enables Apple to accurately identify and report iCloud users who store known Child Sexual Abuse Material (CSAM) in their iCloud Photos accounts.

I don’t get it. It’s their platform. Other image platforms do this matching. Old film shops used to do this matching. Why is this evil?


The sticking point is that the matching happens on-device, not on their servers. Sure, it only happens for photos that will be synced, but it’s still your device doing the matching and snitching.

There’s also the fact that the “only scans local photos marked for upload to iCloud” is a technically thin barrier. A switch that could very easily and quietly be flipped in the future.


If you scan in the cloud, photos not synced to the cloud are 100% not going to be scanned - they are JUST NOT THERE. If you scan on device, you are one "if(" condition away from "accidentally" scanning other photos. See the issue now?
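The point above can be made concrete with a hypothetical sketch (none of these names are Apple's; this is purely illustrative): once the matching code lives on the device, the scope of what gets scanned is governed by a single conditional that a future software update could change.

```python
# Hypothetical illustration of the "one if( away" concern.
# The function names and fields are invented for this sketch.

def photos_to_scan(photos, icloud_enabled, scan_everything=False):
    """Select which photos the on-device matcher would examine."""
    if scan_everything:          # the single conditional the comment warns about
        return list(photos)
    if not icloud_enabled:
        return []
    return [p for p in photos if p.get("queued_for_icloud")]

library = [
    {"name": "a.jpg", "queued_for_icloud": True},
    {"name": "b.jpg", "queued_for_icloud": False},
]

# Today's advertised behavior: only photos queued for iCloud upload.
print([p["name"] for p in photos_to_scan(library, icloud_enabled=True)])
# -> ['a.jpg']

# The flipped switch: everything on the device.
print([p["name"] for p in photos_to_scan(library, icloud_enabled=True,
                                         scan_everything=True)])
# -> ['a.jpg', 'b.jpg']
```

With cloud-side scanning, the second case is structurally impossible: photos that were never uploaded simply aren't there to be scanned.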


Do you have any general objection to client software scanning content it uploads before it uploads it, rather than afterwards? Scanning it on your phone means that unencrypted versions of your data don't have to leave your phone, which IMHO gives more privacy than any other option.

Do you have an objection even though it means that the content (in a form readable by anyone else) never leaves your device? Would it be preferable if Apple scanned files that were synced to iCloud, but no longer encrypted them in transit/at rest, or encrypted them in a way that they are able to decrypt?


They already scan all the images on your phone. That's how I can search for "dog" or "flowers" and find results in my photo library.


Local scanning/indexing is fine. It’s reporting the results of the scan to Apple that people have a problem with.


I'm not sure that's true. A lot of people seem to have less of a problem with Apple scanning for CSAM and more of a problem that they're doing it on the phone (and looking for specific images, I guess?).


It’s the fact that the phone is scanning your images and relaying the results to Apple that’s the issue. If this feature were just an extension of existing on-device scanning and stayed on the device, perhaps informing the owner “Hey, it appears a few of your images may be CSAM. Would you like to review them?” then I don’t think people would be very upset.


Apple used to be able to say 'we don't have that capability' if, for example, a Turkish judge ordered them to scan phones for a certain picture of Erdogan. What are they going to say now?


> Yes, you can turn off iCloud Photos to disable Apple’s scanning,

so, Apple now has the ability to reach into every law abiding citizens personal phone - report them to the FBI when the algorithm makes mistakes (and it will - nothing is 100%, particular in image recognition), AND THEN THE PERPETRATORS CAN STILL GET AWAY BY JUST TURNING IT OFF??!??!?!

My goodness, I couldn't imagine a stupider way to fight CSAM.

> Facebook made 20.3 million reports of Child Sexual Abuse Material (CSAM) in 2020.

Were there 20.3 million arrests of offenders? I don't think so. I doubt they caught 20 perps from that data. While it may be a better algorithm, it should be a good warning of how often your very legitimate and non-CSAM data will be leaked to a "verifier", and worse, to the authorities.

Take a look at how often YouTube channels are taken down by "auto triggers" that are then "reviewed" and you will get a good idea of how often the government is going to screw law abiding citizens. It's not zero, and imho, it will be well above a range of "acceptable fallout".


> Were there 20.3 million arrests of offenders?

I don't know of any data from the US, but in Switzerland the Federal Police pre-sorts all reports from the NCMEC and around 90% are unusable [1] and cannot be acted upon. The remaining 10% are then forwarded to local authorities. There is no data I know of on what the final conviction rate looks like. But from what I have gathered from local newspapers, these departments are usually short-staffed.

I suspect that the situation is similar in the US.

[1]: https://fedpol.report/en/fedpol-in-figures/fight-against-pae...

Edit: Fix typo: persorts to pre sorts.


That YouTube example hits home. I uploaded my daughter's videos on there when she was little to share with family. Then my account got flagged for a copyright notice and the channel got suspended. Been trying to get YouTube/Google to re-instate the channel - it's been six years, no luck. I keep trying. I hate it.


There are lots of googlers here on HN. Message all of them. Message them all the time. I am sure you'll get your issue resolved.

I know it would be rude to keep messaging them, but in the end they represent Google and they are supposed to help you, at least morally.


It's the general problem that if the violations and negative PR hurt more than false positives, then they will err on the side of being wrong. Which means more people demonetized/banned. We're all just collateral damage.

As for the 20.3 million reports, Facebook probably decided it was better to bury the authorities in a massive pile of paperwork than to see a day in court. Plus they get to claim they are doing something.


> AND THEN THE PERPETRATORS CAN STILL GET AWAY BY JUST TURNING IT OFF??!??!?!

Not trying to defend Apple here, but we should keep in mind that the goal for Apple is to prevent such photos from being uploaded to iCloud and detecting those that do. If someone turns off iCloud sync, then Apple would argue that it's not their concern to scan photos saved on devices (and thank god it doesn't otherwise the repercussions would have been much more significant).


From what I've heard Facebook is one of the better places to go to groom children.

Better would be educating people: don't use your real name or upload pictures of yourself to the internet, be very careful talking to strangers. Don't let younger children use social media services.


Hasn’t the ship sailed already? What is TikTok but everyone’s little kid dancing around? We are here for a reason, I think: we let rampant exhibitionism go so far via social media that teenagers and preteens are mimicking the compulsive sharing of the adults who first seeded these platforms.

Now they literally need to check everyone’s phone for illegal content.

Man, have I become my parents or what.


I don't know about your parents but mine signed me up for Facebook when I was a preteen (thankfully I never used it and stuck to forums/IRC/wikis that all used pseudonyms.)


This only makes sense for non-CSAM data and/or preemptive surveillance/policing. Proper criminals/terrorists will evade this by either ... uhm ... disabling this, network-filtering this, or using Android or no phone at all for their criminal activities... at the very least once you put this pressure on the "market" for a while.

Few people have access to the databases. I would bet quite a bit that this will be used for marking possible offenders of all sorts of crimes and "crimes", e.g. having ISIS propaganda material on your phone, or leaked data. Sometimes you may not even know files are on your phone. Try exploring Telegram's nearby groups with image auto-download for groups activated and see what happens to your "share recent files" dialog... It's all porn and nazi memes now!

Overall, I am happy Apple ultimately admitted this, because I was conflicted about buying into the ecosystem for the recent hardware appeal. Not conflicted anymore. Not at all. It's Linux/AOSP or ~~die~~ get stressed out. No M1 benchmark or fancy watch health features can make up for the chronic knot in my stomach using Apple's products, the self-censorship creeping in. Hope I find a low-latency pen input tablet running FOSS Android or Linux, too, as I feel uneasy about journaling or drawing on my iPad now. Thanks Apple, I truly hate you too <3


It's automated, weaponized doxxing. We used to worry about teenagers calling the FBI on us, now we have to worry about someone AirDropping you a photo album that will get you arrested.


Not sure why the downvotes - the more automation like this we have, the more one can make another's life a hell, fighting the false positives, and then the resulting checkered history of it.


I think people underestimate the black mark of even being investigated, much less charged by the police. Even if they find you didn't do something wrong, even if they drop charges, even if you are found innocent after a trial, the mere record that you were ever suspected of something can follow you the rest of your life.

The standard is supposed to be that criminals are those "convicted" of crimes, but in reality the nature of risk mitigation for most people and organizations means that they will avoid anyone with any type of history with the criminal justice system. No one will care that there was an overzealous prosecutor, corrupt cops, etc. The burden falls on you to prove you're innocent and even with convincing evidence it may not be enough.


I agree. Even being suspected of doing something bad can easily be life-changing - going by the example of celebrities, for one. One of the secrets of the nothing-to-hide argument is, I think, that people haven't experienced a thorough search on them by some authority: the situation where one "lives or dies" according to what they find, and especially how they interpret it. If not exposed to this, it's much easier to think that it won't happen to them, that they're not the type to be exposed to things like this.


Given that it’s already scanned server side, you could literally do this to someone now. Both cases (client or server side checking) simply require getting a user to download it to a location that auto syncs to iCloud.

If this was as big an issue as people think, we’d have seen more of it.


> My goodness, I couldn't imagine a stupider way to fight CSAM.

That's the point. It satisfies the letter of the demands from the crazies, while not actually accomplishing anything, and creating discourse in the rest of the population to work towards realizing that we shouldn't be bending to the whims of the crazies at all.


But that doesn’t work, because the crazies will simply take this victory and move the goalposts forward. They’ll do so emboldened by your capitulation and shame you for not acquiescing earlier.

The only winning move is not to play.


Not really. Even without refusing to play, any Big Tech corp that sticks to its guns will eventually face opposition.

What Apple should do is educate people. (Of course then we have the problem with corporate propaganda.)


It was becoming clear that the goalposts were going to move forward towards outlawing end-to-end encryption entirely. This provides malicious compliance to prevent worse.


Apple's goal is not to fight all CSAM on iPhones; Their goal is to keep it out of their servers. A criminal could bypass the iCloud-related checks, but they still have to contend with similar measures taken by video- and file-sharing services.


I think this is something that has not really been discussed much:

How useful is this feature? If the goal is to protect children, there must be good evidence that this does, in fact, protect children.


I think the goal is to protect Apple.


When I saw that - 20.3 million reports - I was like... hold on. Apple's smaller reported numbers are probably almost completely accurate, whereas that 20 million number is almost guaranteed to be mostly wrong.

Who really made the mistake? The one reporting cases that are almost definitely true? Or the one reporting so many false cases that authorities are buried in false reports?


Listen to the story of any former Facebook content filtering employee and you would not be surprised by that figure. They have horrific jobs that leave them with PTSD and drive some to suicide. Also it's worth mentioning that 20 million flagged images does not mean 20 million unique offenders.


For any population you test wherein the true positive rate is very low, even a very low false positive rate will produce overwhelmingly false positives.

Also you could trivially have 1 bad guy with thousands of photos being accounted as 5,000 reports instead of 1 report listing 5000 images.
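The base-rate effect described above is easy to verify with made-up but plausible numbers: when genuine matches are rare, even a small false-positive rate means the flags are dominated by false positives. All figures below are invented for illustration.

```python
# Back-of-the-envelope base-rate arithmetic. The numbers are
# hypothetical; only the structure of the calculation matters.

population = 1_000_000_000            # photos scanned
match_rate = 1e-6                     # 1 in a million photos is a true match
false_positive_rate = 1e-4            # 0.01% of innocent photos misfire

actual_matches = population * match_rate                          # 1,000
false_flags = population * (1 - match_rate) * false_positive_rate # ~100,000

share_false = false_flags / (false_flags + actual_matches)
print(f"{share_false:.1%} of flags are false positives")  # -> 99.0%
```

So even with a detector that is wrong only once in ten thousand photos, roughly 99 out of every 100 flags sent to a human "verifier" would be innocent images, which is the parent's point about the 20.3 million reports.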


> Apple's smaller reported numbers are probably almost completely accurate

Where are the hundreds of prosecutions resulting from apple's reports? I can't find them: https://courtlistener.com/


> JUST TURNING IT OFF

Or just apply image filters to the data; that will kill any neural network detection (since there is an infinite number of filters, especially the deepfake ones).


The on-device software is not an algorithm; it compares hashes of an image against a known list of hashes.


I'm unclear how this is distinct from an algorithm. Can you clarify?


> These efforts will evolve and expand over time.

I have no doubt they will. Pandora’s box has been opened, and okay, today the scanning is only for images uploaded to iCloud, and only for CSAM. But that is not auditable and subject to change at any time without notice.

And today it is only looking at known hashes, but the devices already have the capability to analyse images and even run OCR on them; this is not a matter of if these things will be weaponised, but when.

Can’t believe this is the same Apple that publicly defended the right to privacy for known terrorists by using the slippery slope argument themselves. The mind boggles.


Australia has already moved to make legislation [0] that is basically an expanded version of it, with incredibly vague terms ("material... may be harmful"). Weaponising it has already begun.

[0] https://www.zdnet.com/article/canberra-asks-big-tech-to-intr...


So it took about a day for the slippery slope to kick in and slide right off a cliff.


I mean, Australia has been putting forward insane tech bills to outlaw E2EE and demand backdoors for everything for years now. I don't think they're doing this because of the Apple news.


As an Australian I can only cringingly verify this as true.

The level of IT stupidity of BOTH our major parties hurts me when I think about it.


Makes you wonder if this is a case of the creators getting ideas from the uproar about Apple's CSAM implementation. Or, optimistically viewed, perhaps someone trying to prove just how slippery the slope is?


> today the scanning is only for images uploaded to iCloud, and only for CSAM

This, to me, would be exponentially less privacy invasive as I’ve come to assume all major cloud hosting providers implement something like this (look at Google Drive), but Apple has said that the scanning is done on-device, meaning whether or not you upload your photo library to iCloud, your local photos will be scanned with an on-device database of hashes.

Essentially iOS photos now implement a direct API call to the feds with some vague “human verification” layer if you go above an unknown threshold


Right, this is what all the articles on this matter are getting wrong, unknowingly or otherwise. It’ll be one flag switch to change from scanning uploaded images to scanning local images.

This is an erosion of fundamental human rights under the guise of “think of the children“ so that anyone who stands up against this tyranny can be labelled a “pedophile”. 1984 wasn’t like 1984 but 2021 is surely looking that way.


It’s a delayed concern for us in the West. First the bait and switch will happen in all parts of the third world, autocratic, and dictatorial governments. After all, our list of hashes will be different from what their agencies will provide. Across the globe Apple will assist in this.

Then one day we get another Trumpian President who adds more categories of hashes in a place like America.

What Apple is not willing to do is have that philosophical and ethical discussion with its customers. It has simply made the decision.


> I’ve come to assume all major cloud hosting providers implement something like this

They have, for the past decade.

>The system that scans cloud drives for illegal images was created by Microsoft and Dartmouth College and donated to NCMEC. The organization creates signatures of the worst known images of child pornography, approximately 16,000 files at present. These file signatures are given to service providers who then try to match them to user files in order to prevent further distribution of the images themselves, a Microsoft spokesperson told NBC News. (Microsoft implemented image-matching technology in its own services, such as Bing and SkyDrive.)

https://www.nbcnews.com/technolog/your-cloud-drive-really-pr...


> but Apple has said that the scanning is done on-device

Yes, but only for images being uploaded to iCloud.

> meaning whether or not you upload your photo library to iCloud, your local photos will be scanned

Not in the currently proposed implementation, if I understand correctly.

Disclaimer: All my information about this thing is from news articles; I might be misunderstanding the details.


So child molesters will disable iCloud upload and innocent people will have their privacy violated?


That is one plausible outcome here, yes.


Where does it say OCR is used?


the gp comment doesn't say that OCR is used. It says that the devices have the capability. Which is the case for recent iPhones : https://9to5mac.com/2021/06/10/how-iphone-live-text-ocr-work...

There is so much rich information in your photos, from memorable places you’ve visited to handwritten family recipes. iOS 15 uses secure on-device intelligence to help you discover more in your photos, quickly find what you’re looking for, and relive special moments.


My bad, I don't know how I missed that


I am sure the product people at Apple are seeing $$$

1) Scan devices for copyrighted music: to play your music you need to have purchased it on Apple
2) Scan devices for copyrighted videos: to play your video you need to have purchased it on Apple
3) Scan devices for copyrighted photos: purchase an Apple license to use this photo
4) Scan devices for copyrighted text: purchase an Apple license to use this text

Apple - Think Licensing


The only problem is that you think a private corporation, and not the government YOU elect, should be in charge of your privacy. Insanity. This whole discussion is symptomatic of the whole tech bro movement. Everyone expects that the government THEY elect has the worst in mind for them. If you are really scared of your government, you have to change the people that govern you and not build arbitrary tools to "secure" your privacy.


In the United States, a slim majority controls much of the country's legislative houses. That indicates, at the least, that even if you don't have to right now, there's a good chance that in your lifetime you will have to treat the government as adversarial.

In all honesty I think everyone should treat the government as an adversarial force when it comes to respecting your privacy or rights. Much of the Bill of Rights was introduced as a response to concerns about government overreach: built-in protections to impede totalitarianism.

In that regard, you can't simply rely on the government to legislate towards your interests and assuming that they will is foolish. Creating tools to secure your privacy is essential, not arbitrary.


The government has already done plenty of things without our knowledge, PRISM and the Snowden revelations. No matter who you elect the government will keep doing these things and will only expand the scope, and 99% of people won’t care, and the rest are labelled “the screeching voices of the minorities”.

Apple made privacy their selling point (Privacy. That’s iPhone), and that set the expectations of privacy people have with their products.

“Tech bro movement” what?


> Everyone expects that the government THEY elect have the worst for them in mind.

In basically every election, almost half of the voters voted for the loser.


Everyone should be at all times wary of government power, and no, saying "just elect better people" is not a solution.

Government should be limited in power and scope and never allowed to expand beyond them. Unfortunately, since the 1930s we have allowed the US government to expand to a size that is untenable and at odds with the concept of liberty itself.


Imagine if home builders, under pressure from law enforcement and government bodies, included built-in devices in your home that monitored and analyzed your behaviors and interactions, just to 'keep people safe'.

And you couldn't uninstall them, even though you own the home.

And you couldn't know when a human was reviewing 'suspicious behavior' telemetry, which included images, video and audio of you and your family.

And you couldn't control anything about when and where this happened, except by moving house.

And all of this was framed as a 'safety' problem, and if you complained about this, you were considered one of the 'screeching voices of the minority'.


Not during construction, but people sure do put devices in their homes that tick all those boxes, and you know what? They mostly don't care.


They voluntarily install those cameras in their homes and it isn't for the express purpose of government surveillance.


Just like they voluntarily buy iPhones …


Yes, but they don’t buy iPhones for their surveillance capability. In fact, many buy them because Apple tells them that they, as customers, will NOT be surveilled.

Apple sells this idea so strongly that they say they treat privacy as a fundamental human right.


What devices are you talking about?

specifically this box "And you couldn't know when a human was reviewing 'suspicious behavior' telemetry, which included images, video and audio of you and your family."

and this box: "And you couldn't uninstall them"


How do we make them care? Or more politically worded, show them it is in their interest to care?


lobby, or run for their office.

edit: er, i think i misunderstood your question.


1984


This is (rightly) causing enormous damage to Apple's brand and undermining one of the key messages they are using to promote the iPhone. What is more I'm sure Apple's executive team would realise that this would be the case.

I can't help but feel that someone has wielded a 'big stick' to get Apple to do this. If not it's a huge misjudgment.


How do you know this damages Apple's brand? The HN community is very, very different from the general public with regard to privacy.

Maybe most of the public does not understand privacy in this absolute sense? Maybe the public even supports scanning photos to find child sex abusers?

It could be a fact that the outrage is limited to a very small number of individuals. I won't be surprised if the number is small even among tech people.


Yeah, I am pretty certain if I asked my friends what they thought of Apple's new photo scanning privacy concern they'd be like "What? Not heard about that" and that would be about it, move on continue loving their iPhone.


privacy is something Apple markets a ton to regular folks. even on billboards!

Apple likely did market research before they chose their ads and decided that privacy was of mass interest.


Apple is a trillion dollar business and will find novel ways of either burying the feature or whitewashing it as something we all need. I imagine if they do e2e encryption of your photos they can spin this as being a necessary feature "to keep you and your family safe".

I doubt they have any worries about tech news being critical of their policy. People will generally trade privacy for safety; I've seen this as a constant theme since 9/11. There is a lot of apathy around who has your data these days. I talk about it with friends, and more often than not the attitude is "well, google/facebook already know everything about me, doesn't matter anymore".


The real question is whether they can keep their pro-privacy image intact with this change.

Also, if they don't then does it even matter at this point?

It's possible that even if Apple dropped all pretense and started working with authoritarian governments to compromise reporters the general public in the west still wouldn't care.


Can't see how "we test all your photos on your phone" squares with the whole "iPhone protects your privacy" message, but we'll see.


Hmm, not sure it is that simple. This is also how it started with Google and ads. First the technical people complained.

But then you know when you talk with a friend that is not technical and they just ask: what do you think about the new iPhone/new Apple launch?

Well so far the answer was: yeah, great you will have privacy.

But from here on the answer might be: oh, be careful, they are scanning your phone. If this turns into a kind of self-repeating meme, the details will be lost, but the key phrase that Apple is looking into personal photos will remain.

This could, potentially, do a lot of brand damage in the long run.


/r/apple is absolutely furious.


I keep reading that taking steps to prevent CSAM is doing "huge damage" to their brand. Or that blocking minors on a child account under their parent from sending nudes is doing huge damage.

Really?

You know some late-night comedian is going to do jokes about pedos having to switch to Android. You think that is damaging to Apple's brand?

My guess is the Android folks will follow eventually (as usual).

I'm a parent. Even for those of us who are into privacy etc (yes, I did the early PGP key signing parties, EFF / ACLU stuff etc) I'm having a hard time seeing how this damages Apple's brand. I don't want this crap being sent to my children - PERIOD. If they are on a child account PLEASE screen it.

Folks - pay attention to the kind of laws that will get passed and do get passed. Most folks will throw away a lot of civil rights for these types of issues.

I found the arguments against this surprisingly uncompelling. I saw an HN article about how Apple is committing felonies, etc.; it just didn't seem well founded. And everything is so hyperbolic and over the top, it's insane.


I agree that the idea of the AG bringing felony charges against Apple employees because of their approach to dealing with CSAM is pretty far-fetched. But the article you're referring to [0] is correct to point out that Apple will be silently exfiltrating unencrypted data from its users' devices to be reviewed by its employees, meaning that they'll be viewing arbitrary content from Apple users' devices without their knowledge or consent. That's not speculation, it follows directly from Apple's explanation of how the system will work, with the only safeguard being Apple's purported one-in-a-trillion false positive rate for the on-device scanning that causes the data to be sent to Apple.

[0] https://www.hackerfactor.com/blog/index.php?/archives/929-On...


You are confusing encryption with access to encryption.

Apple ALREADY has access to all your iCloud photos. Period. How do you think they offer them to you via the website and various sync services?

So they can encrypt them at rest or in transit, but their KMS or whatever gives them access ALREADY to these very same photos.

This illustrates I think how bad this convo has been from the HN side. A lot of bad info out there on this which makes the totally overblown responses even worse.

Heads up: your phone ALREADY uploads photos to iCloud if you let it, and those photos are accessible by Apple ALREADY.


While logical to assume, that's not a given. I think some of us want to believe that those images are uploaded to the cloud encrypted, with the keys staying on your devices. And the ability to sync between devices just means adding more keys to your account. So unless you know something we don't, it's not 100% certain that they sit unencrypted on Apple's servers, or that Apple has the keys to decrypt them.

It is sound judgement tho, to assume that they do.


They are explicit about this.

"iCloud content may include email, stored photos, documents, contacts, calendars, bookmarks, Safari Browsing History, Maps Search History, Messages and iOS device backups. iOS device backups may include photos and videos in the Camera Roll, device settings, app data, iMessage, Business Chat, SMS, and MMS messages and voicemail. All iCloud content data stored by Apple is encrypted at the location of the server. When third-party vendors are used to store data, Apple never gives them the encryption keys.

Apple retains the encryption keys in its U.S. data centers. iCloud content, as it exists in the customer’s account, may be provided in response to a search warrant issued upon a showing of probable cause, or customer consent."

So unless you doubt Apple's own guide, they maintain the keys and will provide this info in response to government requests.

They handled requests for 31,000 accounts in the last 6 months based on their reporting and provided data in 85% of those situations.


The AG bringing felony charges isn't just far-fetched, it's ridiculous. Does anyone think a trillion-dollar company with a giant legal staff hasn't vetted this in a hundred different ways? They are probably going country by country to validate legality. It will not be implemented anywhere they don't find that it is 100% legal.


The irony: my guess is many countries will require this or block E2EE. The idea that governments or the public are against this seems unlikely.


You may be conflating two features. 1 - Messages fix. 2 - scanning your entire Photos library and sending hashes to an unaccountable, no oversight nonprofit (NCMEC) to be compared with CSAM hashes - with no way to contest what you send.

1 is arguably a good feature (though it's very intrusive - it has a benefit).

2 is a monstrous invasion of privacy. It has no benefit to you, the user, only a massive potential threat.

Finally, NCMEC was founded by someone who admitted that, if judged by the very law he helped pass, he'd have been considered a sex offender when he was dating his then-girlfriend.


> scanning your entire Photos library and sending hashes to an unaccountable, no oversight nonprofit (NCMEC) to be compared with CSAM hashes - with no way to contest what you send.

This is simply incorrect. It's clearly documented how this works. 1) only files about to be uploaded to iCloud are scanned, 2) Apple maintains the hashes for comparison in their servers (not NCMEC), 3) only after multiple photos match is the account flagged for review, 4) they are reviewed by Apple to catch false positives, 5) only after match is confirmed is the account disabled and the user reported to NCMEC, and 6) you can still appeal to Apple to have your account reinstated.

https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
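For intuition on step 3, the threshold is enforced cryptographically: per Apple's technical summary, each matching image's safety voucher carries one share of a per-account key under threshold secret sharing, so the server cannot decrypt any voucher until enough matches accumulate. Here is a toy sketch of that threshold property using Shamir secret sharing over a prime field (all names and parameters are illustrative, not Apple's actual construction):

```python
import random

# Toy prime field; a real scheme would use cryptographically chosen parameters.
P = 2**127 - 1

def make_shares(secret, threshold, n):
    """Split `secret` into n shares; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    # Share i is the polynomial evaluated at x = i.
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the polynomial's constant term."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (P - xj) % P      # (0 - xj) mod P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

# Each "matching image" would carry one share of the account's voucher key:
key = random.randrange(P)
shares = make_shares(key, threshold=3, n=10)
assert reconstruct(shares[:3]) == key   # at the threshold: key recovered
assert reconstruct(shares[:2]) != key   # below it: an unrelated field element
```

Below the threshold, the interpolation yields a value unrelated to the key, so the server learns nothing about it; this is the mechanism behind "only after multiple photos match is the account flagged".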


> scanning your entire Photos library

The parts of it being uploaded to iCloud, not the whole thing, yes?

> and sending hashes to an unaccountable, no oversight nonprofit (NCMEC)

As I understand it, the whole point is that the hashes are computed on-device and compared to the NCMEC database on-device. Is my understanding incorrect?


> The parts of it being uploaded to iCloud, not the whole thing, yes?

This is policy, not capability. The capability is each and every image can be scanned.

> As I understand it, the whole point is that the hashes are computed on-device and compared to the NCMEC database on-device. Is my understanding incorrect?

You are incorrect, but also the concern is NCMEC's database's accuracy. Also, going by other NPOs the government has taken an interest in, NCMEC is very exposed to government control. They can easily decide to allow the government to slip a few extra hashes into their database and tell no one. They can do this in exchange for protecting their NPO status, a national security letter, anything. We will never know, and if we find out, we will have no way to hold them accountable. Ever.

Anyway, the NCMEC database is NDA'ed and a secret. It's not open to review or auditing. It's also grossly inaccurate and nearly useless.

You are taking these points entirely out of context and in a vacuum. It reads a bit like bad faith, like trying to win an Internet argument at any cost.


> This is policy, not capability. The capability is each and every image can be scanned.

Are you familiar with the implementation? If it's in the uploader, then the capability for each image to be scanned sounds like it would involve a bunch of code changes.

But let's posit that the implementation just checks the "uploaded to iCloud" flag and hence this is "just policy". I agree that this is concerning, but I think it's important to distinguish between "could do X" and "is doing X" when describing a situation.

> You are incorrect

I would love to be enlightened here. Can you please point me to an explanation of how my understanding is incorrect?

> but also the concern is NCMEC’s database’s accuracy

I absolutely share this concern.

> You are taking these points entirely out of context and in a vacuum.

No, I don't think I am. I think there is enough heated rhetoric going on here, with people mis-stating what is actually going on to justify how they feel about it, that it's worth being very clear about what the problems here really are. Otherwise it feels like people are arguing against strawmen and makes it too easy to dismiss concerns that are very pertinent.

The original article for this thread, by the way, does a good job here.


Agree to disagree on the rest, I don’t feel there’s more to say that would convince you or vice versa.

But I will respond to this point:

I said:

> Anyway, the NCMEC database is NDA'ed and a secret. It's not open to review or auditing.

NCMEC itself would never go for the entirety of the database sitting on each phone. It's a jailbreak away from "Them" having the database and being able to check their images before uploading them places. (This overlooks the fact that it's almost universally accepted by everyone outside NCMEC who deals with NCMEC that "They" already have the database several times over.)


> This is policy, not capability. The capability is each and every image can be scanned.

That capability is in every software/app that has access to your photos and to the internet, on any device.


> The parts of it being uploaded to iCloud, not the whole thing, yes?

If you leave your phone on default settings, that’s all of them. In fact you need to turn off a lot of things. Turn off backups, Photo Stream, Files, Mail Drop, album sharing (make sure you don’t accept any invites to shared albums or you will get flagged) and I’m sure there are more iCloud integrations I’m not aware of.

It’s actually quite hard to not use iCloud, by design.


Thank you, that is useful context!


Your understanding is incorrect. The NCMEC database is unavailable to your device. The hashes are checked by communicating with a server. In particular, only the server ever learns if there was a match, not your device.


Wrong.


No - the amount of bad info in these discussions has gotten incredible. The more overblown the claims the more trashy the data it seems they are based on.

It's funny, it is literally in the first paragraph of the system overview in terms of how the database of hashes is stored - "which is securely stored on users’ devices"


You misunderstand what that quote is referring to.


No, he's not. Read the documentation. The hash database your photos are compared to will live on your device, and scanning happens entirely on-device.

"Apple further transforms this database into an unreadable set of hashes, which is securely stored on users’ devices."


Read further down:

> the blinding step using the server-side secret is not possible on device because it is unknown to the device. The goal is to run the final step on the server and finish the process on server. This ensures the device doesn’t know the result of the match, but it can encode the result of the on-device match process before uploading to the server.


This is a confusing paragraph, but the "final step" is not matching per se. The final step reveals whether a match occurred - but it's clear that the database exists on-device and the database lookup occurs on-device.

Note the last part: "... but it can encode the result of the on-device match process before uploading to the server." The result of the match already exists before it is uploaded to the server. This is also made clear in the diagram.

Depending on how you define "matching," it either occurs on-device or no explicit matching actually occurs. The server gets a payload for each uploaded image and attempts to use its server-side secret to derive its decryption key from the header. If (and only if) the image was a match - as determined by the blinded on-device comparison - it will now have a key to decrypt the rest of the payload (for review). Note that neither client nor server explicitly compare the real database hash to the image hash. The comparison is absorbed into crypto "properties."
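The "comparison absorbed into crypto properties" idea can be sketched with a toy Diffie-Hellman-style blinded match. This is only an illustration of the principle; Apple's actual protocol uses elliptic curves, NeuralHash, and cuckoo hashing, and every name and parameter below is invented:

```python
import hashlib
import secrets

# Toy group parameters (far too small for real use).
P = 2**127 - 1   # Mersenne prime
G = 3            # base element for the toy group

def h(data):
    """Stand-in for a perceptual hash, mapped to an integer."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def derive_key(x):
    return hashlib.sha256(str(x).encode()).digest()

# --- Server setup: blind each database hash with a server-side secret s ---
s = secrets.randbelow(P - 2) + 1
database = [b"known-bad-image-1", b"known-bad-image-2"]
# This blinded table ships to the device; the device can look entries up
# but cannot tell which hashes they blind.
blinded_table = {i: pow(G, h(d) * s, P) for i, d in enumerate(database)}

# --- Device side: for image hash x at table slot i, derive a key blindly ---
def device_voucher(image, slot):
    r = secrets.randbelow(P - 2) + 1
    client_key = derive_key(pow(blinded_table[slot], r, P))  # g^(h(d)*s*r)
    header = pow(G, h(image) * r, P)                          # g^(h(x)*r)
    return header, client_key  # payload would be encrypted under client_key

# --- Server side: re-derive the key; it matches only if h(x) == h(d) ---
def server_key(header):
    return derive_key(pow(header, s, P))                      # g^(h(x)*r*s)

hdr, ck = device_voucher(b"known-bad-image-1", slot=0)
assert server_key(hdr) == ck    # match: server can decrypt the voucher
hdr2, ck2 = device_voucher(b"vacation-photo", slot=0)
assert server_key(hdr2) != ck2  # no match: server's derived key is useless
```

The device only ever touches the blinded entry g^(h(d)·s), so it cannot tell whether its derived key will match the server's; the server recovers the voucher key exactly when the hashes agree, which is the behavior the quoted paragraph describes.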


Do you agree with these two statements?

1. For every image that's scanned, whether matching or non-matching, some data will be sent to the server.

2. Your phone never learns and has no way to tell which images match and which ones don't.


Ugh. Feel free to expound any of your vague replies in this thread with actual explanations that make sense under any reasonable interpretation of "on device."


My expectations from Apple would be scaled way back if they weren't preaching fairly consistently that they treat privacy as a "fundamental human right". They've marketed this as not only a competitive difference but also a moral imperative.

To claim that you treat privacy as a fundamental human right is an extraordinary claim that requires significant effort and action to back up. To me, "fundamental human rights" apply to everyone - all humans - including children.

Their commitment to this was already called into question for several reasons, including their partial commitment to E2E as well as their actions in other countries (i.e. China).

When you give your users privacy with numerous conditions attached indicating all the times they don't have it, you aren't giving them privacy. Full stop.

It's like going to someone's home, having them tell you that they are champions for privacy and then going to the bathroom and seeing a damn camera attached to the wall. "Oh, that, don't worry. I only look at the footage if something gets stolen."


Apple's brand won't be sullied until they give-in to a government order to flag other types of illegal images (gay porn in Saudi Arabia, say, or pictures of the Tank Man in China) and are caught doing it.

Apple's rejoinder is that they will simply refuse to do that. And that's great, until you consider that all the iPhones are made in China and China is more than willing to apply immense pressure including but not limited to shutting down Foxconn if they feel strongly about it.


Most of the major tech co's have already adapted to China's requests.

That in general means hosting data about chinese nationals in china, usually using state govt controlled datacenters, and making sure the keys to decrypt content are also local and accessible to those state controlled employees.

You should have little to no privacy expectation in china as an example.

For example, AWS is careful to use this language about its China regions: "Amazon Web Services China (Ningxia) Region operated by NWCD"

They used to block KMS services in China as well.

Apple has said it will (generally) follow the law in the countries it operates in. Until we say Apple can make its own laws, that is probably what it'll have to do.


Apple already acquiesced to China in that manner, yes. They store all iCloud data in China and they gave the Chinese government the encryption keys.

This is a bit different in that they're scanning files on client devices, /not/ solely in the cloud.


Also a parent, and I couldn’t more strongly disagree with you.

We cannot protect our children by building a dystopia for them to grow up in, and normalizing this kind of invasive spyware on every device is pretty much guaranteeing that.


> I don't want this crap being sent to my children

You can always not buy your child a device.

Parents will recoil in horror at the suggestion: being told what to do and limiting their child's freedom! Indeed, welcome to our world, where we suffer huge affronts to our freedom and privacy in the name of "the children".


>I found the arguments against this surprisingly uncompelling

I think it just goes to show that people don't actually care about privacy and civil liberties. You can't argue against "think of the children" without being labeled heartless or a pedophile so no one with true influence will argue against it since nobody wants to die on the child porn hill. This is what happens when your thought leaders are all cynical and value money and power above all else.

While I'm a privacy advocate I could see that the arguments were fruitless. The popular conception of the constitution today is that it is a joke. People mock liberties like freedom of speech so you just know privacy is something people do not care about.


People do care about privacy, but not in that absolute sense. You think privacy is 100% or 0%, but the vast majority of the public don't.


exactly. This seems a pretty low-intrusion way to deal with an issue that is important to a lot of folks. They already had access to these photos on iCloud; this was, in a way, something to keep the scanning OFF their servers.

I really continue to doubt this is going to have huge damage to apple's brand. More likely - others will rush to copy this (or be forced to by govts elected by people who want this).


So you would be in favor of installing cameras in all homes that detect child abuse, murder, and rape in an automated fashion? They could use microphones to detect people in distress. This would prevent many more heinous crimes than what Apple is proposing, and it would only be used for detecting horrible acts.

By your logic this would be acceptable and we could all still claim to value privacy.


We are discussing a specific set of technologies.

That said, cameras are already spreading pretty quickly (check out the Ring doorbell and friends) and homeowners are voluntarily registering these with local police departments.

So yes, people don't mind if they are recorded going into and out of their own house, and voluntarily let police review this footage.

Again, I think folks are overstating the "huge" damage to Apple's brand.


People haven't had ring forced on them inside their homes yet.

Like I said in another comment, the principle is whether forcing surveillance on people to prevent crimes is ok. The set of technologies is irrelevant, Apple is going through personal information that has not been voluntarily shared with the public by the content owner to detect criminal activities. I don't see how that is different from installing microphones and cameras inside people's homes with the passive ability to detect crime. Why can't a landlord install this kind of technology in their tenant's apartments?


I don't think so. This is case-by-case reasoning. There is no straight line of logic leading from scanning iCloud photos to installing cameras in homes.


The principle is that it's fine to install surveillance if we are preventing horrible things from happening.


> You know some late night comedian is going to do some jokes about pedo's having to switch to android. You think that is damaging to apples' brand?

No just the start of a whole stream of attacks from WhatsApp / Facebook:

https://www.theverge.com/2021/8/6/22613365/apple-icloud-csam...

Also just read the full range of comments.

The fact that you're saying elsewhere that everyone is misunderstanding and it's all overblown is sort of making my point that it's affecting the brand.

Plus you're saying other firms capture images so that's fine - no! Those other firms haven't made privacy a central feature of their brand.


There is a big stick coming from the EU (and someone said the same is coming from the UK).

My guess is that Apple designed this privacy-protecting system so that they could deliver the solution on their own terms, and perhaps to lead the way on how this could be done before they are hit with a cookie-banner-popup-level solution from bureaucrats.

Regardless of your opinion on whether they should scan or not, both the EU and the UK now have a reference design that protects people's privacy, and still manages to either identify the people who own that material or make it more inconvenient to own it.

As we like to say, "deplatforming works", and in this case a good, useful tool for people who own those pictures is no longer available, and they have to resort to jumping through hoops and relying on more inconvenient solutions. The latter might not solve the root problem, but it introduces friction that gets in their way.


A few months ago, everywhere you looked people were promoting apple as the privacy company. It was an amazing amount of viral goodwill and trust


As I, and a small minority, have pointed out for years: Apple's privacy marketing has always been fake; this is a PR failure more than a technical failure. I had nearly given up on Hacker News because this idea was so controversial here in the past, and I'm not sure I should have stayed, since it took this long for everyone to notice.


> This is (rightly) causing enormous damage to Apple's brand

Is it, really? I think they feel this will blow over.


I think that, so long as there is an argument for arresting people who possess child pornography, the positives will outweigh the negatives in the eyes of the people in charge. Preventing child abuse was more important than everyone's complete privacy in Apple's opinion. Maybe it was because one is directly related to the outcome of a human life and the other is a nebulous construct to most people outside the technology sphere. But those were ultimately Apple's (or the government's) values in this case.

I don't think the phrase "giving up some amount of privacy to stop child abusers" will sound unreasonable to the many people who don't really understand privacy. The damage to Apple's brand seems to be coming mostly from technologists and the privacy-conscious. Even if only a handful of people are justifiably arrested because of this change, making the tradeoffs not worth it in their eyes, there are still many other people who would believe that catching even a few more child abusers was the right thing to do.

Reducing the reasoning to "but think of the children" ignores the fact that there are still reasons that CSAM is declared illegal. It seems that most people on these threads are focusing on the fact that this is a privacy disaster - which it absolutely is - but until there is a viable argument that on-device privacy is more important to the general public than shutting down a market that actively produces evidence of child abuse, I'm pessimistic that this will be walked back.

What is needed are studies correlating the spread of CSAM with actual CSA, but they do not exist. At this point the correlation seems to be taken as fact. The taboo around the subject appears to have disincentivized such studies.


I can think of excellent arguments for arresting people that are planning terrorist attacks using Signal.

What I can't think of is why scanning everyone's private messages just in case they might be those terrorists is a worthwhile trade-off in any kind of free society.

There are lots of ways to catch the bad guys, and we'll never catch them all no matter what we do. In countries with a concept of individual privacy before the state, there need to be limits on how much the state gets to snoop.

There are plenty of countries without any such privacy, so it's not like we have to guess what the far end of the slippery slope looks like.


You’re right; this feels like a “do what we tell you or we go after you for <insert any of a thousand things Apple is guilty of>” situation.


> This is (rightly) causing enormous damage to Apple's brand

Apple’s brand according to HN has already been damaged by too many trade-offs: walling off the App Store, the macOS system disk, availability of stand-alone OS updates, and more.

Extreme temperatures and wildfires are to climate change as tangible customer data leaks are to Apple’s implementation of security and privacy around its ecosystem.

That is, CSAM scanning does nothing to prompt a 114 degree day in an iPhone user’s mind.

Until Americans are hauled into custody the way Belarusians are right now, this will not enter the customer perception of Apple’s brand.

And if that future lies ahead, it will be way too late to matter.


I'm not sure about the brand, but their stock seems to be doing fine. Barely a dent.


I was actually going to switch over to their ecosystem on the next upgrade (mostly for the watch, partly for the blue bubble when texting; sadly, yes, it actually matters in the dating market...)

I'm now certainly not going to and am loudly telling all my friends why.


Surely sticking with Google is not the logical reaction to this. I hate that Apple is doing this too, but Google is still in a league of its own when it comes to violating your privacy.


Definitely all ears for options that still allow 2FA apps like Authy/Okta/Google Authenticator, needed for work.

i.e., AFAIK Purism is not an option.


IIRC those authentication apps are just TOTP, which you can run on anything (the algorithm is trivial; there are even bash implementations).
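For the curious, RFC 6238 TOTP really is tiny. A minimal Python sketch, using the defaults most authenticator apps use (HMAC-SHA1, 30-second steps, 6 digits):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    pad = "=" * (-len(secret_b32) % 8)
    key = base64.b32decode(secret_b32.upper() + pad)
    counter = int(time.time() if t is None else t) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", t=59, 8 digits
assert totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8) == "94287082"
```

The secret is the base32 string you get when setting up an authenticator app, so a one-liner like this works as a drop-in on any device.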

There's both ungoogled android or if you like Alpine Linux a lot there's PostMarketOS.


awesome, thank you for the leads. I'll do more learning :)


Although I'm an HN reader, I'm in the camp of "since this doesn't affect me personally (unless Apple's algorithms mislabel my kids' photos as offending), Apple is helping make the world a better place" - not enough to make me switch away from the gold standard of today's smartphone experience.


These features are coming later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey.

The elephant in the room is "this will be on macOS". The fact that the user cannot have root access to an Apple mobile device is well established by now. But to have my desktop/laptop computer do things on behalf of a third party, whatever the "logic", is total madness. This is not some SaaS app. When I buy a car I expect that I am in control. When I buy a computer I care about my control over my property.

What is this madness? Are due process and "innocent until proven guilty" bedtime stories now?


To be honest, this entire episode is starting to radicalize me.

Everything stallman has said about proprietary software is slowly coming true, and I must admit that now I’m reevaluating a lot of the technology in my life.


As a long time iPhone/Mac user, this change has me seriously looking at Android and Linux for my next devices. Only a couple of months ago I was wondering whether I was too invested in Apple's ecosystem, and what it would take to push me to a competitor. Got my answer surprisingly quickly.


> Next, iOS and iPadOS will use new applications of cryptography to help limit the spread of CSAM online, while designing for user privacy. CSAM detection will help Apple provide valuable information to law enforcement on collections of CSAM in iCloud Photos.

The CSAM scanning is only being added to iOS and iPadOS. If it does get added to the Mac at some point, it will be part of Photos.app, not the general OS.


People are forgetting here that macOS is already scanning all your unknown files (not only images) and sending hashes to the cloud.

https://sneak.berlin/20201112/your-computer-isnt-yours/


Not forgetting. "Now, it’s been possible up until today to block this sort of stuff on your Mac using a program called Little Snitch (really, the only thing keeping me using macOS at this point)."

I have used Little Snitch since version 2. The last macOS on our company (and personal) computers is Catalina 10.15.7. Switching production to Linux is planned for the near future. We don't care about "how powerful and beautiful the M1 is" or how "the iPhone 13 will break sales records".

The normalization of digital surveillance and the digital police state is everywhere. As people who create consumer technologies, we take a stance: we inform our customers about this and help them make rational decisions for the future.

Apple, Google, Microsoft, Facebook, Twitter have historically unprecedented power over people.

Governments are caring more about using this power over people instead of protecting human rights.


OCSP isn't "scanning". It checks if a certificate is revoked.

https://blog.jacopo.io/en/post/apple-ocsp/


My source was bad; the scanning is related to the antivirus checks and is better explained here: https://eclecticlight.co/2020/08/28/how-notarization-works/


Read the actual quote from Apple (https://www.apple.com/child-safety/): "These features are coming later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey." And please, don't be naive. We are all guilty of projecting emotions onto corporate marketing tricks and behaving like children when given shiny toys.


No one knows. They could make it part of the OS. They could even hide the fact that it is there, like the Intel ME stuff. I am switching to Linux for desktop.


Will I be able to avoid this if I put Linux on my Mac?


Presumably yes, this is still OS-level stuff, not kernel-level. Provided you can actually install Linux; I don't think there's a single stable distro that runs on M1 Macs yet.


> One of the most powerful arguments in Apple’s favor in the 2016 San Bernardino case is that the company didn’t even have the means to break into the iPhone in question, and that to build the capability would open the company up to a multitude of requests that were far less pressing in nature, and weaken the company’s ability to stand up to foreign governments.

I think that a crucial missing piece is the FBI's argument: Apple is fully capable of developing and signing a "law enforcement iBoot". IIRC, the FBI was even willing to have someone else develop the software and only ask Apple to sign it–which they definitely have the capability to do; only a policy of not signing other people's software stood in the way.

If we agree that Apple was right in 2016, it stands to reason that Apple cannot be compelled to modify its CSAM filter to capture arbitrary contents, or report it at lower thresholds, or expand it beyond iCloud Photos (like to Messages itself). The amount of work they would have to do for it seems like it would be even higher. There are whole infrastructure pieces that just don't exist. What am I missing?


I think the biggest thing you are missing (from my perspective) is that to make those other policy changes from the San Bernardino case would require large investment from the company employees and would be externally visible.

A change to the policy of what kinds of images are scanned is opaque by law, since none of the Apple employees involved can have any access to the database of hashes being used. There is also no realistic way for a consumer to understand the true false-positive rate, and no ability for a third-party organization to distinguish false positives from true positives on non-CSAM images leaving the device.

Additionally, these are just problems in the US. Other governments can and will mandate the use of this tool for other kinds of media they find objectionable in their borders.

And there will be bugs that expose people to the results of these other governments, like the Taiwan flag emoji crashing case. https://www.wired.com/story/apple-china-censorship-bug-iphon...

The large investment in this system is almost certainly the infrastructure to get it on phones, report the results, and run it in scenarios where it will minimize battery impact. Which on-device photos it runs on does not strike me as a technical challenge once the tool is built, only one with policy implications. And the easy answer to that will be just to check some flag if the phone is in a country that requires all pictures to be scanned.


> to make those other policy changes from the San Bernardino case would require large investment from the company employees and would be externally visible.

To that point - I generally think that engineers at ostensibly-privacy-minded companies like Apple are competent, well-intentioned, and good canaries. If I were to open Twitter and see a lot of people "seeking new opportunities" from Apple's security team and not able to give their reasons? It's very possible that a backdoor was built contrary to public statements, and they could not condone the discrepancy.

But here, not only is the list of hashes editable with merely a configuration change, but it is fundamentally a list of hashes that is designed to be secret and non-auditable and supplied by a non-auditable supply chain. In fact, the proponents of this program would argue "don't give the Apple engineers and product managers access to the hash list, nor access to whether test images are matched by the hash list, because it could be used for nefarious purposes if they themselves are perpetrators."

So at any time a photograph commonly used to criticize a regime or commemorate a specific event could be added to the list, and there would be literally no way a well-intentioned engineer even inside Apple could even know about it. This isn't just a technology that could be applied with technical effort to make a backdoor, it's a deployed backdoor that opens up all our devices to supply chain attacks, plain and simple. A state level actor would simply need to convince someone at NCMEC to insert something into the un-auditable hash list (whose source images are never to be looked at in totality by design), then compromise any person or computer in the law enforcement-side reporting pipeline to exfiltrate the identity of anyone with the images in question. That's absurdly dangerous.


Matches aren’t automatically reported to authorities. They first go to Apple, and a match on an absurdly non-CSAM picture will be noticed. If it passes through, it will be reported as a tip to the NCMEC, which will also evaluate the pictures; and only then can it be forwarded to the government.


…and the Apple employee in China will risk her career by flagging Winnie the Pooh as unobjectionable?


This is many steps removed from the current situation, which is that the feature rolls out in the US and photo hashes are provided by the NCMEC. Please describe how you think the system would work in China.


A change in policy regarding the pictures that are scanned is a change of NCMEC policy, which would be the same kind of high-level court dispute.

I’m not knowledgeable enough to comment about other countries’ policies, but the fact remains that no one has a law enforcement iBoot, as far as we know; and if one exists, CSAM filtering was never the opportunity that repressive regimes were waiting for.


My understanding is that existing NCMEC policy includes non-CSAM photos that were found alongside CSAM. I'm not talking about evil government attacks surreptitiously criminalizing your meme images in the US; I'm worried about a boring dystopia where, over time, the false-positive rate is much higher than anyone will admit, and Apple employees end up reviewing tens of millions of false-positive photos a year, sampled at random from people's iPhones.

And a more direct dystopia outside the US where they have already criminalized your memes. What stops China from saying we have our own database of images objectionable to the state that you must scan devices for?


This type of persistent, continuous, opaque scanning for "objectionable material" makes prior discussion about whether law enforcement should have the ability to boot into a criminal's phone seem almost quaint by comparison.


I hope that there will be a transparency report for how many reports happened and how many were found to be false positives.

Regarding China, the answer is probably “nothing”, but the system as it exists is not amenable to what I imagine their goals would be: only monitors the photo library (not messages) and requires a certain threshold of matches. The question is, I think, what today prevents China from requiring devices to attempt to match a database of pictures with a mandated match algorithm and a mandated reporting server. I believe (admittedly without any knowledge of the matter) that the same standard would apply to both.


> If we agree that Apple was right in 2016, it stands to reason that Apple cannot be compelled to modify its CSAM filter to capture arbitrary contents

The government or whoever will regularly add hashes to the list. Those hashes can be for anything and it’s not like Apple has any way to verify what they are for. Apple doesn’t really have oversight on what they’re doing if I understand correctly, all the trust is in whoever creates the list of hashes


It's not hard to imagine that authoritarian regimes all over the world will soon have their own special list of hashes of "illegal" images. Seems like a great way to bury any photographic evidence of government abuse.


The government will have to overcome the same issues compelling NCMEC to add pictures to its data set that it would have to overcome with Apple.

Additionally, even by compelling the NCMEC, the government cannot increase the scope of searching or lower the match threshold.


It's a private nonprofit that appears to be wholly funded by the government. It's reasonable to expect that if they don't "play nice" with the DOJ, their funding could be affected. Incentives are important.

https://en.m.wikipedia.org/wiki/National_Center_for_Missing_...


But it’s not something that the NCMEC can do in secret. Apple gets the CSAM reports and decides whether something is worth sending as a tip to NCMEC.


I wouldn’t think the government shares the images with NCMEC, wouldn’t they just say “we captured some hard drives, add these hashes to the list”?


My understanding is that the NCMEC actually has the pictures, and they’re basically the only people in the US who can legally hold onto them.


I think HN readership is so used to living in the digital world that many forgot we can live without internet as humans.

ANY connected digital media can become a 1984-style spying device if you don't have full access to it or even if you have that but you are not an experienced electrical engineer.

The only defensible platforms are nondigital media and airgapped computing. I, for one, wouldn't shoot a nude with any digital device nowadays. Only exception would be a camera that gets connected only to airgapped computers.

But we are losing that too. In a short time we won't even be able to pay for that Polaroid or to buy a Librephone without being traced in some digital form.

We are losing all the battles but we need at least to prioritize. The cloud is lost, connected devices are lost. We need at least to keep cash payments, and to pressure the government to break up digital monopolies.

I don't want to sound like desperate luddite but I really think that warrants requirements are unenforceable in the digital world and we really really need to keep important parts of our lives in the analog one.


>we can live without internet as humans.

Sorry, but I'm not going back to pre-internet days to accomplish my life goals. You might as well tell people in 1900 that they could live just as well without electricity.

If you want to be a monk or Amish, sure. If you want to be a normal human in pursuit of self-actualization and helping the world, internet is a de-facto critical piece of your life.


If my job weren't in the IT field, I would certainly be able to live without the internet. You can live without the internet, as billions of people in the world do.

My grandma never had a computer or a smartphone; she doesn't even know how to say the word "internet", and still she lives fine. She is more informed than I am, who uses the internet: she reads newspapers, on paper, without stupid ads; listens to the radio; watches TV; calls her friends on the landline phone; and, most importantly, a thing that nowadays is lost, she goes outside and talks to people.

It seems like we can no longer talk to people; we are constantly busy with our devices. Back in the day, if you were on a train, what could you do? Read a book, or talk with the people sitting next to you. Nowadays you use your phone, listen to music with headphones, and isolate yourself from others.

The internet was a great thing, but I mean the internet of the beginning, of the '90s and early '00s, when it was a cool thing, when you could use it to learn new stuff: the internet of IRC chats, of forums, of communities.

The internet was good because it was an alternative to real life, something you did in the evening when you didn't have other things to do. Nowadays the internet has become a substitute for real life: on the internet you share the things you do, you talk with people you know in person, and then when you meet, what is there to say, since everyone already knows everything you've done thanks to social media?

Also, on the modern internet privacy is destroyed. Back in the day one rule was: never use your real name on the internet, and never share personal information. Now it's the opposite: Facebook requires you to use your real name, YouTube wants your ID to watch age-restricted videos, and they encourage you to hand over all your personal information. That is bad.

I wouldn't return to the days before the internet, but I would return to the '90s, when the internet was something good, just a cool tool you could use, not something fundamental to your life.


As a person who comes from a country where a large percentage of people live without internet, I'm not sure what your point is. These people don't do it out of choice, they do it out of necessity. They're the poorest of the poorest, and have no access to healthcare, banking and employment as an almost direct result of the lack of internet access.


That we lived most of our lives without the internet. And nowadays it seems that if the internet is not working, it's the end of the world.

> and have no access to healthcare, banking and employment as an almost direct result of the lack of internet access.

Yes, because we didn't have any of those things before the internet was invented. Or as if people who don't use the internet, like all the old people who don't even know how to turn on a computer, have no access to them.


Technically speaking, yes, anything with firmware can be corrupted. But I refuse to meekly accept that level of tech nihilism because smartphones are an invaluable tool in this modern era. I'm not going to stop having one, although the vendor may change, because the important things in my life are better stored as digital. Photos and videos of friends and family who have since passed. Physical photos will be lost in a fire but with cloud backups, they can be reprinted. Analog works, until, realistically, it's impractical. The cloud isn't lost yet. Don't give up. Please help us fight for it.

Donate to the EFF.


The internet I think is critical nowadays, and therefore I'd like to see a part of it open, or having open alternatives. Email, for one, is still in a fine place, considering the myriad of proprietary and closed services.

It's technically true, what you write: we can live life without the internet. But we can live life in a million other sucky ways too, and that doesn't mean we should. I find it arbitrary how people draw the line; such lines are fun to argue about, but no amount of good reasons makes a line not arbitrary. Abuse has a long history, and the older ways of living had their share too.


This is a great example of people being in a tech bubble. In a short while nobody except techies and molesters will remember that this is happening. Some significant number of KP men will be caught, but not the clever ones. Most iPhone users will put it out of their minds and continue using iCloud.

The bigger authoritarian countries (China, India) will demand, and receive, the ability to match against illegal images and memes, like Tank Man, Winnie the Pooh, illicit tweet screencaps, etc. No one with power in the West will really care.

The slippery slope is actually iMessage scanning, not iCloud Photos. Real-time and ML-based, it is fully capable of scanning for keywords, the same way NSA analysts used XKEYSCORE (in the ancient historical period, before they switched to ML). (Fortunately, the NSA (or others) probably won't let their ML classifiers be distributed, because they could be reverse-engineered to see how much the NSA knows.)


This is just a show of loyalty to the government to prevent any sort of monopoly action against Apple. The government actually loves monopolies because they make monitoring citizens easier. The Feds would hate for Google, Facebook, or Apple to be broken up. Much better to have everybody using a small number of platforms with government backdoor access.

Wouldn't be surprised if this was agreed to behind closed doors in exchange for other assurances from the government. The government passing laws forcing Apple to do this would be very bad PR, better for Apple to appear to do it voluntarily


> The government actually loves monopolies because it makes monitoring citizens easier.

You make it sound like the government is a single person with a plan. I don’t think people and departments in a government are very coherent and aligned, it’s a huge number of people with a zoo of mandates and everyone is fighting to be seen as somebody who makes things happen


In this case the incentives align. Much easier dealing with one or two parties than many.


>The government actually loves monopolies because it makes monitoring citizens easier.

Care to cite a source for that claim?


I wouldn’t stop where Ben Thompson drew the privacy line between “your device versus the cloud”.

Your content should be your content regardless of where it lives.

- Photos
- Texts
- Social posts
- Notes, reminders

These are all what make up a person’s life. At one point in the not so distant future all of our data will live in the cloud, are we then property of the cloud providers?


I think it's a kind of fine distinction... for now. My primary concern with Apple's proposed implementation of CSAM detection hinges almost entirely on the fact that it is performed on device. I use iCloud photos right now knowing fully well that it is subject to these scans.

I think ideally things would truly be E2E encrypted, in which case Apple or any other cloud provider doesn't have to trouble themselves about what's on their servers, since no one but the user could ever look at the contents of it. In this case a big blob of data up to the storage limits of the person's plan is the only thing they'd have to worry about.

That is also the other issue with Apple's proposed implementation. It completely circumvents E2EE and makes it entirely pointless. It doesn't preserve privacy in any respect whatsoever.

And to what end? You might capture a few people who sync their child porn to iCloud without encrypting it themselves first, but does this really make children any safer? It certainly endangers anyone using an iPhone, given that the technology doesn't discriminate about what kinds of hashes it is given, and it increases the scope of surveillance on everyone. These sorts of things are extremely hard to undo.


Not sure if I fully agree with "Your content should be your content regardless of where it lives." - the cloud operators own the hardware that hosts this content, they could want to impose limitations on it, irrespective of whether they have any legal obligations to do so. An analogy to me would be: "Yes, you can park your RV on my lot, just don't cook meth in it.".

You can still manage your own data so that it is opaque to the cloud providers - encrypt it and don't share the key with them. They'll have no insight into the blobs of data that you're uploading to them.
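To make the parent's point concrete, here is a toy sketch of client-side encryption before upload. This is an illustration only, not production crypto: it builds a throwaway stream cipher out of SHA-256 so the example stays dependency-free; in practice you would use a vetted library (e.g. `cryptography`'s AES-GCM). The point is just that the provider only ever stores an opaque blob.

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a keystream by hashing key || nonce || counter.
    # Toy construction for illustration; use AES-GCM or similar in practice.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    stream = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ s for p, s in zip(plaintext, stream))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    stream = keystream(key, nonce, len(ciphertext))
    return bytes(c ^ s for c, s in zip(ciphertext, stream))

# The key never leaves the client; the provider stores only `blob`.
key = os.urandom(32)
photo = b"raw image bytes..."
blob = encrypt(key, photo)
assert decrypt(key, blob) == photo
```

As long as the key stays on the client, the provider has nothing to scan, hash, or hand over.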


To use your own analogy- as an RV lot owner, should you have the right to install cameras in every RV to ensure no one is cooking meth?


I can certainly ask every RV owner to agree to it and they are free to accept my terms or look for another lot. Or maybe I can say that every RV will be sniffed by a dog and in case it detects something, the RV will be entered to check for compliance with the rule.

Edit: to be clear, what I'm trying to say is that there are 2 parties entering an agreement here, the owner of the infrastructure also has some rights because we recognize the rights to private property.


That's a pretty idealistic view. I guess the issue boils down to whether you believe there are things in digital realms that are illegal to possess.

For comparison, it's pretty obvious that there are certain things in the physical realm that citizens shouldn't have. So it would be wrong to claim "my things are mine and no one should be able to inspect them or take them away from me, no matter what those things are". Examples of things citizens should not possess: weapons of mass destruction and slaves.

Apple et al argue that CSAM should be digital content that nobody ought to possess. So while you're obviously technically capable of storing it, it shouldn't be legal.


> I guess the issue boils down to whether you believe there are things in digital realms that are illegal to possess.

Not at all. I'm pretty sure nobody here thinks CSAM should be legal. The topic at hand is whether that means everyone should be subjected to blanket surveillance by default.

Just because there's child abuse out there doesn't mean you can walk into a random person's house and rummage through their drawers, and by the same logic Apple or anyone else has no business digging through my phone.

It's not idealistic at all, it's simply opposition to surveillance of people who have done nothing to warrant such suspicion. I don't know about the US, but here in Germany we have the so called Briefgeheimnis. (literally: secrecy of letters). Privacy of correspondence is almost sacrosanct and can only ever be violated under extreme circumstances, not casually or systematically, that's what the Stasi used to do.


The problem with rummaging through someone’s drawers is that this is done by people who in that process will inevitably find out much more about your life than binary “has CSAM / does not have CSAM in their possession”. There’s no viable way of limiting scope of such rummaging and there is no way to make individual officers forget what they’ve seen.

Apple’s system on the other hand provides such guarantees, so I don’t see how it can be compared to these hypothetical rummaging operations.


Apple was in a very bad damned-if-you-do, damned-if-you-don't situation.

Their CSAM reporting looked awful. They needed a solution. They came up with something that's definitely more elegant and more privacy-oriented than scanning all iCloud Photos.

They went for the more complicated solution, which incidentally is one that fewer people will properly understand (or care about), therefore limiting the size of the inevitable dent in their privacy reputation.

I think they are also preventing the consequences of some upcoming legislation in the US, UK and other parts of the world to make CSAM scanning mandatory in the Cloud, which would carry the risk of weakening Apple's privacy infrastructure even more.

So, if you're given the choice to go for a haute cuisine gourmet sandwich with a side of shit salad, and a huge shit clam chowder, what would you rather go for? In Cupertino they picked the sandwich.


> more privacy-oriented than scanning all iCloud Photos

But they ll be scanning all icloud photos


They don't. It's technically not a scan but a hashing of all content. It's a non-trivial difference.


is there any difference in effect?


Scanning in this context implies understanding what is in the image. Hashing doesn't. A hash of a tree only tells apple "This is not a CSAM image", it doesn't say "there is a tree in this image."
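The distinction can be shown with a toy perceptual hash. Apple's NeuralHash is a proprietary neural-network hash, so this simple "average hash" is only an illustration of the principle: the hash can be compared against a list of known hashes, but it carries no semantic label like "there is a tree in this image".

```python
# Toy "average hash" over a tiny grayscale image (a list of pixel rows).
# One bit per pixel: is the pixel brighter than the image's mean?
def average_hash(pixels):
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

# Two near-identical images (the second has slight brightness noise)
img_a = [[10, 200], [220, 30]]
img_b = [[12, 198], [221, 29]]
# A completely different image
img_c = [[200, 10], [30, 220]]

known_bad = {average_hash(img_a)}        # the "blocklist" of known hashes
print(average_hash(img_b) in known_bad)  # True: perceptually the same image
print(average_hash(img_c) in known_bad)  # False: different image
```

Note that the hash "0110" tells you nothing about what the image depicts; it only answers "is this (a near-copy of) an image already on the list?"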


OK, I thought we were talking about "scanning for CSAM images". I don't understand why we derailed into different image-identification techniques. The point is, whatever Apple is doing on the phone could be done exactly the same on their iCloud servers.


I believe the difference is that on device they can do it prior to encrypting the image. On the cloud it is encrypted.


Why are they 'preventing' the upcoming legislation?


Sorry, bad phrasing. Clarified.


One of the best protections against invasion of privacy and feature creep is to require an independently obtained warrant that is served to the person in question.

Firstly, this allows the general public to get an idea of how much governments or police are interfering in people's lives but secondly, the enforcers have to convince someone (usually a judge) that their case has merit before they start defaming someone or taking away their liberty.

We shouldn't be as scared about lacking privacy as we should be about the fact that our privacy can be invaded with little oversight and sometimes without us even knowing about it!

Also, we need to speak up for those who are much more likely to be affected by privacy invasions (black, Muslim, other minorities etc.) even if we are white and don't think it will be a problem for us personally.


I recently got a new iPhone for the better camera. Did this rather than getting a camera. Now I'm wondering how long it'll be before cameras will also have CSAM scanning.

It also leads me to this question about CSAM. If it only recognizes existing photos, then it's of no use for photos that I take. So perhaps CSAM scanning on a dedicated camera will never be a thing, and people will gravitate back to such cameras.

Which leads to another question. If Apple isn't scanning my own photos (for which there is no CSAM fingerprint), what photos are they scanning? Things that showed up in my browser?


Most of the comments ignore that this is likely to be well received by many consumers, because it sounds good on the surface: Apple is using tech magic to fight bad guys. Hooray!

Worse, it’s nearly impossible for a competitor to market against without risking the general population believing you tacitly approve CSAM. How does Google market Android without coming across like a safe haven for pedophilia?


In an era with so many people screaming, "Defund the police!" I can't see how reporting more stuff to the police automatically is going to go over well PR-wise.

Even before that phrase, we were already in era of "Don't talk to the cops without your lawyer, especially if you're innocent." Giving those same cops more information is not in your own best interest.


> In an era with so many people screaming, "Defund the police!"

This is a very vocal, very small minority. The most ardent police reformers I know believe that law enforcement is necessary. Defunding has devastating consequences on those who are most vulnerable.


"Defund the Police" is a terrible slogan but your take on it is inaccurate.

It's not about removing law enforcement, it's about reallocating the excessive funds that have gone toward aggressive militarization of the police, and putting them toward other social-care workers who are trained and better suited to serving the most vulnerable.

It's not about abolishment, it's about reform and restructuring to better support society


The decision shoots CSAM policing in the foot. Before, people trusted their iPhone, and when a phone was confiscated as part of an investigation, orgs like the FBI had methods to break into it due to its singular popularity. Now, those with actual intent to create and distribute are going to be far more cautious and use more secure systems. The actual perpetrators of abuse slip through the cracks once again, while we pat ourselves on the back that we may catch a few idiots who only consume it.

The same happened with FOSTA, and with the war on drugs. The actual perpetrators don't get caught, the abuse continues, but the metrics make it look good.


> It’s truly disappointing that Apple got so hung up on its particular vision of privacy that it ended up betraying the fulcrum of user control: being able to trust that your device is truly yours.

These devices haven’t been “truly yours” for a decade. Hence, jail-breaking. Maybe this is just the first time you‘ve encountered friction against the walled garden and become widely aware of its existence, but that doesn’t mean it wasn’t there before.


This entire analysis is based on a flawed assumption: that the disparity in reported CSAM numbers is a matter of proactiveness rather than a consequence of platform differences.

A social networking site/platform is one where previously created media is shared. iMessage pictures are not uploaded to iCloud Photos by default (they are part of the usually E2E content with the standard online backup caveat), only photo albums or photo rolls are - and those are overwhelmingly first or second party just-created content.

This means that if comparing against known CSAM hashes, it’s extremely likely for Apple to find orders of magnitude less content in the first place. The only thing that their system can catch is first-party images that end up being distributed and registered with the various hash databases.

Regardless of whether we are talking CSAM or anything else, it is the norm for the amount of content “created” to be significantly (as in several orders of magnitude) less than the material consumed.

I don’t think scanning on the device vs scanning in the cloud is going to change any of that.


Will a neural hash of every picture be uploaded to Apple?

If so, any abusive pictures that are discovered during pedophile ring arrests could be traced back to the phone that took them.


I don’t believe any non-matching hashes are uploaded (that’s part of the crypto work that has been done) but the device will periodically rescan previously ingested content, as I understand it.


All are uploaded but only when the secret sharing threshold is met are any of them decryptable, and then it’s only the ones that were a match that can be decrypted.


Do you have a source for that? I was under the impression that content is checked against the database locally, i.e. Apple never sees a hash that doesn’t match.


It's true that Apple never _sees_ a hash that doesn't match, but the encrypted hash is included in the safety voucher. That is to say, all hashes are uploaded, but only the matches can ever be decrypted, and that's only if there are enough matches.

From the technical summary [0]:

The device creates a cryptographic safety voucher that encodes the match result. It also encrypts the image’s NeuralHash and a visual derivative. This voucher is uploaded to iCloud Photos along with the image.

[0]: https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
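The threshold behavior described above rests on threshold secret sharing. Here is a toy sketch using Shamir's scheme, which is the standard primitive for this; it is not Apple's actual construction (the voucher layer, the private set intersection, and the real parameters are all omitted, and the threshold value below is arbitrary). It shows why vouchers below the threshold reveal nothing: with fewer than `t` shares, the secret simply cannot be reconstructed.

```python
import random

# Field arithmetic over a prime; toy parameters for illustration only.
P = 2**61 - 1  # a Mersenne prime, fine for a toy field

def make_shares(secret, t, n):
    # Random polynomial of degree t-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

secret = 123456789                      # stands in for the decryption key
shares = make_shares(secret, t=30, n=100)
print(reconstruct(shares[:30]) == secret)  # True: threshold met
# Below threshold, interpolation yields garbage (equals the secret only
# with negligible probability):
print(reconstruct(shares[:29]) == secret)
```

Any `t` shares recover the secret; any `t - 1` shares are information-theoretically independent of it, which is the property the voucher scheme relies on.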


In reality, these devices are so complex that we really have no clue what they're already doing. We've only found out about some of the more recent absurd things due to flaws in their implementations, such as MacBooks calling out to Apple before you can log in after wake (causing abnormally long wake times when network conditions are less than ideal or you're connected over an international VPN), or calling out to their servers every time you launch an app.

They could have not told us this at all, and we wouldn't have found out until some hacker playing around with network interfaces noticed unusual traffic going to Apple's iCloud servers from the bird daemon.

What Apple is doing is distasteful to me, but I'm afraid the market is so poor in comparison, I don't know what else you could purchase besides Dell XPS systems running Ubuntu.

But there's no phone version of that, to my knowledge.


If you don’t mind custom ROMs etc, Pixels with GrapheneOS or similar?


Yeah, my thought was the latest Pixel, flashed with something. But I'm not up-to-date on that world. I've never had an Android device as a personal device.


I also want to add that anyone who says "you can just turn off iCloud Photos" can see themselves out of the conversation. Apple iPhones ship with such low free space after the OS and apps, and Apple refuses to support SD cards (although it would be trivial for someone with their resources to abstract away the UX problems). Each photo is now 8+ MiB and videos quickly reach a GiB or more, so users really have no choice but to enable iCloud Photos, especially given how badly iPhones double as mass-storage USB devices for manual curation.


You can use other cloud services besides iCloud


This whole business makes me want to return my iPhone and iPad for full refund. The terms of ownership have changed significantly since time of purchase, and frankly it feels like a privacy bait-and-switch.

I'm curious if there any possibility of a small claims case forcing such a return.


When you signed up for iCloud, did you read the terms of service? They specifically require you not to use it for illegal purposes, and I'm sure they'd argue that everyone in the industry scans cloud storage and this change only shifts when they make the check. I would really not expect that claim to go very far.


Well, now they can argue that everyone in the industry already scans locally on device for illegal content; not a huge step.

A few years from now: "It's been such a success, we're going to do it everywhere, for everything."


This makes me appreciate the tough spot that Apple is in:

> both the UK and the EU are moving forward on bills that mandate online service companies proactively look for and report CSAM. Indeed, I wouldn’t be surprised if this were the most important factor behind Apple’s move: the company doesn’t want to give up on end-to-end encryption — and likely wants to expand it — which leaves on-device scanning as the only way to satisfy governments not (just) in China but also the West.


If Apple really cared about privacy, they'd instead have spent some money to lobby against those bills.


There is no company on Earth that is going to take the side of child pornographers.

Most of the arguments against this ignore the fact that there is a legitimate chance this type of technology will only ever be used to combat child abuse and not for more nefarious purposes.


What about terrorism, hate speech, etc?

Most people would agree these are very bad things. Problem is there is no objective definition of what constitutes it.


What makes you think they didn't? (I don't know whether they did or not, but I would absolutely expect such lobbying to (1) be happening and (2) be kept very quiet because it's too easy to spin it as "Apple supports pedophilia" or some such.)


Even without a requirement to proactively scan, Apple is surely expanding its ML stack on photos to categorize and describe photos and persons in them.

Is that not the simpler explanation for this?


They can already do this now for every iCloud photo and the protocol described is elaborately protecting a small number of hashes. It seems unlikely that the PR brushfire from this is better than simply pointing that classifier on all of the photos already on Apple’s servers.


That's not what I'm saying. Apple is required by law to report images if they have "actual knowledge" of CSAM. Courts aren't too kind about superficial technicalities like "that output label wasn't trained" when Apple is doing on-device ML classification of pictures to identify content. By proactively scanning for selective hashes they can say they're doing something without being required to report a less-accurate ML output.

That's also why this only applies if connected to iCloud; otherwise they don't have "knowledge" of it.


As the recent NSO/Pegasus scandal has proved, there is no need to add backdoors into our cellphones.

If someone is suspected of a crime a warrant can be issued by a judge and the suspects phone compromised under lawful control of law enforcement.

What Apple is wanting to do is pre-crime where everyone is considered suspect until proven otherwise.


This is just a side-step away from complying with regimes that will want to use this capabilty to attack political speech or outlawed literature.


Apple is a company that has gone completely out of its way to protect its users with end-to-end encryption, refusing to introduce backdoors into its products even to get data from suspected terrorists.

These slippery slope type arguments tend to assume that there is no middle ground between sides of this argument.


“Unreasonable search and seizure” in the US has traditionally meant you needed a warrant from a judge stating specifically where you are going to search and what you are searching for. We now seem to be just tossing that out of the window. The government can’t do it, but it can deputize a private organization to do it and that makes it now ok?


I have been saying this for years now. The gov’t is basically outsourcing violating the constitution to companies. Free speech? Just have Twitter, Facebook and Google censor unacceptable speech. 4th amendment? Have the private sector violate it. Easy peasy.


I like this fairly subtle take, and I agree with the author that Apple's move here is essentially a betrayal of their privacy stance.

The subtle take I got is that devices should be completely private, given that is what you pay for. This pushes Apple's phone system toward the kind of "unremovable bloatware" I pretty much want to escape on Android devices, without having to set up all the controls myself. I wanted to trust Apple's hardware for privacy (and I already understood the limits of their privacy in the cloud), but now I feel betrayed, since this capability can eventually be used for more sinister practices should a government pass laws requiring it, as highlighted in many other comments.


This is just really sad. Apple used to be the beacon of hope. Maybe it was false hope all along.

We must go full FOSS from hardware to software to preserve humanity. Everyone needs to get on board, not just the savvy minority. That's the goal.


I never thought of Apple as a beacon of hope, but I was willing to give them the benefit of the doubt that perhaps they could succeed at giving non-technical people the majority of their digital rights. Build systems that are secure from everybody-but-Apple, use top-down control to police bad actors, and allow privacy/security/freedom to flourish within their walled garden.

Sticking government spyware onto people's phones is a direct repudiation of this philosophy. I'm not surprised this would happen eventually, but I am surprised it has happened this quickly. It has been what, a mere 5 years since the San Bernardino affair? The authoritarian delusion is a strong siren call, but I had expected it to result in ever more App Store restrictions and perhaps a few breaks where the government compelled signing trojan horse targeted updates - not a full rejection of the very idea of user privacy by placing a government agent in every phone.

With this development, Free Software is once again the only option if you'd like to preserve your own digital rights. Its story for mobile historically kind of sucks (Android with varying amount of bad bits stripped out), but that is changing with things like PinePhone. IMO inexpensive devices are a necessity to get iterations into many people's hands, as opposed to the waterfall-feeling model of OpenMoko (etc).


I agree with you. Ultimately though, we can’t rely on Technology companies to make the right decisions. Businesses exist to make profits, if there aren’t regulations against doing this someone was ultimately going to come up with a way to do it. Even with regulations many corporations fail to be compliant.

I feel like we almost need a “Constitution for the Digital Age”, which would eg guarantee privacy rights.


This is shaping up to be a monumental mistake for Apple, which by now should result in scrapping all client-side scanning.

If not, it will be an interesting test of how many users Apple will lose over this.

Those Apple devices i might upgrade will now be put on hold, it is simply unacceptable what Apple is doing.


>If not, it will be an interesting test of how many users Apple will lose over this.

Lose.. to whom? PinePhone? Not trying to be flippant, but... there is no longer any option for the common user.

No, my elderly mother is not going to download and flash a custom de-googled ROM for her OnePlus 9 Pro. Pure fantasy that Apple will lose users because of this. This is a nonstory to everyone who lives and works east of Tahoe.


As far as I know Android devices do not implement the scheme that Apple is considering.

But you're right that some users absolutely do not care (or can only use Apple devices), they will now give up their right to privacy.

But you don't have to look that far to see that a lot of users disagree with what Apple is doing here.


>As far as I know Android devices do not implement the scheme that Apple is considering.

No, instead they just scan everything you upload to their service: https://support.google.com/transparencyreport/answer/1033093...


Which is at least transparent and importantly: it leaves the user with a choice of not using Google services, e.g. Gmail, Drive and Backup to name a few.

And scanning for CSAM is certainly commendable, but the whole issue with Apple's implementation is that you lose your privacy on your own device.

That is a step too far.


You can just not use iCloud Photos and the scanning won’t happen.


Even if Apple decides to roll this back they've made it clear to me that I can't afford to be too locked into their ecosystem. With more open platforms you have options if you disagree with decisions made by any one vendor. With Apple you have no other options.


It's interesting how everyone used to be all in favor of Apple's strategy of doing all photo analysis on the device instead of on some remote server, and now it's reversed. Suddenly Google's approach is better?


The advantage of doing analysis on your device was supposed to be that your data never left your device. Now you get the worst of both worlds, with your device doing the analysis but your data leaving anyway.


iCloud photos always left your device.


Nothing changed. The issue is the same for both situations: possible communication with remote servers.

If the scanning here were happening and merely notifying the user, there wouldn't be a problem. Of course, it would be completely useless for catching criminals.


Apple could have solved the issue by making iCloud photo sharing its own separate App.

The App could have done the scanning before sending and I would be free to not install it on my phone if I don't use iCloud photo sharing.

Instead, we now have an ever-present spyware engine embedded into the OS itself that can be abused by policy, as the author points out.

But I guess having a separate App would potentially cut into Apple's bottom line, as it would have lessened the opportunities to push iCloud subscriptions.


I don’t know where you got your information, but it works literally as you said: if you don’t use iCloud Photos sync, then on-device scanning is not applied. Whether it is an app or not doesn't really matter, because you have to trust its existence or the option's setting either way.

https://www.apple.com/child-safety/pdf/Expanded_Protections_...


There is a huge difference between an OS-level feature and an App-level feature.

Once it's embedded in the OS there is basically nothing stopping the extension of "features" to also eventually scan everything else at some point.

As an App-only feature, you basically disable that whole functionality if you don't have the App.

If you're an iCloud user, it's not much different. But if you're not, you're basically safe from that particular feature creep of extending the scan to other parts of the system (like whole photo-roll, documents, data from other apps, etc)


This is what you see, but the whole system is a black box. All this speculation already applies; all you have is trust. The same "if" argument is just as strong before and after this feature. People don't seem to understand that the way they see the system working does not mean it actually works that way.

For example, if Apple releases an iCloud Photos app, how do you know it is actually removed when you remove it? The same party is responsible for the OS and the app; again, you have to trust their word. If they say they only scan photos that are going into the cloud, it is the same thing: you have to trust what they say, and the final functionality is no different.

> If you're an iCloud user, it's not much different. But if you're not, you're basically safe from that particular feature creep of extending the scan to other parts of the system (like whole photo-roll, documents, data from other apps, etc)

If you read their protocol, you would see that they can't send the scan result alone. They have a 30-page whitepaper explaining how the scan analysis is cryptographically embedded into a voucher together with the original data and cannot be sent on its own. Again, all you can do is trust their word.

If you look at the past, Apple has invested a lot in mechanisms that lock even itself out. There is no history of misuse, yet misuse is speculated about all day.

"Scanning other parts of the system" is maybe 10 lines of code even without this new feature, and many don't realize that. Antivirus engines have existed since the 1980s; if Apple wanted to let some government scan your files by changing their model, it could have happened already. Once they actually allow that, then we should be mad. We should not be mad about pure speculation.

Giving the Chinese government the private keys to access Chinese users' iCloud data is not a counterexample, if someone tries to mention it. Apple didn't change the underlying model in their operating system; they gave out data that was the output of the operating system unchanged.


Nope, Apple is opening this up to 3rd party app devs as announced today.


I may consider getting a stand alone camera now.


Apple is going to scan images you download too, not just ones you take with your camera. Leaving their ecosystem is way more important than just getting a standalone camera.


Oh... you just reminded me of the upcoming VPN service that will act as a proxy of sorts for internet browsing. If they MITM HTTPS, then they will basically scan everything about you and your internet habits.

When did it become like this? Their only saving grace was that they were okay-ish with user privacy.


Do you mean this would apply to Macs too? So if I used some non-Apple photo app, and uploaded photos from a camera, they would be similarly monitored?


> Do you mean this would apply to Macs too?

From [0]

'These features are coming later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey.'

So yes.

Also by the OP:

> iMessage encrypts messages end-to-end by default; however, if you have iCloud backup turned on, your messages can be accessed by Apple (who has the keys for iCloud backups) and, by extension, law enforcement with a warrant.

So if you use iMessage and iCloud Photos then they will be scanned by Apple. Won't be surprised to see the on-device scanning feature more prevalent on M1/M1X Macs.

[0] https://www.apple.com/child-safety/


> So if you use iMessage and iCloud Photos then they will be scanned by Apple

That is simply not true.

Yes, with iCloud Backup turned on, messages can be accessed by Apple; that hasn't changed and has been the case for a decade. The technical document specifically states that they are not scanning iMessage photos in the cloud the way they would scan your iCloud Photos.


Ah thank you.


Even if it did, iCloud is less of a necessity with a Mac than it is with an iOS device. I have a little more agency and utility on Macs.


No, they wouldn't. It's only scanning photos you're uploading to iCloud Photo Library via Apple's Photos app.


One thing that might happen is that they could MITM your internet connection by default while maskerading it as a privacy feature.

https://www.theverge.com/22573519/apple-private-relay-icloud...


If we are going to be totally paranoid then a lack of pictures on your mobile phone could be seen as an indication that you have something to hide!


Sure, then let them play their authoritarian hand in attempting to follow through on that. Maybe at some point, it gets ridiculous enough that some critical mass of people get fed up and we are able to reverse course.


Nothing to hide, right? As an otaku who collects figurines, I can't wait to have the police knocking at my door because of a false positive triggered by a 1/8-scale statue of Megumin casting a fireball.


Cameras in the future are not immune to what we may see on upcoming smartphones. For example, Sony revealed an image processing sensor with compute capabilities built-in. It's able to not only send photon measurements but send metadata on what's being captured in the image. https://techcrunch.com/2020/05/14/sony-shows-off-first-combi...


Remember when scanners would allow you to scan banknotes, even those with a Eurion code?

I suspect we'll be seeing more of this kind of thing in digital stand-alone cameras. GPS-enabled? You just lost the ability to photograph demonstrations. Trademarked logos in the background? Maybe the shutter won't click.


Scanners with banknotes remind me of my kid's LeapFrog book. It's basically a book and a pen which can read whatever you tap against. It has an optical sensor at the tip of the pen and the print just has a bunch of microscopic dots barely visible to the naked eye. https://en.wikipedia.org/wiki/LeapFrog_Tag


The way it works is people report CP to a third party organization that fights CP. A hash of some sort is recorded in a central repository that is used by many services including every major social media company. Those companies then use it to delete/ban/etc users who post images matching those hashes. Generally, they also work with law enforcement.

In the current iteration they are not using AI or anything to identify "bad" content. They are matching hashes of files on your device against known bad hashes/content.

I'm not trying to convince you one way or another about it, just clarifying what is happening.
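To make the "matching hashes" idea concrete, here is a toy sketch under assumed values (the hash entry is made up, and real repositories like NCMEC's are obviously not public). It also shows the key limitation of a plain cryptographic hash: only byte-identical files match, which is exactly why real systems use perceptual hashing instead.

```python
import hashlib

# Hypothetical repository of known-bad hashes; real ones are not public.
known_bad_hashes = set()

def file_hash(data: bytes) -> str:
    """A plain cryptographic hash: only byte-identical files match."""
    return hashlib.sha256(data).hexdigest()

def is_flagged(data: bytes) -> bool:
    return file_hash(data) in known_bad_hashes

# A cryptographic hash misses even trivial edits:
known_bad_hashes.add(file_hash(b"original image bytes"))
assert is_flagged(b"original image bytes")
assert not is_flagged(b"original image bytes!")  # one added byte evades it
```

This is only the naive reading of "hash matching"; the scheme actually deployed compares image-derived fingerprints rather than raw file digests.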


> In the current iteration they are not using AI or anything to identify "bad" content. They are matching hashes for files on on your device against known bad hashes/content.

This is misleading as these are not what most technical people would think of as a hash. These are "perceptual hashes" that can compute a distance, instead of a boolean equal/different, leaving space for false-positives instead of extremely rare hash collisions.

https://www.hackerfactor.com/blog/index.php?/archives/929-On... https://rentafounder.com/the-problem-with-perceptual-hashes/
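A minimal sketch of what "perceptual" matching means in practice, with entirely made-up hash values and an assumed distance threshold (not Apple's NeuralHash): matching is a Hamming-distance check against a cutoff rather than exact equality, which is precisely what opens the door to false positives.

```python
# Toy perceptual-hash comparison. All values are hypothetical.
def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches(candidate: int, database: set[int], threshold: int = 4) -> bool:
    """True if any known hash is within `threshold` bits of the candidate."""
    return any(hamming_distance(candidate, h) <= threshold for h in database)

db = {0b1111_0000_1010_0101}                   # hypothetical 16-bit "known" hash
assert matches(0b1111_0000_1010_0111, db)      # 1 bit off: still a match
assert not matches(0b0000_1111_0101_1010, db)  # every bit flipped: no match
```

With exact hashes a collision is astronomically unlikely; with a distance threshold, an unrelated image that happens to land nearby in hash space will match.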


How do we know this isn't a three-letter government org silently pressuring Apple into compliance?

I see Apple getting a lot of flak, but I'm as surprised as anyone that they have decided to suddenly buck their trend/perception of being pro-privacy quite so dramatically.


This article has what I would consider a major error. It attributes Facebook's CSAM reporting to "scanning."

In truth, the issue is that photos are put on Facebook to share with others and other people will be looking at them and reporting them. Facebook does look at content as well, but the reporting numbers are guaranteed to be vastly different simply because one is a sharing platform and the other is a means to back up your private data.


Apple,

That winnie the pooh with text over it sure does look like a problem. Can I haz alert to whoever has that?

k. thx.

— Xi Jinping


That ship has sailed.

iCloud in China is controlled de facto by the CCP.

Apple is taking something from its users in the West which its users in China never had.

I walk by an iPhone billboard fairly frequently, all it says is "Privacy".

I can come to only one conclusion as to why Apple is doing this, and it isn't because they're acting on their own free will in the interest of their business and users.


Fair point. The sad part is that all the bad things need to happen for the populace to revolt.

Most will forget about this intrusion and move on with their lives. And yet, I can't help but shed a tear for the liberty we all, in the good countries, are losing.

bit by bit. rung by rung.

but at least the phones keep getting better…


I'm optimistic, but only because things are getting so bad, so quickly, that I expect catastrophic change before the decade is out.


FYI, all Chinese phones have a mandatory app installed for years, which scans everything on device for these "problems" (as well as CSAM).


Not an unreasonable request, and don't be surprised if it affects phones outside of China too. https://www.wired.com/story/apple-china-censorship-bug-iphon...


I'm trying to think of another situation where a company has a default starting position that their customers are breaking the law during normal use of their product.

If car manufacturers installed a breathalyzer ignition interlock device on every car (new or currently owned) that would save some lives, but I'm pretty sure very few people would tolerate it.


U.S. Senate bill seeks to require anti-drunk driving vehicle tech

https://www.reuters.com/world/us/us-senate-bill-seeks-requir...


Truly dystopian. I can't wait until someone has to drive to the hospital but has been drinking and can't.


There is no drunk driving exception based on where or why you are driving.


Should they be driving though? They can always call 911


> it ended up betraying the fulcrum of user control: being able to trust that your device is truly yours.

The more opaque proprietary services a device runs, the less yours it can be. I'm surprised it took this long to remember this concept.


Goodbye All Apple Devices

Hello All Linux Devices


I keep seeing Apple take a beating. But I can’t help but think this is politically motivated. I’ve seen private-government collusion first hand: create paranoia, go to government with a solution, get some custom legislation or executive order, profit.

Politicians began hating on tech corp and promising changes. We’ve seen attacks on section 230 in the press and bills popping up.

This is not going away. My guess is it’s a hedge against government who decided to pass some messages along.

“Why won’t Apple help us fight this?” being suggested by political grifters in both parties. Cable news will hassle them.

They’ll do this to avoid that.


For whoever thinks they are fine and have nothing to hide: think about the clock cycles the scanner consumes on your phone’s CPU, slowing it down and draining the battery if it scans while not plugged in. Second, this will classify and extract metadata from your own private photos: receipts, a photo of your router with the password on it, and so on. This is not only a privacy concern but an abusive move by Apple. Windows does all kinds of shady things under the telemetry pretext; now Apple is doing this. What I hope for is more open-source smartphone options.


So in short:

Apple respected its users' privacy and therefore only reported a low number of CSAM cases, since it didn't monitor or scan anything.

Facebook, because it doesn't respect its Users' privacy, wants to monitor and scan everything so it has to report large numbers of CSAM cases (in the millions).

It seems to me that the "number of CSAM cases" is a nice (perverse?) corollary of how much a particular service respects its users' privacy.


Sounds like a great way to attack a target and destroy their life. Most people have no idea how many or what photos are on their devices.

1) Either get photos with the right hash onto their phone, or get photos already on their phone added to the hash list. 2) The FBI is quietly informed by Apple. 3) The FBI shows up one day at the target's home or work, confiscates all their electronic devices, and ruins that person's life.


Apple's decision is bad for the simple fact that it assumes everybody is guilty until they have been scanned, inspected, and proven clean. Until now, your data would only be monitored or inspected if you were under investigation. Now everybody is a suspect under scrutiny. This is really bad and will lead to abuse of the system in the future.


I think the real test will be sales of the upcoming iPhone 13. If Apple sees a material difference in sales, they might consider changing the policy.

However, as things stand now, with Apple moving more of its revenue to services and the general stickiness of Apple's platform, this change will likely be palatable to most users.


I probably won’t buy another iPhone at this point. I’m in a tough position because I have said the same about Android for other reasons.

In the end, all I can say for sure is that I have lost my enthusiasm for consumer technology.


Yeah I hear you.

I started playing with computers in the early 80s. It was so much fun I made it my career. Computers and the internet were what was exciting and what I spent my time and energy with.

Now I’m retired, and the exciting promise of all this technology has taken such a dark turn that I’m honestly less and less involved with it all. Better to spend time in my garden, or in nature, or eating and drinking with friends.


I don't blame you. After all, this change is retroactive to all iPhones, and you might see the iPhone 14 remove the scanning, only for the iPhone 15 to re-add it. You're at their mercy.


> If Apple sees a material difference in their sales, they might consider changing that policy.

I think the problem is that it's too late; they expended the effort to build the engine into iOS 15. As far as we know the FBI couldn't compel them to write code like this (see the case with the San Bernardino shooter), but it's easy to believe they could compel Apple to maintain this backdoor in the future.


Historically, taking a position against measures which (even if only nominally) stop CSAM because they have abstract, negative privacy consequences is difficult. Try to explain to the average person why Apple's decision here is ominous, and the first question you'll get is "wait, why do you want to protect child abusers?"

I've been getting angry about this for the last week, and am just now getting to the point where I can accept that the snowball is already rolling. The government isn't going to step in and prevent this, unless it's to make the backdoor larger, and nobody who was going to buy an iPhone or use iCloud is going to stop because of this. So, it's done. Apple will put a backdoor on your phone, and obviously that backdoor will eventually be used for things beyond the scope of what they're telling you now.


My friend is a camgirl and OF girl. I take most of her pics and vids on my iPhone. So I have thousands of nudes of her on my phone.

She's 24, but could pass for much younger. I'm paranoid that I'll get flagged, especially since all of her TikToks showing any skin, or any drinking, get flagged.


It's the same as with parents photographing their naked babies in the bathtub. Those images won't be flagged because they aren't in the database provided to Apple. Only verified illegal photos should be in the database. So you and your girlfriend will be fine.


> Only verified illegal photos should be in the database.

Considering that PhotoDNA, the source code of Apple's implementation, 30 years of Ph.D.-level cryptographic knowledge, and the processes by which neuralMatch actually runs are all unavailable to the OP, I think OP is justified in the minor paranoia of not wanting to be falsely accused of owning CSAM.

That's precisely the problem with Apple's announcement here. You can't apply common sense to an algorithm and dragnet-style surveillance.

I doubt even Apple specifically knows precisely how things will turn out with this system.


These are perceptual hashes though.

What if you take a picture of your naked baby in the bathtub or your child on the beach, and the picture is very similar to a known CSAM picture with a hash in the database, enough to pass the distance threshold they're using?

The picture would be sent for screening, the Apple screener would indeed say that's a naked baby/child, and soon enough you've got the FBI (or whatever is the equivalent in your country) knocking on your door and arresting you for pedophilia.


> Those images won't be flagged because they aren't in the database provided to Apple

This is not accurate. Apple is using perceptual hashes. If the features of an image are close enough to an image in the NCMEC database it may generate a matching hash.


Not defending the new tech by any means, but you'd have to have enough matches to exceed some hidden threshold, which would first trigger a manual review by Apple. It seems very unlikely to have your own family photo result trigger anything.
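A toy sketch of that gating logic, under a wholly assumed threshold value (the real one is hidden, as the replies note, and Apple's actual design uses threshold secret sharing so nothing below the threshold is even decryptable; this only models the counting behavior):

```python
# Hypothetical threshold; the real value has not been disclosed.
THRESHOLD = 3

def ready_for_human_review(match_count: int) -> bool:
    """A user's matches surface for manual review only past the threshold."""
    return match_count >= THRESHOLD

assert not ready_for_human_review(1)  # an isolated false positive surfaces nothing
assert ready_for_human_review(3)
```

The point of the threshold is that a single perceptual-hash false positive, on its own, never reaches a reviewer.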


The problem with that is that the threshold is unknown, so it could be 2 or 3.


Until one of the OF patrons mistakes her for a minor, reports the images, and their hashes somehow end up in NCMEC’s database.


For those of us who just bought new iPhones (over the 2 week return period) and are having buyer’s remorse after this decision, is there a way to return this now that the product is different than when I purchased it? Credit card chargeback, etc? I would like to switch to a phone I can control.


Call your credit card company and ask if you can chargeback. Those lines exist to satisfy customers - they don't have to say yes, but I think you have a good chance.


Has Apple or someone at Apple commented on the whole thing following the outcry? Any chance of them reneging?


Maybe not what you were looking for, but some FAQs[0] were released by Apple today. I found them here [1].

[0] https://www.apple.com/child-safety/pdf/Expanded_Protections_...

[1] https://www.macrumors.com/2021/08/09/apple-faq-csam-detectio...


Thanks!


Recent stories from the world of child porn rings have shown that members are very coordinated. If iPhones are deemed the safe platform, they will migrate there.

This is a compromise meant to deny them, or at least threaten, a potential safe space. All real compromises are bad compromises. This seems to be just that.


> One’s device ought be one’s property, with all of the expectations of ownership and privacy that entails

Preach.

Though with Apple phones this has never been the case, unfortunately. Having your photos scanned on upload is just the little cherry on top of the sundae of user disenfranchisement that is iOS.


I'm not sure I agree with the premise, which most people seem to take for granted, that I own my device.

After all, in the iPhone case, I'm not allowed to root it, install anything I want on it, etc. Apple already controls it, and already decides what I can do with it.

So, if Apple decides that it can't be used to send child pornography, how is it different from 'standard' usage restrictions ? Or is it only the fact that it might trigger a call to law enforcement which is a problem, and without this call, everything would be ok ? In other words, would an iPhone which detects child porn and refuses to do anything ('error : illegal content') with it be ok? Or would this be a violation of privacy?


Don't blame Apple for this. Blame the liberal lawmakers and political advocacy groups in EU and the US who are pushing for regulations that force Apple and other phone makers to implement this kind of mass surveillance. It always starts from


Apple's new ad strategy: "Either buy and use an Apple iPhone, or you are a predator."


I actually completely understand WHY the average "lay-person" thinks encryption must be bad: it allows terrorists and child pornographers to confer in secret to plan their nasty deeds. It's completely defensible to think that way from a standpoint of human nature, and it would make all the sense in the world for that to be the case. Now, that's not how the situation actually is, but understanding why it isn't takes more than thinking about it only briefly.


With locked devices acting as a scanner for good behavior (and make no mistake, this is not about CP), the governments are nearly where they want to be: A world that is officially free (for propaganda reasons) but allows for every citizen to be imprisoned at once.

Now the only thing left is to install some devices that monitor every spoken word in our apartments and check it against, well, CP ain't going to cut it, religious extremism, maybe? Good thing that no one would be that stupid to voluntarily install such equipment in their home...


20 MILLION reports just from Facebook alone in one year?? I am surprised there is no discussion of the sheer scale of the problem. Will we need to build more prisons once this new system is rolled out?


It feels like unreasonable search and seizure. That's my property that is being searched constantly and without a warrant to make sure I'm not committing a crime.


If I could go out and buy a Google-free Android phone today, I would. Now that I no longer trust Apple with privacy doesn't mean I trust Google. Even less so, actually.

The alternative now is: no phone. Bye-bye Facebook and Instagram and Twitter and Snapchat and Candy Crush, etc. The mobile app market, and the advertising market, could fall apart if everyone did the same. The economy would be significantly impacted.

What a huge mistake.


> What happens when China announces its version of the NCMEC, which not only includes the horrific imagery Apple’s system is meant to capture, but also images and memes the government deems illegal?

This is the crucial part here. The means are there, Apple can't say it's impossible, the implementation is simple, and the consequences for individuals with Winnie-the-Pooh photos are horrible.


I really did not expect this from Apple. Perhaps I am too naive. It makes me think Apple is doing this as a sort of litmus test or as a way to force another issue:

- Government says Apple has to do it

- Apple announces they will do it

- Apple brand suffers immensely

- Apple cites suffering as a reason to not do it?

Seems like there would be a word/term/phrase for this kind of thing.

I'm so optimistic I'm hoping there is an ulterior motive here or some sort of outside coercion.


I’m no expert in all things Apple, but this whole thing seems, to me at least, pretty un-Apple like. It would be easier to rationalize if this was actually a directive from, say, a certain large nation who said “if you want to continue to do business here, implement this capability” and what we’re seeing is Apple trying to spin it as a feature or in some sort of positive light.


https://en.wikipedia.org/wiki/FBI%E2%80%93Apple_encryption_d...

Apple once refused the FBI's demand to hack a terrorist's phone because it would lead to a loss of privacy for iPhone users. Now Apple has reversed that decision.


Now that Apple has demonstrated ability/capacity to scan for unencrypted content, law enforcement & government agencies with court-backing will compel them to provide any content moving forward.

However, the majority of Apple users are uncaring and/or unaware of these recent changes and it will not affect their sales beyond the tiny tech minority.


According to their FAQ, they are only scanning photos in iCloud and not doing on-device scanning.

https://www.apple.com/child-safety/pdf/Expanded_Protections_...


I was at an outdoor exhibition last week, and one of the staff called me out and said "No Digital Photography Allowed... it says so on your ticket"

Meanwhile, it feels like almost everyone is taking photos, videos and live steaming the entire thing from high-end smartphones!


I'm a pretty big Apple fan. But I think this is the end. There's no way you can bob and weave around the fact that this makes your property theirs. You're insane, or Gruber, if you think that's okay.


Drop Tim an Email: tcook@apple.com

Explain your thoughts on that topic. I am pretty sure they have this email monitored and can filter for "CSAM". Once the volume is big enough, it might be reported to Tim.


I was considering getting M1 laptop, but now that's off the table. I am looking for a performant fanless laptop that will run Linux. Any suggestions?


Donate a few dollars monthly to https://asahilinux.org on Patreon. We'll get Linux running smoothly on the M1 within a year or so.


What I really don't get this is why would Apple implement this feature nobody asked for? And damage their privacy brand. Why?


But isn't Apple just hashing the image and then matching it with the CSAM database in the cloud? That's how the feature works, no?


The canary died for Apple when they made it clear to all that they had developed a way to selectively bypass all device firewalls.


Sooo... once child predators switch from iphone to a regular camera ... then what, Apple? Fuck everyone's privacy though?


Well, the idea is they have to share it. If you scan their Gmail, scan their iCloud, get Sony/Canon/Nikon to scan when copying off the camera, get Windows and Mac OS to scan images on copy, you can constrain them.

It will be interesting to see the effect on sexting once we know that some Apple employee can look at you naked because the tool produced a false match.


The people who have written «scientific papers» on this absolute insane garbage should be laughed out of every serious conference on cryptography.

And the people pushing this behind the scenes know that the majority of CSA happens in the home or with adults that the child trusts, so if we are to be fine with reaching into everyones private life, video surveillance in every kid’s bedroom should be far more effective than attacking the distribution of CSAM alone…?


It’s truly disappointing that Apple got so hung up on its particular vision of privacy that it ended up betraying the fulcrum of user control: being able to trust that your device is truly yours.

This is it. Of course I'd like child pornographers to be caught.

But right now my expensive phone will be using up valuable charge looking for photos that aren't there.

And in future I can't be sure it won't be searching for evidence of support for the political opposition. (This is not only China or Gulf states: democracy seems on pretty shaky ground in the US, UK and elsewhere right now).


What makes you think this will churn battery? It’s one extra processing step that’s run before a photo syncs and then cached. You don’t sync every photo every time.

Out of all the arguments on this site against this feature, the battery bit is by far the most unfounded one and super odd to see repeated on a technical forum.


I agree it’s probably not very much battery at all. But it’s not going to be none either, so the principle remains: it’s my phone using my electricity to check up on me without my consent. I really don’t like that.


Scope creep concerns:

1. what else to scan for.

2. what other OS vendors should add matching functionality because Apple has proven it is doable.


How probable is it even to find CP on an iPhone, considering that it's not really possible to upload stuff directly, and that those people most likely would not use anything that's connected to the cloud? I think it's more interesting that this will now be on the desktop.


The device (as per the ToS, which no one reads) is NEVER yours. You "rent" it, not "own" it. I am wondering how long the hoopla around Apple's backdoor will last. Ordinary users don't understand and won't care. Missing from the discussion: companies now have the capability to plant evidence where none existed.


I've likely bought my last Apple device.


Perhaps Apple should add “at home iCloud server” support to Macs so you can selfhost your personal iCloud, and keep all your data on your own property with easy UX.


There is little difference between Apple's proposal and a continuously running AI snitch examining your display output for forbidden thoughts and images.


Apple literally went 1984 at this point



They had to kill Big Brother to become Big Brother


Coming soon. Illegal meme hash scanning


Copyright scanning. Show us you legitimately bought and have a license to this content. Worry not if you don’t, the fines are automatic with Apple Pay.


Also on the roadmap: scanning for images of illegal firearms.


Scanning photos for images of suspects in crimes. Checking location info on photos for evidence for investigation.

Scanning texts for online bullying and sexual harassment.

Searching bank statements for tax compliance and money laundering.


Scanning for arbitrary subjects is wildly impractical. There are, what, trillions of photographs of guns in the world?


Also on the roadmap: making all firearms illegal.


[flagged]


The problem is that not even Apple can guarantee it's not gonna be used for anything nefarious, since they won't have (and probably don't want!) access to the source of the PhotoDNA hashes to inspect them.

Not only do we have to blindly trust Apple, we also have to trust a (government?) institution that most of us had never heard of before.


[flagged]


why are you only calling out BILL's - coincidence I don't think so. You have a Bills only bias. :)


On somewhat of a side note, has anyone stopped their subscription to Stratechery in the past year?

I've been a paid sub to Stratechery since the Amazon acquiring Wholefood article - I think Ben's thinking is really unique and valuable.

But - I've recently stopped my subscription, partly because of a change in my own finances, partly because I am no longer interested in what Ben wants to cover - namely privacy, policy, and anti-competitive behavior. I miss the days when companies like Stitch Fix could earn an email. I think Ben is covering important topics - but it's not something that I'm really interested in.

Plus the fact that each post seems to require a large amount of prior knowledge - manifested by the large amount of quotes in each email.

What do you think? Have you found other blogs worth subscribing to?


[flagged]


This is a perfect example of the hypocrisy involved with such 'for the children' schemes. Elites are allowed to have massive birthday parties whereas plebs are not allowed to gather, elites are allowed to rape children while plebs are not.


What does Qui mean in this context?


It's... code for a meme which is... representative of a tight knit group... speaking any criticism against whom outlawed in many countries. Hence I can't even say.

youtube search this: france qui interview


Well hot damn.

THIS is so FN important. It's disgusting that there are many like yourself, and myself, who have been aware of this and been talking about it for DECADES, and we were ridiculed and called "conspiracy theorists".

CSAM blackmail is the bread-and-butter of Dark Intelligence Agencies.

As a victim of abuse myself, where I lived on a Commune in Lafayette California in the 1970s, which was riddled with child sexual abuse - i was told by my father on his death-bed that the Commune (More University (The 'Purple People') was a "destination" and people, including rich and celebrities came there for "Experiences".

There was a write up about it in Penthouse Magazine in the 80s - and it was known that the CIA played a role in putting Vic Baranco in the position he was in as head of that cult.

My parents also knew Jim Jones when he was still in SF before the mass suicide..

In talking with other kids from the commune, we found out just how bad the abuse was, and the fact that the CIA was closely monitoring it.

I was involved in a study to follow a bunch of the kids from there , held by University of San Diego where they followed us as we grew up and interviewed us every few years to see what our life-path would turn out as based on our childhood situations... (it was a lot of different kids from all sorts of socio-eco backgrounds...)

This was also at the same time the Michael Aquino abuse was happening to kids in the Presidio and other parts of San Francisco...

Personally, I feel pedophilia should be a capital crime. and I would have no issues exacting it on people who abuse small children.

One girl from the commune, Maria, was turned into a prostitute at the age of 4 years old by her father.

We got out in 1978....


It took me time to think about this news. I despise the cavalier attitude some people in software engineering have towards the social consequences of their work. A community where 'think of the children' is an ironic phrase is a sick one. Yet, something here doesn't smell entirely right.

TL;DR: I think one of three must be correct:

A) NCMEC is seriously neglecting protecting children by accepting a subpar solution. We're getting the client-side scanning precedent without any benefit to society.

OR

B) Apple is not entirely ditching server-side scanning, so again we'll be gaining nothing from client-side scanning aside from normalization of a dubious practice.

OR

C) The client-side solution will end up way more invasive than what even Apple believes it would be.

---

Let's look at the worst scenarios. One or more pedos is actively molesting in order to record CSAM on an iPhone. The iPhone soon syncs to iCloud, while the pedos distribute the CSAM. This is the scenario we all want to stop most and ASAP.

1) Apple's is a hash-based approach, not an AI based one. So the original upload must be marked as 'clear' by the iPhone, since NCMEC can't yet know about the CSAM.

2) Hopefully, sometimes later NCMEC does find the CSAM and marks it appropriately.

3) In the 'naive' server-side approach, Apple could scan iCloud and find the original CSAM and then the uploaders. This is not what we're doing. This is a client-side approach, and if certain rumors are right Apple will eventually do end-to-end encryption and so will not even be able to scan server-side. So what do we do?

4) One option is for NCMEC to give up and let Apple do a worse solution than server-side scanning where the original perpetrators not only go free but can use iCloud for storage without being detected. That's our option A above.

This is undesirable and unlikely. Even if NCMEC were so compromised, Apple can't want to take all this fire and then implement a non-working solution. Even if Apple were to do that, eventually pressure will force them to implement a working solution.

5) The other option is for the iPhone client to receive regular hash updates, and then periodically rescan any picture that was at one time uploaded to the iCloud (option C). Since the incriminating files may well not even exist anymore on the phone (the pedos in our scenario have good motivation to eventually delete the CSAM to save space even without the issue of client-side scanning), Apple will have to store the hashes of deleted images on the phone in order to periodically rescan them.

So to implement effective client-side scanning, iPhone owners will not be able to ever truly delete an image if it was ever synced to iCloud, even if the owners deleted it from the cloud and their phone. The hash will be stored, and not only that - a rough facsimile of the image should be restorable from the hash*. I can't see any other way to do it, and Apple and NCMEC must want the client-side solution to work. In a world where Pegasus and things like it exist, this is.. not optimal.

6) Of course, Apple could avoid all this by storing the hashes in their iCloud (option B), but what would be the point of client-side scanning then, aside from a dubious precedent?

Am I missing something?

* AFAIK the ability to make a rough facsimile of the original image from the hash is a property of visual hashes required for them to work even on manipulated images, and also required for the manual review system. Apple can't avoid this on a client-side solution - it's either the original image or something close enough so it can be manually reviewed if later found to be CSAM.


Thinking about it, another client-side solution would be to also scan images when downloading from iCloud. That way pedos will have difficulty using iCloud, since the hash will be rescanned on download, and the hash database might be up to date this time. Would this allow Apple to not store the deleted image data on the client? This would still catch significantly fewer perpetrators than the server-side approach; would NCMEC etc. agree to this, or would pressure force Apple to go even further?

Either way, a workable client-side solution has to end up more encompassing than what Apple has done.


Brilliant guerrilla marketing move, really: anyone who chooses Android over iPhone now implicitly has some very unsavoury reasons for doing so. And if it was really a broader surveillance initiative pushed by the government, the same implication neutralises any protest.


As long as false positives are a thing, there are very practical reasons for switching, especially for those who have kids.

I wouldn't be surprised if a completely innocent false positive gets you put on a list indefinitely, with little recourse.


False positives of the kind you're thinking of aren't possible--it's checking for hashes that match known bad images, not running machine learning/image detection to detect if the photo you just took contains bad content. The issue is that there's nothing stopping Apple/the government from marking anything it finds objectionable--like anti-government free speech--as a Bad Image, beyond CSAM.
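The "known bad images" check described above can be sketched in a few lines. This is an assumed design for illustration, not Apple's actual NeuralHash pipeline (which is not public): a 64-bit perceptual hash compared against a blocklist, typically allowing a small Hamming distance rather than requiring exact equality. The blocklist values here are made up.

```python
# Hypothetical 64-bit perceptual hashes of known-bad images (made-up values).
BLOCKLIST = {0xDEADBEEFCAFEF00D, 0x0123456789ABCDEF}

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

def matches_blocklist(h: int, max_distance: int = 4) -> bool:
    """True if h is within max_distance bits of any blocklisted hash."""
    return any(hamming(h, bad) <= max_distance for bad in BLOCKLIST)

print(matches_blocklist(0xDEADBEEFCAFEF00D))  # exact hit  -> True
print(matches_blocklist(0xDEADBEEFCAFEF00F))  # 1 bit off  -> True
print(matches_blocklist(0x0000000000000000))  # unrelated  -> False
```

Note that nothing in the matching step knows or cares *what* the blocklisted hashes depict, which is exactly the commenter's point: swap the database and the same machinery flags anything.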


The thing is, Apple uses a custom perceptual hash with parameters generated by a neural network. As another article shows, you can get colliding hashes if certain color patterns and shadows match. Also, the threshold they mentioned is secret, so it could be 1 or 2, or it could change in the future.


Once the policy decision is made that it can run some kind of scanning, it opens the doors for any kind of scanning. Today it's that "neural hash", tomorrow it's going to do something even more invasive.


Really, hashes are sufficiently unique? The objections I saw for this news were along the lines that random images could be manipulated to have hashes that match the flagged cases, in a way that was undetectable by the naked eye.


Doesn't the hash change by exporting a photo as a new file type or by changing a few pixels in photoshop?

If this was the FINAL solution to catch every last child pornographer in one glorious roundup MAYBE it would be worth the massive risk of authoritarian abuse but this algorithm sounds stupidly easy to get around for the deviants while still throwing our collective privacy under the bus.


This is a PhotoDNA hash, not a file-content hash. It is a bit more powerful than a normal hash:

> In the same way that PhotoDNA can match an image that has been altered to avoid detection, PhotoDNA for Video can find child sexual exploitation content that’s been edited or spliced into a video that might otherwise appear harmless

https://en.wikipedia.org/wiki/PhotoDNA
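A toy example of why a perceptual hash (unlike a cryptographic one) can survive re-encoding and minor edits. This is a simple "average hash" over an 8x8 grayscale grid, not PhotoDNA or NeuralHash, whose actual algorithms are not public: each bit records whether a pixel is brighter than the image's mean, so a uniform brightness shift leaves every bit unchanged while the file bytes (and thus any cryptographic hash) change completely.

```python
import hashlib

def average_hash(pixels):
    """64-bit hash: one bit per pixel, set if the pixel is above mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

# Synthetic 8x8 "image" and a slightly brightened copy (e.g. after re-export).
original = [[(r * 8 + c) * 3 for c in range(8)] for r in range(8)]
brightened = [[p + 10 for p in row] for row in original]

md5_a = hashlib.md5(bytes(p for row in original for p in row)).hexdigest()
md5_b = hashlib.md5(bytes(p for row in brightened for p in row)).hexdigest()

print(md5_a == md5_b)                                      # False: bytes changed
print(average_hash(original) == average_hash(brightened))  # True: perception didn't
```

So converting the file type or nudging pixels does not reliably evade a perceptual hash the way it would an MD5, though adversarial edits targeted at the specific hash function remain possible.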


False positives are possible. Apple states that the odds of incorrectly flagging a given account are about 1 in 1 trillion per year.
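The per-account figure depends heavily on requiring a threshold of matches before anything is flagged. A back-of-the-envelope binomial calculation shows the effect; the per-image false-match rate (1e-6), library size, and threshold of 30 here are assumed numbers for illustration, as Apple has not published its per-image rate.

```python
from math import exp, lgamma, log, log1p

def binom_tail(n: int, p: float, t: int, terms: int = 200) -> float:
    """P(at least t of n independent trials succeed), summing the dominant
    tail terms in log space to avoid float overflow for large n."""
    def log_pmf(k: int) -> float:
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                + k * log(p) + (n - k) * log1p(-p))
    return sum(exp(log_pmf(k)) for k in range(t, min(t + terms, n) + 1))

n, p = 10_000, 1e-6  # 10k photos, assumed 1-in-a-million per-image false match
print(f"flag on 1 match:    {binom_tail(n, p, 1):.2e}")   # ~1e-2: far too noisy
print(f"flag on 30 matches: {binom_tail(n, p, 30):.2e}")  # astronomically small
```

Under these assumptions, single-match flagging would misfire on roughly 1% of large libraries, while a 30-match threshold drives the per-account rate to effectively zero; the controversy is less about the math than about who controls the hash list.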


> it's checking for hashes that match known bad images

Many of the hashes provided by the NCMEC are MD5. There are going to be false positives left and right.


It is not a surveillance initiative yet, but it will be trivial to expand upon the scanning capabilities once established.

You have more trust in Apple, something i don't. So we see this change in a different light.


They're all a bunch of commies who want to repair their stuff and... own? it.


Private property is the opposite of communism.



