I think the issue is that, as Ben Thompson pointed out, the only thing preventing them from scanning other stuff now is just policy, not capability, and now that Pandora's box is open it's going to be much more difficult to resist when a government comes to them and says "we want you to scan messages for subversive content".
Given that they obviously already have the capability to send software updates that add new on-device capability, this seems like a meaningless distinction. It’s already “just policy” preventing them from sending any conceivable software update to their phones.
I disagree, strongly. Let's say you're an authoritarian government, EvilGov. Before this announcement, if you went to Apple and said "we want you to push this spyware to your iPhones", Apple could and would have easily pushed back, both in the court of public opinion and the court of law.
Now though, Apple is already saying "We'll take this database of illegal image hashes provided by the government and use it to scan your phone." It's now quite trivial for a government to say "We don't have a special database of just CSAM, and a different database of just Winnie the Pooh memes. We just have one big DB of 'illegal images', and that's what we want you to use to scan the phones."
Could they though? An authoritarian government could just as easily say "if you do not implement this feature, we will not allow you to operate in this country" and refuse to entertain legal challenges.
Authoritarian countries are the least of the worries. There's a large class of illegal imagery that is policed in democratic countries: copyrighted media. We are only two steps away from having your phone rat you out to the MPAA because you downloaded a movie and stored it on your phone.
I can guarantee that some industry bigwigs are salivating at the prospect of this tech. Imagine YouTube copyright strikes, but local to your device.
This would have been a huge fear of mine if this were 10 or 15 years ago, when legitimate publishers were still engaging in copyright litigation against individual noncommercial pirates. The RIAA stopped doing individual lawsuits because it was surprisingly expensive to ruin people's lives that way over a handful of songs; the same largely applies to the MPAA. The only people still doing this in bulk are copyright-trolling outfits, companies like Prenda Law and Malibu Media. They make fake porn, covertly share it, lie to the judge about it, sue you, and then offer a quick and easy settlement route.
(Yes, I know, it sounds like I'm splitting hairs distinguishing between copyright maximalism and copyright trolling. Please, humor me.)
What copyright maximalists really want is an end to the legal protections for online platforms (the DMCA 512 safe harbor) so they can go after notorious markets directly. They already partially accomplished this in the EU. It's far more efficient from their end to have one throat to choke. Once the maximalists get what they want, platforms that want to stay legitimate would be legally compelled to implement filters, à la YouTube Content ID. But those filters would apply to content published on the platform, not things on your device. Remember: they're not interested in your local storage. They want to keep their movies and music off of YouTube, Vimeo, Twitter, and Facebook.
Furthermore, they largely already have the scanning regime they want. Public video sites make absolutely no sense to E2E encrypt, and most sites that deal in video already have a content-fingerprinting solution. It turns out it was already easy enough to withhold content licenses from platforms that accept user uploads, unless they agreed to also start scanning those uploads.
(Side note: I genuinely think that the entire concept of safe harbors for online platforms is going to go away, just as soon as the proposals to do so stop being transparently partisan end-runs around the 1st Amendment. Pre-Internet, the idea that an entity could broadcast speech without being considered its publisher didn't really exist. Publishers were publishers and that was that.)
One reason I haven't used iTunes in years, and don't use iOS, is that I have a huge collection of mp3s. It's almost all music I bought and ripped from CDs over decades, and it's become just stupidly difficult to organize your own mp3 collection via Apple's players. Even back in the iPod days, you could put your mp3s on an iPod, but you needed piracy-ware to rip them back off. I can easily see this leading to people like me getting piracy notices and having to explain that no, I bought this music, since we're now in a world where not that many people have their own unlocked music collection anymore.
The only authoritarian country that holds any sway over Apple is China and China can easily just ban iPhones if they feel like it, especially with Xi Jinping in charge.
Democratic countries are much more of a danger because Apple has offices in them and cannot easily pull out of the market if they feel the laws are unconscionable.
All these years I have heard of people getting in trouble for sharing copyrighted materials with others, but never for merely possessing or downloading them (BitTorrent somewhat blurs the line here). And there's nothing illegal about ripping your own CDs. I find this scenario far-fetched. iTunes Match would seem to be a long-running honeypot if you take this seriously.
Should this fact be reassuring, or even more frightening?
Because "I can't" is something a company can use to push back against governments, while it's a societal issue if a corporation can say "I don't want to" to a government.
It's really not the same thing, and Apple just burned their "I can't" card: they are implying they can simply say "no" to governments, which is arguably even more dystopian.
Of course they won't. The idea of a company refusing to comply with the laws of the countries they operate in purely out of some grand sense of principle just seems naive. Especially if it's the country where HQ is.
Yeah I'm pretty sure that this wasn't an easy decision for the Chinese government. Even in dictatorships you can't afford to piss off the population too much.
Google is pretty big too but they implemented the filtering in China as requested. Then when they decided they didn't want to do that anymore there went their site.
I meant in the context of U.S. administrations. What Democratic administration wants to ban Apple? What Republican administration wants to ban Apple?
Sounds like political suicide on either side: government interference with the country's largest company, an iconic brand, for no reason other than to hopefully spy more on its own citizens.
Analogy: the prospect of “vaccine passports” for jurisdictional movement and access is getting serious consideration at all levels of U.S. governance. Not long ago, anything so resembling “Papiere, bitte” commonly inspired a sense of violent resistance against totalitarianism. It just takes one engaging “but this is different” to reframe the worst of humanity into a popular compulsion.
> Apple is already saying "We'll take this database of illegal image hashes provided by the government and use it to scan your phone."
This is incorrect.
Apple has been saying—since 2019![0]—that they can scan for any potentially illegal content: not just images, not just CSAM, and not even only strictly illegal material.
That's what should be opposed. CSAM is a red herring.
The difference is that's scanning in the cloud, on their servers, of things people choose to upload. It's not scanning on the user's private device, and currently they have no way to scan what's on a private device.
The phrase is “pre-screening” of uploaded content, which is what is happening. I'm pretty sure this change in ToS was made to enable this CSAM feature.
Why do you think essentially no one is complaining about using ML to understand the content of photos, then (especially in comparison to this rather targeted CSAM feature)? My impression is that both Apple and Google have already been doing that since, what, 2016? Earlier? There's been no need for a database of photos; either company could silently update those algorithms to ping on guns, drugs, Winnie the Pooh memes, etc.
> Why do you think essentially no one is complaining about using ML to understand the content of photos…
At least in Apple's case, ML is used only when a minor child (under 13 years old) is on a Family account where the parent/guardian has opted in to being alerted if potentially bad content is sent or received using the Messages app.
Google and Apple both use ML to understand the content of photos already. Go into your photos app and you can search for things like "dog", "beer", "birdhouse", and "revolver". Given that it can do all that (and already knows the difference between types of guns), it doesn't seem like a stretch to think it could, if Apple or Google wanted, understand "cocaine", "Winnie the Pooh memes", or whatever repressive-government-style worries are had. And it's existed for years without slippery-sloping into anything other than this Messages feature for children.
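To make that concrete, here is a rough sketch of the kind of labelling being described, using a stock pretrained torchvision classifier as a public stand-in; nobody outside those companies knows exactly which models they actually ship:

    import torch
    from PIL import Image
    from torchvision import models

    # A public pretrained classifier standing in for whatever proprietary
    # model Apple or Google actually run on-device.
    weights = models.ResNet50_Weights.DEFAULT
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()
    labels = weights.meta["categories"]   # includes classes like "revolver"

    img = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]

    # Top-3 guesses: this is all it takes to make photos searchable by
    # content, or flaggable if someone swapped in a different label list.
    for p, i in zip(*probs.topk(3)):
        print(f"{labels[i]}: {p:.2f}")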
Bingo. All Apple has done is add another category for “nude child” with a simple notification when the AI finds one. The tech has arrived; consequences happen.
There is so much misinformation about this from people who haven't informed themselves. Apple has put out clear documentation; it would be a good idea to read it before fear-mongering.
1) The list of CSAM hashes is that provided by the relevant US government agency, that is it.
2) The project is not targeted to roll out anywhere other than the USA.
3) The hashes are baked into the OS; there is no capability to update them other than via signed and delivered Apple OS updates (see the sketch after this list).
4) Apple has exactly the same leverage with EvilGov as at any other time. They can refuse to do as asked, and risk being ejected from the country. Their stated claim is that this is what will happen. We will see.
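A minimal sketch of what point 3 amounts to on-device, using PyNaCl's Ed25519 as a stand-in for Apple's actual (non-public) signing scheme; every name and value here is hypothetical:

    from nacl.signing import SigningKey
    from nacl.exceptions import BadSignatureError

    # Toy stand-ins: Apple's signing key, and the verify key pinned in the OS image.
    apple_key = SigningKey.generate()
    PINNED_VERIFY_KEY = apple_key.verify_key

    def accept_hash_db(signed_blob: bytes) -> bytes | None:
        # Accept a hash database only if it arrived via a signed update.
        try:
            return PINNED_VERIFY_KEY.verify(signed_blob)
        except BadSignatureError:
            return None  # tampered or out-of-band database: rejected

    update = apple_key.sign(b"CSAM hash DB, version 1")
    assert accept_hash_db(update) == b"CSAM hash DB, version 1"
    assert accept_hash_db(b"\x00" * 64 + b"EvilGov's extra hashes") is None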
Regarding 3: it's very easy to make a mistake in the protocol that would allow Apple to detect hashes outside the CSAM list. Without knowing exactly how their protocol works, it's difficult to know whether it is correct.
For example, here is a PSI (private set intersection) protocol that is broken in terms of point 3. I don't think this is normally considered broken in PSI, because the server knows the value, so it is part of its private set.
Server computes M_s = g . H(m) . S_s
where g is a generator of an elliptic curve, H(m) is the neural hash of the image and S_s is the server blinding secret.
The client computes M_sc = M_s . S_c where S_c is the client ephemeral secret. This M_sc value is the shared key.
The client also computes M_c = g . H(m) . S_c
and sends the M_c value to the server.
The server can now compute M_cs = M_c . S_s = M_sc since they both used the same H(m) values. This allows the server and client to share a key based on the shared image.
However, what happens if the client does its step using the ‘wrong’ image? If 3) is to hold, it should not be possible for the server to compute the key.
Client computes:
M_sc = M_s . S_c
M_c = g . H(m’) . S_c
The client's final key share is: M_sc = g . H(m) . S_c . S_s
Now server computes: M_cs = M_c . S_s = g . H(m’) . S_c . S_s
The secret shares don’t match. But if the server knows H(m’) it can compute:
M_cs’ = M_cs . inv(H(m’)) . H(m)
and this secret share will match
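To make the algebra concrete, here is a toy numeric sketch of that fix-up step. It models the curve point g . x as plain multiplication mod a prime, which preserves the algebra but none of the security; all concrete values are made up:

    import hashlib

    q = 2**255 - 19            # a prime, standing in for the group order
    g = 9                      # stand-in for the curve generator

    def H(image: bytes) -> int:
        # Stand-in for the neural hash, reduced into the scalar field.
        return int.from_bytes(hashlib.sha256(image).digest(), "big") % q

    m, m2 = b"image in the server's set", b"some other image"  # m2 plays m'
    S_s, S_c = 123456789, 987654321   # toy blinding secrets

    M_s  = g * H(m) * S_s % q         # server sends g . H(m) . S_s
    M_sc = M_s * S_c % q              # client's key: g . H(m) . S_s . S_c
    M_c  = g * H(m2) * S_c % q        # client used the 'wrong' image m'

    M_cs = M_c * S_s % q              # server gets g . H(m') . S_c . S_s
    assert M_cs != M_sc               # the shares don't match...

    # ...but a server that knows H(m') can rescale its share:
    M_cs_fixed = M_cs * pow(H(m2), -1, q) * H(m) % q
    assert M_cs_fixed == M_sc         # keys agree; point 3 is violated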
Normally this client-side list in PSI is just used to speed up the protocol, so the server does not have to do a crypto operation for every element in its set. It is not a pre-commitment from the server.
Also, maybe the way I'm doing it here is just broken in the usual sense, because it is not robust against low-entropy inputs to the hash function.
I've also reverse-engineered some of Apple's non-public crypto used in some of its services, and they have made dubious design decisions in the past that created weird weaknesses. Without knowing exactly what they are doing, I would not try to infer properties that might not exist, or trust their implementation.
Authoritarian governments have never needed preexisting technical gizmos in order to abuse their citizens. Look at what Russia and the Nazis did with no magic tech at all. Privacy stems from valuing human rights, not easily disproven lies about it being difficult to actually implement the spying.
True. You can have an authoritarian regime with little technology.
But what people are afraid of is how little it takes when the technology is there, how Western governments seem to be morphing towards that, and how some bastions of freedom are falling.
Even a Jew stuck in a ghetto in Nazi Germany enjoyed significantly better privacy at home from both the state and capital than a wealthy citizen in a typical western liberal nation today.
Soviet Russia and Nazi Germany in fact prove the exact opposite of the point you think you're making. They were foremost in using the "magic tech" of their day: telecommunications and motorised vehicles.
Even then, there were several rebellions and uprisings that were able to develop precisely because the degree of surveillance tech found today was lacking.
On the contrary, the reason Stalin and Hitler could enact repressive measures unprecedented in human history was precisely new technological developments like the telephone, the battle tank, and IBM cards. Bad people will never value human rights; getting privacy from bad people requires you to make it actually difficult to implement the spying, rather than just lie about it. That's why the slogan says, "cypherpunks write code."
No, they could enact repressive measures because they had almost absolute power. The form that oppression took would obviously rely on the then current technology, but the fundamental enabler of oppression was power, not technology.
Power which was extended through the use of said technology. I'm fine with identifying regressive hierarchical systems as the fundamental root of oppression, but pretending that the information and mechanical technology introduced in the late 19th and early 20th centuries did not massively increase the reach of the state is just ridiculous.
Prior to motorised vehicles and the telegraph for example, borders were almost impossible to enforce on the general population without significant leakage.
Ironically in spite of powered flight, freedom of movement is in a sense significantly less than even 200 years ago when it comes to movement between countries that do not already have diplomatic ties to one another allowing for state authorized travel.
Incidentally, this is part of why many punishments in the medieval age were so extreme - the actual ability of the state to enforce the law was so pathetically limited by today's standards that they needed a strong element of fear to attempt to control the population.
There's a flipside to this, too: technology doesn't just improve the ability of the state to enforce law (or social mores), it also improves the ability of its subjects to violate it with impunity.
Borders in medieval Europe weren't nearly as necessary because the physical ability to travel was limited. Forget traveling from Germany to France - just going to the next town over was a life-threatening affair without assistance and wealth. Legally speaking, most peasant farmers were property of their lord's manor. But practically speaking, leaving the manor would be quite difficult on your own.
Many churches railed against motor vehicles because the extra freedom of movement they made possible also broke sexual mores - someone might use that car to engage in prostitution! You see similar arguments made today about birth control or abortion.
Prior to the Internet, American mass communication was effectively censored by the government under a series of legally odd excuses about public interest in efficiently-allocated spectrum. In other words, this was a technical limitation of radio, weaponized to make an end-run around the 1st Amendment. Getting rid of that technical limitation increased freedom. Even today, getting banned from Facebook for spouting too much right-wing nonsense isn't as censorious as, say, the FCC fining you millions of dollars for an accidental swear word.
Whether a technology actually winds up increasing or reducing freedom depends more on how it's distributed than on just how much of it there is. Technology doesn't care one way or the other about your freedom. However, freedom is intimately tied to another thing: equity. Systems that economically enfranchise their subjects, have low inequality, and keep their hierarchies in check will see the "technology dividend" of increased freedom go to their subjects. Systems that impoverish people, have high inequality, and let their hierarchies grow without bound will instead squander that dividend on themselves.
This is a feedback effect, too. Most technology is invented in systems with freedom and equality, and then that technology goes on to reinforce those freedoms. Unequal systems squander their subjects' ability to invent new technologies. We didn't see this with Nazi Germany, because the Allies wiped them off the map too quickly; but the Soviet Union lost their technological edge over time. The political hierarchy they had established to replace the prior capitalist one made technological innovation impractical. So, the more you use technology to restrict, censor, or oppress people, the more likely that your country falls behind economically and stagnates. The elites at the top of any hierarchical system - aside from the harshest dictatorships - absolutely do not want that happening.
A lot of excellent points, though it's inaccurate to attribute the decline of the Soviet Union's scientific progress to their hierarchy replacing the "prior capitalist one": the former led to the mass industrialisation and rapid economic growth that made the USSR the first nation to conquer space flight, whereas under the latter it was an agrarian society suffering from food shortages and mass poverty. The decline of Soviet science and the Soviet economy had more to do with NATO and American restrictions on trade and movement than with anything self-imposed by the Soviets.
Capitalism is completely compatible with hierarchies and censorship; indeed, one could argue that capitalism is fundamentally incompatible with a true flattening of hierarchies, since it rests on an owner class holding a monopoly over the means of production. The majority of dictatorships around the world use capitalism as the basis of their economies. Following the dissolution of the USSR, Russia continues to be authoritarian, arguably more so than the USSR was after the Stalin era.
Aside from that, I think it's a little premature to frame the internet's ultimate effect on the world as reducing the ability of the government to censor the population when it's only been around for less than 30 years and we are already seeing mass adoption and development of both censorship and surveillance tech that goes beyond even the wildest dreams of 20th century era dictators.
I don't really buy the argument that most technology is invented in systems with freedom and equality either. It sounds more like something we want to believe than something borne out by data. The internet and rocket ships were the product of the military, an institution that has more to do with using force to enact the will of its host nation on others, and limiting their freedoms, than with preserving the freedoms of its own, especially for superpowers like the USA and the PRC, which are effectively immune to conquest by military force.
This is in fact the same for Silicon Valley and most private industry. You only think all this tech is the product of your freedom and equality because all of the actual extreme inequality and lack of freedom is kept compartmentalised to the global south through long and convoluted supply chains. It's not really freedom if only 10% of the people involved in sustaining an economy have any semblance of it (leaving aside the observation that even for this 10%, which represents the average US citizen, their actual democratic agency in the state or in their job is vanishingly low).
While there's some good food for thought in here, I want to quibble with a couple of things.
First, the Soviet Union didn't have a prior capitalist hierarchy; the whole reason for Leninism as such was that Russia was a feudal, agrarian society, which in Marx's theory had not progressed to the stage of capitalism and therefore could not progress to communism.
Second, food shortages and mass poverty did not end with the establishment of the Soviet Union; in fact, they became enormously worse. Even before being invaded by Germany in WWII, the Soviet system caused the Holodomor, a famine unprecedented in the history of the Ukraine.
Third, the internet has been around for 52 years, not less than 30. I've personally been using it for 29 years, and I can tell you that it already had a long history when I started using it. One of the founders of Y Combinator first became well-known as a result of breaking significant parts of the internet 33 years ago, an event which resulted in a criminal trial.
Fourth, the internet was the product of universities, although the universities were funded by ARPA. Rocket ships have a long evolution including not only militaries but also recreational fireworks, Tipu Sultan of Mysore, Tsiolkovsky, Goddard, the peaceful space agencies, H. G. Wells, and possibly Lagâri Hasan Çelebi.
Fifth, I don't think the argument is that the technology is necessarily produced by systems with freedom and equality, but that it is invented by them. This is somewhat dubious, but not as open-and-shut wrong as your misunderstanding of it. Goddard's rockets were put into mass production as the V2 in Nazi Germany using slave labor, but he invented them at WPI and Princeton. Tsiolkovsky lived in the Czar's Russia and then the USSR, and his daughter was arrested by the Czar's secret police, but he himself seems to have had considerable freedom to experiment with dirigibles and publish his research (but no slaves to build rockets for him), and indeed he was elected to the Socialist Academy.
I think we can make an excellent case that certain kinds of intellectual repression, whether grassroots or hierarchical, fall very heavily on the kinds of people who tend to invent things. William Kamkwamba's family thought he was insane, Galileo spent the last years of his life under house arrest, Newton was doing alchemical research that could have gotten him burned at the stake in Spain, Qin Shi Huang buried the Mohists alive, Giordano Bruno was in fact burned at the stake, and the repression of Lysenko's intellectual opposition was a major reason for the USSR's and PRC's economic troubles in the 01950s and 01960s.
Living in the so-called "global south" (a term which I have come to regard as useless for understanding the world system, if not actively counterproductive) I have to tell you that there's very little production of advanced technology going on here. But I live in Argentina, and the situation is different in different countries; Indonesia, Thailand, Vietnam, etc., have all done significant technical production for the world economy while in the grip of dictatorships, even though Argentina never has. But most countries don't. If tantalum cost ten times as much, we'd still have cellphones, and you probably wouldn't even be able to detect the price difference.
There's a difference -- often a big one -- between being theoretically able to build some feature into a phone (plus the server-side infra and staffing to support it), and actually having built it. That's the capability angle.
However, the difference between having a feature enabled based on the value of a seemingly-unrelated user-controlled setting, versus having that feature enabled all the time... is basically zero. Additionally, extending this feature to encompass other kinds of content is a matter of data entry, not engineering. That's the policy angle.
When you don't yet have a capability, it might take a lot of work and commitment to develop it. Policy, by contrast, can be changed with the flip of a switch.
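A toy sketch of that "data entry" point (the hash function and filenames are placeholders; the real system uses a perceptual hash such as NeuralHash, not SHA-256):

    import hashlib
    from pathlib import Path

    def file_hash(path: Path) -> str:
        # Placeholder for a perceptual hash such as NeuralHash.
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def scan(photo_dir: Path, flagged: set[str]) -> list[Path]:
        # Return every photo whose hash appears in the supplied database.
        return [p for p in photo_dir.glob("*.jpg") if file_hash(p) in flagged]

    # Hypothetical databases: same engineering, different data entry.
    csam_db = {"hash-of-known-csam-image"}
    meme_db = {"hash-of-winnie-the-pooh-meme"}
    matches = scan(Path("Photos"), csam_db)  # swapping in meme_db changes no code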
Yes, of course that is true. I use iCloud Photos and find this terribly creepy. If Apple must scan my photos, I'd rather they do it on their servers.
I could maybe understand the new implementation if Apple had announced they'd also be enabling E2E encryption of everything in iCloud, and explained it as "this is the only way we can prevent CSAM from being stored on our servers."
> "this is the only way we can prevent CSAM from being stored on our servers."
Why is that supposed to be their responsibility?
Given the scale of their business, there is effectively a certainty that people are transmitting CSAM via Comcast's network. That's no excuse for them to ban end to end encryption or start pushing out code to scan client devices.
When you hail a taxi, they don't search you for drugs before letting you inside.
Because they're not the police. It isn't their role to enforce the law.
They (and Google, Microsoft, Facebook, etc.) are essentially mandated reporters; if CSAM is on their servers, they're required to report it.
It's like how a doctor is a mandated reporter regarding physical abuse and other issues.
> Because they're not the police. It isn't their role to enforce the law.
They're not enforcing the law; if the CSAM matches reach a certain threshold, they check it out and, if it's the real deal, report it to the National Center for Missing and Exploited Children (NCMEC), which gets law enforcement involved if necessary.
> They (and Google, Microsoft, Facebook, etc.) are essentially mandated reporters; if CSAM is on their servers, they're required to report it.
I don't think that's correct. My understanding is that if they find CSAM, they're obligated to report it (just like anyone is). I don't believe they are legally obligated to proactively look for it. (It would be a PR nightmare for them to have an unchecked CSAM problem on their services, so they do look for it and report it.)
Consider that Apple likely does have CSAM on their servers, because they apparently don't scan iCloud backups right now. I don't believe they're breaking any laws, at least until and unless they find any of it and (hypothetically) don't report it.
Last year Congress mulled over a bill (EARN IT Act) that would explicitly require online services to proactively search for CSAM, by allowing services to be sued by governments for failing to do so. It would also allow services to be sued for failing to provide law-enforcement back doors to encrypted data. There's also SESTA/FOSTA, already in law, that rescinded CDA 230 protection for cases involving sex trafficking.
Quite honestly, my opinion is that any service not scanning for CSAM is living on borrowed time.
Not necessarily. The scanning implementation can be similar to what they plan on doing on your device. I don't want them to do the scanning on my phone. If shit hits the fan and I become a dissident, I would prefer to have the option to stop using iCloud, Dropbox or any other service that might enable my government or the secret police to suppress me.
One can encrypt a picture before uploading it to the aforementioned services, and then their scanning would find nothing. That's not a possibility for pictures taken on iPhones that get uploaded to iCloud.
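For instance, a minimal sketch using the Python cryptography package; the filenames and the separate upload step are assumptions about your workflow:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # keep this key off the synced device
    box = Fernet(key)

    with open("photo.jpg", "rb") as f:
        ciphertext = box.encrypt(f.read())

    with open("photo.jpg.enc", "wb") as f:
        f.write(ciphertext)          # only this opaque blob gets uploaded

    # Any hash-matching scan of photo.jpg.enc, server-side or pre-upload,
    # learns nothing about the underlying image.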