> Last year, the FBI and its international partners announced Operation Trojan Shield, in which the FBI secretly ran an encrypted phone company called Anom for years and used it to hoover up tens of millions of messages from Anom users.
What other services might be run, controlled, or surveilled by the US investigative authorities?
What other services might have operators that can be extorted or blackmailed by those same authorities, due to the fact that US-based data aggregators (FAANG et al) have extensive information about the lifestyles, behaviors, habits, and travel of billions of people worldwide?
We already know Apple has preserved a backdoor in the end-to-end cryptography of iMessage at the FBI's behest, as reported by Reuters. WhatsApp has always had the same backdoor (unencrypted backups to cloud services). The largest services are all unsafe for privacy.
>We already know Apple has preserved a backdoor in the end-to-end cryptography of iMessage at the FBI's behest, as reported by Reuters. WhatsApp has always had the same backdoor (unencrypted backups to cloud services). The largest services are all unsafe for privacy.
I don't agree with your characterization of that as a "backdoor", and I think it dilutes the term dangerously. There is no need to use Apple's cloud backup at all; iDevices can still be backed up to your own computer, same as always. I do think it's a real problem, and one of the clearest cases where Apple's lockdown is anti-user: it should be possible to direct convenient automatic backup at any service one wishes using standard APIs.

But it's not any kind of backdoor in iMessage, in the same way it's not a backdoor in Signal or whatever else you might choose to run. Nor would it be a backdoor if you decided to do unencrypted backups to your own NAS because, under your threat model, physical attacks there were less of a risk than losing data by losing keys. Backup is an entirely orthogonal system to the encryption of the messengers themselves. It's not a "backdoor" in a communications system, any communications system, if someone chooses to keep logs unprotected elsewhere. Lack of E2EE in the most convenient wireless backups is a flaw in the general iOS ecosystem, not in iMessage specifically.
As long as iCloud backup is a) on by default, and b) isn’t clearly marked as being readable to Apple, it is a back door in practice, especially since the FBI is the reason that they did this.
Let’s not even talk about Chinese users, as apparently Apple bending over to store all their data in CCP data centers doesn’t count.
Agreed, and it's not just Chinese users. Sticking to that one country (there are others; let's set those aside), it's Chinese users, anyone who happens to be in China, and, it would seem, also anyone anywhere in the world (the US, the UK, the EU, anywhere) who, knowingly or unknowingly, has so much as a one-time interaction with such a user.
I feel like that paragraph would lose most people because it's a long chain of connections. It's hard to do a TL;DR but here, I'll try:
Basically "If you message someone in China, Apple sees to it that your identity and content is handed to the Chinese government."
I don't know this for a fact. But as far as I can tell (and they aren't saying anything) this is exactly what is going on.
Interesting point! The thing that annoys me about Apple is that they believe their own marketing. If you say "Privacy is a Human Right" and deny it to Chinese citizens, you either don't consider Chinese people human, or you are full of shit. Honestly, I'm not sure where Apple stands, given their labor practices in China; the people who assemble the widgets certainly haven't been treated like human beings.
No, you're just wrong, and your supporting claims are wrong as well. Again, by your logic every single communication system on iOS is "backdoored" simply because iCloud Backup exists. Or, for that matter, any comms method on Linux or FreeBSD or macOS or Windows if someone makes unencrypted backups. That's horse shit, and it degrades the specificity and value of the term "backdoor" in the same way "bricked" has been degraded. A backdoor is a specific part of a given product/stack: a covert method of bypassing regular authentication. There is no covert element here; Apple clearly lays out exactly which components of iCloud are encrypted and how in its "iCloud security overview" [0].

And it's not as if E2EE backups have no downsides in the mass market: they mean users are fully responsible for their data, with zero recovery possible. I still think Apple is wrong not to offer even the capability to do better wirelessly/networked, and in fact that should be against the law, but it's not a "backdoor", and if there were options I'm sure lots of people would still pick the "recoverable in an emergency" one.

Think: if it turns out there actually is a backdoor in iMessage, like a secret second key that can be applied to any MITM'd data to decrypt it, how, pray tell, would you describe that as different from "choosing to use a non-E2EE backup method"? Or do you not think any differentiation there would matter, that such a thing would be identical to you saying right now that "Signal has a backdoor"? All that said:
>a) on by default
I've never seen it on by default; it's a toggle. I can't find anything to support this assertion, and Apple's docs, too, seem to indicate it must be turned on [1]. How would it even be possible for this to work? Apple only gives you 5 GB by default, and backups absolutely count against the quota.
>and b) isn’t clearly marked as being readable to Apple
As I linked, they do clearly convey that. If you think there should be some extra warning dialog on enabling it, maybe that's a fair criticism, but there's certainly no such standard across the software industry, including on computers. Whether something is E2EE or not is usually something those who care need to look up. Maybe that should change. But no, it's not a "backdoor in practice".
>Let’s not even talk about Chinese users, as apparently Apple bending over to store all their data in CCP data centers doesn’t count.
No, let's not, and no, it doesn't count here. That's a case with a lot more complexity than tends to come out on HN, where instead people like you use it as a lazy bit of whataboutism. Apple is in the wrong there, and so is the US for allowing/encouraging it, but not for the same reasons as with the FBI, and the path away from it is very different and harder as well. They deserve major blame in both cases, but why they deserve blame differs, and that matters.
> Again, by your logic every single communication system on iOS is "backdoored" simply because iCloud Backup exists.
You seem to be unfamiliar with the concept of iOS storage classes. The iOS security overview pdf from Apple will explain better than I can.
> I've never seen it on by default, it's a toggle.
I set up dozens of iOS devices per year. Logging into even the App Store (after declining to log in during initial setup) silently enables iCloud, and iCloud Backup. It is on by default and most users are never once presented with the toggle. You can accidentally enable it just by installing an app.
I consider iCloud backups being enabled by default to be a backdoor to E2E encryption; however there's a "more real" backdoor in iMessage, which is that Apple can undetectably add new devices (i.e. an FBI iPhone) to conversations so that all forward secrecy is broken. This has not been fixed.
Remember that the FBI really doesn't want to get caught nor does Apple.
So if you built a software package that monitors for "FBI iPhones" being added to your account, published it on GitHub for other people to use, and had it send results back to a big web dashboard, then both the FBI and Apple would have to stop immediately for fear of being caught.

Such a package could run on a Mac, where you have easy root access, because the E2E keys of chats have to be visible to all clients, not just iPhones.

Remember, you only have to find one convincing case of Apple/the FBI adding a phone to a user's account without consent, and Apple's privacy-conscious reputation is ruined.
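The monitoring idea above boils down to a diff over the set of devices registered to an account's key directory. This is a hedged sketch only: Apple exposes no public API for enumerating the devices behind an account's iMessage keys, so `fetch_devices()` below is a hypothetical stand-in for whatever a trusted client (e.g. a rooted Mac) could observe. The diff logic itself is real and trivial, which is the point.

```python
def fetch_devices() -> set[str]:
    """Hypothetical: return the identifiers of all devices currently
    registered to this account's iMessage key directory. No public API
    exists; a real tool would have to scrape the state visible to a
    trusted client, such as a Mac with root access."""
    raise NotImplementedError("no public API; requires a trusted client")

def detect_new_devices(baseline: set[str], current: set[str]) -> set[str]:
    """Return device identifiers present now that were not in the baseline.
    For an account whose owner added nothing, any non-empty result is the
    'one convincing case' the comment above describes."""
    return current - baseline

def update_baseline(baseline: set[str], current: set[str],
                    approved: set[str]) -> set[str]:
    """Fold user-approved devices into the baseline, so only genuinely
    unexpected additions keep raising alerts on the dashboard."""
    return baseline | (approved & current)
```

All of the difficulty lives in `fetch_devices()`; once the directory state is observable at all, a single unexpected entry is evidence.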
You may not be aware that Signal endpoint keys are of a device-local storage class that are excluded from backups of any kind, and consequently they do not leave the device. iMessage endpoint keys are backed up to Apple, effectively without encryption.
There is no step you can take that will easily compromise your Signal endpoint keys to a second party. Simply logging in to an iPhone (required to install apps!) will configure your device to escrow your iMessage keys to Apple.
When it comes to being run, I don't think there are too many that provide actual services because it's a lot of work to keep the secrecy going for long.
Consider the story of Crypto AG, the most enduring and therefore most successful example of this sort of exploit. Crucial to putting it in place was that the founder had been friends for 20 years with a highly placed chief at the NSA, and that's just not a scalable model. It also required convincing an independent, world-renowned crypto expert to completely compromise their own work and serve as the authority that kept lesser experts from questioning the cooked algorithm, also not a terribly scalable thing.
"Initially the device offered strong DES encryption, but this was replaced in 1984 by an NSA-supplied alternative algorithm."
"The NSA bought 12,000 DES-based PX-1000 units, along with 50 PXP-40 printers and 20,000 ROMs that had already been produced, for the total sum of NLG 16.6 million (EUR 7.5 million)."
I know this isn't probably the best approach in general, but it feels like a decent compromise for my use case:
I try to use medium sized services that are based in other countries. I figure if their government has insisted on backdoors it is less directly impactful than if my government does.
I don't really have anything to hide anyway, so if my assumptions/approach are wrong, then worst case they find out about the concert I'm talking about going to. I grew up thinking encryption and technology were going to free us, though, and have found reality to be quite the opposite -- so I try to cover my tracks out of spite, I guess.
> What other services might be run, controlled, or surveilled by the US investigative authorities?
Any service that is marketed to you as privacy- or security-as-a-service, or software sold as privacy- or security-enhancing, is virtually guaranteed to be secretly working against the interests of its users. You can't buy security or privacy in the form of software or services, because privacy and security are a set of good practices, not a product. People who think they can buy a "privacy phone" are just marks who are being conned by various organizations.
I don't follow your logic here. Why can't a company legitimately focus on a niche sub set of users who value privacy in their products? I'm thinking of products like protonmail, standard notes, and signal.
Good faith actors want to serve a market. Bad faith actors want to exploit it.
If you are seeking out a way to hide information, you are part of a market that is signalling you have something worth hiding (to you, at minimum). As a bad analogy, it's a bit like putting up a sign in front of your house that says "We went on vacation, but the door is locked!"... basically, begging to be exploited.
Short of regular, independent audits, you are mostly reduced to guessing who to trust, and even then (as demonstrated by Lavabit) the trustworthiness of the actor isn't always the only relevant factor.
> Why can't a company legitimately focus on a niche sub set of users who value privacy in their products?
No one is saying that they can't; what's being said is that they do, but not in their customers' best interest.
-----
edit: with parallel construction, there are absolutely no drawbacks to narking on your users, assuming that the way you do it is an obvious possibility that you just minimize or ridicule the likelihood of.
e.g. "Everybody knows that they can use Method A to break your encryption, but that would be company suicide! Do you seriously think they're stupid enough to do that!? They even made the client open source to be open about what they can or can't do."
rather than
"Guys, I just noticed a process running on my phone that isn't supposed to be there."
I'm not sure why it is "virtually certain". It seems very likely to me that a company which takes payments as its funding model could exist with end-to-end encryption and be a legitimate business.
Do you have any source at all to back a claim like that?
If you buy your privacy services for $9.99/month from a sponsored ad in a YouTube video, then yeah.
If you are willing to spend 6+ digits, there are absolutely good solid privacy products/services you can get. These folks aren't catering to individuals or street criminals, though.
You're falling for the exact same fallacy as the dipshits in the article fell for. The FBI was selling the ANØM "service" for $4000 per phone per year.
That's a couple orders of magnitude below the bar of what I am referring to.
I agree, if you're a person with a serious targeted privacy threat, and you think there's a magic bullet, you're kidding yourself. Any serious data privacy solution is going to involve a ton of associated meatspace solutions beyond buying one SKU and calling it done.
I'm not super informed on this topic, but I was under the impression that all the chat apps were somewhere between malevolent and incompetent, except possibly Signal.
> What other services might be run, controlled, or surveilled by the US investigative authorities?
More boringly, and more simply, they hire people who work at Apple, Google, and so forth to exfiltrate data and to constantly create new bugs that will at some point be called 0-days, and it goes on and on.
Apple and Google turn over data without subterfuge under FAA 702. That's what the Snowden leaks were about. There are APIs for the IC to fetch this data directly from the servers of Apple and Google without a warrant. They don't need to hire anyone.
> What other services might be run, controlled, or surveilled by the US investigative authorities?
> What other services might have operators that can be extorted or blackmailed by those same authorities, due to the fact that US-based data aggregators (FAANG et al) have extensive information about the lifestyles, behaviors, habits, and travel of billions of people worldwide?
> We already know Apple has preserved a backdoor in the end-to-end cryptography of iMessage at the FBI's behest, as reported by Reuters. WhatsApp has always had the same backdoor (unencrypted backups to cloud services). The largest services are all unsafe for privacy.
What about the medium-sized ones?