Hacker News
Code from the FBI’s Anom encrypted messaging app (vice.com)
204 points by danso on July 7, 2022 | 103 comments



> The code shows that the messages were secretly duplicated and sent to a “ghost” contact that was hidden from the users’ contact lists.

Lots of "secure" messaging apps do this for intel and surveillance and not just the white hats.

Other areas where "secure" messaging apps have holes are the anti-spam/moderation systems that need to view message contents, and the clients themselves, which have access to the unencrypted content. The same thing is happening in other client apps as well: VPNs, password managers, extensions, wallets, even build systems and more. Many, like VPNs, delete logs locally but send copies elsewhere -- and they have access to the entire machine and all network traffic. People are way too trusting of the "secure" systems/apps that are so common today; it all rests on trust.

All of these apps/systems would pass code checks, reviews, and security inspections, and be essentially encrypted/"secure" -- even though a copy is sent off somewhere else for review. At runtime, the leak is in where the data flows.


> Lots of "secure" messaging apps do this for intel and surveillance and not just the white hats.

Lots of VPNs, too!

"We don't keep any logs! We just pipe a direct feed to the government so they can keep logs!"


Do you have any examples?


If you have access to court records in the Netherlands, look up the case of the Dutch KPN blackmailer from a few years ago. He got caught because he used NordVPN instead of only using Tor. NordVPN handed over everything they had on him, which led to his conviction.

As a rule, if a VPN is hosted in Europe/North America, you need to assume that they log.

edit: my source is from this talk at BalCCon, unfortunately the video is not available. https://2k19.balccon.org/events/278.html


I was under the impression the KPN attacker merely used a single-hop VPN service from his KPN connection and the investigators managed to correlate traffic-flows.

Unfortunate that talk is not publicly available.


Unfortunately Google is failing me right now. There was a case within the last few years where someone was convicted because their VPN provider was sharing raw traffic (not logs) with the government. If anyone knows what I'm referring to, please chime in.

But given the existence of Room 641A[0], and other extra-judicial mass surveillance, I am confident in my assertion. Moreover, the explosion of VPN companies with large marketing budgets over the past few years has always made me suspicious.

[0] https://en.m.wikipedia.org/wiki/Room_641A


You're probably thinking of the big story from January of this year:

https://www.pcmag.com/news/nordvpn-actually-we-do-comply-wit...

NordVPN says they don't collect logs, but then it came out that they send information to law enforcement. So the big question is what information is being sent to law enforcement. Despite what NordVPN maintains, it seems like they do keep incriminating data about their users.


Maybe the vast majority of big companies listed on stock markets work for the govt, and the price of a CEO or board member keeping quiet is the income and wealth gained from these stock market listed entities?



What does raw traffic that is not in the form of logs look like? Maybe you mean that they are streaming logs in real-time rather than sending log files in batches periodically?

You don't mean sharing raw traffic as in forwarding actual requests, I wouldn't think?


It could be either mirroring all the traffic to an agency-provided black box, or sending just NetFlow (or sFlow) metadata about the traffic.

And if someone thinks the first option is not realistic - this is how almost every ISP in Russia works (search for SORM-2 and SORM-3 for more detail, typically traffic is mirrored at ISP's border gateway(s)). Sure, Russia or China wouldn't be great examples, but the point is that it's technically possible, even at scale, and all the real problems are in the meatspace (legal enforcement or coercion).
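To make "just metadata" concrete, here's a rough sketch (not from the article) of packing and parsing one NetFlow v5-style flow record with Python's stdlib struct. The field layout follows the standard v5 record; the addresses and counters are invented:

```python
import socket
import struct

# NetFlow v5 flow-record layout (48 bytes) -- the kind of "metadata" an ISP or
# VPN could hand over without touching a single payload byte.
V5_RECORD = struct.Struct("!4s4s4sHHIIIIHHBBBBHHBBH")

def parse_record(buf: bytes) -> dict:
    (src, dst, nexthop, in_if, out_if, pkts, octets, first, last,
     sport, dport, _pad1, tcp_flags, proto, tos,
     src_as, dst_as, src_mask, dst_mask, _pad2) = V5_RECORD.unpack(buf)
    return {
        "src": f"{socket.inet_ntoa(src)}:{sport}",
        "dst": f"{socket.inet_ntoa(dst)}:{dport}",
        "proto": proto,
        "bytes": octets,
        "packets": pkts,
    }

if __name__ == "__main__":
    # Fabricate one record: 10.0.0.5:55311 -> 93.184.216.34:443, TCP, ~1 MB.
    rec = V5_RECORD.pack(
        socket.inet_aton("10.0.0.5"), socket.inet_aton("93.184.216.34"),
        b"\x00" * 4, 1, 2, 800, 1_000_000, 0, 60_000,
        55311, 443, 0, 0x18, 6, 0, 0, 0, 24, 24, 0)
    print(parse_record(rec))
```

Who talked to whom, when, over which port, and how many bytes -- that alone is often enough to deanonymize someone behind a single-hop VPN.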


> You don't mean sharing raw traffic as in forwarding actual requests, I wouldn't think?

The usual method is either to use a splitter or switch configuration to mirror traffic to another interface, attached to a machine running packet capture/analysis tools.


Unencrypted obviously.


So one way you can identify VPN traffic is to briefly slow the connection between the target and the VPN server while observing the connections coming out of the VPN server. Spot the slow connection coming out, and it's possible to identify where the VPN traffic is heading! It's just traffic shaping.
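A toy sketch of that correlation step (all flow names and throughput numbers are invented): throttle the target during a known window, then look for the egress flow whose throughput dips in that same window.

```python
def correlation_score(series, window):
    """Ratio of mean throughput inside the throttle window vs outside.

    A flow we throttled should score well below 1.0; unrelated flows
    should stay near 1.0.
    """
    inside = [v for t, v in enumerate(series) if t in window]
    outside = [v for t, v in enumerate(series) if t not in window]
    return (sum(inside) / len(inside)) / (sum(outside) / len(outside))

def likely_match(egress_flows, window):
    """Return the egress flow that dipped the most during the window."""
    return min(egress_flows, key=lambda name: correlation_score(egress_flows[name], window))

if __name__ == "__main__":
    window = {3, 4, 5}          # seconds during which we throttled the target
    egress = {                  # observed throughput (KB/s) leaving the VPN server
        "flow-a": [90, 95, 92, 91, 94, 90, 93, 92],
        "flow-b": [80, 85, 83, 20, 18, 22, 84, 81],   # dips inside the window
        "flow-c": [60, 62, 58, 61, 59, 63, 60, 62],
    }
    print(likely_match(egress, window))   # flow-b
```

Real attacks need to handle noise, many concurrent flows, and repeated trials, but the principle is exactly this simple.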

Think of it as monitoring vehicle movements on a road network that crosses borders: you can't see the contents of the vehicles, but you can see where they are heading back and forth multiple times, and thus work out what they are up to, even if the destination is a cloud server!

The article also shows that the network carriers, landline and mobile alike (many of them publicly listed), don't monitor their networks to protect their users. Do they fail in their duty of care? I think many victims of crime could have a case; it wouldn't be hard for carriers to spot what's going on, in much the same way the postal service can tell when people ship drugs through the mail, or supermarket loyalty-card schemes can flag changes in customers that hint at health issues. The lawyers and judges probably wouldn't be able to understand it, much as most people can't understand quantum physics, so is there even a case to bring?

Everyone gives away metadata if you know what to look for; the real crime is that the current setup of society benefits a privileged few!


PureVPN’s ‘non-existent’ logs given to authorities, to arrest alleged stalker

https://thenextweb.com/news/purevpns-non-existent-logs-used-...

These guys just actually logged certain data when they said they didn't.


You also have to keep in mind that spreading FUD about secure messaging apps can be a type of manipulation, in some cases planted by the FBI.

If you don't have confidence in say, iMessage, which is pretty secure, you might instead go for a "secure" messaging app that's actually a plant (Like Anom).


I would be bewildered if the FBI doesn't have a back door into every messaging app allowed on American cell phones. That being said, there is one chat app that cannot be easily broken by the FBI: PictoChat on the Nintendo DS. You would need an FBI agent with a Nintendo DS searching for chat rooms in physical proximity to the communicators.


I'm not sure if this is 100% serious but if it is, how could the FBI having a backdoor into every messaging app fit in with the relative ineffectiveness of the FBI?[0] If they truly did have the ability to spy on any messaging app without issue then you would expect them to have far more success than they actually do. For comparison, a quick Google search says China has a 99.9% conviction rate. Remember security and intelligence agencies also want people to think they are some omnipotent and omniscient entity that you cannot escape from, even though we know this is far from true[1][2].

[0]: https://time.com/magazine/us/5264136/may-14th-2018-vol-191-n...

[1]: https://www.nytimes.com/2017/05/20/world/asia/china-cia-spie...

[2]: https://www.nytimes.com/2021/10/05/us/politics/cia-informant...


This is kind of a nitpick, but U.S. prosecutors also boast 99+% conviction rates. Note that conviction rate is not necessarily indicative of a system's effectiveness (or its fairness): prosecutors only pursue charges in a fraction of cases, and very few of those cases go to trial.

https://www.pewresearch.org/fact-tank/2019/06/11/only-2-of-f...


The first link I mentioned says that in 2017 the conviction rate of crimes referred to the Justice Dept. by the FBI was 47%. Of course I am guessing the differences here are definitional; regardless, the stat was just used as a broad proxy for how successful the FBI is at getting people it considers criminals convicted.


I always encourage people who have this positive a view of the FBI to go read the FBI's own case records and accounts.


All E2EE does is keep the general herd of users safe from malicious insiders and script kiddies. It's valuable but doesn't make these apps worthy of the catch-all "secure" descriptor.

For anyone worth targeting, there are so many options available to actors with moderate resources. They will pwn your OS with an RCE exploit; or interfere the next time you update that "E2EE" app via Google or Apple's servers; or your laptop will take a few seconds longer to reappear at airport security; etc.

Marketing messengers as "secure" because they use some derivative of the Double Ratchet is like your bank saying your funds are secure because their website uses TLS.


I don't know why radicals can't go back to physical written works and the spoken word. No surveillance from someone sitting in a cubicle 1,000 miles away, at least. Suddenly the agency needs to spend a lot more money and effort to physically infiltrate your group and intercept written communication. It's worked for thousands of years, and the internet has not made it obsolete, contrary to popular belief.


Still easier to do it digitally and e.g. physically exchange digital encryption keys?


Yes lol. You can share your AES key physically and write/edit an application that encrypts with it. You could share one-time pads, or other preshared secrets for techniques like chaffing and winnowing and whatnot. It's always amusing that these guys opt instead for trust in some bizarre app.
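For illustration, a one-time pad really is about this short; a stdlib-only sketch, with the usual caveats (the pad must be truly random, at least as long as the message, exchanged out of band, and never reused):

```python
import secrets

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    # XOR each byte with the pad; decryption is the same operation.
    assert len(pad) >= len(plaintext), "pad must be at least message-length"
    return bytes(p ^ k for p, k in zip(plaintext, pad))

otp_decrypt = otp_encrypt  # XOR is its own inverse

if __name__ == "__main__":
    pad = secrets.token_bytes(64)          # exchanged in person, used once
    ct = otp_encrypt(b"meet at the dead drop", pad)
    print(otp_decrypt(ct, pad))            # b'meet at the dead drop'
```

Key management, not the math, is the hard part -- which is exactly why it only works for groups that can meet physically.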


> Lots of "secure" messaging apps do this for intel and surveillance and not just the white hats.

It's how Apple would do iMessage intercepts for the FBI.


I wonder if Messages will be available at all in lockdown mode? If Apple can be compelled to build in surveillance (and it's not clear to me that they can be), then it really should be.


In a word, yes.

> Messages: Most message attachment types other than images are blocked. Some features, like link previews, are disabled.

https://www.apple.com/newsroom/2022/07/apple-expands-commitm...


Lockdown mode protects against bad snoopers, not good snoopers...


I think they indicated it will be, for example, they indicated that link previews would not be available, if I recall correctly.


You can't really do this in iMessage.


What makes you think that?


Because it's impossible to "hide" recipients/devices. The key-lookup request will reveal that you're messaging someone you didn't intend to, and adding a "stealth" device to a number auto-notifies all devices linked to that number.


I was talking about using a 3rd-party key and copying during transit, not explicitly adding an additional contact to the message and sending it through standard channels. There have been papers in the past, based on Apple's own documentation, that showed it was possible.

But based on some googling, Apple hasn't provided this capability, at least not to the FBI (they claim it would require modifying the iMessage key server, so we have to trust that they haven't done that), mostly because they don't need to: the iCloud backups are usually enough for pen-register intercepts. A leaked FBI document backs this claim up:

https://www.rollingstone.com/politics/politics-features/what...


Again, this isn't possible; if it were, we would have seen it used already, and the FBI wouldn't need to rely on the iCloud backup strategy for investigations.


> we would have seen its use already and why the FBI uses the iCloud backup strategy for investigations.

That's what I just said. It still doesn't mean the feds would be dumb enough to make visible changes to the destination contacts.


How could it be done better?

I imagine for example, the protocol could be opensource and documented, and then the app-maker could be a different company than the server-owner.

The server-owner need not be trustworthy as long as the protocol is sufficiently reviewed.

The app-maker still needs to be trusted, but you can at least constrain the app to only communicating with the one allowed server and having no other network access.

Perhaps the server owner could also make a webpage showing all the people you have communicated with... That way a malicious client couldn't send your data astray.
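That "webpage of everyone you have communicated with" is essentially a transparency log. A toy sketch of the idea, assuming a hypothetical server that publishes an append-only hash chain which clients and auditors can replay:

```python
import hashlib
import json

class ContactLog:
    """Append-only hash chain of (sender, recipient) events.

    Anyone holding the latest head hash can detect if the server
    rewrites history: replaying the entries must reproduce the head.
    """
    def __init__(self):
        self.entries = []
        self.head = "0" * 64

    def append(self, sender: str, recipient: str) -> str:
        entry = json.dumps({"from": sender, "to": recipient, "prev": self.head})
        self.head = hashlib.sha256(entry.encode()).hexdigest()
        self.entries.append(entry)
        return self.head

    def verify(self, expected_head: str) -> bool:
        head = "0" * 64
        for entry in self.entries:
            if json.loads(entry)["prev"] != head:
                return False        # chain was rewritten
            head = hashlib.sha256(entry.encode()).hexdigest()
        return head == expected_head

if __name__ == "__main__":
    log = ContactLog()
    log.append("alice", "bob")
    head = log.append("alice", "carol")
    print(log.verify(head))        # True
```

Certificate Transparency works on the same principle; the hard part is making sure the server can't show different logs to different people.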


Anom wouldn’t pass a security review. Certainly not a successful decompile. Certainly not code review.


Matrix treats all chats as chat rooms, even 2-person chats. This is promoted as a simplification, but maybe it's a security problem: if a protocol only allows 2 people in a chat, it's harder to exfiltrate the messages.


Encrypted chats need device/key verification/permission before the receiver can see any message contents.

Even if Matrix were to limit chats at the protocol level, a malicious sysadmin could probably fake a cross-signed device if they had access to the client like this. I don't think this is actually a problem; a chat room is as good a representation as anything.


As the sibling implies: it doesn't matter if the conversation is limited to 2 users; the attack would just shift to adding a ghost device to one of those users, which is equivalent to (and arguably more subtle than) adding a 3rd user to the conversation.


>Lots of "secure" messaging apps do this for intel and surveillance and not just the white hats.

That's a broad claim. Do you have specific examples?


> The code itself is messy, with large chunks commented out and the app repeatedly logging debug messages to the phone itself.

[…]

> For this new analysis of the code, a source provided a copy of the Anom APK as a standalone file which Motherboard then decompiled.

This doesn't add up. The code snippets they show are decompiled obfuscated Java. But compilation->decompilation wouldn't have preserved comments from the original source code.

Sometimes Java decompilers spit out chunks of code they don't understand as commented-out sections for manual analysis. Maybe Motherboard is misinterpreting this output? And, yeah, decompiled Java is gonna be messy, especially if the compiled code was obfuscated, as looks to be the case here.


So what's the strategy moving forward? The operation clearly hasn't permanently solved crime, the next generation of organized crime bosses won't trust any apps to handle their secrets, so I guess their communication just moves offline again? Or maybe each develops their own methods in house that they know they can trust (such as shooting holes in a wall on call of duty)?


> So what's the strategy moving forward? The operation clearly hasn't permanently solved crime, the next generation of organized crime bosses won't trust any apps to handle their secrets, so I guess their communication just moves offline again? Or maybe each develops their own methods in house that they know they can trust (such as shooting holes in a wall on call of duty)?

Maybe the strategy is just "get the win now, and tomorrow's another day." A lot of people seem to think it's a bad idea to use some technique that will motivate a counter-technique, as if the counter-technique could be prevented by not using the technique (it comes up a lot when sanctions are discussed). But that's a flawed assumption. Sometimes sitting on a technique means it becomes obsolete before you can realize any advantage from it, and it's actually smarter to capture that advantage while you still can.

Also, if organized crime stops trusting apps and goes back offline for communication, it could become far less efficient/effective, which would be a win for law enforcement.

Also, a sucker is born every minute. Maybe the Mob will shy away from encrypted apps due to institutional memory, but some upstart criminal orgs without that memory may still adopt "FBI 'Encrypted' Messenger 2.0."


People who seriously need communications hidden are probably using espionage-type tactics already. Put a benign signal out in the public realm, and your agents hold the secret key that turns that seemingly irrelevant signal into an actionable message. Numbers stations are the famous example (1), but let's think in 2022 terms. Imagine an Instagram page that posts a certain meme of interest at a particular time as the signal. Or a seemingly automatic Discord bot operation. Or a Reddit bot that comes in at a certain time to automatically correct a common typo. Anything could be used as a signal. This form of communication is, by that virtue, practically impossible to prevent, and people would be wise to use it if they are discussing things that require such protection.

1. https://en.wikipedia.org/wiki/Numbers_station
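A sketch of that signaling pattern using a plain HMAC over the day's date; the key, the delivery channel, and the list of pre-agreed actions are all invented for illustration:

```python
import hashlib
import hmac

SHARED_KEY = b"exchanged-in-person"   # hypothetical pre-shared secret

def make_signal(date: str, action: str) -> str:
    """Tag an innocuous-looking post so key-holders can authenticate it."""
    return hmac.new(SHARED_KEY, f"{date}:{action}".encode(),
                    hashlib.sha256).hexdigest()[:12]

def check_signal(date: str, tag: str, actions=("go", "abort", "wait")):
    """An agent tries each pre-agreed action; a match IS the message."""
    for action in actions:
        if hmac.compare_digest(make_signal(date, action), tag):
            return action
    return None   # just a meme, nothing to see

if __name__ == "__main__":
    tag = make_signal("2022-07-07", "go")    # hidden in a hashtag, say
    print(check_signal("2022-07-07", tag))   # go
```

To an observer without the key, the tag is indistinguishable from random noise, and the post itself looks like everything else on the platform.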


The goal is not to end organized crime for all eternity. It's to arrest as many criminals as they can today.

Tomorrow they'll think of a carrier pigeon spy.


> the next generation of organized crime bosses won't trust any apps to handle their secrets, so I guess their communication just moves offline again? Or maybe each develops their own methods in house that they know they can trust (such as shooting holes in a wall on call of duty)?

Organized crime is smart, but even smart people can be dumb. They were dumb for trusting a random app rather than building in-house, or at least looking at the source code in the first place.

And as someone else noted, if they just decide to go completely offline, that's going to be much harder for them


In my opinion the goal is similar to the MPAA's goals with movie piracy: Make it harder, and many people will stop doing it. Ultimately, that is what banks do when they put money into vaults. Someone could still steal the money, but it's insanely difficult.


But in many cases, movie piracy is faster, more convenient and more agreeable than consuming through locked down ad funnels like Netflix or Disney+ which don't let me outright own the content or the viewing experience, or Blu-rays which enforce DRM and specific region-locked devices.


You're the pirate equivalent of a master jewel thief. "It's so easy! Just hotwire the security system, dance through the laser field, duh!" It's stochastic. They want to cut out casual piracy, those of us who know how to pirate are acceptable losses.


I'm speaking on the UX involved in the process. If you can understand the technology at a working level (about as difficult as learning to install a browser or email client) then you have access to an ecosystem that prizes discoverability and lack of friction.

There aren't laser fields. There's a directory and concierge. A full back-catalogue. A social network.

Most importantly, the viewing experience is consistent across all media, your media viewer of choice, without the need for a constant internet connection. Just like it used to be.

I don't encourage rampant piracy, but I certainly understand when some people don't have the luxury of purchasing an unlocked blu-ray ripper and storing old blu-rays in order to get the same viewing experience.

Another good example is old games like NOLF which are entirely unplayable today without piracy. Pirated releases often contain much smaller download sizes as well as patches to enhance or correct the experience.

Pirating won't convince companies to change, so I vote with my wallet: I try to avoid purchasing media with unacceptable licensing terms or pervasive DRM, and I purchase any media that I think aligns with my values. I also only purchase on platforms that prize discoverability, such as GOG or Steam, while avoiding locked-down platforms like Origin or the Epic Games Store.


Making crime more difficult to get away with is a deterrent to participating in crime.


The decompiler they used to view that code is not very good, that output is garbled.

If you're going to take apart JVM bytecode, you're better off using Recaf or Quiltflower.

https://github.com/Col-E/Recaf

https://github.com/QuiltMC/quiltflower


The reason the output is "garbled" is that it was obfuscated with ProGuard; there's no real way around that except manually renaming variables and classes.

Any idea whether either of these two decompilers works with Dalvik bytecode?


You could always use dex2jar first. https://github.com/pxb1988/dex2jar


Last year the author of this story was on Darknet Diaries talking about ANOM and encrypted phones in general. It was a good episode.

https://darknetdiaries.com/episode/105/


Great podcast, I really like the stories. Unfortunately it's a little basic technically. Any suggestions for more techy podcasts on netsec / the scene?


Not really. I have trouble following technical podcasts. I prefer to read technical details and listen to stories, so all the podcasts I like are focused on story telling.


I wish somebody would create a scheme to self-host the backend of an app. Like, you launch Signal and it has a button to type in the name of your own server. That server runs a VM you configure and set up on your own PC locally, then upload to AWS or something, and it has some facility to constantly report to you the hash of its memory and disk contents, along with some contract from AWS stating that Amazon cannot alter the reported hash results under any circumstances. Plus there would be some facility to validate the contents of that VM at any instant in time. Basically, the idea is that you can't trust anybody at all. I guess PGP would maybe solve this for direct messaging?
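The disk-hash part of that is the easy bit (the hard parts are trusting the host's reporting and hashing live memory). A minimal sketch of a deterministic digest over a directory tree that an operator could publish and users could compare against an image they built themselves:

```python
import hashlib
from pathlib import Path

def tree_digest(root: str) -> str:
    """Deterministic SHA-256 over every file's relative path and contents.

    Same tree -> same digest, regardless of where it's checked out;
    any changed, added, or renamed file changes the digest.
    """
    h = hashlib.sha256()
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            h.update(str(path.relative_to(root)).encode())
            h.update(path.read_bytes())
    return h.hexdigest()

if __name__ == "__main__":
    import tempfile
    with tempfile.TemporaryDirectory() as d:
        (Path(d) / "server.cfg").write_text("port=443\n")
        print(tree_digest(d))
```

The catch, as the sibling comment notes, is that a hash report is only as trustworthy as whatever produced it.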


I think something like Ricochet (if it were still actively maintained) could be a good solution.

https://github.com/ricochet-im/ricochet

Every user is their own Tor onion service, so you get E2E encryption and no centralized servers. The whole thing hinges on the security of Tor itself which is probably a safe enough bet.


Briar https://briarproject.org/ is similar; plus, AFAIK, it uses E2E encryption on top of Tor onion services. However, given how many Tor services have been caught, I believe that to keep it truly secure you have to rotate your service address, i.e. reinstall the app, losing all your communication history.


You could still have holes in a self-hosted setup. The self-hosted version could forward stuff on to someone else… so you set up a firewall. But is it really blocking everything, or is it just telling you it's blocking things? This can go on forever; it all depends on the threat model and how far down the rabbit hole you have time to go.


There’s plenty of that stuff. Jabber and XMPP. And also more up to date is Matrix.org which nobody seems to want to use and I’m not sure why.

The problem though is that you’re still trusting the code. Nothing stops self hosted from rotting on you unless you look and read the code yourself.


In case anyone is in doubt after reading your comment, I want to clarify that XMPP is "up to date" and has an active community and ecosystem of software projects, open-source and commercial.

Matrix is similar in many ways, and certainly younger, but also has a vastly different design at the protocol layer. It's more akin to a distributed JSON database/log, while XMPP is more focused on message routing and synchronization. Therefore each has different strengths/weaknesses for different use cases.

Despite these differences, both protocols indeed have IM apps and a whole lot of other software built on top of them.


> Matrix.org which nobody seems to want to use and I’m not sure why

Lots of people are using it (and probably more every day), but there are also some quite vocal haters. Of course it has its share of problems (availability of non-Electron clients and diversity of servers among them), but many of them constantly improve as the ecosystem grows.


It's pretty easy to host your own chat server. https://twitter.com/inputmice/status/1170651869359804416


We're basically working on this at Comm: https://github.com/CommE2E/comm


> Last year, the FBI and its international partners announced Operation Trojan Shield, in which the FBI secretly ran an encrypted phone company called Anom for years and used it to hoover up tens of millions of messages from Anom users.

What other services might be run, controlled, or surveilled by the US investigative authorities?

What other services might have operators that can be extorted or blackmailed by those same authorities, due to the fact that US-based data aggregators (FAANG et al) have extensive information about the lifestyles, behaviors, habits, and travel of billions of people worldwide?

We already know Apple has preserved a backdoor in the end-to-end cryptography of iMessage at the FBI's behest, as reported by Reuters. WhatsApp has always had the same backdoor (unencrypted backups to cloud services). The largest services are all unsafe for privacy.

What about the medium-sized ones?


>We already know Apple has preserved a backdoor in the end-to-end cryptography of iMessage at the FBI's behest, as reported by Reuters. WhatsApp has always had the same backdoor (unencrypted backups to cloud services). The largest services are all unsafe for privacy.

I don't agree with your characterization of that as a "backdoor", and I think it dilutes the term dangerously. There is no need to use Apple's backup at all; iDevices can still be backed up to your own computer same as always. I do think it's a real problem, and one of the clearest cases where Apple's lockdown is anti-user: it should be possible to direct convenient automatic backup at any service one wishes using standard APIs.

But it's not any kind of backdoor in iMessage, in the same way it's not a backdoor in Signal or whatever else you might choose to run. Nor would it be a backdoor if you decided to do unencrypted backups to your own NAS because, under your threat model, physical attacks there were less of a risk than losing data along with your keys. It's an entirely orthogonal system to the encryption of the messengers themselves. It's not a "backdoor" in a communications system, any communications system, if someone chooses to keep logs unprotected elsewhere. Lack of E2EE in the most convenient wireless backups is a flaw in the general iOS ecosystem, not iMessage specifically.


As long as iCloud backup is a) on by default, and b) isn’t clearly marked as being readable to Apple, it is a back door in practice, especially since the FBI is the reason that they did this.

Let’s not even talk about Chinese users, as apparently Apple bending over to store all their data in CCP data centers doesn’t count.


Agree, and it's not just Chinese users. Just talking about that one country (there are others, let's set that aside): it's Chinese users, anyone who happens to be in China, and, it would seem, anyone anywhere in the world (the US, the UK, the EU, etc.) who, knowingly or unknowingly, has so much as a one-time interaction with such a user.

I feel like that paragraph would lose most people because it's a long chain of connections. It's hard to do a TL;DR but here, I'll try:

Basically "If you message someone in China, Apple sees to it that your identity and content is handed to the Chinese government."

I don't know this for a fact. But as far as I can tell (and they aren't saying anything) this is exactly what is going on.


Interesting point! The thing with Apple that annoys me is they believe their own marketing, but if you say "Privacy is a Human Right" and deny it to Chinese citizens, you either don't consider Chinese people human, or you are full of shit. Honestly, not sure where Apple stands given their labor practices in China, the people who assemble the widgets certainly haven't been treated like human beings.


> As long as iCloud backup is a) on by default, and b) isn’t clearly marked as being readable to Apple, it is a back door in practice

Tough to disagree.


No, you're just wrong, even setting aside that your factual claims aren't right either. Again, by your logic every single communication system on iOS is "backdoored" simply because iCloud Backup exists. Or, for that matter, any comms method on Linux or FreeBSD or macOS or Windows if someone makes unencrypted backups. That's horse shit, and it degrades the specificity and value of the term "backdoor" in the same way "bricked" has been degraded. A backdoor is a specific part of a given product/stack: a covert method of bypassing regular authentication. There is no covert element here; Apple clearly lays out exactly which components of iCloud are encrypted, and how, in their "iCloud security overview" [0].

And it's not as if E2EE backups have no downsides for a mass market: they mean users are fully responsible for their data, with zero recovery possible. I still think Apple is wrong not to offer even the capability to do better wirelessly/networked, and in fact that should be against the law, but it's not a "backdoor", and if there were options I'm sure lots of people would still pick the "recoverable in an emergency" one.

Think about it: if it turns out there actually is a backdoor in iMessage, like a secret second key that can be applied to any MITM'd data to decrypt it, how would you even describe that, pray tell, as different from "choosing to use a non-E2EE backup method"? Or do you not think any differentiation there would matter, so that such a thing would be identical to you saying right now that "Signal has a backdoor"? All that said:

>a) on by default

I've never seen it on by default; it's a toggle. I can't find anything to support this assertion, and Apple's docs seem to indicate that it must be turned on [1]. How would it even be possible for this to work? Apple only gives you 5GB by default, and backups absolutely count against the quota.

>and b) isn’t clearly marked as being readable to Apple

As I linked, they do clearly convey that. If you think there should be some extra warning dialog on enabling it, maybe that's a fair criticism, but there's certainly no standard for that across the software industry, including on computers. Whether something is E2EE or not is usually something those who care need to look up. Maybe that should change. But no, it's not a "backdoor in practice".

>Let’s not even talk about Chinese users, as apparently Apple bending over to store all their data in CCP data centers doesn’t count.

No, let's not, and no, it doesn't count here. That's a case with a lot more complexity than tends to come out on HN, where instead people like you use it as a lazy bit of whataboutism. Apple is in the wrong there, and the US for allowing/encouraging it as well, but not for the same reasons as with the FBI, and the path away from it is very different and harder as well. They deserve major blame in both cases, but why they deserve blame differs, and that matters.

----

0: https://support.apple.com/en-us/HT202303

1: https://support.apple.com/guide/iphone/back-up-iphone-iph3ec...


> Again, by your logic every single communication system on iOS is "backdoored" simply because iCloud Backup exists.

You seem to be unfamiliar with the concept of iOS storage classes. The iOS security overview pdf from Apple will explain better than I can.

> I've never seen it on by default, it's a toggle.

I set up dozens of iOS devices per year. Logging into even the App Store (after declining to log in during initial setup) silently enables iCloud, and iCloud Backup. It is on by default and most users are never once presented with the toggle. You can accidentally enable it just by installing an app.


I consider iCloud backups being enabled by default to be a backdoor to E2E encryption; however there's a "more real" backdoor in iMessage, which is that Apple can undetectably add new devices (i.e. an FBI iPhone) to conversations so that all forward secrecy is broken. This has not been fixed.

https://www.wired.com/2015/09/apple-fighting-privacy-imessag...


Remember that the FBI really doesn't want to get caught nor does Apple.

So if you make a software package that monitors for "FBI iPhones" being added to your account, make it available on GitHub for other people to use, and have it send results back to a big web dashboard, then both the FBI and Apple would have to stop immediately for fear of being caught.

Such a package could run on a Mac, where you have easy root access, because the E2E keys for chats have to be visible to all clients, not just iPhones.

Remember, you only have to find one convincing case of Apple/the FBI adding a phone to a user's account without consent, and Apple's privacy-conscious reputation is ruined.
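A minimal sketch of what such a watcher's core logic could look like, in Python. Note the device-list fetch is entirely hypothetical: `fetch_registered_devices` stands in for however you'd enumerate the devices on your account (Apple exposes no public API for this, so a real tool would have to scrape it from a logged-in client session). The baseline diffing is the part that matters.

```python
# Sketch: flag when an unrecognized device appears on the account.
# fetch_registered_devices() is a hypothetical stand-in -- Apple has no
# public API for listing iMessage devices, so a real tool would scrape
# this from a logged-in Apple ID session.
import json
from pathlib import Path

STATE_FILE = Path("known_devices.json")

def fetch_registered_devices():
    # Hypothetical: return the device IDs currently on the account.
    raise NotImplementedError

def check_for_ghost_devices(current_devices, state_file=STATE_FILE):
    """Compare the current device set to the last known baseline.

    Returns the set of newly appeared device IDs (possible ghosts)
    and persists the updated baseline for the next run.
    """
    known = set(json.loads(state_file.read_text())) if state_file.exists() else set()
    current = set(current_devices)
    new_devices = current - known
    state_file.write_text(json.dumps(sorted(current)))
    return new_devices
```

On a first run everything looks "new" (it establishes the baseline); on later runs only additions are flagged, which is exactly the event you'd want to report to a shared dashboard.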


> There is no need to use Apple's backup at all, iDevices can still be backed up to your own computer same as always.

Even if you disable iCloud Backup, Apple can still read all of your iMessages.

They'll be in the (on by default) iCloud Backups of everyone you chat with.

It is absolutely a backdoor: https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...

You may not be aware that Signal endpoint keys are kept in a device-local storage class that is excluded from backups of any kind, and consequently they never leave the device. iMessage endpoint keys are backed up to Apple, effectively without encryption.

There is no step you can take that will easily compromise your Signal endpoint keys to a second party. Simply logging in to an iPhone (required to install apps!) will configure your device to escrow your iMessage keys to Apple.

That's a backdoor any way you slice it.


Why use Apple in the first place? Problem solved.


When it comes to being run, I don't think there are too many that provide actual services because it's a lot of work to keep the secrecy going for long.

Consider the story of Crypto AG, the most enduring and therefore most successful example of this sort of exploit. Crucial to putting it in place was that the founder had been friends for 20 years with a highly placed NSA official, and that's just not a scalable model. It also required convincing an independent, world-renowned crypto expert to completely compromise his own work and serve as the authority that kept lesser experts from questioning the cooked algorithm, also not a terribly scalable thing.


https://www.cryptomuseum.com/crypto/philips/px1000/index.htm

"Initially the device offered strong DES encryption, but this was replaced in 1984 by an NSA-supplied alternative algorithm."

"The NSA bought 12,000 DES-based PX-1000 units, along with 50 PXP-40 printers and 20,000 ROMs that had already been produced, for the total sum of NLG 16.6 million (EUR 7.5 million)."


I know this isn't probably the best approach in general, but it feels like a decent compromise for my use case:

I try to use medium sized services that are based in other countries. I figure if their government has insisted on backdoors it is less directly impactful than if my government does.

I don't really have anything to hide anyway, so if my assumptions/approach are wrong, then worst case they find out about the concert I'm talking about going to. I grew up thinking encryption and technology were going to free us, though, and have found reality to be quite the opposite -- so I try to cover my tracks out of spite, I guess.


> What other services might be run, controlled, or surveilled by the US investigative authorities?

Any service that is marketed to you as privacy- or security-as-a-service, or software sold as privacy- or security-enhancing, is virtually guaranteed to be secretly working against the interests of its users. You can't buy security or privacy in the form of software or services, because privacy and security are a set of good practices, not a product. People who think they can buy a "privacy phone" are just marks who are being conned by various organizations.


I don't follow your logic here. Why can't a company legitimately focus on a niche sub set of users who value privacy in their products? I'm thinking of products like protonmail, standard notes, and signal.


Good faith actors want to serve a market. Bad faith actors want to exploit it.

If you are seeking out a way to hide information, you are part of a market that is signalling you have something worth hiding (to you, at minimum). As a bad analogy, it's a bit like putting up a sign in front of your house that says "We went on vacation, but the door is locked!"... basically, begging to be exploited.

Short of regular, independent audits, you are mostly reduced to guessing who to trust, and even then (as demonstrated by Lavabit) the trustworthiness of the actor isn't always the only relevant factor.


> Why can't a company legitimately focus on a niche sub set of users who value privacy in their products?

No one is saying that they can't; the claim is that they do, just not in that customer's best interest.

-----

edit: with parallel construction, there are absolutely no drawbacks to narking on your users, as long as the method you use is an obvious possibility whose likelihood you simply minimize or ridicule.

e.g. "Everybody knows that they can use Method A to break your encryption, but that would be company suicide! Do you seriously think they're stupid enough to do that!? They even made the client open source to be open about what they can or can't do."

rather than

"Guys, I just noticed a process running on my phone that isn't supposed to be there."

Which is what we've been taught to watch out for.


Yeah ... i mean ... everyone who uses protonmail non-ironically is a dupe. It is virtually certain that it is a front for state intelligence agencies.


I would love to hear your explanation for that claim. They have publicly entered legal battles over compliance with LE requests.

The government doesn’t have the resources to compromise every online service. There’s money on the line for entities like proton.


I'm not sure why it is 'virtually certain'. It seems very likely to me that a company which takes payments as its funding model could offer end-to-end encryption and be a legitimate business.

Do you have any source at all to back a claim like that?


Connections with big academia and their custom syncing thing are red flags, if nothing else.


If you buy your privacy services for $9.99/month from a sponsored ad in a YouTube video, then yeah.

If you are willing to spend 6+ digits, there are absolutely good solid privacy products/services you can get. These folks aren't catering to individuals or street criminals, though.


You're falling for the exact same fallacy as the dipshits in the article fell for. The FBI was selling the ANØM "service" for $4000 per phone per year.


That's a couple orders of magnitude below the bar of what I am referring to.

I agree, if you're a person with a serious targeted privacy threat, and you think there's a magic bullet, you're kidding yourself. Any serious data privacy solution is going to involve a ton of associated meatspace solutions beyond buying one SKU and calling it done.


I'm not super informed on this topic, but I was under the impression that all the chat apps were somewhere between malevolent and incompetent, except possibly Signal.


Matrix and some forks of Signal are also cool.

But, yes, largely correct.


> What other services might be run, controlled, or surveilled by the US investigative authorities?

All closed source software


> What other services might be run, controlled, or surveilled by the US investigative authorities?

More boringly, and more simply, they hire people who work at Apple, Google, and so forth to exfiltrate data and constantly create new bugs that will at some point be called 0-days, and it goes on and on.


Apple and Google turn over data without subterfuge under FAA 702. That's what the Snowden leaks were about. There are APIs for the IC to fetch this data directly from the servers of Apple and Google without a warrant. They don't need to hire anyone.


The only jump on encryption you can hope to have is quantum encryption. If you don't know what quantum encryption is, you'd better learn. In quantum computing, qubits exist in superposition, meaning they represent all states at the same time; change one of a pair of entangled qubits and the other changes instantly, no matter the distance between them. The military is already developing ways to use qubits to create a communication network that cannot be tapped without detection; there are papers already published (https://ieeexplore.ieee.org/document/1269020), so you know it's real. I say it again: if you ever create something world-changing, you'd better believe it's not solely yours.
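For what it's worth, the tamper-evidence property being gestured at here comes from quantum key distribution (the BB84 protocol), not quantum computing per se. A toy classical simulation of the sifting step, assuming an ideal noiseless channel and no eavesdropper: wherever Alice and Bob happen to pick the same measurement basis, their bits agree, and those positions form the shared key.

```python
# Toy BB84 sketch (classical simulation, ideal channel, no eavesdropper).
# Alice sends random bits encoded in random bases; Bob measures in random
# bases. Positions where the bases match form the shared "sifted" key.
import random

def bb84_sift(n, seed=None):
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]
    bob_bases   = [rng.choice("+x") for _ in range(n)]
    # Ideal channel: a matching basis reads Alice's bit exactly; a
    # mismatched basis gives a coin flip, and that position is discarded.
    bob_bits = [b if ab == bb else rng.randint(0, 1)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    alice_key = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)
                 if ab == bb]
    bob_key = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases)
               if ab == bb]
    return alice_key, bob_key
```

In real BB84, Alice and Bob then compare a random sample of the sifted key over a public channel: an eavesdropper who measured in the wrong basis introduces errors in that sample, which is how interception is detected rather than prevented.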


And we wonder why the common person believes in conspiracy theories about the government working against them.


weird. the page briefly loads and then goes to a full screen 404 page for me.


I'm able to read it if I turn off Javascript.



