I wish the definitions were spelled out. It says Signal isn't "anonymous", which I assume means "uses a phone number to find peers". And it has the usual feature matrix problem: sure XMPP "does E2E". But what does that mean? It supports S/MIME. Do you want S/MIME? (You don't.) It supports OTR, TS and SCIMP too: but you need to be an expert in messaging schemes to understand how those are different. None of them implement double ratchets. None of them implement even close to the privacy features Signal has implemented. But on this diagram it is clearly better because there is more green and less red.
Another example: "open server" and "on-premise" says nothing about whether or not you really want to run one of those instances. It just says that hypothetically one could.
In terms of errors: the linked "E2E audit" for Telegram did not audit E2E at all, and in fact only cites sources saying that it's probably fucked. Wire has a real audit that isn't listed. WhatsApp uses the Signal protocol, just with fewer of the privacy tweaks in the implementation.
Use WhatsApp to talk to normal people. Use Signal for nerds, and... probably Matrix for group collab? Or maybe stop caring about secure messages for group collab so much :-)
Please make comments on individual cells for improvements to be seen/added more easily. This is obviously a big research undertaking that got thrown together last weekend :)
> Another example: "open server" and "on-premise" says nothing about whether or not you really want to run one of those instances. It just says that hypothetically one could.
I know a number of people that run Matrix servers for personal use and for companies. The entire French government runs on riot.im/matrix.org.
When security and really privacy matters, you don't want a third party being able to push updates to your clients/servers at any time without warning.
> None of them implement even close to the privacy features Signal has implemented. But on this diagram it is clearly better because there is more green and less red.
What features specifically? Happy to add more columns if signal really has anything unique to offer here.
The things Signal gets red marks on are pretty fair though imo, and things others do better.
> Use WhatsApp to talk to normal people.
I think you will find many options above WhatsApp on the list in terms of security and privacy that have clients that are every bit as simple to use.
Other than their (very) effective marketing advantage, -why- would you encourage people towards these respective walled gardens instead of more open alternatives listed?
In order to comment on individual cells, we appear to first have to have an argument about how audits work. You say WhatsApp can only "claim" certain features as a consequence of it being closed source, but that's because of a misunderstanding about how audits work.
In a backchannel, as a consequence of this HN article, someone (names withheld to protect the guilty, they can identify themselves if they'd like) started looking at Dust and figured out the key store password is a hardcoded, short, ASCII string and the messages are encrypted with unauthenticated AES-CBC. They did not need the source code to do that.
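For readers who haven't seen this pattern before, here is a minimal, hypothetical sketch in Go of the two anti-patterns described above (illustrative only, not Dust's actual code): a hardcoded key store secret, and unauthenticated AES-CBC whose ciphertext an attacker can tamper with undetected.

    // Sketch of the anti-patterns described above; hypothetical, not Dust's code.
    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "fmt"
        "io"
    )

    func main() {
        // Anti-pattern 1: a short, hardcoded ASCII string as the key store
        // secret. Anyone who extracts the binary recovers it.
        key := []byte("hunter2-hunter2!") // 16 bytes -> AES-128

        block, err := aes.NewCipher(key)
        if err != nil {
            panic(err)
        }

        iv := make([]byte, aes.BlockSize)
        if _, err := io.ReadFull(rand.Reader, iv); err != nil {
            panic(err)
        }

        // Anti-pattern 2: CBC with no MAC. The ciphertext is malleable.
        plaintext := []byte("exactly 16 bytes") // one AES block, for brevity
        ciphertext := make([]byte, len(plaintext))
        cipher.NewCBCEncrypter(block, iv).CryptBlocks(ciphertext, plaintext)

        // Flipping a bit in the IV flips the same bit in the decrypted first
        // block, and nothing in the scheme detects the tampering.
        iv[0] ^= 0x01
        decrypted := make([]byte, len(ciphertext))
        cipher.NewCBCDecrypter(block, iv).CryptBlocks(decrypted, ciphertext)
        fmt.Printf("%q\n", decrypted) // "dxactly 16 bytes": silently corrupted
    }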
Even if they release all their code at this moment to a dozen third party code auditing firms, they can then undermine all the verified security with the next closed binary they release.
I will never use Signal with their current direction and don't recommend anyone use it, but they get credit where it is due. They actually offer source code for their walled garden and allow that their basic crypto can -generally- be verified except for extreme cases like my other comment.
Allo, WhatsApp and other closed systems give you no reason to trust them other than faith in the people advocating them and faith that their engineers got it 100% right. Their claims can't be verified, so they can only be marked as just that: claims.
Sure, plenty of obvious flaws can be spotted without source code access, but many are hard to find even if you -do- have source code to the point they would probably have never been found were they not open to allow the right set of eyes to eventually read the right section of code (Heartbleed etc).
I found random number generation flaws in Terraform I would have -never- found without source code access. It is for this reason I trust Terraform and Hashicorp quite a bit. They normally get it right, but are not afraid to have other people audit and point out flaws because they don't have this arrogant idea their engineers will get everything right 100% of the time.
Security is -hard- and anyone that thinks they can get it right with a SPOF closed source approach is more interested in marketshare than security.
Trust, but verify. Likewise if you are not even allowed to verify, you should instantly distrust.
> Allo, whatsapp and other closed systems that give you no reason to trust them other than faith in the people shilling them. Their claims can't be verified so they can only be marked as just that, claims.
I have repeatedly refuted that point and you have repeatedly ignored it.
> Sure, plenty of obvious flaws can be spotted without source code access, but many are hard to find even if you -do- have source code to the point they would probably have never been found were they not open to allow the right set of eyes to eventually read the right section of code (Heartbleed etc).
You are implying that no source code prevents you from finding obvious flaws, and I have given you two counterexamples in a messaging service that came up in this thread that I had never heard of before. That was casual peeking, not even a serious audit.
> I found random number generation flaws in Terraform I would have -never- found without source code access.
Do you professionally audit software? The fact that you can't find a bug without source does not mean that no-one can. Project Zero finds bugs in Windows and Edge every other day that are a lot more complicated than figuring out what RNG Terraform uses and how Terraform uses it.
> Security is -hard- and anyone that thinks they can get it right with a SPOF closed source approach is more interested in marketshare than security.
Since you've impugned my motives in two separate places in that post (referring to me as a "shill"), I am no longer interested in discussing this with you. I'm sure people can make up their minds from the thread.
I changed the word "shill" in an edit right after I posted as that was unfair/unhelpful and realized you might think it was directed at you. It was not. I welcome this type of debate personally.
> Do you professionally audit software?
I do as a matter of fact. I'll be honest: it is normally much easier in closed products, as I know what to look for. It is generally much harder to find flaws in popular open source systems, as someone else has usually long beaten me to the low hanging fruit in critical codepaths.
I have been burned and seen others burned so many times by closed software and teams that ship security regressions that I now personally use only open tools I can audit at some level, or can audit the auditing and reproducible build process. I have seen "security" companies cut corners too many times in order to feature farm to trust anything that won't let me see the source code.
I have even professionally audited multiple systems on the spreadsheet myself, some of which have vulnerabilities I'm aware of that are currently under embargo.
So far you have cited examples which I not only didn't ignore but responded to with counterexamples.
This is however turning into an open source vs. closed source debate with subjective evidence, which in the end always boils down to whether or not you have blind faith in a small group of people.
In the spreadsheet I tried to only include things that could be fairly objective, but when we get into concepts of trust of binaries and their authors, it gets muddy to be sure, and we will end up choosing paths based on our own threat profiles and experience.
This is why I tried to include -everything- in the spreadsheet I am aware of, so people with threat profiles different than mine can make informed choices.
You professionally audit secure messaging applications? For what kinds of vulnerabilities? I have turned down invitations to audit some of these applications because I didn't feel qualified to render an assessment (I've been doing professional software security assessment, of closed/open source applications, since 1996). Did you take money for those assessments? Which ones did you do? I'd like to take a closer look at some of them sometime.
I audit tools myself, my employers, or partners consider and if I find obvious flaws then I file bugs for those products and generally we either don't use them or limit their use.
I feel qualified only to certify that a tool is obviously insecure, but I don't think -anyone- will ever be qualified to solo certify something as totally secure (sadly, that is how clean audits are generally read). I have in the past hired 3-4 audit firms, and each would find flaws the others did not, and that mine did not, but fail to spot flaws that mine did. No one has the full picture.
The only relevant one not under embargo atm was my recent casual 2-3 hour audit of Lifesize, which right away turned up a number of alarming issues the company would not address. Some of these may have been found to be non-issues under further inspection, but there was more than enough to assert security was not a major focus of the platform.
This cursory look was all it took for me to feel confident in ending any consideration of that product to protect the interests of the entity considering it.
I reported all these to the company, got no response for over 90 days, and made my findings public after due warning.
I think you probably understand where I am coming from at this point. When I think about "audits" for secure messengers, I think about professional, specialized, contracted assessments with dedicated teams of experts (I'm a little biased, since that used to be my line of work). And my two objections to this discussion are:
1. I think LVH is right to point out that "audit" doesn't have much meaning in the document you've produced; it gloms together assessments of wildly differing depth and quality and condenses them all to a single pass/fail.
2. Secure messaging security is hard, much harder than secure transports (which you alluded to in the other subthread when you mentioned TLS). There are in fact not that many people in the world who can do a proper cryptographic assessment of a messenger at this point (not because it's prohibitively difficult; it should be well within the reach of everyone with a graduate degree in cryptography who enjoys coding --- rather, just because it's a specialized skill set that not many people have an opportunity to get good at). Like I said: I wouldn't say I'm qualified to perform such an assessment (LVH, different story). And all that "just" gets you the cryptography! If your messenger gets popular, the framework it's built on becomes one of the most important targets on the Internet!
So I get itchy when people say things that amount to "I've audited things, I have an authoritative opinion about this stuff". Probably not? Maybe, but, like, if we were going to place a bet...?
You can turn that logic around on me. But I'm not the one making an extraordinary claim. You are. So: what really bugs me is the idea that this is a problem domain you can reduce to a Wikipedia-style comparison chart. That kind of chart will get people hurt.
You did ask me if I audited things before and I answered honestly. I frequently report vulnerabilities in a range of open and closed systems and read a lot about those others find. I also don't consider myself an expert and generally distrust people that claim they are in this space because it is, as you say, really hard.
I initially started a spreadsheet to document the very high level objective traits of messengers I find useful and others found it interesting/useful and helped extend it.
It is far too much cognitive overhead to do deep dive evaluation of 75+ messengers, but what this list allows one to do is quickly eliminate services from consideration that lack features vital to address their use cases, platform targets, or threat profile: for instance the ability to self host, or end to end encryption.
From there once you have your short list, you can then make more effective use of time reading code, doing audits, reviewing audits of others.
If a list like this simply makes someone aware of new up and coming projects in the space, or old ones that have been quietly evolving in capability... then I feel justified in having shared it instead of having kept it as a private document of my personal notes.
I for one learned about a number of new projects when putting this sheet together, and interesting new approaches on solving these problems.
The thing that is pretty subjective about the list is how I sorted items based on my own research and threat profile.
I won't sit here and say Matrix or Briar or any other items near the top are perfect, or lack security flaws. I personally place my bets on Matrix at this point based on my deep dive evaluation of it and similarly featured alternatives, but that is subject to change! Others are free to share your view that a well funded but closed source messenger like WhatsApp is the general best bet.
To sober up on this topic a bit: Matrix has had glaring security flaws in the past, but so have the options you personally recommend, like Signal and WhatsApp.
At the end of the day all one can do is collect data, look over all the options, deep dive into the relative merits and claims of each to the extent one can, look at the research provided by others, and make a judgement call.
This list should be considered a starting point for research and a way to discover lesser known projects as it was for me.
It should -not- be seen as an end-all "use this, it is most secure" recommendation by any means. Anyone that looks at any one high level list of binary data points and makes an exclusive choice on that alone is doomed to shoot themselves in the foot, with or without my help.
Hopefully that addresses your concerns about my intent here.
You published a spreadsheet listing messaging apps 'ordered by security' in which Signal, Whatsapp and iMessage are shown as way less secure than IRC. It also still says, and you've repeatedly argued here, that things like whether, say, WhatsApp uses E2E encryption is essentially unknowable.
IRC is lacking in features, but set up correctly with modern OTR, I stand by it having easy-to-reason-about security advantages WhatsApp and iMessage do not. (Usability is another story)
Notably, if there was a blackbox audit of WhatsApp or iMessage six months ago, you have no path to easily check whether a blatant backdoor was introduced in the build you installed last night. You also can't know if there were obvious flaws in the code the whole time that would be very hard to spot in blackbox testing if you didn't know what to look for. Maybe the app build from last night leaks its key intentionally via very subtle steganography in metadata?
Compare to a binary I installed via F-Droid that I can confirm was built reproducibly from a given git head I can go see the code review for.
To use a simple analogy: I can see the exact ingredients of what went into -my- meal instead of what went into the meal of the food inspector.
This allows much deeper release accountability and -is- a major security feature iMessage and Whatsapp lack worth flagging.
Verifying security with source code is hard enough. Without source code it is substantially harder and I for one have no interest in using or recommending security tools that fail to be 100% transparent about how they work.
Without source code all we have are claims that can never be thoroughly verified.
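The reproducible-build check referenced above is conceptually simple. A hedged sketch in Go, with hypothetical file names: rebuild the app from the tagged source, then compare digests with the artifact you actually installed; if the build is reproducible, the two must match byte for byte.

    // Sketch of reproducible-build verification; file names are hypothetical.
    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // fileSHA256 returns the hex SHA-256 digest of a file's contents.
    func fileSHA256(path string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return "", err
        }
        return hex.EncodeToString(h.Sum(nil)), nil
    }

    func main() {
        // The APK pulled from F-Droid vs. one built locally from the git tag.
        official, err := fileSHA256("messenger-official.apk")
        if err != nil {
            panic(err)
        }
        local, err := fileSHA256("messenger-local-build.apk")
        if err != nil {
            panic(err)
        }
        if official == local {
            fmt.Println("match: installed binary corresponds to the reviewed source")
        } else {
            fmt.Println("MISMATCH: the distributed binary is not what the source produces")
        }
    }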
You keep making this argument and ignoring experts who tell you it is bogus. The whole thread is there for everyone to read; you can pretend you haven't read the rebuttals, but you can't pretend for everyone else. The idea that "without source code we can't thoroughly verify things" is false, and at odds with basically all of modern software security.
Oh, I read them. I simply firmly disagree with them, and my personal experience of 15+ years finding and fixing security issues flies in the face of your statements. We clearly test software for bugs very differently.
I made specific arguments and use cases to justify my position and you have simply told me I am wrong without directly addressing them.
Once again, I find the term "expert" overrated. I for one admit I am not an expert on security, a field that is already hard enough for auditors like myself even before source code is withheld.
I have also worked with a half dozen or so security auditing firms all of which stated source code access would make their job much easier.
It didn't take hours of blackbox testing for me to find CVE-2018-9057. It took 20 minutes of reading code on GitHub, half asleep at 4am, because I was curious about an unrelated bug.
I remain convinced blackbox testing would very probably never have found that vuln, and even if it did, not in as short a time period. I trust Terraform over closed alternatives because it was patched within a couple hours of me mentioning it on IRC by a peer who submitted the bug report and patch in one shot. I could verify the source code fix easily and compile the new binary myself to take advantage of the patch before Hashicorp even merged it.
I can also easily verify there are no regressions in future releases.
Tell me how you go about solving this or other subtle cases like stego exfiltration more easily -without- source code. Also how you or your team could have patched the issue yourselves without source code.
If I really solved this the hard way then I will by all means move my security engineering career to focus more on blackbox testing as you seem to be advocating for.
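To make the "stego exfiltration" scenario above concrete, here is a purely illustrative Go sketch (all names and keys are made up) of how subtle such a leak can be: a protocol field that is supposed to be a random IV is instead the user's session key encrypted under a key baked into the binary. On the wire it remains indistinguishable from random, so no amount of packet inspection flags it.

    // Illustrative covert-channel sketch; every value here is hypothetical.
    package main

    import (
        "crypto/aes"
        "encoding/hex"
        "fmt"
    )

    func main() {
        attackerKey := []byte("attacker-master!") // 16 bytes, baked into binary
        userSecret := []byte("user session key") // 16 bytes to exfiltrate

        blk, err := aes.NewCipher(attackerKey)
        if err != nil {
            panic(err)
        }

        // The protocol field that should be a fresh random IV. A block cipher
        // is a pseudorandom permutation, so this still "looks" random.
        iv := make([]byte, aes.BlockSize)
        blk.Encrypt(iv, userSecret)
        fmt.Println("wire 'IV':", hex.EncodeToString(iv))

        // Only someone holding attackerKey can invert it.
        recovered := make([]byte, aes.BlockSize)
        blk.Decrypt(recovered, iv)
        fmt.Printf("recovered: %q\n", recovered)
    }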
It sounds like you're arguing that you'd require source code to see whether something was using math/rand vs. crypto/rand, something that is literally evident in the control flow graph of a program. I do not doubt that source code makes it easier to review code when you're half-asleep at 4 in the morning.
For your particular example: go download a copy of radare and pull up any Go build artifact and see for yourself how hard it is to understand what's going on.
I don't know about your security engineering career, but if you intend to get serious about vulnerability research, yes, you should learn more about how researchers test shrink-wrap software. I spent years at a single stodgy midwest financial client doing IDA assessments to find vulnerabilities in everything from Windows management tools to storage appliances. It wasn't my IDA license; I was augmenting their in-house security team, which had 4 other licenses. This was in 2005.
The primary vulnerability here was that the author used the current time as the sole random seed for generating passwords. From there the output of the PRNG -looks- random, but is in fact derived from a predictable value and thus not really random.
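A minimal sketch in Go of that flaw class and the standard fix, assuming nothing about Terraform's actual code beyond what is described above: the weak version's only entropy is the clock, so anyone who can guess the generation time to within a few seconds can regenerate the password; the fix is to draw bytes from the platform CSPRNG.

    // Sketch of the flaw class described above, plus the standard fix.
    package main

    import (
        "crypto/rand"
        "fmt"
        mrand "math/rand"
        "time"
    )

    const charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

    // Flawed: the only entropy is the timestamp. Guess the creation time to
    // within a few seconds and you can regenerate the exact password.
    func weakPassword(n int) string {
        r := mrand.New(mrand.NewSource(time.Now().UnixNano()))
        b := make([]byte, n)
        for i := range b {
            b[i] = charset[r.Intn(len(charset))]
        }
        return string(b)
    }

    // Fix: draw bytes from the platform CSPRNG instead.
    // (Modulo bias is ignored here for brevity.)
    func strongPassword(n int) (string, error) {
        b := make([]byte, n)
        if _, err := rand.Read(b); err != nil {
            return "", err
        }
        for i := range b {
            b[i] = charset[int(b[i])%len(charset)]
        }
        return string(b), nil
    }

    func main() {
        fmt.Println(weakPassword(16))
        p, _ := strongPassword(16)
        fmt.Println(p)
    }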
If you can understand a PRNG algorithm and how it was seeded without source code, using nothing but radare, faster than I could read the code... then you really do have some superhuman skill, and most of my arguments fall flat.
Subtle cryptography flaws like this could also be introduced intentionally by a bad actor, or under pressure from a state actor. They are very hard to see without source code in my experience.
You just kind of made my point for me: working out what is going on from the output of something like radare -is- often much harder than just looking at the source code.
Don't get me wrong, I have a deep respect for people that are very good at finding bugs this way. When you -don't- have source code then finding bugs via methods like this is the only thing you have on the table, and it is impressive.
What I am taking issue with is your claim, in effect, that some people like yourself are so good at blackbox testing that you could find all potential bugs faster with those tactics than by reading the relevant source code.
Consider though that not all researchers work this way. Many bugs have been found by myself and other researchers I know by simply reading source code, so your argument that a vendor releasing source code gives it no security advantage is just not true.
While I am no fan of Signal, the fact they make their source code public makes it much easier to audit and trust its e2e cryptography implementation than say Whatsapp. Even the two tools you favor are wildly unequal in transparency and auditability.
Perhaps the majority of my background working with FOSS software has made me undervalue blackbox testing and you have made a good argument for it. It would make me more well rounded and I intend to pursue it.
I think if there is anything you can take from my side of this discussion it is seeing the value of providing source code to the right eyeballs that know how to quickly spot certain classes of issues.
Source code in the hands of the right person surfaces some bugs faster than a binary reverse engineering environment ever could.
Oh for God's sake, dude. Write a Go program with a function that seeds math/rand from time.Time, compile it, and then load it into radare. "aa", then find the symbol for your function, then switch to disasm view and look at the function.
(The Terraform function you found this problem in is literally called "generatePassword", in case you planned on writing 6 paragraphs on how hard it is to find the right symbol to start analysis at.)
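For the curious, the whole exercise fits in a few lines. Something like this hypothetical toy, mirroring the earlier sketch:

    // A toy of the exercise described above: a distinctly named function that
    // seeds math/rand from the clock, compiled and inspected in radare2.
    package main

    import (
        mrand "math/rand"
        "time"
    )

    //go:noinline // keep generatePassword as its own symbol in the binary
    func generatePassword() string {
        r := mrand.New(mrand.NewSource(time.Now().UnixNano()))
        b := make([]byte, 12)
        for i := range b {
            b[i] = byte('a' + r.Intn(26))
        }
        return string(b)
    }

    func main() {
        println(generatePassword())
    }

Build and open it: "go build -o demo . && r2 demo", then "aa" to analyze, "afl ~generatePassword" to locate the symbol, and "pdf @ sym.main.generatePassword" to disassemble it. Exact symbol mangling varies by radare2 version, but even without source the calls into time.Now and the math/rand routines show up by name in the call graph, which is the point being made above.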
This is such a silly, marginal bug, it's bizarre to see you kibbitzing on the "right" way to fix it (the bug is that they're using math/rand to generate a password instead of crypto/rand, not that they're seeding math/rand from time). But whatever! Either way: it's clear as day from literally just the call graph of the program what is happening.
Your example of a bug that is hard to spot from disassembly is a terrible, silly example, that I've trivially refuted in just a couple of Unix shell commands.
I don't think you understand the argument you're trying to have. I get it: you have a hard time looking for bugs in software. That's fine: auditing messengers is supposed to be hard. You don't have to be up to the task; others are.
IRC is lacking in features, but set up correctly with modern OTR, I stand by it having easy-to-reason-about security advantages WhatsApp and iMessage do not. (Usability is another story)
The fact that you can replace OTR with OTP in this sort of statement and it becomes even truer should tell you what a lousy argument for the practical security of anything it is.
The apps listed in the spreadsheet are clearly sorted by number of features supported (with weights). I don't think OP is necessarily claiming IRC is more secure than Signal.
They are sorted largely by security, according to the author.
"It is currently roughly sorted by security. Usability is subjective and people can work down the list to the most secure tool they feel they can comfortably use."
You are correct. I generalized this as "roughly sorted by security". Some in the list, like Tox, have notable design flaws, but this list is binary and does not account for implementation quality.
This is mostly for discovery of options to consider diving into.
He's not correct (as he has since acknowledged!), and you did, and still do, suggest that placement on the chart implies greater or lesser security. You literally included instructions on how to read and use the chart that make that point. "Usability is subjective and people can work down the list to the most secure tool they feel they can comfortably use."
The care Signal puts in that actually makes it a better, more secure messenger is primarily about metadata management. Consider how long it took for them to implement profiles, how much time they took to explain how their contact discovery works, et cetera. These are not trivial matters: they got subpoenaed and had nothing to respond with, because they've tried extraordinarily hard not to.
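To see why private contact discovery is a hard problem and not a checkbox: the naive approach of uploading hashed phone numbers does not work, because the keyspace is tiny and enumerable. A hedged Go sketch with made-up numbers:

    // Why hashing phone numbers isn't private contact discovery: the space is
    // roughly 10^10 numbers, so hashes invert by brute force. The range here
    // is kept tiny (and the numbers fake) so the demo runs instantly.
    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
    )

    func main() {
        target := sha256.Sum256([]byte("+15555550123")) // "uploaded" hash

        // Enumerate candidates and compare digests.
        for n := int64(5555550000); n < 5555560000; n++ {
            candidate := fmt.Sprintf("+1%d", n)
            if sha256.Sum256([]byte(candidate)) == target {
                fmt.Println("recovered:", candidate,
                    "from hash", hex.EncodeToString(target[:8])+"...")
                return
            }
        }
    }

This is exactly the problem Signal's SGX-based contact discovery design tries to address, which is why it counts as nontrivial work.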
These are problems other tools have solved, without having to resort to a walled garden network or having a SPOF.
Sure, maybe Signal has put some useful -legal- protections in place for now for US citizens, but what happens when a state actor threatens to kill the family of a Signal employee if they don't ship a very subtle compromise in how their binaries source random numbers, or if they don't sell the metadata of who is talking to whom?
Signal is not anonymous so that metadata alone could have real value. It is at the end of the day using phone numbers as identifiers. Sure they do SGX remote attestation but that has been demonstrated broken multiple times and won't stand up to a motivated physical attacker. Even if it -is- solid now, I would not underestimate how far a state actor will go. (As demonstrated by NSA wiretaps on google datacenters). Can they compel the right Intel employee to CA certify a manipulated enclave? Can they just get handed the key?
Also why would people outside the US trust the legal protections afforded to a US company to protect US citizens?
Their refusal to federate their network just creates a Lavabit-sized target... and I have yet to hear any technical reasons for doing so, particularly when, again, other projects have demonstrated end to end encryption and decentralization are not as mutually exclusive as Moxie claims.
The idea that only Signal can do this right, and only if they keep it centralized on their servers, with them being the only people that can sign the binaries... is pure hubris imo.
There are a lot of alternatives we all should be carefully considering for the next -standard- for ubiquitous secure messaging.
You asked "what valuable thing has Signal done" and offered to track them, and I responded with two examples. "What if someone threatens a Signal employee" is a moved goalpost. Who else has solved private contact discovery? Conversely: who else has solved Mossad as a threat model? I have repeatedly pointed out the Signal subpoena elsewhere, which is responsive to a number of your comments.
If your threat model includes Mossad, you're going to get Mossaded upon. You say "thinking only Signal can do this is hubris", but in the same breath suggest we're just careful consideration of a standard away from being safe against Mossad threatening someone's kids. That's hubris.
I will look closer into private contact discovery across messengers, sorry for rushing over that point.
I did try to respond to the subpoena comment: it is only of limited value. A blackhat will probably have an easier time getting to servers than a lawyer with Signal's setup, and I do credit them with providing substantially better assurances than, say, WhatsApp... but still not good enough for my particular threat profile.
Personally I don't like using permanent non-anonymous identifiers like phone numbers, so I would not use the feature as they have implemented it, but that doesn't mean it does not have some value for some use cases worth exploring.
That is at least something that can be looked at objectively within the scope of the spreadsheet.
but what happens when a state actor threatens to kill the family [...]
No messenger system protects you against that. You seem to be going through the full sequence of well-known poor ways to evaluate the security of something like an instant messenger, starting with the feature matrix, going through 'it can't be secure if it's not open source/self-hosted/federated' and reaching the Mossad. Which is a worthwhile and educational exercise but it's still not a good way to evaluate the security of something like an instant messenger.
If the system doesn't have a central server then it would become more difficult. You cannot subpoena a Tox network.
Any messenger that requires and stores a phone number (read your real-world identity and physical location) is neither anonymous nor private.
Also, a centralized messenger with a single server means that all traffic between all the users around the world goes through one data-center in one country that can do what? Observe the traffic and detect who is using this messenger, at least their IP address and country.
Of course, for most users this doesn't change anything because they don't do anything illegal. But if they don't do anything illegal then they don't really need protection and can use VK or Telegram.
> Also, a centralized messenger with a single server means that all traffic between all the users around the world goes through one data-center in one country that can do what? Observe the traffic and detect who is using this messenger, at least their IP address and country.
I mean, yes, a single server would have that property. But what system are we discussing that has a single server in a single data center?
Furthermore: if you actually care about metadata hiding it's a lot more complicated than "we have more than one person operating the servers".
> Of course, for most users this doesn't change anything because they don't do anything illegal. But if they don't do anything illegal then they don't really need protection and can use VK or Telegram.
Privacy is not just for people who do illegal things.
So that whole "it's all flowing though a single DC" thing for Signal - that has a name in this sphere. It's called "Don't Stand Out".
When five people call the known mob boss you follow all of them. Maybe one is just a friend from high school. Another is the mob accountant, another an enforcer, you're getting leads. But what if it's five thousand people - now you can't follow them all, it's overwhelming.
Knowing five people in your country use Signal puts them all on the watchlist. If it's five million that's pointless. Without "Don't stand out" the encryption used just makes you a target.
Barry, who uses Barry's very own private self-hosted server for a popular federated system, Stands Out. Message from Japan to Barry's system? That's for Barry. How do we know? Well Barry's the only one on that server, easy. No cryptography can fix this.
If the system doesn't have a central server then it would become more difficult. You cannot subpoena a Tox network.
The threat that was brought up was an actor with state-level resources, coercive capability and lack of scruples - the specific example being the threat of murdering someone's family, not subpoenas.
Journalists covering sensitive topics in sensitive areas -must- care about these questions.
If you are using something anonymous, fully end to end encrypted with open source reproducible verified builds on decentralized servers, you can greatly limit the risk of having a central third party that can be compelled to act against your interests.
Maybe in the US we don't think we need those sorts of protections, but we should consider the worst cases when designing security systems, and ensure no one single compromised person has the ability to backdoor thousands or millions of people.
Not everyone cares about this sort of thing though, and there are 70+ other options listed with various tradeoffs.
None of this makes any sense. This reads like an argument from a parallel universe where the only sane option for end-users is Linux on the Desktop.
In fact: a huge portion of commercial vulnerability research and assessment work is done on binaries of closed-source products, and it has never been easier to do that kind of work (we're coming up on a decade into the era of IR lifters). Meanwhile, the types of vulnerabilities we're discussing here --- cryptographic exfiltration --- are difficult to identify in source, and source inspection has a terrible track record of finding them.
There's no expertise evident in the strident arguments you've made all over this thread (network security work is not the same as shrink-wrap software security work, which is in turn not the same as cryptographic security work --- these are all well-defined specialties) and it concerns me that people might be taking these arguments seriously. No. This isn't how things actually work; it's just how advocates on message boards want them to.
You can of course do network analysis and say "this looks TLS encrypted with X algo" then dive into those packets and then say "this looks end to end encrypted with X algo".
You can also look into a binary application and study jumps, where the private key gets loaded into memory etc etc.
I am with you on this, there is tons of low hanging fruit you can do on a binary application, and I do a lot of this sort of thing myself.
Making sure an application at a high level seems to do what it says on the tin is a great phase one audit of something as a baseline, and should be done even on open source projects.
Trouble with closed source tools is you either have to do this over and over again on every new binary released, -or- you can look at the diffs from one release to the next and see if any of those critical codepaths have changed.
You can also -much- more easily evaluate bad entropy in key generation if you have the source code. People think they are clever all the time using silly approaches along the lines of random.seed(time.now()+"somestring"), and it is much less time consuming to spot those types of flaws if you have the code.
This is why I argue you need both whitebox and blackbox testing when it comes to mission critical security tools, and a clear public auditable log of changes made to a codebase so review can continually happen, instead of only at specific checkpoints.
Again, some people may not care about that level of rigor and just want to share memes with their friends, unlike journalists communicating in the field. I tried to be pretty comprehensive with the list, and welcome recommendations for new objective criteria to include.
It is currently roughly sorted by security. Usability is subjective and people can work down the list to the most secure tool they feel they can comfortably use.
No, I don't think you're following me. You don't tcpdump and look at the packets (that would be silly; even horribly encrypted messages will look fine in a pcap). You don't even "study jumps, where private keys get loaded into memory". You take the binary and lift it to an IR and audit that.
People who do a lot of binary reversing vuln research were doing a version of this long before we had modern tooling; you don't look at each load and store, but rather the control flow graph. People were auditing C programs, by manual inspection, in IDA Pro before there even was a market for secure messaging applications.
And that's just manual static analysis! Professional software assessors do a lot more than that; for instance, Trail has been talking about and even open-sourcing symbolic and concolic evaluation tools for years, and, obviously, people have been iterating and refining fuzzers and fault injectors for at least a decade before that.
This isn't "low hanging fruit" analysis where we find out, like, the messenger application is logging secret keys to a globally accessible log file, or storing task-switcher screenshots with message content. This is verification of program logic --- and, I'd argue even more importantly, of cryptographic protocol security.
"Bad entropy in key generation" is a funny example. Straightforward binary analysis will tell you what the "entropy source" is. If it's not the platform secure random number generator, you already have a vulnerability.
The problem with your chart is that there is no real rigor to it. It's just a list of bullets. It's obviously catnip to nerds. We love things like this: break a bunch of things into a grid, with columns for all sorts of make-me-look-smart check-marks! But, for instance, there's no way to tell in this chart whether a messenger "audit" was done by JP Aumasson for Kudelski, or some random Python programmer who tcpdumps the application and looks for local-XSS.
You can't even neatly break these audits down by "did they do crypto or not". The Electron messengers got bit a few months ago because they'd been audited by non-Electron specialists --- that's right, your app needed to be audited by someone who actually understood Electron internals, or it almost might as well not have been audited.
Given that: what's the point of this spreadsheet? It makes superficial details look important and super-important details look like they don't exist. You didn't even capture forward secrecy.
People who pay a lot of attention to cryptographic software security took a hard bite on this problem 5 years ago, with the EFF Secure Messaging Scorecard. EFF almost did a second version of it (I saw a draft). They gave up, because their experts convinced them it was a bad idea. I see LVH here trying to tell you the same thing, in detail, and I see you shrugging him off and at one point even accusing him of commenting in bad faith. Why should people trust your analysis?
Obviously I was just attempting to make a simple, easy-to-reason-about example here, but yes, of course there are a wide array of more practical approaches and tooling today depending on the type of binary and platform we are talking about.
I also fully agree you need to get people that specialize in the gotchas of a given framework to be able to spot the vuln. In fact that kind of bolsters my point, that you need the -right- eyeballs on something to spot the vuln, and this is much easier if something is open source vs only shown to a few select people without the right background knowledge to see a flaw.
The point I was trying to make is that you need both continual whitebox and checkpointed blackbox testing and if a tool is not open source then half of that story is gone unless you are Google sized and really can pay dedicated red teams to continually do both for every release... and even they miss major things sometimes in chromium/android that third parties have to point out!
Being open source client and server does -not- imply something has a sane implementation (and to your point things that rank well on paper like Tox have notable crypto design flaws).
This is more so you can quickly evaluate a few tools that contain the highest level criteria you care about for your threat profile; then you can drill down into those options with less objective research into the team, funding, published research, personal code auditing, etc. to make a more informed choice.
For me as an AOSP user, something without F-Droid reproducible build support is a non starter. Anything closed source, where I can't personally audit the critical codepaths and can't know that a lot of the security researchers I communicate with can do the same... is also a non starter. Lastly I want strong knowledge I can control the metadata end to end when it counts, and being able to self-host or being decentralized entirely is also an important consideration.
Many people have reviewed this list and told me they learned about a lot of options they didn't even know existed, or didn't know were closed or open source. That is the point. Discoverability.
It would be unreasonable and even irresponsible for someone to fully choose a tool based -only- on this list imo. If that was not clear elsewhere then hopefully it is clear now.
Have you used any of these tools? Have you done a binary reversing engagement? Have you found a vulnerability in closed-source software? Can you tell us more about the tools you used to do this work? What kinds of vulnerabilities did you find? You have very strong opinions about what's possible with open- vs. closed- source applications. I think it's reasonable to ask where the frontiers of your experience are.
It's clear to me from all your comments that you are somebody who wants to sysadmin their phone. That's fine; phone sysadmin is a legitimate lifestyle choice. But it has never been clear to me what any of the phone sysadmins are talking about when they demand that the rest of us, including billions of users who will never interact with a shell prompt, should care about their proclivities. The argument always seems to devolve to "well, if we can't sysadmin this application onto our phones, you can't be sure they're secure". But that's, respectfully, total horseshit, and only a few of the many reasons that it's horseshit appear among the many informed criticisms of your spreadsheet on this thread.
You might want to put your warning about how unreasonable and irresponsible this tool is on the actual spreadsheet, instead of just advocating that it be enshrined in Wikipedia.
On the topic of phone sysadmin, I agree it's unreasonable to expect a large proportion of people to sysadmin their phones directly. But what if we advocated for a world where we tech-savvy folk volunteer to be sysadmins for our family and close friends? They would trust our motives more than they can trust the motives of platform and device vendors.
My point is not that this isn't of interest or important (especially to people who routinely handle sensitive information) but that your methodology is poor and it's poor in ways that have been reasonably comprehensively examined. As I said, it's still a worthwhile exercise and there's no law of physics that says you have to agree with existing consensus-y thought but your effort would be more serious if you're familiar with the arguments. You, on the other hand, just reinvented a threat model that has its very own funny paper.
> Not everyone cares about this sort of thing though, and there are 70+ other options listed with various tradeoffs.
This is precisely pvg's point. The problem with your methodology is a systemic one that emerges in every crowdsourced threat modeling exercise. You've enumerated every possible security attribute and security feature of every software in a specific category, then tossed them all into a matrix of boolean values. But that does not result in a threat model users can competently assess, for several reasons:
1. You're treating all features as equal - if not in intention, then at least in presentation. Even if you don't intend it, the sea of green at the top is a loud proclamation of safety; likewise the sea of red at the bottom is a siren of insecurity.
2. You're not allowing any nuance in assessing features or attributes. Boolean flags cannot capture all the nuance inherent in cryptographic security. Which specific party was responsible for an assessment? What are their credentials? What did they find?
3. You're including features which most users don't and shouldn't care about just because some minority might. Moreover you're not being opinionated enough, which is something that comes with expertise - for many of the "features" you listed, the minority that cares probably shouldn't if they only care because of a vague notion of security.
4. You're leaving out important features which should absolutely be considered for security. Where is forward secrecy? Where is authenticated encryption? Where is consideration of specific algorithms or primitives? Where is nonce misuse resistance?
5. Most importantly: you do not have any explicitly called out methodology that allows someone to audit what you've done. If you begin by pre-supposing that a given feature is worthy of inclusion because it's an important security metric, your conclusion is just going to end up magnifying that bias. Therefore it's paramount that you call out methodologies explicitly and early.
We see this time and time again in security. People try to first principles the security of an entire category of software by being exhaustive about every type of threat and security feature they can think of. But they invariably leave out important threats/features, underestimate the importance of some and overestimate the importance of others. Exercises which attempt to give users the world almost always end up "empowering" them to boil the ocean. People looking at this spreadsheet are approximately all unqualified to make an informed decision based on a critical assessment of all those features, which means they're likely to just go with the most green option (or worse, proselytize the most green option as the most secure one to their friends and coworkers).
> Also why would people outside the US trust the legal protections afforded to a US company to protect US citizens?
Oh, hello there. The short answer is, of course we don't.
It's a very obvious attack surface that Signal and Whatsapp could avoid if they wanted to. For Whatsapp, being tied to Facebook, I kind of get why they don't (it's mainly that I don't expect them to be better).
But for Signal there aren't any good reasons. I've read various threads (on github and HN?) with Moxie being asked questions about this and I've not heard reasons that satisfied me. On occasion he was even evasive. Now, when the reasons given aren't good enough to explain why to take such a major security affecting decision when there is an obvious better alternative, and Signal seems to be very meticulous about doing the right thing in almost every other area of the protocol and systems around it, then there MUST be another motivation behind the decision that is not stated openly. Maybe it's just something benign left unsaid, who knows?
But even then, the only reasons I can imagine for choosing to become this huge target are reasons that are good for Signal/Whispersystems but pure, unrewarded risk to its users.
Your points are valid but you didn't mention that OMEMO [1] implements Double Ratchet for XMPP. You can find a list of clients which support OMEMO on https://omemo.top
That's a fair point, but the fact that there's yet another protocol, not mentioned on the XMPP E2E wiki, kinda plays into the point itself: XMPP has E2E maybe with a bunch of random protocols and the stars (and the people you talk with) need to align _just right_ for all of it to work. I think it would be fair to say that everyone uses WhatsApp and I know what they get, and at this point only XMPP people use XMPP (explicitly) and maybe they get some kind of E2E but who knows which one and what properties that has.
Fair enough, but this is kind of inherent for anything based on open standards. Was your email encrypted? It depends on whether the sending and receiving mailserver support TLS. Is your website visit perfect-forward-secret? Depends on whether your browser and the webserver support modern cipher suites. Is your DNS request encrypted? Only if your OS and your DNS server support DNS-over-TLS or DoH.
These are valid challenges, but moving to proprietary and centralized solutions instead is throwing the baby out with the bathwater. Was your WhatsApp conversation encrypted? You honestly can't know, and even if it is right now, Facebook could disable WhatsApp's e2e encryption at any time without you even noticing.
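The email example is easy to check empirically. A small Go sketch, with a hypothetical MX host, showing that transport encryption on SMTP is opportunistic: it only happens if the receiving server advertises STARTTLS and the sender opts in.

    // Check whether a (hypothetical) mail server even offers STARTTLS.
    package main

    import (
        "fmt"
        "net/smtp"
    )

    func main() {
        // Port 25 speaks plaintext SMTP first; encryption is negotiated later.
        c, err := smtp.Dial("mx.example.org:25")
        if err != nil {
            panic(err)
        }
        defer c.Close()

        // If the server doesn't advertise STARTTLS (or the sender ignores
        // it), the message crosses this hop in the clear.
        if ok, _ := c.Extension("STARTTLS"); ok {
            fmt.Println("server offers STARTTLS; transport encryption possible")
        } else {
            fmt.Println("no STARTTLS: this hop would be plaintext")
        }
    }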
FWIW, OMEMO has been the (only) de facto encryption mechanism for modern XMPP clients in the last couple of years, and most clients that support it clearly distinguish encrypted and non-encrypted messages.
How do I know my XMPP client is actually doing what it says? Are you saying the provenance for my XMPP client is fundamentally better than that of the WhatsApp app?
Well, there are different ways. If you want to know what your client is sending and you use your own server, then your server might be able to show you what it receives. ejabberd for example, lets admins inspect the stored offline messages. That way it is very easy to see if a message is encrypted or not. But you could also run a MitM proxy.
Another option is trusting audits or the developers. Last but not least you can inspect the source code of open source apps. So I don't know how deep you want to go with this, but for XMPP there are plenty of options to make sure the client does what it advertises.
Btw. I do not think that OMEMO is fundamentally better than what WhatsApp does, as they are implementing the same protocol (Double Ratchet). The main differences are that one is an experimental public standard while the other is a proprietary protocol extension.
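For readers unfamiliar with the term, here is a heavily simplified Go sketch of the symmetric half of the Double Ratchet that both protocols build on (the two derivation constants follow the published Signal specification; everything else is illustrative and belongs to neither implementation). Each message key is derived from a chain key that is then advanced one way, so compromising today's state cannot recover yesterday's message keys.

    // Simplified symmetric-key ratchet step; illustrative only.
    package main

    import (
        "crypto/hmac"
        "crypto/sha256"
        "fmt"
    )

    // step consumes the current chain key and returns a one-off message key
    // plus the next chain key. The old chain key is then discarded.
    func step(chainKey []byte) (messageKey, nextChain []byte) {
        mk := hmac.New(sha256.New, chainKey)
        mk.Write([]byte{0x01}) // message key derivation constant
        ck := hmac.New(sha256.New, chainKey)
        ck.Write([]byte{0x02}) // chain key derivation constant
        return mk.Sum(nil), ck.Sum(nil)
    }

    func main() {
        // Placeholder root; in the real protocols this comes from an
        // authenticated key agreement (X3DH), not a constant.
        chain := []byte("root-key-from-key-agreement")
        for i := 0; i < 3; i++ {
            var mk []byte
            mk, chain = step(chain)
            fmt.Printf("message %d key: %x...\n", i, mk[:8])
        }
        // Old chain keys are gone; current state can't recompute old keys.
    }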
That's all fine but not responsive to my point. GP post said "but what if whatsapp silently hamstrings e2e overnight" -- my point is: what if my XMPP client/server does?
EDIT: I previously said "turns off E2E", which I didn't say in my original referred-to post, and that's more misleading than "hamstrings", which is how the actual attack works.
The difference of course being that WhatsApp is closed source, and they can push any kind of change without anyone noticing.
If the client is open source, you can verify exactly what it does. Compile the app yourself or download it from F-Droid and you can be sure that the binary you get matches those sources.
Sure you can argue this all the way down to "Trusting Trust", but that doesn't really make sense when comparing two apps/ecosystems that operate in the same real world's constraints.
As I've mentioned elsewhere: you do not need the source code to verify what something does, that's not generally how you'd audit this. Audits may be source-assisted, but you'd still bang at it from the actual binary. If you're more comfortable reading source and compiling from scratch then fine, do that: but we should not pretend that Conversations on the Play Store is generally more trustworthy than anything else because the source code is publicly available.
The random update bit is real! But also real for Conversations or whatever, and more real for small developers less likely to have their opsec in check. For the vast vast majority of people in this fashion WhatsApp is identical to Conversations and Signal.
I didn't say that Conversations from the Play Store is significantly more trustworthy in this regard than WhatsApp from the Play Store. I said that an app - such as Conversations - that you can build from source or download from F-Droid is more trustworthy than the Play Store version.
WhatsApp is a proprietary app and as such it's only available on the Play Store. Conversations is open source so you can download it from the Play Store, or from F-Droid, or compile it from source. So if you care, you can be significantly more sure that your version of Conversations "does what it says" than you can be of WhatsApp.
Neither server nor peer will notice if you perform the serious attack: exfiltrate the plaintexts or key material and keep the OMEMO/Signal dance around as kabuki theater :-)
For paranoid users there is also an option of running their own servers and clients (both Prosody and Conversations are open source). XMPP at least provides that option and if you choose it you can still communicate with the rest of the ecosystem.
No such luck for Whatsapp and Signal. And although they may be fair now I think putting all eggs in one company's basket is a bad idea in general (see e.g.: Google).
Even though I'm not a "non-technical" user, I don't review crypto of my XMPP client. And that's fine, I know a few people that did and I trust them enough. This way I don't have to trust an entity that may benefit from being able to access my messages.
Also, this is a fallacy. Just because "typical user" won't care or won't be able to do something doesn't mean we shouldn't strive to build and popularize platforms that do right things. Of course it also has to be good at what users actually do care for to have any chance of taking off, but that's not the point.
Non-technical users will never modify their software. The act of doing so would recategorize them as "technical". I don't see any fallacy here?
The most likely interpretation of your comment is, you're assuming an argument that I am not making.
> Just because "typical user" won't care or won't be able to do something doesn't mean we shouldn't strive to build and popularize platforms that do right things.
That's a disagreement with an argument I did not make.
> Of course it also has to be good at what users actually do care for to have any chance of taking off, but that's not the point.
What is the point, even?
I just asked how something being "open source" (to borrow the scare quotes) is more desirable to users who do not read/write software code? What is their incentive supposed to be to prioritize open source over proprietary software?
That's not an argument. It's certainly not a fallacious one.
> Even though I'm not a "non-technical" user, I don't review crypto of my XMPP client.
Little bit of background: I'm the sort of person who would review the crypto of an XMPP client.
> And that's fine, I know a few people that did and I trust them enough. This way I don't have to trust an entity that may benefit from being able to access my messages.
Cool, let's ask a trustworthy expert then. I can think of no finer expert to chime in on this discussion than JP Aumasson, one of the co-authors of SipHash and BLAKE2, who wrote the book Serious Cryptography and has conducted many cryptography audits in his career.
These days in practice there are just two E2E protocols in XMPP: OMEMO and OpenPGP, with OMEMO being the default in clients that opt into E2E-by-default and PGP being used by those who know why and when they're using it. Additionally, some clients still implement OTR, and that's pretty much it.
> I think it would be fair to say that everyone uses WhatsApp and I know what they get, and at this point only XMPP people use XMPP (explicitly) and maybe they get some kind of E2E but who knows which one and what properties that has.
That sentence reveals that you only know WhatsApp.
Imagine this argument:
> I think it would be fair to say that everyone uses Google Chrome and I know what they get, [the other people] maybe they get some kind of security but who knows which one and what properties that has.
The nice thing about wikis is that they're editable :) thanks for pointing out that page though, I've updated it a tiny bit (and will do more if I can figure out MediaWiki's table syntax).
Requiring a phone number means you have to disclose your identity (in many countries, for example in Russia) and your physical location (everywhere). This is the opposite of privacy and anonymity.
Imagine, one of your contacts is captured; attackers get his contact list, which includes you; then they get your phone number from Signal; then they get your location and put you on all kinds of blacklists, extremist lists, no-fly lists, watch lists and so on.
Signal is nothing better than Telegram. They should be on the same position.
That equivalence does not follow. The counterpoint is simple: can people figure out how to get burner numbers? Yes, they can: there are a myriad of services for doing so. And, importantly: journalists already know how.
In many countries it's not (legally) possible to buy a SIM card without providing your ID.
Sure, you go ahead and buy an illegal burner number, then download Signal/Whatsapp from the Play Store and reverse engineer that binary to see if it "does what it says". Other people might find it useful to look at this comparison to discover alternatives that better fit the characteristics they find important.
What? No, SMS is not a reauth factor in Signal, and the number doesn’t have to be physically proximate to you: you just have to receive a text message on it once.
When you compare Signal with Telegram on this axis you actually end up with Telegram being "safer". While Telegram needs a phone number for initial registration, you can just throw the SIM away after that, as the authorization of new devices occurs in-band. (How to get an anonymous SIM capable of receiving SMS is left as an exercise for the reader, but I believe that getting a reasonably anonymous burner SIM for this single use is possible and even easy anywhere in the world, with the extreme approach being "borrowing" one from some random IoT device.)
Typo correction, but I can no longer edit: WhatsApp uses the Signal protocol, just with fewer of the privacy tweaks in the implementation. The criteria don't seem to consider those. They're important, but the two should be equivalent.
You can't validate anything about an open source project either unless you have repeatable builds. Real audits can be source-assisted, but do not rely on anyone pinky-swearing that this is actually the checkout that eventually produced the apk.
I do agree reproducible builds are the only solution to have practical evidence a given binary came from given source code. F-Droid does this, which is why I personally don't consider any clients that are not willing to undergo that rigor.
"AOSP" support in this context implies F-Droid, and implies reproducible builds, but maybe I should break that out more clearly.
This is not responsive to the core of my argument that nobody who actually audits and analyzes this stuff will tell you that source availability is necessary or even useful for figuring out if a particular application does what it says it does, unless you’re building the entire thing yourself off a trusted buildchain.
Even if you have repeatable builds you still audit what the thing actually does.
Even with repeatable builds there are some ways to exploit the system, for example sending a targeted binary to an audience that does not check the binaries, or sending a malicious update that exports all your history when you won't notice it (at night?) and then shipping a clean update to cover it up.
Mozilla has done some research to close these issues [0] but until this is enforced on a system level reproducible builds won't solve the underlying problem.
Signal using phone numbers is almost as anti-anonymous as you can possibly get. A phone number leaks nearly everything about you to anyone who has access to the right data sources. Most people would be more anonymous providing a SSN than a phone number in practice.
You just said the same thing LVH did, just with more emphasis.
It's tough to reason about on a message board. Signal does some important privacy things better than anyone else does; for instance, Signal cares more about metadata than I think any mainstream messenger does.
On the other hand, Signal made a conscious, deliberate choice to ensure that it works for ordinary users. It is not a goal of Signal's to mollify the tiny fraction of people in the world who have strong opinions about Tor; it is much more important to them that, say, any immigration lawyer in the country could readily pick it up and start using it.
The phone number ID thing creates an instantly bootstrapped social network. It solves a problem Signal actually cares about, while making a problem nerds care about a little more annoying. That is a reasonable reason not to use Signal (if you're one of those nerds), but, like DeVault's inane F-Droid conspiracy theory, not a reasonable reason to warn people off of Signal.
Signal, WhatsApp, Wire: any one of those is going to be radically better than the alternatives nonspecialists are likely to use (Fb Messenger, email). But my confidence drops rapidly when you go to anything else on this list.
You are way out of touch here. Disclosing a phone number is absolutely not a nerd problem and very much a normal people problem. Particularly women or vulnerable people.
I don’t even need to warn people about it, they already don’t want to share their phone number with strangers. That’s in spite of them not even knowing a fraction of what a dangerous person can do with your phone number.
Journalists and nerds know how to get burner numbers. They have less to worry about. I know people who have been SIM-jacked to bypass their 2FA and steal their savings, which they did not get back. I know people who have been stalked using cell tower geolocation. I know (of) people who have hacked LexisNexis accounts and can get your whole life with a phone number. Hell, I know which forums I could spend $20 on to get that info.
Aside from the really dangerous stuff, many people don’t want harassing texts or calls from creepy nerds, so they have reservations about sharing their phone number (and rightly so). The set of people I want to casually message is far larger than the set of people who I want to have my number, and I’m not even particularly worried about these things.
The goal is to make messaging secure for everybody who is messaging now. The most popular messaging application in the world already uses phone numbers as identifiers. I understand what a phone number is and why people find them sensitive, but I'm not the person who's out of touch in this debate.
That's always a problem with comparison charts when used to survey the field. The flip side is that if they compare a bunch of features you don't care about at all, there's a bunch of red on some chart that makes no difference to you in real life; now you either need to know what each esoteric feature means so you can ignore it, or just accept that the one with more red is probably worse and avoid it, even if overall it's a better fit for your needs. The extreme ends of this are simple charts where someone just tells you "good" or "bad", and pointlessly complex ones where someone adds bullshit fields like "experienced developers" or something like it.
I'm not sure what the solution is, besides much more interactive and thorough presentation of features in a way that allows classification of how advanced they are or likely you are to need them, but that's a lot of work. Until then, a comparison like this will always suffer from rarely matching exactly what the reader is looking for. They do work well as quick references though.
I'm concerned that they don't work well at all: a number of highly important properties are missing and a number of the criteria have been challenged; see rest of thread.
I am aware that OTR is being rolled back. My point is that extensibility is at odds with practical security and privacy for the vast majority of users. The people who need it the most do not understand the difference between OMEMO and OTR, but I can tell them to go install WhatsApp or Signal, give them a rough idea of why you want one or the other, and everything will be copacetic.
This isn't actually a thing anyone uses; I'm not aware of any non-commercial XMPP clients implementing it. It's just an attempt the IETF had a while back at standardizing E2E. It didn't really work, OMEMO has come much further.
And you don't have to explain the difference between OTR and OMEMO to anyone, just tell them to "Go use Conversations, it's on the Google Play store" and they'll be using OMEMO to speak with you.
> The people who need it the most do not understand the difference between OMEMO and OTR,
For most of them there won't be any difference. Modern clients do not implement OTR so people will not even encounter the term and OMEMO is enabled by default so users don't need to bother with it.
Federated protocols can be made secure, if there are popular, secure by default players on the market. Look at what happened with HTTP2 and TLS 1.3: browsers basically used their powerful position to upgrade the security for the entire federated ecosystem. There are also other minor factors, such as tooling (SSL Labs) that incentivizes people to maintain their servers. And most certainly users of HTTPS don't need to understand how e.g. Certificate Transparency works, that's handled internally by their clients (browsers).
Of course there will always be a niche of low security clients, but who cares that TLS 1.2 doesn't work on some old Java?
HTTP/2 is federated in the sense that the network is composed of heterogeneous nodes operated by different parties, and these nodes communicate with each other (applications running on servers frequently act as clients accessing other HTTP servers).
If calling HTTP/2 federated bothers you I can rephrase my argument:
> Federated protocols can be made secure, if there are popular, secure by default players on the market. Look at what happened with the other IETF-standard protocol: HTTP2: browsers basically used their powerful position to upgrade the security for the entire ecosystem. There are also other minor factors, such as tooling (SSL Labs) that incentivizes people to maintain their servers. And most certainly users of HTTPS don't need to understand how e.g. Certificate Transparency works, that's handled internally by their clients (browsers).
> Use WhatsApp to talk to normal people. Use Signal for nerds.
This has been my go-to advice for a while now too! The key driving point is that amazing crypto is 100% useless if the person you're talking to doesn't use it, or uses it incorrectly.
The only sticking point with the above advice is the nerds who think they understand crypto but don't and insist on you using some crazy app :/
Consider that security you can't possibly verify is just marketing.
Maybe try listening to those nerds and try out some open alternatives with security that is possible to verify.
You might be surprised to find both tools are pretty low on the list in respect to security and privacy compared to tools with smaller marketing budgets.
Could you elaborate how WhatsApp and Signal are not doing well from a security/privacy perspective? Can you name an alternative that does better under those criteria? I was under the impression that Signal precipitated most modern messaging protocol design and verification. But what do I know: I'm just a relapsed cryptographer :)
Signal's stubborn insistence upon using Google Cloud Messaging (which is mostly notable due to their attempts to shut down third-party clients that remove this requirement), combined with its reliance upon phone numbers for identity, is itself a serious problem. But when you combine this with the fact that their servers know "this phone number sent a message to this phone number at this time" (information that is even stored, at least temporarily, on their servers in order to implement rate limiting), we really should not give Signal as much default credit as it gets :/.
> "this phone number sent a message to this phone number at this time" (information that is even stored, at least temporarily, on their servers in order to implement rate limiting)
This claim appears to be unsupported by both Signal's privacy policy and public evidence. Unless I misunderstand, they've claimed to use IP addresses for rate limiting. Messages necessarily contain only the recipient's identifier (for delayed delivery), but that certainly does not imply they have a store of (src_phone, dst_phone, hires_timestamp) triples. When subpoenaed for user data[0], they claimed to have no responsive records of IP data, let alone src, dst _and_ hi-res timestamps altogether. Are you saying that has changed, they're lying in their response to the subpoena, they were lying in their privacy policy, or something else?
The issue of long-term identifiers for offline delivery is well-understood (e.g. Rottermanner05) but also not actually a Signal problem. In that light: what do you propose we do instead? (You can probably see the response coming already: let's just say metadata protection is, ahem, complicated.)
You do realize that their code is open source, right? Here's the line of code that does exactly what I "claim": their message rate limiter implementation uses string keys that are constructed using the source and destination phone numbers.
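To illustrate the shape of that claim only — this is a hypothetical Python sketch, not Signal's actual implementation; the key format and parameters are invented — a leaky-bucket limiter keyed on the sender/receiver pair necessarily keeps (src, dst) pairs in its store for as long as a bucket entry lives:

    import time
    import redis

    r = redis.Redis(decode_responses=True)

    def allow_message(src_phone: str, dst_phone: str,
                      capacity: float = 60.0, leak_per_sec: float = 1.0) -> bool:
        # Hypothetical key format: limiter state is addressed by BOTH
        # phone numbers, so the store holds (src, dst) pairs by construction.
        key = f"leaky_bucket::{src_phone}::{dst_phone}"
        now = time.time()
        state = r.hgetall(key)
        level = float(state.get("level", 0.0))
        last = float(state.get("last", now))
        # Drain the bucket for the elapsed time, then try to add one message.
        level = max(0.0, level - (now - last) * leak_per_sec)
        if level + 1 > capacity:
            return False
        r.hset(key, mapping={"level": level + 1, "last": now})
        r.expire(key, int(capacity / leak_per_sec))  # short-lived, but it exists
        return True

The point isn't this exact shape; it's that any per-pair limiter has to hold both identifiers together, if only briefly.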
Frankly, the fact that people like you believe so strongly that Signal doesn't do this should be damning for Signal, as it goes to show just how deceptive they are being about this issue: the reality is that Signal is quite careless with metadata :/.
(As for what you do instead, there are tons of trivial ways of making a secure messaging system that are better about metadata than Signal, and even ways that allow you to implement various forms of rate limiting. Signal is just being lazy here.)
I do realize that Signal is open source, yes (and I'm guessing from your phrasing you know I know that), but I don't feel a moral imperative to source dive every time someone says something weird. Putting that burden on the claimant seems pretty reasonable. This set of threads alone was exhausting enough without having links to GitHub with every message :-)
In particular: I interpreted your "stored (at least temporarily)" claim as in like, a logfile that's rotated daily or something. I think we can both agree that would be much worse than a 60 second leaky bucket and hence I interpreted that as more outlandish than what you intended.
But regardless, you're right: that's not how it works today, I read that code an extremely long time ago (when it was really just sender ID limited), and having both sender and receiver in plaintext is clearly worse than having just one, and having a leaky bucket with a timer is clearly a much higher resolution timestamp than the day previously claimed. I'm guessing the distinction is between what's stored and what's not? But I'm definitely uncomfortable and will make a note to follow up. In particular, now I would like to know under what circumstances that Redis cluster will attempt to persist. I think the answer is "never--it only has caches and the directory", but you've definitely shaken my confidence in that answer :)
It's always been possible to get a non-GCM version of Signal if you really wanted. And there are plenty of reasons to shut down third party clients that have nothing to do with the Signal team wanting to keep backdoors open.
Given the amount of noise that politicians and law enforcement are making about WhatsApp - there's very good circumstantial evidence that their EndToEnd encryption works.
I am a nerd - I went through a phase of trying to get people to use PGP and then XMPP. Both are technical masterpieces but a complete disaster for actual users.
Here's the crux: I'd rather have all my conversations encrypted than just the ones I have with other nerds. In this respect WhatsApp has been the best thing that's happened to messaging since the invention of the internet.
As there's pretty obvious bias showing in the values, some methodology would be good to accompany this sheet.
e.g.
- Telegram: E2E Private: TRUE
- WhatsApp: E2E Private: CLAIMED
These are either both "true", or both "claimed". Pick one.
In particular, what's the definition of the "Open Spec" column? Signal's GPL spec gets a FALSE here, so I'm presuming the definition is something along the lines of "spec produced by one of a group of arbitrarily approved bodies, of which Open Whisper is not a member".
Telegram’s state of source code availability (and whether prebuilt binaries match it) is a total mess. I think only the F-Droid build would qualify, but not the mainstream sources.
That argument goes for every floss app. If you download their binaries from a play store (that is not f-droid), there's no guarantee that what you get is the same as what the source code would produce. f-droid is not a 100% guarantee either (e.g. not all apps support reproducible builds), but it's certainly better than the mainstream play stores.
That only works if you trust the store, and you somehow only know how to audit from source. Neither is true for professional audits: you'd generally start from the APK anyway, even if it's minified, and in many cases you'd actually get the source (which is an optimization only, e.g. to make bug descriptions better/faster).
Even with an open source edition it's hard to strongly link the source code to the resulting APK. How do you know that the published APK hasn't been built with extra "features"?
For each release, independent people would have to compile the same source code and check whether there are any differences from the published APK.
One way to make this easier would be to fix the build system so each build of the same source produces a bit-exact APK file (without timestamps). Then the independent parties just have to check that all of the bits are the same (with a hash function).
Ideally Google would also publish the hash output of each APK so that it can be checked against another distribution channel on installation.
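As a sketch of what that check looks like in practice, assuming bit-exact builds (filenames here are hypothetical):

    import hashlib

    def apk_digest(path: str) -> str:
        # SHA-256 of the APK, streamed so large files needn't fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # With reproducible builds, independent verification is one comparison:
    store_apk = apk_digest("app-from-play-store.apk")
    rebuilt_apk = apk_digest("app-built-from-source.apk")
    print("match" if store_apk == rebuilt_apk else "MISMATCH: investigate")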
Ah, thanks, I didn't see those comments. That makes some sense for the E2E actually. That's a very reasonable distinction.
Technically one could still argue the same for Telegram unless you're self-building given their source-release delays but that might be nitpick too far.
This isn't true. A trivial example of a significant difference in which they would still interoperate would be a binary distributed version which phones home the shared key.
There is no mention of Mumble (client) or Murmur (server). [1] From a privacy perspective, I find it superior to everything else. End-to-end voice encryption with PFS. As much or as little server logging as you wish. Super easy to set up, and it scales to large numbers of people. I have a few of them running on VMs with 1GB RAM. Only downside for me: it is not as happy-clicky (frictionless) as Discord, yet.
Authentication can be tied into 3rd party apps (LDAP, phpBB, etc) but I have not tested this yet. [2]
If you try it, use their latest snapshot for server and client. Incredible sound quality. Nice UI/UX experience. Decent support for game overlays. Very low CPU usage.
Well, that is a protocol comparison. A client comparison would be much closer to the real world user experience. Don't get me wrong, I am a huge fan (and daily user) of XMPP, but the best protocol will not be of any use if the clients are too complicated or buggy to use.
So yes, XMPP supports audio and video calls but finding two different clients which work on the first try together can be a challenge. Sometimes I wish there would be some compatibility XEP which defines a common set of supported XEPs including a test suite to run it against.
We have the XMPP Compliance Suites 2018[0] providing an overview of protocol-level specifications that a modern client or server should implement, and there was recently a nice article[1] for some example use cases.
What is still missing is everything above the wire protocol level. The XSF, being the XMPP Standards Foundation, is guarding the protocols, and things like UX and client interoperability are considered as off-scope. However, there are people interested in these topics as well looking for fresh collaborators.
That looks pretty cool. Maybe I should ask SamWhited why they didn't include OMEMO, Audio and Video Calls as features as he is writing in this thread too :D
Funny how that works isn't it. Let's start out with Electron so we can get our new app on all platforms quickly, we'll build a proper native UX when we can afford to >> company grows and grows >> yeah, so screw UX, where are our customers going to go instead? Hipchat? lol, let's buy them instead.
Indeed, and for "compatibility" it doesn't really say anything about the quality of the software for that system. Signal, for example, doesn't have a native iOS app and it shows.
It's funny and sad that XMPP hits almost all of the points, has been around since 1999 and yet every year someone reinvents the wheel and makes another messenger system. There are what, about 60+ by now.
Granted, XMPP is not a messenger, it's a protocol and a bunch of standards, but still, it's hard not to laugh.
When Jan and I evaluated XMPP in 2009, we found that it was not very mobile friendly. To provide a couple of examples -- (1) The login path required an inordinate number of round trips (I think it was 4+). This slowed down login quite considerably. (2) XMPP is byte verbose and expensive on mobile networks.
We came to the conclusion that XMPP was built for desktop computers connected to strong internet via LAN connections not for mobile networks. We went on to invent our own protocol which was byte efficient and minimized roundtrips.
Now take all of this with a grain of salt. It is now 2018 and things have changed considerably.
Yeah, I remember 10 years ago (or more), they were looking into ways to make it more mobile friendly, including Roster Versioning (so you wouldn't need the whole series of transactions to get your roster on initial connection), session keepalive (the basic protocol expects one long TCP session, which is of course impractical on mobile), and stream compression.
Sadly, those weren't really enough at the time, and the new protocol Babel happened.
> We went on to invent our own protocol which was byte efficient and minimized roundtrips.
Completely understandable. And well, looking back it was the right decision. You definitely have solid proof.
Yeah as much as XMPP looks good on paper I don't think it will ever take over the messaging world. Even if the devices and the network capacity could handle the larger messages I feel developers don't really want to deal with XML standards. There were a lot of cool things in the 90's but XML wasn't one of them.
XMPP is a protocol; it doesn't have a user experience. Sure, it may have been difficult to adapt for mobile, but that doesn't mean it inherently has a poor experience.
It's actually pretty easy to adapt to mobile. It may have been hard in 1998 when it was released, but these days with stream management (session resumption and TLS-like packet counting) and message archive management (history and catch-up) it's pretty great on mobile. The round trips to log in have also been significantly reduced (you'll incur a few more if you use SCRAM-based auth, but there's always a trade-off for security), though there are still a few for the most basic login. If you ignore TLS (which can now come first, so you don't have to do STARTTLS anymore) and do PLAIN auth (over TLS, so it's fine), I think you have 3 round trips to log in. That could be improved in the future with pipelining, which has some experimental specs out.
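A rough sketch of that count (stanzas heavily abbreviated and illustrative only; the exact number depends on SCRAM and pipelining):

    # Shape of a modern XMPP login over direct TLS, counting round trips
    # after the TLS handshake. Each tuple is (client sends, server replies).
    ROUND_TRIPS = [
        # 1. Stream open -> server advertises features (no STARTTLS trip needed)
        ("<stream:stream to='example.org' ...>", "<stream:features>...</stream:features>"),
        # 2. SASL PLAIN auth (SCRAM adds at least one extra challenge/response trip)
        ("<auth mechanism='PLAIN'>...</auth>", "<success/>"),
        # 3. Stream restart + resource binding
        ("<iq type='set'><bind .../></iq>", "<iq type='result'>...full JID...</iq>"),
    ]
    print(f"{len(ROUND_TRIPS)} round trips to a usable session")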
I think it's a valid distinction. You could make the case that a bad protocol makes implementing a solid UX harder, but you wouldn't say the protocol has "poor UX".
For me, the problem is how incredibly slow Riot is (and every other client I've tried has almost unusable bad UI, sometimes in combination with being slow).
IMO: Text chat with a few emojis and images here and there should not ever be among the things that slows your computer to a crawl.
EDIT: I'm speaking of the UI, not the network connection; the latter is sometimes slow too, but that's understandable
We're still aiming for a full redesign of Riot.im to be out before the end of the year; there should be things to look at by the end of the month, all crafted for non-techy friends, so watch this space!
And yes, the backend isn't helping either, although we've also made good progress on perf there in the last few months and are still rolling out more improvements (e.g. Py3 being deployed as we speak, reducing server RAM by a factor of 3). Switching away from matrix.org can help, and agreed that a directory of public servers à la Mastodon could be interesting (although we would need to find a non-scary way to do it; lots of non-tech people would run away from it: they just want one-click onboarding without having to understand what's happening behind the scenes).
We've also soft launched a paid hosted offering, for 50-100 people teams who could do with their own DNS and faster servers at http://modular.im
Matrix.org is so slow it's borderline unusable, that's right. However, while switching to another homeserver (and avoiding federating with big rooms like Matrix HQ) helps a lot, Riot isn't exactly a lightweight client as well.
Agreed, I set one up and tried to convince people to switch but the latency made it just unusable. It worked fine with a channel or two but if someone tried to join a federated channel it would bring the whole thing to its knees for hours at a time, knocking out the local channels with it.
If you want to experience full speed Riot, you can choose a different server. This basically comes down to client UX: if server choice were more discoverable or if it at least didn't always default to the same one, then everyone wouldn't end up on a single overloaded server.
I was primarily talking about UI lag, not network speed. Network speed was slow at times, but the worst above all was how laggy every keystroke is, every UI click, etc. and how much CPU it burns (heating up my laptop whenever it's open).
Native IRC clients for example have no such problem, and consume ~0% CPU at all times.
I won't argue any of your UI opinions other than to say that riot - which is only one of the many possible clients [1] over the matrix protocol - is still in early days, and is getting better with each version. That being said, as far as having to remember everyone's matrix id, I'm sure users had similar complaints back when email addresses were still novel. I'm sure conceptual address books will be a thing in future matrix clients - both riot as well as others. Failing that, you can always submit a feature request! [2]
Yes. I think it's brilliant but the UI/UX needs a big update. Also from a techie point of view it's cool but from a normal user, there are too many options in group chats. Esp when people change phones etc.
Not just the UI; it's still rough around the edges in some cases. I ran into a couple of bugs that I realize now I should have reported / researched further, but my use for it is minimalist.
Are there any good XMPP clients that provide a "modern" messenger experience? For example seamlessly switching between online/offline mode, built in audio and video calls, sharing photos/videos.
Yes, Conversations (Android) and ConverseJS (web) are on par with - or better than - commercial apps like WhatsApp. On iOS, ChatSecure is pretty close.
Didn't realise there were so many; I guess XMPP isn't dead after all. Also worth mentioning: Zom is a fork of ChatSecure (on iOS) and Conversations (on Android) with a more UX-focused interface. https://zom.im
Pidgin is barely maintained for the last couple of years and doesn't support most modern XMPP extensions. I'd definitely recommend Gajim on both Windows and Linux, which supports everything you need out of the box.
Hence the link for extras, but it is a valid issue.
Gajim is XMPP only, where Pidgin is multi-protocol, which is the main reason why I'm still using it.
A missing column among the Features is if a system allows automation (chatbots or other). Notable examples: Telegram and FB Messenger do, WhatsApp doesn't (there are workarounds but they're mostly against the Terms of Service.)
XMPP is a protocol meant for building chat services; some of these others are chat services themselves so eg. it doesn't make sense to say that XMPP is not e2e by default (of course it's not, it's a protocol which may or may not be used to build an e2e encrypted chat service). Maybe that should be changed to "Jabber" which is what a lot of people call the public, federated network of servers built on XMPP these days? (The term has all sorts of other historical baggage and some people use XMPP/Jabber as synonyms, but mostly I think people use Jabber to refer to the public network these days and XMPP to the protocol, rather like email and SMTP).
This is a pretty important category for me. Users are actually pretty happy to migrate to new chat systems (with clear benefits) if you can pull off a gradual migration with seamless chat linking.
I did a gradual migration of our group chat from $LEGACY_CHAT_SYSTEM to Telegram because at the time, all the options with better crypto stories fell down; WhatsApp is very anti-bot, Wire's API was unusable on free accounts, Signal's group chats work differently / would've needed O(n) phone numbers, Rocketchat's IRC integration was incompatible, Matrix couldn't be simplified to hide the federation for standalone, etc etc.
There still isn't a popular messaging and voice call platform that supports private end-to-end encryption by default. How terrible is this? I mean, it would be so trivial to establish a secure and private communications standard. Europe and North America have a population of almost a billion people combined. If 500 million of those live in first-world conditions and only 1% care about privacy, with $1/year worth of giving a fuck we could have a budget of $5M/year, or close to 50 top-notch developers to pull this off. Obviously a lot more could be spent, but even with this minuscule spending we could still have a viable, standardized alternative to Facebook and Google.
We literally had better privacy when we had analog phone lines that anyone could tap into. That's just terrible.
WhatsApp does, but the author of this sheet has chosen to list it as "claimed", despite other also-unverified clients like Telegram getting a "true" for their (non-default) E2E support.
FWIW I believe Riot/Matrix are planning E2E by default as soon as their implementation stabilises. Theirs is more complex/powerful than WhatsApp's though, since they have multi-device support (which WhatsApp lacks). They've avoided making it the default so far due to bad UX and the possibility of losing access to conversations across devices, but it's improving rapidly.
Yup planning to get it on in the coming months. As a data point e2e by default is on for the French government deployment and we haven't seen huge drama, so we're mostly waiting to get the UX out. We'll be sharing our work for insights this week.
Even before then, it's highly likely that they're gathering as much metadata as possible:
* who's chatting with whom,
* via what means (text, audio, video),
* conversation time and duration,
* location of participants,
* how much data transferred etc.
There's a lot they can gather and infer from all that, since WA users' phone numbers will be known to FB, so they can graph connections via others who run the FB app or previously shared their contact info, and also via those who are logged in on, or simply visiting, other sites they track via like/share buttons etc. While what you say is encrypted, the circumstances around that conversation can be used, at least to advertise to you.
There is no way that an open IM platform will be able to guarantee E2E by default on all clients, simply because someone somewhere will produce a client that doesn't do it, or doesn't do it properly. It is probably better to start with the E2E encryption system (in my example, OMEMO) and then see how far you can get with it.
What do you mean by “private” so as not to have WhatsApp and iMessage fit this description? Because as far as I understand, they do. Especially the telephone-lines bit: iMessage and WhatsApp offer more privacy than telephone lines did, already at the operator level, but definitely at the tapping level.
Someone will correct me if I'm wrong, but I believe Apple Messages are end-to-end encrypted by default. I'm not sure if FaceTime audio/video is encrypted.
With categories emphasizing security, open standards, and audited code, I'm surprised SpiderOak's Semaphor isn't on this list: https://spideroak.com/semaphor/
What's the difference behind TLS "true" or "claimed"? It's pretty straightforward to verify all outgoing requests the client is making without looking at source code.
Interesting comparison, Irvick. Enjoying the resulting conversation too. More detail on meanings and methodology would be helpful, as some other HN folks suggest.
Stealthy.im isn't mentioned (I develop that presently).
Stealthy makes use of decentralized identity in the blockchain and a decentralized storage system called GAIA. Regular chat is E2E encrypted using ECIES. Currently released for Android and iOS.
I think Tox should be higher than Jabber. With Tox there are no servers and the contact list is stored on your device. With Jabber it is stored on the server (and will be happily provided to the NSA when needed). So unless you deploy your own Jabber server, it provides less privacy.
Regarding Wire, isn't the contact list stored on Wire's servers? Then it is even less private than Jabber.
I came across Tungsten Messenger the other day and it's supporting even multiple personas (public, private) and runs traffic via Tor. Pretty neat UI and the team is throwing out new features regularly: http://tungstenapp.com/
Is there any messenger that does end-to-end encryption with reliable, in-order delivery with decentralized group chat and conversation history syncing? Been looking for years but so far the double-ratchet is inadequate.
Mumble / Murmur are close to this, but without chat history syncing. End-To-End PFS encryption though. Very decentralized, but you can tie it into ldap and other authentication systems. I linked in another part of this thread.
WhatsApp does, at least their docs say so. https://faq.whatsapp.com/en/android/28030015/ ("So as we’ve introduced more features – like video calling and Status – we’ve extended end-to-end encryption to these features as well.")
Yes, but like most of these, that's really to do with the fact that E2E is part of the WebRTC spec, so anything built on it is safe for audio & video... the main advantage of Signal here is not leaking all your metadata.
A flawed document like this does far more harm than it does good. I'd implore the author to take it down before they mislead people into drawing the wrong conclusions.
Since the author added a link to the Wire audit after I pointed out it existed, I think it means "third party has audited it and verified claims". On the other hand, they also still claim Telegram has been audited in this sense, which I have also pointed out and they have not resolved.
E.g. by comparing them in person? Ricochet can do that. The "Ricochet ID" is just a hash of the public key, so if you meet up in person and verify that you have added the correct ID for your counterparty, that sounds like you've verified the public keys, no?
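A sketch of the idea (the exact construction below is illustrative, not Ricochet's; but historically v2 onion addresses were similar — a truncated hash of the service's public key, base32-encoded):

    import base64
    import hashlib

    def service_id(public_key_bytes: bytes) -> str:
        # Truncated hash of the public key, rendered human-comparable.
        digest = hashlib.sha1(public_key_bytes).digest()[:10]  # 80 bits, like v2 onions
        return base64.b32encode(digest).decode().lower()

    # If both parties meet in person and each sees the expected ID for the
    # other, they have verified the public keys without extra ceremony.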
If you can communicate random text, you can layer E2E encryption over almost any communication medium. OTR is a great example of this. But it isn't a feature of the protocol.
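A minimal sketch of that layering with PyNaCl (no forward secrecy here, unlike OTR; it just shows that any channel carrying text suffices):

    from base64 import b64encode, b64decode
    from nacl.public import PrivateKey, Box

    alice, bob = PrivateKey.generate(), PrivateKey.generate()

    def to_text_channel(plaintext: str) -> str:
        # Authenticated public-key encryption, then base64 so the result
        # survives any medium that can carry ASCII (IRC, SMS, email...).
        box = Box(alice, bob.public_key)
        return b64encode(box.encrypt(plaintext.encode())).decode()

    def from_text_channel(payload: str) -> str:
        box = Box(bob, alice.public_key)
        return box.decrypt(b64decode(payload)).decode()

    assert from_text_channel(to_text_channel("hi")) == "hi"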
Many IRC servers listen on an SSL port, but this only encrypts data between client and server. The data is decrypted on the server and sent on to other clients (maybe encrypted, maybe not).
Some ircds support "encrypted only channels", where if you're not connected via SSL, you can't /join.
So many competing, incompatible protocols. So, so many.
Also, this is lacking a column for "stickers". Seriously, they are one of the things that has me and my crowd using Telegram, especially since it's really easy to make your own custom sets instead of relying on whatever they pay someone to draw for you.
I stopped using Signal after being bitten by the issue where a friend uninstalled Signal without going through a special process, and then I couldn't send messages to them without changing my message type from encrypted to unencrypted every time. See report/references here[0].
Dust is woefully underspecified and uses an ancient design with no forward secrecy and no (specified) message or sender authentication of any kind.
Blockchain, as usual, purports to solve a problem nobody had. Dust doesn't try to address the simplest, best-understood problems we have for reputation systems on chained blocks.
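For contrast, here's what (specified) message authentication buys you — a minimal AES-GCM sketch with the `cryptography` package, where any tampering is rejected rather than silently decrypted as it would be under unauthenticated CBC:

    import os
    from cryptography.exceptions import InvalidTag
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    aead = AESGCM(key)
    nonce = os.urandom(12)  # never reuse a nonce under the same key

    ct = aead.encrypt(nonce, b"attack at dawn", b"sender-id")  # AD binds sender identity
    tampered = ct[:-1] + bytes([ct[-1] ^ 1])  # flip one bit of the ciphertext
    try:
        aead.decrypt(nonce, tampered, b"sender-id")
    except InvalidTag:
        print("tampering detected")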