You can coherently argue that encryption doesn't matter, but you can't reasonably argue that Telegram is a serious encrypted messaging app (it's not an encrypted messaging app at all for group chats), which is the point of the article. The general attitude among practitioners in the field is: if you have to reason about how the operator will handle legal threats, you shouldn't bother reasoning about the messenger at all.
> you can install a reproducible build of Telegram and be sure it's end-to-end encrypting things.
This is incorrect. The construction for group chats in Telegram is not E2E at all, and the construction for DMs is considered dubious by many cryptographers.
It does not matter that you can reproduce a non-E2E-encrypted messaging scheme; you still have to trust servers you have no visibility into.
Trustworthy E2E is table stakes here for exactly that reason. Reproducible builds aren't, because we can already evaluate a bunch of different builds collected in the wild and detect implementation differences; that's the same thing we'd do if reproducible builds were in effect.
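To be concrete, the in-the-wild comparison is something like this (the directory name is made up; assume you've collected several binaries of the same advertised version from different sources):

    # Compare binaries of the same advertised app version collected
    # from different sources; any hash mismatch is worth investigating.
    import hashlib, sys
    from pathlib import Path

    def sha256(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    builds = {p: sha256(p) for p in Path("collected_builds").glob("*.apk")}
    for p, digest in builds.items():
        print(digest, p.name)
    if len(set(builds.values())) > 1:
        sys.exit("mismatch: collected builds of the same version differ")

Caveat: store-specific signing and packaging can produce legitimate differences, so a mismatch is a lead to dig into, not proof of a backdoor.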
There are lots of reasons splitting jurisdictions makes sense, but you wrote a whole bunch of words that fall back to "hope Telegram doesn't change their protections in the face of governmental violence".
The reproducible build of Telegram lets you evaluate the code doing end-to-end encryption. Once you satisfy yourself it's doing this kind of encryption without implementation-level backdoors, then you don't need to worry about servers reading it (except for #5 above).
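To be concrete about what that verification step looks like, roughly (the build command, image name, and paths here are placeholders, not Telegram's actual build instructions):

    # Rebuild from pinned source in a containerized environment, then
    # compare the result byte-for-byte with the binary you actually run.
    import hashlib, os, subprocess

    def sha256(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Placeholder build step: a real reproducible build pins the source
    # revision, toolchain, and timestamps inside the container image.
    subprocess.run(
        ["docker", "run", "--rm",
         "-v", f"{os.getcwd()}/out:/out",
         "example/pinned-build-env", "./build.sh"],
        check=True,
    )

    rebuilt = sha256("out/app-release.apk")
    shipped = sha256("downloaded/app-release.apk")
    assert rebuilt == shipped, "rebuilt binary differs from the distributed one"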
I didn't claim it encrypted "group chats". I said "things". If you want me to be specific, the "things" are individual 1-1 end-to-end encrypted chats.
Reproducible builds are not required to evaluate the encryption algorithm used in Telegram.
Software auditors use deployed binaries as a matter of course.
They'd do so even if reproducible builds were on offer, because the code and the binary still aren't guaranteed to match until you validate the reproduction yourself, and that validation can be more problematic than the normal case of auditing binaries.
It's interesting how, all these years later, cryptographers can still only be dubious: nobody has actually cracked the implementation (or if they have, they haven't publicized it for whatever reason).
> On the question of balancing privacy and security, there are in fact solutions, but you have to get away from the idea of a centralized police force / centralized government, and think in terms of a free market of agencies that can decrypt limited evidence only with a warrant and only if they provide a good reason. The warrant could be due to an AI at the edge flagging stuff, but due process must be followed and be transparent to all.
What does this mean? How can "we" move away from centralized states to "a free market of agencies"? How can there be a "market" of police forces, even in principle? Who are the customers in this imagined market? Who enforces the laws to keep it a free market?
At first glance, this sounds like libertarian fan fiction, to be honest, but I am curious.
Have you read the article I link to in that point? After you read it, you'll have a better idea, and then if you have a specific point, we can discuss.
I read it just now, and I saw nothing at all about a free market of LEAs, either police or intelligence agencies. It only talks about some silly idea of filming every private scene while relying on magic encryption with keys that are stored... somewhere?... and that are somehow only accessible when it would be nice to decrypt these most private moments. It's clearly not a serious idea, so it gives me a good idea of how wild the speculation about the broader trends gets.
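To steelman the "stored... somewhere" bit: the least magical version of that idea I know of is threshold key escrow, where no single party can decrypt alone. A toy sketch of Shamir secret sharing (toy parameters, illustrative only; the quorum setup is my assumption, not anything the article spells out):

    # Toy Shamir secret sharing: the key is only recoverable when at
    # least `threshold` escrow holders cooperate (e.g. under a warrant).
    import random

    PRIME = 2**127 - 1  # Mersenne prime; fine for a toy example

    def split(secret: int, shares: int, threshold: int):
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, shares + 1)]

    def recover(points):
        # Lagrange interpolation evaluated at x = 0
        secret = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (-xj) % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    key = 0xC0FFEE
    shards = split(key, shares=5, threshold=3)
    assert recover(shards[:3]) == key

Even granting the math, the hard part isn't the cryptography; it's that every escrow holder in the quorum has to resist exactly the kind of governmental pressure this thread is about.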
Since the parent post was flagged, I don't see any sense in continuing; no one will really see this conversation.
I do interviews with the top people. I build solutions. I give it away for free. I discuss the actual substance. And in the end it's just flagged before a serious conversation can be had. HN is not what it once was.
"You need to run your own platform people." What problem does this solve?
I'm someone who's been on the business end of a subpoena for a platform I ran, and narcing on my friends under threat of being held in contempt is perhaps the worst feeling I'm doomed to live with.
"XMPP is ..." not the solution I'd recommend, even with something like OMEMO. Is it on by default? Can you force it to be turned on? The answer to both of those is, as it turns out, "no," which makes it less than useful. (This is notwithstanding several other issues OMEMO has.)
Note in particular that the Ethernet connection to xmpp.ru/jabber.ru's server was physically intercepted by German law enforcement (or whatever-you-think-they're-actually-enforcing enforcement), allowing them to obtain fraudulent certificates from Let's Encrypt and snoop on all traffic. This was only noticed when whoever was intercepting forgot to renew the certificate. https://news.ycombinator.com/item?id=37961166
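One takeaway: monitor Certificate Transparency logs for your own domains, since the fraudulent cert sat in public CT logs the whole time. A rough sketch against crt.sh's JSON endpoint (the query format and field names are crt.sh's as I understand them; treat them as assumptions):

    # List certificates CT logs have seen for a domain; anything you
    # didn't request yourself deserves a closer look.
    import json, urllib.request

    domain = "jabber.ru"
    url = f"https://crt.sh/?q={domain}&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        certs = json.load(resp)

    for c in certs:
        # issuer_name / not_before / serial_number are crt.sh's field
        # names as of this writing.
        print(c["not_before"], c["issuer_name"], c["serial_number"])

Had jabber.ru's admins been running something like this on a cron job, the mis-issued cert would have shown up months earlier.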
This is like saying we shouldn't use TCP/IP because it's not encrypted. How it actually works is that encryption is enforced by the application - indeed the only place you can reasonably enforce it. See for example the gradual phasing out of HTTP in browsers by various means.
What this means in practice is that you shouldn't focus on whether XMPP (or Matrix, or whatever) protocols are encrypted, but whether the applications enforce it. Just as there are many web browsers to choose from, there are many messaging apps. Use (and recommend) apps that enforce encryption if that's what you want.
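Concretely, "enforce" just means the app refuses to fall back to plaintext. Something like this wrapper, where the client interface is hypothetical rather than any particular XMPP library's API:

    # Hypothetical client wrapper: never send a message the library
    # couldn't end-to-end encrypt, instead of silently falling back.
    class EncryptionRequired(Exception):
        pass

    class EnforcingClient:
        def __init__(self, client):
            self._client = client  # any client exposing the two calls below

        def send(self, recipient: str, text: str) -> None:
            if not self._client.can_encrypt_to(recipient):  # e.g. no OMEMO keys
                raise EncryptionRequired(f"no e2e keys for {recipient}")
            self._client.send_encrypted(recipient, text)

The point is that the failure mode is a loud error, not a plaintext message on the wire.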
I'm not sure I agree, particularly given that there's some incentive for us to get our relatives using these messenger protocols and clients. The Web made it work because everyone came together and reached consensus (well, modulo some details) that enforcing HTTPS is, ultimately, a good idea given the context.
So far, I'm not seeing that same consensus from the XSF and client vendors. If the capital investment can be made to encourage that same culture, the comparison can perhaps be a little closer.
The consensus comes from the people using the clients, not from the standards bodies. It was the same for HTTPS, where the users (in this case the server admins) decided it would be a good idea to use encryption.
There are even apps like Quicksy which have a more familiar onboarding experience, using the mobile phone number as the username, while still federating with other standards-compliant servers. There is little reason to use walled-garden apps like Signal these days.
As if it were that simple. Where are you going to host that self-hosted instance? What protections against law enforcement inspections do you have? What protections against curious/nefarious hackers? How are you going to convince every single person you interact with to use it?
Gung-ho evangelism rarely converts people the way a reasonable take on the subject does.
If you want self-hosting to happen, with things like Matrix and so on, the hard truth is that it can't merely be easy for someone who can program; it has to be trivial for someone who says "wow, can you hack into <x>?" when they see you use a terminal.
You're assuming end-to-end encryption doesn't exist, and that the only way to be safe is to have someone close to you self-hosting.
Self-hosting is terrible in that it gives Mike, the group's unbeknownst-to-everyone creepy tech guy, 100% control over his close ones' metadata: who talks to whom, when, etc. It's much better either to get rid of that with a Tor-only p2p architecture (you'll lose offline messaging) or to outsource hosting to some organization that has no interest in your metadata.
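To make that concrete: even with solid E2EE, the envelope a self-hosted server must see to route a message looks roughly like this (illustrative, not any real wire format):

    # What "Mike's" server necessarily learns per message, even when
    # the payload itself is end-to-end encrypted.
    envelope = {
        "from": "alice@family.example",       # who talks...
        "to": "bob@family.example",           # ...to whom
        "timestamp": "2024-08-25T23:14:07Z",  # and when, how often
        "size": 1432,                         # rough message length
        "payload": "<ciphertext: opaque to the server>",
    }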
The privacy concern Green raised was confidentiality of messages. Telegram provides none, and because of that Telegram should have moderated content for illegal material. They made a decision to become a social media platform like Facebook, but they also chose not to cooperate with the law. Durov was asked to stop digging his hole deeper back in 2013, and now he's reaping what he sowed.