Over the past decade I've had to deal with a lot of executives and security people who don't actually understand security all that well. Or at all. (Not that I'm a security expert, but that hardly makes it better when even I can see that something is nonsense.)
Right now I know of at least half a dozen products that are marketed as having E2E encryption but do not actually implement it (no, I'm not going to out them; see the second-to-last paragraph for when to be wary). In part that's because executives, marketers and salespeople don't know what it means. And in part because, when it is explained to them, they will insist on their own definition/interpretation and demand the product be marketed as E2E.
It is also important to note that quite often you are not dealing only with the company that makes a product, but the regulatory bodies that can pressure companies into complying with their wishes.
As for Zoom, I don't understand why people trust them or still use their product if they are at all concerned about security. It makes very little sense.
> It is also important to note that quite often you are not dealing only with the company that makes a product, but the regulatory bodies that can pressure companies into complying with their wishes.
While considering the regulatory requirements helps explain the desire to lie, it does not make the lie any more defensible. Even if a regulatory body is making impractical demands, I very much doubt they are demanding companies lie to their users and potential users. And even if they were, "just following orders, guv" is not an acceptable excuse.
The key facts: Zoom lied. They didn't have to. They could have accurately reported what encryption they use and what they were working towards if that was due to change.
Even if we accept that the initial claims were wrong due to executives misunderstanding what their own security/dev people had stated, that doesn't defend continuing to make the claim without seeking further clarity after questions were raised.
Look up "warrant canary". IANAL, but my understanding is that you can be secretly compelled not to speak, but you cannot legally be compelled to actively tell a lie -- the Wikipedia page agrees with my interpretation. So you can just publish "I am not subject to a gag order" every day, until you are subject to one.
"In part because executives, marketers and salespeople don't know what it means."
Being a technical founder, I found some non-technical founders use this to their advantage. They can lie to customers without guilt, or to investors with brimming confidence about their "MVP". They can use "keeping it simple" or ignorance as an excuse if they ever get caught. These kinds of lies are grey areas and exist everywhere.
I've worked with these types of people, and what I've noticed is that even after you plainly explain that what they're saying is false, they insist on pushing those statements, or on staying as close to those labels as they can. They may even be angry after you inform them, because they lose plausible deniability.
I've also been in situations where a requirement like E2E encryption is dictated by a marketing team and then expected to be delivered without adequate budget or time, essentially pressuring development teams, project/product managers, etc. to lie.
The conclusion I've come to in business is that ultimately, your product or service is going to be falsely advertised and oversold one way or another. It's a lot easier for some to lie, act deceitful, and/or feign ignorance than it is to actually deliver. Your competitors are doing it, if you don't, you lose.
The way I deal with this nonsense is to make it a point, at least once in a meeting or in a fairly traceable record like an email, that everyone knows what is and isn't true; after that it's up to them to decide who they want to lie to. I've been on the other side, being pressured to lie, and it's not fun, so I'll happily pass that responsibility along. I didn't pursue a career in computing to be a constant liar; I'll let the people who want to lie, lie.
In an unfortunately rare case of reason conquering madness, a VW exec (Oliver Schmidt) was extradited and convicted over the diesel emissions scandal, instead of the engineers taking the brunt of the punishment.
We expect name brand products to indemnify their vendors to an extent. Consumers don't want to chase down the guy who made the screw that failed and caused a bunch of excess deaths. You put the screw in the assembly, you took most of the profit margins. So you get the lawsuit.
If you want to go and sue your vendor to recover damages, that's between you and the vendor. But the class action goes to Acme Inc, not Acme Screws and Fasteners.
Similarly, I'm not getting a mansion. I can barely get you to buy the servers we need to make half of what you say not a blatant lie. I'm not the one who should be punished when they find out about it. I'm not the one lying to people's faces while I pocket their checks.
> In an unfortunately rare case of reason conquering madness, a VW exec (Oliver Schmidt) was extradited and convicted over the diesel emissions scandal, instead of the engineers taking the brunt of the punishment.
Side note but I think he was grabbed at the airport, not extradited from abroad.
I was trying to recall his name and did some googling. One of the first articles said that he had been approved for extradition. Sounds like they just got to him before the state department had to step in.
> I've also been in situations where an ultimatum like E2E encryption is dictated by a marketing team and then expected to be created without adequate budgeting or time, essentially creating pressures on development teams, project/product managers, etc to lie.
Basically "Our customers have been asking for E2E encryption, so I'm adding that to our next sprint."
>Your competitors are doing it, if you don't, you lose.
What's far more interesting to me is the fact that your vendors are doing it. I wonder how much business efficiency could be gained by taking advantage of the fact that we all know the products our businesses are buying are oversold?
You may find it interesting that Malwarebytes was recently mentioned in relation to Section 230 of the Communications Decency Act, which to my mind relates directly to this. They are an AV vendor that holds "legitimate" software companies running above-board businesses to the fire when they start any practice that Malwarebytes determines violates a PC user's reasonable expectations. That software starts being detected as "potentially unwanted software" and is recommended for quarantine just like any other virus.
Malwarebytes spends a whole lot of time defending the fact that it recommends software from these companies for removal, and the recent SCOTUS memo on the topic sort of implies that the problem -- how do we determine the veracity of statements businesses make about their software, especially software that exists in a constantly changing state -- may be headed towards getting worse, as so few of the people familiar with the legislation also have a good understanding of the inherent complexity of software.
Tangent: Cheat Engine, an amazing piece of software, mentions on their website that they may be detected as malicious software because they do a lot of the same things malicious software does - hook into other processes and modify their behaviour, optionally with a kernel hook.
They don't mention that their installer ships with tons of malware that they install, and more that they try to trick you into installing but you can technically opt-out.
> how much business efficiency could be gained by taking advantage of the fact that we all know the products our businesses are buying are oversold?
Not much tbh. Our only other option is to not buy, and build in-house instead. Sometimes that's worthwhile, but other times (like in the case of zoom) it still makes sense to buy the vendor's product, even if you know that it's not everything it's advertised as being.
The real efficiency is found in having people who can determine whether you should buy a given vendor's product or go in-house -- specifically, people who can see through the marketing BS and evaluate technologies without personal or hype bias.
Your experience sounds identical to my career in biotech. My PI wanted a new line that produced a certain transgenic protein. When I explained that it wasn't possible, I was asked if one set of results from another assay could be "used" in the current project. I said no, but still provided all the necessary data to lie with if that was what they wanted to do. The lab ended up getting a $75k+ grant because of the fabricated data, and I was left disillusioned and quit soon afterward.
Ever since I have told anyone that would listen that science is broken and I rarely believe anything until there is a working product. It is beyond sickening how much and how often people lie and how it is constantly covered up by their colleagues who don't want to cause a fuss.
It would be easy to ignore them if they didn't poison an entire startup ecosystem. If such a founder gets into the press and onto the speaking circuit, a lot of newbie founders assume that kind of exaggeration is needed to succeed, and the behavior becomes part of that ecosystem. Then it becomes hard to have an authentic conversation with anyone there.
Some non-technical founders will just make stuff up. If they think something is a "small change", it may as well be done, so they speak about it as if it is. You correct them and you are ignored, or they tell you it's just for a high-level discussion, so it doesn't matter. Sometimes they're right, sometimes they're not. There is a very fine line between stretching the truth, exaggeration, and outright lies. As "technical" people we try to be precise in our language and want statements to reflect reality.
Have also encountered founders that know the difference, but lie about things by using 'weasel words' that are chosen to suit their audience, who may not be so knowledgeable :(
We're migrating stuff to a cloud provider, and they wanted to expose an internal only API to the internet so that the things could reach it. I was strongly against that, as it has no security involved at all. Fast and loose and all of that.
Two, count them, two people wanted to "just change it to use port 443, that way it's encrypted". I had to explain that you can pick any valid TCP port to pass TCP traffic, and that simply moving a service from a nonstandard port to 443 doesn't make it start being encrypted. I had to explain that several times before it sank in.
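To make that concrete, here's a minimal Python sketch: a local socketpair stands in for a TCP connection, and the "server" end sees the client's bytes verbatim, exactly as it would on port 443 or any other port when nothing wraps the socket in TLS.

```python
import socket

# A socketpair stands in for a client/server TCP connection.
# Whatever port a real server listened on -- 443 included --
# these bytes would cross the wire exactly as written,
# unless TLS explicitly wraps the connection.
client, server = socket.socketpair()
client.sendall(b"password=hunter2")

eavesdropped = server.recv(1024)
print(eavesdropped.decode())  # the "secret" is readable verbatim
```

Encryption comes from wrapping the socket with TLS (e.g. Python's `ssl.SSLContext.wrap_socket`), not from the port number; 443 is merely the conventional port where HTTPS clients expect to find TLS.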
If it's AWS, the quickest path to doing this securely is AWS API Gateway mTLS authN[0]. You generate some certs, stuff the public halves in S3, slap an ACM cert on the Gateway, and you're done.
I have also used certificate authentication on TLS-terminating reverse proxies (e.g., this is easy to do with HAProxy) to do the same in other environments. You can pin the API's certificate on the client end in order to further reduce MITM risks.
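As a sketch of what client-side pinning can look like: after the normal TLS handshake, compare the server's certificate against a known fingerprint and refuse the connection on mismatch. (The `connect_pinned` helper, host name, and pin value here are illustrative, not any particular product's API, and a real deployment needs a plan for rotating pins when certificates renew.)

```python
import hashlib
import socket
import ssl

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def pin_matches(der_cert: bytes, expected_pin: str) -> bool:
    """True if the certificate's fingerprint equals the pinned value."""
    return fingerprint(der_cert) == expected_pin

def connect_pinned(host: str, port: int, expected_pin: str) -> ssl.SSLSocket:
    """Open a TLS connection; close it unless the cert matches the pin."""
    ctx = ssl.create_default_context()  # normal CA validation still applies
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der_cert = sock.getpeercert(binary_form=True)
    if not pin_matches(der_cert, expected_pin):
        sock.close()
        raise ssl.SSLError("certificate fingerprint does not match pin")
    return sock
```

The pin check layers on top of (rather than replaces) ordinary CA validation, so failing either one kills the connection.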
If you don't want to supply a client certificate in your client application, Stunnel[1] is an acceptable wrapper that lets your clients remain TLS-unaware. You could use it for both ends of the tunnel, if you felt like it.
Either way, you end up with a secure tunnel through the internet to the proxy, at which point you're back inside private networks.
(Source: I build this kind of thing for a living.)
> As for Zoom, I don't understand why people trust them or still use their product if they are at all concerned about security. It makes very little sense.
I certainly don't trust them, but I do use Zoom (from a dedicated unprivileged user, so it can't do any harm beyond recording my conversations), because my colleagues use Zoom, and because there doesn't seem to be any working alternative. I got them to try Jitsi once, which simply didn't work.

PS. There may be working /secret-source/ alternatives, but I don't know why one should think Zoom /more/ untrustworthy than them.
Google's Meet has improved considerably and most importantly it comes free with G-Suite. They are also pushing it quite hard as every calendar invite has a Google Meet link automatically included.
The reason that people went with Zoom is "because it worked." As other products improve it's hard to see what Zoom's moat is and why we should continue to pay for it.
> The reason that people went with Zoom is "because it worked." As other products improve it's hard to see what Zoom's moat is and why we should continue to pay for it.
Ironically, I would say Google Meet defines "it just works" for me way more than does Zoom.
Joining a Google Meet:
1. Enter the URL in your browser.
2. Click join.
Joining a Zoom:
1. Enter the URL in your browser.
2. Accept launching an executable.
3. Watch a window or two pop up and close.
4. Decide if you're using video or not.
5. Watch more windows pop up and close.
6. See the main Zoom window appear.
7. Decide if you're using audio or not.
Perhaps part of my beef with Zoom is how many times its window shuffling steals focus during the several seconds needed to join a meeting. If I'm trying to get work done while waiting for a meeting to start, the focus stealing is very obnoxious.
"You don't need a Google Account to participate in Meet video meetings. However, if you don’t have a Google Account, the meeting organizer or someone from the organization must grant you access to the meeting."
Each of those alternatives is just as likely to offer government wiretap support to any government that asks as Zoom is, unless I’ve missed statements of refusal to do so to the contrary from them.
I think the concern is trade secret theft. Sure the US or EU might demand a wiretap but their goals are different. You don't see the CIA stealing trade secrets and handing them over to Apple or Microsoft. Businesses are primarily worried about their IP.
I know of more than one company where installing zoom on any company owned equipment, or using zoom on your own client devices for company business is a fireable offense.
These are companies that deal with some very sensitive data.
Sorry, I didn't think in terms of degrees of untrustworthiness.
What I miss is an open-source alternative. Doesn't Microsoft let the NSA tap into Skype calls?
>Doesn't Microsoft let the NSA tap into Skype calls?
Yes, but it seems like Skype was doing that prior to being acquired (though Microsoft seems to have accelerated things). From some quick Googling to refresh on PRISM –
>• In July last year, nine months after Microsoft bought Skype, the NSA boasted that a new capability had tripled the amount of Skype video calls being collected through Prism;
>• Microsoft helped the NSA to circumvent its encryption to address concerns that the agency would be unable to intercept web chats on the new Outlook.com portal;
>Eight months before being bought by Microsoft, Skype joined the Prism program in February 2011.
> According to the NSA documents, work had begun on smoothly integrating Skype into Prism in November 2010, but it was not until 4 February 2011 that the company was served with a directive to comply signed by the attorney general.
I wouldn't assume that any given service is secure just because it hasn't been outed yet. Your guess is as good as mine with regard to which service is more secure or less secure.
What is immensely important is to raise the cost of lying to where it becomes something investors care about. The only real thing a company and its investors are afraid of is losing its customers.
If we teach companies it is okay to lie by staying with them, they will lie more.
There are at least half a dozen open-source alternatives. Have you tried all of them?
For instance Big Blue Button: it's not perfect, because it's Canadian, it's hosted on Microsoft's GitHub, and it might have some outstanding security issues [1], but I would probably still trust it more than Zoom or anything GAFAM.
What does not work with Jitsi? I've been using it a lot recently and it is by far the easiest one to use. One link and done. I have lots of video and audio issues with Zoom. Now, if you're a company, BlueJeans may be the best one.
If you're going to have 10+ people in the meeting, there will be issues: video/audio quality degrading, people losing their connection, etc. There is also a very noticeable load, even on more powerful PCs, once you have a few more people in the call.

So Jitsi might work for one-on-ones, but slightly bigger conference calls are a no-go.
I tried this and can confirm! I always had about 6-8 people and never hit this issue before. Well, this actually explains a lot of the comments I see about Jitsi.
There was a period a few months ago where jitsi was consistently crashing chromebooks. Obviously, if a webpage can crash the OS, it's an OS problem, but it still made jitsi unusable for those with chromebooks.
The native app doesn't work with the free 8x8 rooms, as far as I could tell.
I'm not sure I consider not crashing the OS when the conference starts 'good performance' so much as 'working'. Running it in Firefox at the time was bad performance (sluggish), haven't tested since.
I think both video and audio were skippy to the point of uselessness. I've also used Jitsi with moderate success with a couple of interlocutors, where video disappeared now and then.
I'm not a company, I'm at a university, and the u. has decided to use Zoom, perhaps because it doesn't care about security, or because it thinks being concerned about Zoom is being paranoid.
> from a dedicated unprivileged user, so it can't do any harm beyond recording my conversations
Unless I'm misunderstanding what you mean by that, I don't really see the point in it, TBH.
Have there been cases of Zoom infecting machines with malware or transmitting viruses? The whole concern, as far as I know, is terrible security on their end, allowing people into calls without permission, not having E2E encryption, etc, and running as an unprivileged user won't help with that at all.
You don't see the point of being suspicious of secret-source? and especially of an entity that is known to be dishonest? unless it is known to have been dishonest in the precise manner in question?
I strongly recommend attempting to fix that and/or (while I am aware that it may be difficult in the current climate) searching for a new boss.
In the meantime be very careful to monitor anything your name is associated with, just in case any of your customers get wind of the situation and sue-balls are thrown.
To be honest, before the modern machine learning approach, this was known as a decision tree and was thought to be a valid way to approach "artificial intelligence". Lots of "AI" hype in the 80s was based around "Expert systems" and "Decision trees".
And there are even modern tree based approaches, that beat some of the modern artificial neural network approaches! It's not like it has become an absolutely unusable class of algorithms.
People seem to think that the presence of neural nets and deep learning means most other types of models are practically superseded, whereas in my experience if some non-deep-learning model gets you most of the way there, the efficiency and explainability wins make it worth it.
At my previous job I had a ML-based service that used a basic random-forest model instead of a neural net because it was faster to train and operate, not to mention easier to maintain and had equivalent accuracy with little to no effort required on my part. It was a solid little service.
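Part of why small tree models are so maintainable is that the decision path is literally readable. A toy hand-rolled sketch (the features and thresholds are invented for illustration, not from any real service):

```python
# A toy decision "tree" for a spam-ish classification task.
# Each split is a plain, auditable rule -- no weight matrices
# to interpret, and predictions are cheap: a few comparisons.
def classify(features: dict) -> str:
    if features["num_links"] > 10:
        return "spam"          # link-stuffed messages
    if features["sender_known"]:
        return "ham"           # trusted sender wins
    if features["all_caps_subject"]:
        return "spam"          # SHOUTY unknown senders
    return "ham"

print(classify({"num_links": 2,
                "sender_known": True,
                "all_caps_subject": False}))  # prints: ham
```

A trained random forest is an ensemble of many such trees with learned splits, but the same property holds: you can walk any single prediction down the branches and see exactly why it was made.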
And you can even do both. Decision tree to get to a smaller problem space, then NN on the output. You end up with a bunch of neural nets, each of which performs better than a single monolithic net due to solving a simpler problem.
Decision trees/behaviour trees are still the most widely used way to build video game AI. Of course video game AI just has to appear smart, really you're solving a different problem a lot of the time.
You answered your own question in your last statement. People don't care about security. They care about it being easy to use and Zoom works better and for more (non-technical) users than any other tool of its kind.
For a long time zoom was the best choice for technical users too, as webex, Skype, and everything except for google hangouts had terrible Linux support.
> As for Zoom, I don't understand why people trust them or still use their product if they are at all concerned about security. It makes very little sense.
Phone calls and text messages aren't particularly secure either, doesn't stop people using them
At least phone calls are protected by law in some capacities (HIPAA allows for faxing but not email, warrants are supposed to be required for tapping phone lines but not email, etc)
Hi there! I'm in the video meeting space, and always looking to find that blend between usable and secure.
I'm curious - is there a video service out there you would recommend if you're conscious about security? Your third paragraph makes me think your opinion will be that no large company can be trusted, because they become a target for nation-state regulatory bodies.
Yes, although there are degrees and differences in culture.
For instance in the telco world you have a much more direct dependence on regulators because you need a stack of expensive and hard to acquire licenses to operate a network in most parts of the world. Some worse than others. In that environment there is a very high degree of compliance with regulators because they have to be given explicit permission to operate.
For pure internet services or P2P applications it is quite a bit different. You don't actually need anyone's permission to distribute software. And you can move your servers around the world. You don't depend on permission - just that nobody comes after you with warrants you cannot ignore.
So the advice is really to look at who you are dealing with and how dependent they are on regulators to operate.
Large internet companies tend to have entire divisions whose job it is to tell regulators to get lost or at the very least maintain a really high bar for interference. Of course, this becomes difficult when the government is also a large customer. So for instance you might want to be careful with vendors who make a lot of money in / off of the defense and intelligence sectors.
Thank you for the explanation! Seems to me that you're describing a trust chain where the product is directly affected by the landscape in which the parent company operates and their biggest customer base.
Use Jitsi (https://jitsi.org). You can find people to host it or host your own. Open source. No downloads for participants. Try their instance at meet.jit.si.
> As for Zoom, I don't understand why people trust them or still use their product if they are at all concerned about security. It makes very little sense.
Actually it makes a lot of sense. Your boss sends you a Zoom link and asks you to install Zoom. Or you're having a meeting with the CEO of some company and they send you a Zoom link, saying it's the only thing their company uses. Or you are a high school student learning online and your teacher only delivers lectures on Zoom. Most people listen to their bosses and superiors instead of protesting their viewpoints about security.
Only privileged people can protest. Others just lose their jobs, or don't get their high school diploma.
For Zoom I suspect you are right. I think the engineers know what it means. But I have met a disturbing number of engineers (in security oriented jobs) who do not understand what the term means.
Regulation should prevent this from occurring. If you use a product that claims it is E2E and it is not, you should be able to sue wildly for potential damages given the sensitive nature of the software.
On the other hand, I think things involving cryptography at scale ought to come with regulations on language
For example, look at how the word "bank" is specially regulated by most governments. I can't just call myself a bank without meeting specific guidelines or else it's not just typical fraud, it's major financial fraud coupled with putting sensitive customer data at risk.
Same here. We need specific legislation targeting these scummy businesses who use corporate ignorance as an excuse for selling a product under false pretenses of end-to-end encryption.
Well, there might be conflicting interests within government. From a consumer advocate perspective government might want to demand this.
From an intelligence services perspective you might want companies to lie.
> From an intelligence services perspective you might want companies to lie.
No, I don't. I don't want companies to lie. You can collect intelligence the same way we've been collecting intelligence for our entire history on this planet prior to E2E comms. E2E isn't a hindrance, it's a way to enforce limitations on government overreach.
You don't. But I'm afraid that is the reality. The only way to change that is by law and then vigorous enforcement of law. That isn't likely to happen.
> In part because executives, marketers and salespeople don't know what it means. And in part because when explained what it means they will insist on their own definition/interpretation and demand the product is marketed as E2E.
This sounds like precisely how Grammarly claims it's not a keylogger: by trying to change the very definition of what a keylogger is.
It doesn't make sense from a computer-savvy Hacker News poster's perspective. But everyone else uses it because they don't really care or haven't thought about it. It will continue to be the most popular meeting app despite the wailing and gnashing of teeth on Hacker News.
I see my significant other using it on a nearly daily basis. She started a university course in her 30s and due to Corona is in her second semester from home.
The university has an MS365 license, free for all students, but nobody uses it for video lectures. Why? Because it is really, really cumbersome to use compared to Zoom. Teachers and students alike love the functionality, the quality of video/sound, and especially the ease of use.
Compared to all other solutions available to students and teachers - in terms of what they all want to use Zoom just blows the competition out of the water.
And who is to blame them? These are regular folks. They wouldn't even care if the lectures were transmitted in the clear, without any encryption. Most regular students fresh out of school that I've talked to don't even know the difference between https and http, why encryption is important, or what end-to-end means.
> I don't understand why people trust them or still use their product if they are at all concerned about security.
I've been a Zoom apologist from the beginning, and this is the money shot for me. What exactly do you mean by "security"? You're concerned Zoom's servers are recording your video, on purpose or because they're compromised? That's too much data to dragnet (even for the NSA), so you think the servers are recording and targeting your meeting specifically? The threat model here is very small and very specific.
Who are the ultra-secret, sensitive-information folks buying the newest, shiniest, unvetted tool for use where infosec matters? I bought Zoom because the UI has simple, big, colorful buttons for my unskilled users, where GoToMeeting et al. are just a little too complicated.
If I needed an SLA specifying encryption models because of "security", I'd have a contract I could sue over. Yes, Zoom was wrong. They did a wrong thing, but the outcry against them has been disproportionate.
My therapist uses Zoom for her clients, as she was assured that the E2E would help her meet HIPAA requirements and protect her patients.
If someone could get a transcript, let alone a recording, of what was said in these therapy sessions, they'd have a goldmine for blackmail.
Please note, this has legal significance for her and the other clinicians who'd started seeing patients over Zoom. So it's not just an abstract "lulz, security" concern.
There are people out there with different threat models from you. Please refrain from talking about use cases you may not understand.
Encryption between the last HIPAA-covered entity (including business associates) on one end and the first covered entity (including BAs) on the other -- or between a covered entity on one end and the patient on the other -- is effectively a requirement of HIPAA for communications of PHI. Anything else would constitute an unauthorized intentional disclosure of PHI to the third-party intermediary, which is a crime as well as a trigger for civil liability. And even a third party gaining access to unencrypted PHI without an intentional disclosure is a breach of unsecured PHI, triggering mandatory reporting requirements under the HITECH Act.
Does that mean whenever medical information is sent via phone or Fax, HIPAA is being violated today?
Because plain old telephone service is not E2E and the phone company can eavesdrop on you quite easily (as can the government with a warrant, or a bad guy with a phone tap on your line...)
Not saying that e2e shouldn’t be used when practicable but a blanket assertion that e2e is required for HIPAA seems a little unbelievable to me when I’ve recently received COVID test results from providers via a cell phone call.
> Does that mean whenever medical information is sent via phone or Fax, HIPAA is being violated today?
Phone and fax are not considered “electronic” under HIPAA, so the rules, including the rule regarding encryption for exposed PHI to be considered secured vs. unsecured, specific to electronic communication don't apply. I think they may be explicitly given special treatment for some of the not-electronic-specific rules, too. They are well-known to be legacy loopholes to HIPAA privacy/security rules, which is one of the reasons fax held on so long in healthcare as a way of minimizing compliance costs.
You absolutely should not try to intuit what HIPAA requires for anything else by how fax and phone communication in healthcare operates.
Keep in mind that, while the current phone system is very much electronic, the phone system historically predates electronics. It is electric, but not inherently electronic.
Yes, this is a long way of saying E2E is not a HIPAA requirement.
Are you saying you have evidence of Zoom retaining PHI and not safeguarding it appropriately? Because that would be a different conversation from everyone yelling because Zoom said they were E2E and weren't.
But HIPAA does (IIRC) require not having arbitrary third parties to the communication. E2E prevents that, but even without E2E, I'm fairly sure Zoom isn't meant to be a third party to therapy sessions.
> by all means, show me all the concrete harm zoom has done.
“Oh, they built houses badly? Show me all the concrete harm that's done.” We might not know until the next (metaphorical) earthquake.
Therapists, lawyers, courts including closed door courts, confidential internal meetings for publically traded companies, doctors appointments, exchanging passwords/etc. Even my mom just telling me about a medical situation she's having.
All of those have legal requirements for privacy, and many of them used Zoom because it was supposed to meet those requirements. Zoom lied and failed to meet those requirements. There are other ways to meet those requirements (instead of E2E encryption you can have other kinds of controls) but since Zoom claimed to have E2E, they didn't bother with those other ways of meeting the requirements.
This wasn't an accident or a discrepancy. It's not that Zoom accidentally left open some kind of fancy attack that could be pulled off. They literally, knowingly and plainly misrepresented their product, to get sales they shouldn't have. There is a word for that: "fraud".
> Zoom lied and failed to meet those requirements.
did it? non-e2e is not the same as non-encrypted.
> They literally, knowingly and plainly misrepresented their product
Where has that been proven? As the parent pointed out, there is a wide gulf between misunderstanding and knowingly misrepresenting.
> People at Zoom should be getting jail sentences.
This is precisely why I lean against the anti-Zoom sentiment. Jail sentences - seriously?! What is the maximum possible harm Zoom could have caused? They were wrong and they deserve to be punished, but let's keep things in perspective.
> jail sentences - seriously?! what is the maximum possible harm zoom could have caused?
People paid them money because of an intentional lie - that's fraud, and fraud (above a certain amount) means jail sentences. There don't necessarily need to be grievous consequences to justify jail - let's keep things in perspective: "just" defrauding your customers isn't innocuous. It absolutely justifies a criminal investigation and putting people behind bars, not just a monetary fine for the organization.
>> They literally, knowingly and plainly misrepresented their product
> Where has that been proven? as the parent pointed out, there is a wide gulf between misunderstanding and knowingly misrepresenting.
...Is this not literally the point of the article that we're discussing? Relevant sections:
> "[S]ince at least 2016, Zoom misled users by touting that it offered 'end-to-end, 256-bit encryption' to secure users' communications, when in fact it provided a lower level of security," the FTC said today in the announcement of its complaint against Zoom and the tentative settlement. Despite promising end-to-end encryption, the FTC said that "Zoom maintained the cryptographic keys that could allow Zoom to access the content of its customers' meetings, and secured its Zoom Meetings, in part, with a lower level of encryption than promised."
> The FTC complaint says that Zoom claimed it offers end-to-end encryption in its June 2016 and July 2017 HIPAA compliance guides, which were intended for health-care industry users of the video conferencing service. Zoom also claimed it offered end-to-end encryption in a January 2019 white paper, in an April 2017 blog post, and in direct responses to inquiries from customers and potential customers, the complaint said.
> what is the maximum possible harm zoom could have caused?
+ HIPAA violation
+ Violation of jury secrecy
+ FERPA violation
+ False advertising and fraud
That's US specific. I'm sure foreign governments will have their own opinions.
You're right, you can be HIPAA compliant and not be E2E encrypted - if you have the right paperwork and auditing process. Zoom didn't because they claimed to be E2E encrypted.
There are some things you can lie about and it's crappy but not a big deal. "Lag free video streaming!" - Sure, whatever. "The best quality!" - Again, don't care. When it comes to information security claims, though, lying has very serious penalties because the damage you cause is extremely serious. This wasn't a white lie about how awesome they are; this was intentionally and knowingly engaging in fraudulent behavior to profit at the expense of users' security - and we should absolutely punish the hell out of people who do that to line their own pockets with a few extra dollars.
> Zoom has agreed to a requirement to establish and implement a comprehensive security program, a prohibition on privacy and security misrepresentations, and other detailed and specific relief to protect its user base
What a slap on the wrist. "You blatantly lied to your customers for years. How about you just continue to implement the thing that you were working on anyways."
I don't think punishment is always the best solution but it seems that you should at least set some sort of example.
Punishment is the best solution. Incentives are what drive behavior, and learning that you can get away with lying will just lead to more getting away with lying.
When it comes to training humans and animals, positive punishment is far less effective than most other training techniques like positive reinforcement. Don't Shoot the Dog[1]!
Unfortunately, the positives are customer adoption, and customers have already adopted zoom. This is like continuing to feed the dog treats because it's what you're used to, regardless of the outcome of their actions.
But more generally, it's not obvious that individual, "reptile-brain" incentives translate to large company leadership. I'd be hugely skeptical of applying positive psychology to international corporate leadership, but what do I know anyway.
Agree with your first paragraph, less so the second. People learn corporate leadership in steps, starting with a small group. The style of successful leadership doesn't change IMO, just the number of variables and possibility for greater success/failure.
So we should all remember that Zoom is probably depressed right now and could probably use some support from its friends. Maybe urge GCal to send it a nice note.
Not necessarily. Corporations are more than just sum of the people - they are a process that runs on top of people. People themselves are replaceable - and if you change the behavior of one to something the corporation doesn't want, it'll replace that person with someone new. You want to change the behavior of the corporation itself - and that's best done by creating monetary incentives and disincentives (i.e. punishment). The corporation will adjust the behavior of people on its own.
In other words: "appealing to the people" instead of addressing the corporation itself is like trying to heat up a climate-controlled room by lighting a small fire in it. You'll be fighting the AC unit all the way and causing lots of unnecessary damage, when the right way to do it is to adjust the thermostat on the AC unit.
With an alternative analogy, pushed to the limit, it's like trying to change someone's mind by appealing to the neuron. When there's a system advanced enough to exhibit adequate emergent behavior (which most big companies probably are), the subsystems are less and less important for the macro-system's outcomes. There are neural networks with fewer neurons than the population of the corporate leadership at Zoom that we still don't really understand.
My gripe is on behalf of the companies that failed because they couldn't deliver security in a way that was easy to use and made for a good user experience, yet chose to be honest about it.
I hate the
(1) cheat to win and vanquish your competitors
(2) when you're caught, say you're sorry,
(3) win anyway because your competitors are gone
progression. It seems like the penalty for that should be existential or at least something painfully severe.
That's the point. It's the companies that are little known that get squashed. I don't know much about the space, but I tried Google and chose zoom instead because it was easier— and I pay for Google. I tried Jitsi. But what about the ones we haven't heard of, struggling to solve the problem that Zoom lied about solving, but because they're honest they never took that step forward.
It's like RealPlayer. By the time the courts catch up, the game is over.
Several people are on zoom instead of Google, for instance, even though I pay for Google. I don't know the other players in the space.
Zoom, unbelievably, built a better video conferencing solution than any product by any other company. Their top competitors were Google, Microsoft, and Cisco - several orders of magnitude larger than them.
In this case, I believe the underdog won.
InfoSec cuts both ways in the market. Sure, products with lower standards “poison the well.” But purchasers with burdensome, pointless, obsolete security audits do far more damage to the ecosystem. It certainly cost my startup a tremendous amount of potential growth. We far exceeded standards like SOC 2 Type II, but still had to bend how we solved security/user problems to fit Excel-sheet checklists.
Zoom was facing a similar issue - they delivered “secure enough” until it wasn’t. Then they, in months, made massive, productive, effective changes that addressed the new issues from skyrocketing growth.
If our standards for good actors in the tech space is higher than that, I don’t know how humans can achieve them.
Actually, intermittent reinforcement is much more effective. If a reward is offered every time, then when it is not offered it is less likely to trigger the desired behavior. Operant conditioning using intermittent reinforcement trains the subject not to expect the reward every time, so when it doesn't come, the desired behavior is still displayed.
> a prohibition on privacy and security misrepresentations
Why did they have to "agree" to that? Shouldn't that already not be allowed? Also, this sounds a bit like they're allowed to misrepresent other things...
Certainly with government access to messages. The minds in charge would never let such an opportunity slip. They are stuck in the Cold War and the war on terror, and that won't change for the current generation. So it is still not a good idea to use Zoom.
Exactly. Any small startup owner would see jail time. A similar case in recent history is the Trump non-profit (please no flamewars). There are tens of thousands of business owners rotting in jail today because they embezzled half a million bucks or more - yet with the Trump charity you have a case of at least $2 million stolen, plus self-dealing and basically living your whole life/paying personal bills out of a charity, and what does the judge do? "Here, Mr. Trump, is a $99 training seminar on 'How not to steal from your own charity.' Go have you and your children watch this online class and report back when you're done."
>What a slap on the wrist. "You blatantly lied to your customers for years. How about you just continue to implement the thing that you were working on anyways."
Honestly - that's in line with the severity of the crime.
>I don't think punishment is always the best solution but it seems that you should at least set some sort of example.
I'm not a fan of regulatory bodies making examples of companies for minor infractions. And this is a very minor infraction.
From my perspective, making security guarantees about a product is the same whether that product is software or hardware. If somebody guaranteed that their ferris wheel had x safety feature, then it turned out to be untrue, nobody would call that a minor infraction.
I agree. I see false advertising as a serious crime.
Obviously we should be utilizing critical thinking ourselves, but I think that we also need the threat of punishment. Because if we have that threat one critical thinker can report the problem and it will be solved for everyone. If there is no punishment then there is no incentive for companies to tell the truth.
Especially if one is ideologically committed to light touch regulation / free market economics. This makes false advertising a particularly serious crime because it introduces a false information asymmetry between the customer and supplier that damages the effective functioning of the market.
All they had to do was say "encrypted" instead of explicitly saying "end-to-end encrypted" when it very clearly wasn't end-to-end.
The former still could've been a bit weaselly and misleading (many non-technical users would probably have assumed "encrypted" implied total confidentiality), but what they actually did was so much worse. I hope they get hit hard on that.
It was encrypted, but not E2EE, so the only person who could have spied was Zoom itself, and we know the how too - by the same mechanism it performs a video recording, for example.
We just don't know if. But seeing as we've had zero reports of any real-world consequences that could only have come about by Zoom spying, combined with the fact that "spying on your customers" is anathema to your business model and therefore a risk no sane and rational board of directors would ever approve (moderate upside, enormous possibly business-ending downside if ever discovered)... Occam's Razor says no spying ever occurred.
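The distinction at issue here (transport encryption vs. E2EE) can be shown in a toy sketch. This is not Zoom's actual protocol; the XOR "cipher" and key names are purely illustrative stand-ins for a real AEAD cipher like AES-GCM, and the point is only who holds which key:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" used purely to show who holds which key;
    # a real system would use an authenticated cipher like AES-GCM.
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"confidential call audio"

# Transport encryption (what Zoom actually had): each hop is
# encrypted, but the provider decrypts in the middle.
alice_server_key = secrets.token_bytes(len(msg))
server_bob_key = secrets.token_bytes(len(msg))

wire1 = xor(msg, alice_server_key)        # Alice -> server
at_server = xor(wire1, alice_server_key)  # provider holds the key...
wire2 = xor(at_server, server_bob_key)    # server -> Bob
at_bob = xor(wire2, server_bob_key)
assert at_server == msg                   # ...so it can read the content
assert at_bob == msg

# End-to-end encryption: Alice and Bob share a key the server
# never learns (e.g. negotiated via Diffie-Hellman).
e2e_key = secrets.token_bytes(len(msg))
wire = xor(msg, e2e_key)                  # server only relays this blob
assert xor(wire, e2e_key) == msg          # only the endpoints can decrypt
```

In both cases the wire traffic is "encrypted", which is why "encrypted" and "end-to-end encrypted" are such different claims: in the first arrangement the provider can read (or record) content any time the key material passes through it.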
"Zoom itself" spying sounds quite unlikely, "bribed underpaid Zoom intern" sounds a lot more likely, "the gvt. sending one of those silent warrants" sounds almost unavoidable.
Non-E2E encryption doesn't give access to just "the company" (which probably doesn't care to spy on you, true), but to absolutely anyone who can bribe/trick/coerce anyone in their "supply chain" (from the CEO to the sysadmins, hosting provider, even the janitor...). Not to mention a data leak due to a vulnerability in any part of their stack.
The company has shown complete disregard for security multiple times in the past, and I wouldn't be at all surprised if they had major security holes. And since they already lied about E2EE, it would be entirely safe to assume they would not have disclosed a breach either.
>You can't know, because it wasn't actually e2ee, eh
You can know that nobody external to Zoom spied on those streams, as they were encrypted between client and Zoom servers. The fact that Zoom had access to your stream, in principle, is par for the course.
>These are hard to quantify but they're not nothing.
And they got in trouble. There is the FTC slap and the PR cost associated with the negative publicity. That feels about right for the level of infraction. But when these kinds of articles come out, people are calling for regulatory bodies to 'make examples' of the companies in question. That's not how it works. That's not how it should work.
All network traffic in the US should be seen as the opposite of innocent until proven guilty: unless you can prove otherwise, everything we know of surveillance tells us that of course everything and everyone was spied upon. I can't think of any reason the NSA and/or CIA should not have spied here when they do so on everything else they can get their hands on.
Years ago, this attitude was seen as paranoid and bonkers. Then Snowden proved it true. Not only true, but barely scratching the surface. What's actually happening is beyond the wildest fever-dreams of the most extreme 90s crypto-punk ever.
Why are people still able to pretend otherwise without being laughed out of the room?
> Why are people still able to pretend otherwise without being laughed out of the room?
It is a variant of a Bible Thumper & Bootlegger coalition.
A large portion of the population really doesn't want to believe it. A small population with a vested interest (and lots of relevant tools at its disposal) is happy to help them.
I think the assumed implication of E2EE is that no one other than the participants can get at the content of your communications. To do that you need:
1. All cryptographic keys controlled by the users.
2. Some way to confirm you are actually connected to who you think you are connected to.
3. A way to confirm that the code you are running is not leaking keys/content.
So Zoom failed on all 3 points. There are lots of things out there claiming E2EE that fail on one or more of these points. Almost all fail on point 2 unless the user does things that they almost never do. Is the FTC going to come up with an E2EE definition for trade and start prosecuting those that don't meet it? Otherwise it would seem unfair that they only went after the entity that ended up in the general media.
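A minimal sketch of what point 2 looks like in practice. The fingerprint format and function names below are hypothetical, loosely modeled on Signal-style safety numbers (this is not Signal's actual algorithm):

```python
import hashlib

def fingerprint(pubkey: bytes) -> str:
    # Hypothetical scheme: hash the public key and render the first
    # bytes of the digest as short, human-comparable groups.
    digest = hashlib.sha256(pubkey).hexdigest()
    return " ".join(digest[i:i + 4] for i in range(0, 24, 4))

def verified(local_view: bytes, remote_view: bytes) -> bool:
    # Point 2: both parties compare fingerprints over a channel the
    # service cannot tamper with (in person, QR code, phone call).
    return fingerprint(local_view) == fingerprint(remote_view)

alice_pub = b"alice-public-key"
attacker_pub = b"attacker-public-key"

print(verified(alice_pub, alice_pub))     # same key on both ends: OK
print(verified(alice_pub, attacker_pub))  # server swapped in a MITM key: mismatch
```

Without this out-of-band comparison, a service that relays the keys can silently substitute its own, which is exactly why point 2 is where most "E2EE" products quietly fall down.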
> almost all fail on point 2 unless the user does things that they almost never do
Are you referring to the "scan this QR code to verify your partner's key" function in secure messaging apps? I definitely use that. I try to keep all my primary contact's keys verified. It's harder during COVID when you're not meeting up in person as often, because anything besides meeting in person and verifying the two devices directly exposes you to another unverified channel.
It's very hard to bootstrap this stuff. Sure, "web of trust", but that's hard too. Speaking of which, didn't Keybase get bought by Zoom to help with exactly these issues?
Telegram's crypto is shoddy [1]. It may not be a complete train wreck, but if you value good crypto and privacy, Signal is probably your only option. It also offers E2EE group chats, unlike Telegram.
> "[S]ince at least 2016, Zoom misled users by touting that it offered 'end-to-end, 256-bit encryption' to secure users' communications, when in fact it provided a lower level of security," the FTC said today in the announcement of its complaint against Zoom and the tentative settlement. Despite promising end-to-end encryption, the FTC said that "Zoom maintained the cryptographic keys that could allow Zoom to access the content of its customers' meetings, and secured its Zoom Meetings, in part, with a lower level of encryption than promised."
That's the concept of E2Z2EE (End2Zoom2End Encryption)
I'm still not a native English speaker, but a Google search shows that non-paying customers are people who don't pay their bills, which is not the same thing as users who don't have bills to pay.
Also, as I wrote, Zoom was thinking of selling E2E encryption as a paid feature; that's why the distinction really matters (I would happily pay for it if it gave me a strong assurance that I just don't have so far).
I don't think I'd happily pay for Zoom, regardless of their encryption promises. I've personally struggled more with zoom call quality issues and hardware conflicts than I have with any other video conference provider.
Also anecdotally, I hear the opposite from every single person I know. Zoom has been the video conferencing system that works the best. Have you ever used GoToMeeting, WebEx, Teams? Constant struggles with those applications for me, my friends, and my co-workers.
You really have to use Teams every day to appreciate just how buggy it is on all three platforms. I used slack video for remote standups for a year or so and aside from the odd little hiccup it was boringly stable. Teams fails at least once a week.
Non-paying customers can mean either customers who are delinquent in paying their bills, or customers that are using the service for free with permission.
I'm sorry, I'm not a native English speaker. According to the Oxford dictionary customers are people who buy a product or service.
Zoom was thinking of giving only them E2E encryption, and actually I would pay for that service if I trusted Zoom. Currently I use Telegram to speak with my friends, but the call drops quite often as we don't have a stable internet connection.
If my English here is so bad, why do I see "end user" in Zoom's terms of license all the time, and "customer" for paying customers?
Can you provide a better legal definition than what I see? (Only the legal meaning of the word matters in the current context).
We're talking about hundreds of millions of people being affected vs. a few million people - it matters a lot. You would understand that it's very far from pedantry if you had followed all the announcements Zoom made in the past.
A deeper issue is how hard it is to "know" if companies hawking products with security implications (which is nearly everything, today) are lying.
I'm not even talking about the gradient ranging from innocent bugs to incompetent coders and how that gets papered over. When you buy shoddy physical goods, there are typically characteristics you can't hide, like cheap materials. But with software like this of course the only function your average person can verify is that the transmission happens, not how it is encoded. Neither Grandma nor your manager are likely to break out tcpdump to check.
And of course the DMCA complicates this in the US, and things are even worse for researchers elsewhere.
Third party audit and reputation are the only fixes I see. And the second one requires a commercial environment that rewards it. The current one doesn't; it rewards novelty and lies, so that's what we get.
You're right, but 3rd party audits can help, especially given the precedent set by Arthur Andersen with Enron. It destroyed their business completely when the fraud was discovered, so there would be a strong incentive for auditors to get it right. As you said, not a silver bullet, but it's a step up from nothing.
Nobody at Arthur Andersen went to prison, and SCOTUS reversed the firm's conviction. The firm may have gone up in smoke, but nobody was actually punished for their crimes. Who at Ernst & Young has gone to prison for Wirecard or WeWork? Nobody, by my count.
> It destroyed their business completely when their fraud was discovered...
I suppose rebranding and transferring assets is kind of like a Chapter 7 "destroyed their business completely", but no one involved went to jail, no one lost their Series 7 or any other kind of licensing, no one was ever barred for life from ever managing at a public company ever again, etc. Sure, to laypeople a selling off of assets and rebranding sounds pretty "destroyed...completely", but unless there are lifelong, severe, natural person repercussions, business people are thrilled with the results. No clawbacks, no offender registration, can always point the blame elsewhere in future discussions (like job interviews). This is mostly regulatory theater, and all net upside for those who benefited by unethical action or by unethical omission.
I completely agree, and that's a huge topic unto itself.
Briefly, the issue with auditing, as with most things, is incentives over time. The difference between fraud in finance and software engineering is how long the bezzle[1] lasts. In finance, it can last a very long time in up economies, leaving Big Three auditors plenty of time to scurry off. In software you have to deliver at some point, leaving lying auditors exposed to discovery by security researchers immediately.
There is certainly still room for shenanigans if not set up correctly, but less than in finance.
Auditors operate off money, too. I have seen this first hand. If I tell them about an egregious violation and they don't even bother to write it down, I know what type of "auditor" I am dealing with. If they write it down and the issue is not resolved, same thing.
I agree. I am writing my project a certain way to achieve a goal I call reimplementability.
This means that I try to design in such a way that a reasonably competent dev could sit down and rewrite the whole system in a couple hours/days/weeks.
How hard it is to "know" is irrelevant; you agree to this when you accept the 'contract'. This isn't an issue of honesty; in fact, quite the contrary. They are extremely honest. It's just in the fine print.
Legal / Terms of Service / Terms of Use / Usage Policy
I find that a majority don't even hide unreasonable conditions in 'legal' terms anymore. Whilst there may be tens, hundreds, of pages in that ToS you tick before using the product - there's a few solid, clear, one sentence dot points that protect from all issues. The best of these is similar to: "We reserve the right to amend, change, or otherwise modify this agreement with - or without - notice.", or "We reserve the right to withdraw services/solutions with - or without - notice." Some, like the famous early React licenses (by Facebook), had indemnity clauses for simply using the product - even if your then legal engagement was entirely unrelated to your use of React. Impacted by Cambridge Analytica? Sorry. Many years ago you experimented with React. Immunity.
I don't think a third-party audit is a fix, even dismissing these previous statements. The volume of 'independent' auditors that are later found corrupt, or otherwise biased/incompetent, makes pretty regular news. Based on some experience with how contracts and engagements go with big corporations - some even factor known 'expected losses' (such as fines, failing to meet SLA, etc.) into the actual budget of the contract.
The real fix is users taking responsibility. Don't like the ToS? (And believe me, you won't.) Don't accept it.
(@USERS, not @_jal) But then don't complain that the product you did, or did not, pay a cent for - and whose ToS you blindly accepted - fails to deliver on your expectations. Sure, it suggested, or possibly even stated, 'end to end encryption'. But the ToS clarifies the context of that.
So will they get fined more than Snapchat for lying about ephemeral messaging or will this be the usual American "slap on the wrist" thing we usually see to protect the investors?
According to the article, they won't be fined at all:
>"Today, the Federal Trade Commission has voted to propose a settlement with Zoom that follows an unfortunate FTC formula," FTC Democratic Commissioner Rohit Chopra said. "The settlement provides no help for affected users. It does nothing for small businesses that relied on Zoom's data protection claims. And it does not require Zoom to pay a dime. The Commission must change course."
> Under the settlement, "Zoom is not required to offer redress, refunds, or even notice to its customers that material claims regarding the security of its services were false," Democratic Commissioner Rebecca Kelly Slaughter said. "This failure of the proposed settlement does a disservice to Zoom's customers, and substantially limits the deterrence value of the case."
> "The European Commission has told its staff to switch to the encrypted Signal messaging app in a move that's designed to increase the security of its communications."
They would just have a single state-run CA and ban all E2E messaging apps from app stores. Only state employees would have access to an E2E messaging app that would only use govt certs from the CA. Any apps that continue to operate outside of an app store could have their domestic servers seized, and anything foreign would be blocked by all domestic ISPs. The govt could allow civilian apps to use weak encryption as some sort of compromise, but anything the govt can't crack instantly would be banned. It would require a Great Firewall-level of control, with the govt playing whack-a-mole for a while, but with enough time and money, civilian E2E would be near impossible. Fortunately, this is still a pipe dream for even the most extreme statists, but if large corporations can come around to the idea of giving the govt an unlimited backdoor to their internal communications, say goodbye to strong encryption for the average person.
This level of planning is like the US govt outlawing all guns tomorrow, it just isn't going to happen any time soon since not only are gun-owners usually not the type to want to give up a gun, the prevalence of gun ownership is so massive that it would take equally massive resources to run a completely successful confiscation program.
> if large corporations can come around to the idea of giving the govt an unlimited backdoor to their internal communications, say good bye to any/strong encryption for the average person.
The EU isn't a single individual. It isn't even a group of individuals with aligned interests. As such, its many different heads shouldn't be expected to have consistent messaging. This is a draft so, as of now, it's factually untrue to say the EU are willing to ban encryption.
If Zoom made clear to users that connections were not secured to the same standards as competitors, and that potentially hundreds of employees could be silently listening in on any call, I think that would have prevented them becoming a leader in video conference tech.
So the right fine here is their entire market cap. That would put them back at square one, which is where an honest competitor would be right now.
Not defending them in any way - but I don't think security was the primary reason for Zoom taking off. It was stability: it just worked, and at the same time competitors didn't.
Everybody used to have Skype and I would have gladly handed over my data to MS if only it would have been able to do stable video calls. It was often a disaster for just 2-way calls, let alone group.
> don't think security was the primary reason for Zoom taking off. It was stability
Stability was the main draw, but company IT departments would have had more power to ban it if there were bigger and clearer risks of corporate secrets escaping.
Industrial espionage is real. There are many companies who are concerned about this and take active steps to keep data secret who would likely not have approved zoom use if they'd known e2e encryption wasn't to the level they were told.
Some folks are concerned with more than stability and ease of use.
One can't just delegate responsibility like that. Any company should engage in some form of due diligence before procuring software. If there are expectations of privacy, then those should be proven by the company procuring the software, not the vendor.
How would you verify e2e encryption on a proprietary protocol? Not every company that cares about privacy has crypto experts on staff. They should have a reasonable expectation that the vendor is telling the truth.
No, if a company was really worried they shouldn't have opted for a cloud product with a (partly) Chinese-owned company. A lot of companies go through the trouble of giving their employees (especially management) "throw away" phones and/or computers when they send them to "problematic" places, in particular China, but then they install Zoom for their C-level and middle management executives to use, huh?
Any company IT department's power to ban something is inversely related to how much its users want to use it. Also, the videoconference provider stealing company secrets is not part of most companies' threat models. Teams and Slack are incredibly popular corporate tools, and neither of them offers this feature. WebEx is the only reasonably popular tool I can think of that supports it, and any security department that cared strongly about E2EE would be asking questions like "do you perform key escrow" if they were thinking of migrating off something like that.
Because in order to operate a business (or any organization), you have to at some point decide on a group of service providers and other 3rd parties that you trust. For most organizations, trusting a major videoconferencing vendor is going to be within their risk tolerance. For some organizations (or for some use-cases within organizations) this wouldn't be acceptable (or perhaps trusting Zoom wouldn't be acceptable, where a different vendor might be), but at this point you're starting to stray outside of Zoom's target market and into a set of more specialized requirements.
Defending against sophisticated state-level actors goes even further beyond the requirements of most businesses. Unless you had a specific reason to believe that you were a target of such actors (dealing with national security, or matters of significant national strategic importance), you couldn't justify investing much resource into such defensive measures.
Users were unaware this was happening. "It just worked" because it would install itself in the background unbeknownst to the user, thus obviating the need to take time to install it when needed.
> It was stability - it just worked and at the same time competitors didn't.
This is absolutely huge. We've tried Teams (and I have previously used Webex and Hangouts).
It seems like there is _always_ one person that struggles with other video services. Can't join, video/audio issues, CPU usage, latency, etc. Painful when 10%+ of a meeting is consumed by getting one last, key person trying to fix their issues.
It's much easier to make a stable communication product if you don't need to worry about security and privacy.
Just look at the troubles and hurdles Signal messenger need to overcome to implement some features, while the competition that is not so security focused has them since forever.
I think you may be viewing history through slightly rose-tinted glasses there - I used pre-MS Skype a lot and it was never anywhere near as reliable as Zoom is and didn't support group video chat at all. And the fact that it was P2P meant that some features that everyone would expect to work these days (offline messages, mobile support) were simply not possible at all.
I'm not sure what would be accomplished if the source leaked. Someone would still need to maintain both the client and now a new set of servers. This would be difficult given that Microsoft would almost certainly use whatever means they could to stop this from happening.
The client application was also the server application. Clients with good connections which appeared to always be online became super nodes which were the directory "servers" you would connect to. The code base contained a long list of previously known super nodes and would attempt to connect to those on first start. As it ran it would keep syncing the list of close super nodes. There were many hundreds of super nodes, so the odds of all of them changing or going offline were pretty slim.
I imagine some people at Skype probably kept a few instances of Skype running at the office. So they technically hosted a few super nodes, but it wasn't necessarily that they were running some vastly different server version of the app. It wasn't until Microsoft decided to cut down on the P2P aspect of the app and hardcode only Azure-hosted super nodes into the application that this changed.
I wish that was true, but in practice I think it wouldn't matter. Zoom was the only one ready with infrastructure, multiple clients, automatic quality adjustment, screen sharing options, scheduling, and many other needed features.
Otherwise we had Hangouts/Meet with very basic features and jet-taking-off Mac behaviour, Chime, which is really good but nobody has heard of it (Amazon is apparently not interested in that market), Skype, which aims at social chat consumers, Slack, which works only within the org, Jitsi, and a thousand me-too apps with very basic feature sets.
Zoom could kick your puppy at the end of each call, and it would likely still be the best choice at the time :-(
You can care about privacy yet still prioritize not killing your company in a pandemic.
Very few things that are hosted are immune to employee buggery. That's why companies invest in third-party risk management: to assess those risks, which are always material and non-zero, and to determine whether they are within the risk appetite of the organization.
Amazon doesn't seem interested in that app being used by random consumers. There are very few accessible guides around it. It's technically good, but it's not even a competitor as such.
The market being $50B means there are $50B of sales to do per year.
Market cap is a multiple of revenues, easily 10 or 20 for a tech company; that means a $1T market cap to be taken across the videoconferencing companies.
Wondering how numbers can be so high? Count $10 per month * 12 months in a year * 100 million employees in the US... that is $12B per year going to video software!
Actually, price / earnings (P/E ratio) is typically 10-20 for _any_ company in the S&P 500. When you look at big tech, the numbers are drastically higher:
- AMZN: 92
- GOOG: 34
- FB: 33
- NFLX: 76
- AAPL: 35
- MSFT: 35
Compare this to, say, 3M, at 19, or GM with 17.
edit: incidentally, apparently Zoom's P/E is... 527, which is grossly inflated even for a tech company. Tesla is also in the same category with a P/E of 834.
It is, but markets can stay irrational longer than you can stay liquid, to paraphrase a somewhat famous quip. I was also one of those people who tried shorting TSLA, since I believe they are way overvalued. I agree with you, but the market has spoken.
The P/E ratio formula is listed above correctly; however, the earnings figure is earnings per share, not revenue. So the parent's market-valuation rationale is wacky.
Side note - Go read about Japan's lost decade and you'll see how dangerously close our (US) current speculative investing environment is to theirs before it fell.
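To make the parent's point concrete, here is a minimal sketch of the ratio being discussed. The numbers are made up for illustration; the point is that the denominator is earnings per share, not revenue per share:

```python
def pe_ratio(share_price: float, earnings_per_share: float) -> float:
    """Price-to-earnings ratio: share price divided by EPS.

    EPS is net income divided by share count -- NOT revenue per share,
    which is the confusion the parent comment calls out.
    """
    return share_price / earnings_per_share

# Hypothetical numbers: a $100 share earning $5/share trades at 20x earnings.
assert pe_ratio(100.0, 5.0) == 20.0
# The same share price with thin earnings gives a Zoom-like multiple.
assert pe_ratio(100.0, 0.2) == 500.0
```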
Market cap is the paper value of the company. It has little to do with the market. Zoom’s planned pivot is into boring markets like business VoIP.
Zoom went bananas because they won the space at a point in time that mattered. FaceTime is too proprietary and lacks features due to E2E, WebEx is run by incompetents, Google Meet is hard to use, and Teams is too complex. There’s a thousand other competitors with a few users.
Speculators poured billions into the stock and the valuation went nuts. That could go away in a week.
> I really think there is an unsustainable distortion happening.
Yes, soon any website can have their own videoconferencing using web technology like WebRTC. And implementation will be as simple as running "npm install".
> But Zoom, alone, already has a marketcap of $117.534B
Yes. Zoom having a market cap that's more than half of Intel? Come on now ...
Wild optimism aside, you can sell one or even a thousand ZM shares at approximately the current market price, but you can't sell the entirety of the company at the same price. The pool of buyers is much smaller for such volumes.
I don't really think so. I think we are just moving away from inefficient meetings that are IRL. I would love to see all meetings go remote for many reasons. I think this will stay even once Covid is gone.
Is it really different from competitors like Cisco (WebEx, Jabber, ...)? A big selling point of all of those is phone dial-in, which can't be done with E2E encryption (the phone gateway run by the operator has to have the keys).
> the right fine here is their entire market cap. That would put them back at square one
I don't think Zoom has transgressed anywhere nearly this badly, but even if I did, it doesn't make sense to fine any company its entire value unless your goal is simply to destroy it. The company is only worth as much as it is because it is expected to continue as a company, and there would be no way for it to continue if it owed that much money to the government. Unless it was nationalized and run by the government, but I doubt you're proposing that? Which means instead the company liquidates, and its liquidation value is far less than its value as a business.
A good punishment is government nationalizes it, paying shareholders nothing, then immediately sells those shares back onto the public markets. The government would earn close-ish to the market cap.
Effectively, allow the company to continue as before, but wipe out all shareholders. After all, they are the people who allowed this behaviour. They are the ultimate decision makers.
No, abandoning property rights is not even close to an appropriate punishment, even for those directly responsible for the fraud, let alone for ignorant shareholders.
Is this about the audio streams? I imagine that if at any time there are a million video streams happening, and zoom wanted to sneak into 1% of them, it would pretty much need 10000 vCPUs of compute to do that? The current tech scales affordably because only the encoded packets get transmitted between callers (via "selective forwarding units") without needing server-side re-encoding?
edit: That was for video streams. For audio streams, the CPU cost is certainly lower - about 10%.
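The back-of-envelope arithmetic above can be sketched as follows. All numbers are hypothetical assumptions (concurrent stream count, fraction snooped, and one vCPU per decoded stream), not measurements of any real system:

```python
def snoop_vcpus(concurrent_streams: int, fraction: float,
                vcpus_per_stream: float = 1.0) -> float:
    """Rough server-side compute needed to decode a fraction of all streams.

    With a selective forwarding unit, the server normally just forwards
    encrypted packets; decoding a stream server-side is the extra cost.
    """
    return concurrent_streams * fraction * vcpus_per_stream

# 1M concurrent video streams, snooping on 1% at ~1 vCPU each:
assert snoop_vcpus(1_000_000, 0.01) == 10_000
# Audio at ~10% of the video cost:
assert snoop_vcpus(1_000_000, 0.01, vcpus_per_stream=0.1) == 1_000
```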
I don't think anyone except crypto-nerds cares about this. Normal people just assume everything can be wiretapped and Zuckerberg and friends are always listening.
You can be an honest business, or you can steal billions, get caught, and pay millions in fines. I think everyone can see the problem here: you pay back less than you stole, so this is an active encouragement to steal.
Most recent example: Morgan Stanley's fraud for billions in profit pays a fine of $1.5 million [0].
Not only is the source closed and proprietary, the company and the product themselves have terrible reputations when it comes to security. Why would anyone even consider trusting whatever encryption they offer?
Even with open source software you will never know what is actually running on the servers. It's best to assume none of the services are e2e encrypted and you should provide your own encryption on top of the medium you communicate with if you require privacy. By own encryption I mean exchanging keys and encrypting offline using oss tools.
> Even with open source software you will never know what is actually running on the servers.
If the clients are open-source and properly implement end-to-end encryption, and you verify that they are not sending your keys to the servers, then what is running on the servers is irrelevant.
Yes, but the servers only transfer encrypted payloads for which the servers do not have the decryption keys, and you can verify that just by looking at the clients (which are open source in this scenario). That is the entire point of end-to-end encryption.
Are you saying that MITM is not possible? For example, your client could receive a key prepared by a rogue server, which would then decrypt and re-encrypt conversations on the fly. You wouldn't be able to tell unless you find a way to verify that the person on the other side tried to exchange different keys.
Resisting MITM is the entire point of end-to-end encryption.
Verification can be made with the security code that WhatsApp uses, and the safety number that Signal uses (same thing, different name). Other systems have other, similar methods.
You can verify that they match in order to verify that you're not communicating with a man-in-the-middle, and if the key changes then both apps show a prominent warning.
Granted, a lot of people may not actually bother to verify.
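A minimal sketch of what that verification amounts to: both clients derive a short fingerprint from the two public keys in a canonical order, and the humans compare the results out of band. This is a toy illustration (real protocols such as Signal's use a much more elaborate iterated derivation), but it shows why a MITM who substitutes a key is detectable:

```python
import hashlib

def safety_number(key_a: bytes, key_b: bytes) -> str:
    """Derive a short, human-comparable fingerprint from both public keys.

    Sorting the keys first makes the result identical no matter which
    side computes it; hashing means any substituted key changes it.
    """
    digest = hashlib.sha256(b"".join(sorted([key_a, key_b]))).hexdigest()
    # Break into short groups so the number is easy to read aloud.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

alice_sees = safety_number(b"alice-pubkey", b"bob-pubkey")
bob_sees = safety_number(b"bob-pubkey", b"alice-pubkey")
assert alice_sees == bob_sees  # matching numbers: no key was swapped

# If a rogue server handed Bob its own key, the numbers would differ:
mitm_sees = safety_number(b"alice-pubkey", b"mitm-pubkey")
assert mitm_sees != alice_sees
```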
Correct, but if there is something between you and the other user that can intercept the key exchange, then it can decrypt and re-encrypt anything on the fly. I think you would have to exchange keys offline to have a true e2e experience.
A better analogy would be if the car didn't come with airbags, but even that is not as good because you have no way of knowing if someone listened into your conversations whereas airbags let you know just fine.
Ford once paid $300M over faulty airbags, but that was negligence, whereas this is fraud. Of course, this isn't a lethal risk.
I would think the case would have legs. Haven't a clue how much for.
If a company signed a contract with Zoom in which e2e encryption was stated.
It sounds like OP is referring to a breach of contract. Even if they can't prove damages, they could still be entitled to some other remedy, like a partial refund. It would depend on the language of the contract, of course.
Why does there have to be damages? You paid for something, and didn't get it.
Go to the supermarket and buy a box that says "ten apples".
You get home and open it, and there are just five apples. You'll want your money back. What "damages" do you have to prove?
It's much more complicated when other risks are involved. It's like someone sold you a bun by labeling it as gluten-free, but it was not. Maybe you're just slightly intolerant and nothing lasting happened to you, but you could still sue them.
Unlike that example, it's difficult to know if there were direct damages, but some big companies can come together and try to make a case and sue them and demand huge compensations. Zoom could have spied on your conversations and sold your information to competitors even though they sold the product claiming that they couldn't.
Yeah, I get the impression that if some guy defrauds a bunch of rich guys, he goes to jail, but if a corporation defrauds millions of users, it's just politely asked to behave.
It's situations like this that makes me wonder whether there should be more efforts put into education and awareness regarding ethics in software engineering. We teach ethics to other STEM disciplines such as biotechnology and aeronautics, why is it left out of software engineering?
We had a course on it as part of my degree. I believe it was required for the course to be accredited by the BCS (British Computer Society), but I could be wrong on that - this was over 25 years ago.
This is like suing Hillshire Farms because their bacon wasn't as maple-honey-bourbon-flavored as they claimed. Nobody is buying bacon just for flavoring. People use Zoom because it's a free digital telephone with screen sharing. Not because it's super duper secure.
Telephones (VoIP, PSTN, SMS, etc) do not have end-to-end encryption - or any encryption - and we've been using them for conferences since always. Hell, we use them for Zoom calls!
This is some kind of government vendetta, probably pushed by Zoom's competitors who make a bundle in government contracts. Because they're currently the biggest provider, they're the biggest target. But this standard has not been (and will not be) held up to any of its competitors who make similar claims. The political party that is sabre-rattling in this article is just making themselves look good to their constituents.
Turns out that by default, BBME is not end-to-end. The initial handshake is transparent to Blackberry, and they could use that to decrypt future messages without your knowledge.
To enable true end-to-end encryption, you have to opt in to an out-of-band handshake to start each new conversation, an option you can turn on in their admin console.
How many people are actually going to opt in to dealing with a confirmation SMS for every new thread?
I reached out to Blackberry at the time to update their literature as it was misleading, but no action was taken by them.
BlackBerry has always been untrustworthy when it comes to encryption:
> The defence in the case surmised that the RCMP must have used the "correct global encryption key," since any attempt to apply a key other than BlackBerry's own global encryption key would have resulted in a garbled mess. According to the judge, "all parties"—including the Crown—agree that "the RCMP would have had the correct global key when it decrypted messages during its investigation."
So the takeaway here: there isn't any real, significant consequence for this kind of stuff. Can I just create a startup, store passwords in plaintext, and lie about it so that I can focus on the core user-facing features of the product? Once we get big enough, I'll just hire some security engineers to do things right.
I'm exaggerating a bit with the above example, but how many corners can someone cut, and how many lies can they get away with, when it comes to security? Because finding the right balance seems like a serious competitive advantage in the startup space.
Pretty scandalous stuff. But to be fair, it seems pretty likely that any or all of the major players (Apple, Google, MS, Facebook, AWS, etc.) are maintaining some sort of back-door access to the channels they control for spying purposes.
I suppose the risk with Zoom is leaks due to incompetence rather than leaks due to government intervention.
Apple claims that FaceTime is end-to-end encrypted (and makes some pretty strong statements about not having access to the content of communications). Facebook similarly claims that WhatsApp is end-to-end encrypted. Whilst I have little love for either company, do you have any evidence that these claims are lies?
I mean, I agree with you, and I guess the "surely Apple is not blatantly lying about being unable to read the content of your communication" argument has eroded a bit after Zoom's behaviour. But the penalties (both in terms of reputation and in terms of monetary fines) for this kind of misbehaviour are already large, and are likely to increase over time, and it seems an unnecessarily extreme risk for these companies to take.
But yes, impossible-to-verify claims are not worth very much at all.
As for fines, companies already do sophisticated risk analysis so that the average payoff far exceeds the average potential cost. I know oil companies do highly sophisticated risk-and-reward calculations around violations.
As for reputational damage, that's a long-term effect. A few events won't have a lasting impact. If it turns out that Apple Keychain is not e2e, or worse, that iOS exfiltrates key material from apps, that would be major news, but soon people will forget (if they ever cared in the first place) and keep buying iPhones, unless the misbehavior is a recurrent problem. A company like Apple will make it extremely difficult to discover such misconduct.
What penalties? The NSA boasted (internally) about how much they were spying on Skype, and I'm not aware of Microsoft having been penalized in any way for lying about it, probably even the opposite?
I don't... did you even read TFA? All the order says is that they can't lie about it again. They don't have to pay anything, they don't have to actually fulfill their prior claims, and the other parts of the agreement they likely already comply with, and if not it'll be quite cheap (relatively) to do so.
Yes, but they endured reputational damage, and companies hypothetically lying about it now could reasonably expect to have to pay something in future enforcements, which is what I was trying to get at in my previous comment. Reading it now, it was really sloppily worded by lumping together those things, but I'll leave it as it was so that the rest of this thread makes sense.
> they don't have to actually fulfill their prior claims
Given that they don't claim it any more, I'm not sure that they could be forced to start doing it -- put another way, not having E2E encryption is not a crime as long as you don't claim to have it.
How much actual reputational damage could they have possibly endured? I haven't noticed any fewer people using Zoom.
It's a consent order. They're willingly agreeing to it in order to avoid other costs (like fines and a lengthy trial). There's no "forced to" involved.
I think this probably varies a lot between social groups; I know of many people (including non-technical) who were motivated to explore alternatives after reading news articles about Zoom's behaviour. A bunch of non-technical friends subsequently started to use meet.jit.si for meeting up, playing board games, etc, for example.
Zoom's revenue is in corporate accounts, just like Slack - my 10k ppl company uses branded enterprise accounts on both systems, we even have VOIP via them with DIDs. Companies of size do not pivot quickly on telecom and messaging system changes, it takes a lot more than a single issue or two for our money to not be in their pockets.
WhatsApp is "end to end encrypted", but I had seen an article here on HN about how WhatsApp would snatch your data before it began transit, if needed - for "security reasons" - after performing a local analysis on the messages. I don't know if this has been implemented as of yet, but you can see the intent to circumvent actual encryption - they can do it, and since e2e has become a bother, they certainly will.
I've got https://news.ycombinator.com/item?id=25058783 . There is substantial evidence that the American spying agencies are willing to use anything with a reputation for neutrality as a vehicle for spying.
"Apple has a market incentive not to lie!" is an argument I find compelling, but the NSA has a bigger incentive to make Apple lie, and more power than Apple. If Apple & friends were ever offering a truly secure communication channel it is unlikely that was/will be allowed to continue.
Think of other popular messaging systems that claim to offer some kind of E2EE, but are proprietary software: WhatsApp, Skype, FB Messenger, Viber, Threema, Line…
Distrust by default!
I am always amazed when folks even consider the alleged support for strong E2E encryption in those apps… the value of those claims is exactly zero.
- Open protocols would be helpful. Then you can implement it by yourself and you can see if it is encrypted (and implement whatever other features you may need, including saving energy).
- If they lied to users about end-to-end encryption, then it is false advertising. It is important to avoid false advertising.
> Amid controversy in July 2019, Zoom issued an update to completely remove the Web server from its Mac application, as we reported at the time.
Surprised to see them mention the web server thing and not mention that it was so bad that Apple actually updated its antivirus software to remove the Zoom web server.
I was never confused, but I'm more technical. I mean, how do you terminate to POTS, do the mix-ins, etc., without Zoom decrypting on their end? If it's E2E encrypted and I have a dial-in number, it's not E2E in that sense.
If you want me to name popular products, then nobody can answer, because you'd know of them already. So I'll generalize to which group video tools are e2ee:
-> Jami (according to their website, I only ever used their chat and regular one-on-one calls)
-> Wire (client and server open source, but not community-lead development)
-> WhatsApp (if you trust Facebook, proprietary back-end)
And if you consider open source & on-premises / "can be completely locked off from the Internet so only you can access it" software to be end to end encrypted (if you personally run the server, you're one of the endpoints):
-> Jitsi Meet (full e2ee is under development, collab with Matrix I think)
-> BigBlueButton
-> Apache OpenMeetings (I never used this one, can't vouch for it)
Signal and Threema don't do group calls as far as I can quickly find online, correct me if I'm wrong.
Anyhow, plenty of options whether you like to self host (saves a ton of CPU on encryption and lets the server do stream mixing) or have full end to end encryption. Why do you care whether they're used by a billion people / "popular"? You can still choose to use them and improve the status quo because why not?
Oh right sorry, Telegram indeed doesn't do group calls. Removed them from the list. Thanks!
As for Jitsi, BigBlueButton, and OpenMeetings: no, indeed they don't do e2e encryption currently, hence them being in the second section with open-source self-hostable conference software rather than the e2ee section above. To me, depending on the use case (if you can self-host on a trusted system), that would be equally secure and also doesn't leak metadata (who calls whom) to some central system.
Wire's most recent system (launched a few weeks ago to make the video conferencing more efficient, bumping max participants from 4 to 12) also tries to avoid learning who is in a conference with who, but fact is that if you observe their datacenter there'll be traffic going to certain IP addresses that starts and stops at the same time.
For what it's worth, to add my experience/recommendations: I really liked the BBB setups I've been in (largest was a hundred or so people) and would recommend that if you're looking for an alternative. Wire also works reasonably, and because it's end-to-end encrypted you don't need your own setup to get started, but it isn't as open-source oriented as BBB/Jitsi, and the CPU load from the encryption during video or screen sharing is quite significant. Jami, last I tested, was quite buggy, but that was way before the pandemic. Full disclosure: so far I've only had to decline one Zoom request and so I've never been in a Zoom® call (not a single one of our clients uses Zoom, yet people use the brand name as a synonym for video call? I don't get it), so I can't compare any of these with Zoom.
Why would any company with valuable IP use Zoom after this security blunder, along with the fact that they "accidentally" routed domestic US calls via China? Zoom's software is developed almost entirely in China, meaning it is subject to Chinese law and the very strong influence of the CCP.
It is a fact that Zoom could be compelled by the CCP to plant backdoors in its software to siphon valuable IP for use by Chinese companies, as is usually the case with CCP-aligned companies like Huawei (Huawei had a cash-incentive program for employees who delivered stolen IP to them).
Pretty ridiculous for the US to be enforcing this while they try to ban and reduce the availability of E2EE worldwide. Zoom et al. are doing them a great service by spreading FUD and confusion about what E2EE even is. Once it's reduced to "complex math thing" in people's minds, no one will know or care when they ban it.
The "trust no-one" mantra would still apply to your own team, and unless you're putting tons of money into this project, a free-software platform is probably more trustworthy. There's more risk of outside infiltration, but also far more bug checking and security testing.
I don't see how democracy has anything to do with wanting to secure video calls or not, but anyway: how is this worse than trusting anything from the US? Not trying to add whataboutism, but I'm curious whether you take the same view of security software made by companies that share data with someone who realistically could come after you for anything done in those calls. The PRC clearly can't unless you live in the PRC, while the FBI and CIA operate in most of the world, more often than not hand in hand with local police or agencies.
No, instead they target political dissidents and find any possible family you might have in China and threaten to hurt them if you don't either return to the PRC or commit suicide [1], much better.
Along with its growth in users, Zoom has seen concerns spike about how it is protecting users' privacy. The Senate advised members not to use the service, according to Ars Technica, and the New York City Department of Education banned its use for remote learning. A group of state attorneys general is probing the company after one of the officials was "zoombombed" on a forum about the Census, meaning the chat box was filled with profanities.
Ellison’s support could prove useful to Zoom as it wades through the new challenges of becoming a consumer tech company. Ellison is an influential billionaire with ties to the Trump administration. He has supported Trump’s campaign and even told the President about an anti-malaria drug Trump ended up touting as a possible treatment for the coronavirus, according to The New York Times. Oracle CEO Safra Catz served on Trump’s transition team in 2016.
It was a prelude to what happened to TikTok. Or almost happened to TikTok; now that the Trump administration is gone, it makes no sense to do a deal with Oracle.
I don't understand how the FTC arrived at the conclusion they're not E2E? Or have I missed something?
>Despite promising end-to-end encryption, the FTC said that "Zoom maintained the cryptographic keys that could allow Zoom to access the content of its customers' meetings, and secured its Zoom Meetings, in part, with a lower level of encryption than promised."
Not wonderful but that still, technically, is an E2E encryption scheme. Is it not? Or do they mean one end terminates in Zoom's servers and it's not E2E through the whole pipe, but rather two pipes stitched together?
Agreed it's not as secure as they marketed, but this seems to suggest that if you want to offer E2E, you need a specific kind of key storage to meet this new precedent. Good in practice, but maybe the FTC are not the right people to be placing such a hurdle down?
I'm not sure what you mean by "you need a specific kind of key storage". You don't need any kind of key storage for e2e. You only need to facilitate the key exchange as a server, then push the opaque data both ways. If zoom (the company, not the software client) can get the encryption key, the call is not e2e encrypted.
Most videoconferencing systems are not E2E-encrypted. They encrypt the link between each participant and the central server. This makes implementation simpler in a few ways.
A good E2E-encrypted system would involve Zoom never having the keys at all, so "key storage" would be irrelevant.
The issue here is merely that Zoom claimed to be E2E-encrypted when they were not. They could have simply said "encrypted" and there would be no issue.
Wouldn't E2E encryption of a call with 40 participants require each user to have 39 times the upload bandwidth, in order to send 39 video streams encrypted with different keys? And potentially several times the computational cost on the client, in order to downsample video according to the different available download bandwidth of every other participant?
Is there anyone doing group videoconferencing with E2E encryption, for more than a handful of participants?
Typically, the central server does not transcode. Participants simulcast a few bitrates, and the central server forwards to each other participant the sub-stream with the appropriate bitrate for the bandwidth capacity of that participant. This is compatible with E2E encryption, by individually encrypting each sub-stream. Participants can share a session key that is unknown to the central server.
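A rough sketch of the forwarding decision described above. The bitrates and the selection rule are illustrative assumptions, not any particular vendor's logic; the key point is that the SFU only needs per-layer bitrate metadata, never the decrypted media:

```python
def pick_substream(layer_kbps: list[int], viewer_capacity_kbps: int) -> int:
    """Choose the highest simulcast layer that fits a viewer's bandwidth.

    Falls back to the lowest layer if even that exceeds capacity.
    The forwarding server can do this on encrypted streams, since it
    only inspects bitrates, not content.
    """
    fitting = [b for b in layer_kbps if b <= viewer_capacity_kbps]
    return max(fitting) if fitting else min(layer_kbps)

layers = [150, 500, 2500]  # hypothetical simulcast bitrates per sender
assert pick_substream(layers, 3000) == 2500  # fast viewer gets the best layer
assert pick_substream(layers, 600) == 500    # mid-capacity viewer
assert pick_substream(layers, 100) == 150    # slow viewer still gets something
```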
FaceTime supports group calls and claims E2E encryption for them. WhatsApp does too, I believe? I'm not sure how many participants you can have.
>Wouldn't E2E encryption of a call with 40 participants require each user to have 39 times the upload bandwidth, in order to send 39 video streams encrypted with different keys?
I think you use public key cryptography to securely distribute an encryption key for that call. So the host sends 39 messages encrypted with different keys containing a shared key. Then everyone uses the shared key to encrypt/decrypt the call data.
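To make the bandwidth point concrete: with a shared session key, each participant uploads one encrypted stream (to the forwarding server) regardless of call size; only the one-time key-wrap messages scale with the participant count. A toy calculation, with a made-up stream bitrate:

```python
def per_participant_upload(n_participants: int, stream_mbps: float = 1.5,
                           shared_key: bool = True) -> float:
    """Upload bandwidth one client needs (Mbps).

    With a shared session key: one stream to the SFU.
    With per-recipient encryption: one stream per other participant.
    """
    streams = 1 if shared_key else n_participants - 1
    return streams * stream_mbps

# A 40-person call: 39 full streams without a shared key vs. 1 with it.
assert per_participant_upload(40, shared_key=False) == 39 * 1.5
assert per_participant_upload(40, shared_key=True) == 1.5

# Key distribution itself stays cheap: the host wraps the session key
# once per participant (39 small messages), not once per media packet.
key_wrap_messages = 40 - 1
assert key_wrap_messages == 39
```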
What kind of hurdle do you see? What you've described isn't e2e encryption, the FTC is absolutely correct.
The FTC is not doing anything unreasonable here; I would wager you are, by implying they're placing undue hardship on a company that peddles bald-faced lies.