
Zoom has had major security issues for years, and they've always brushed them off as not a big deal. This isn't an isolated incident.

If their position is now that the Zoom software was designed for corporate users, e.g., that you're expected to only run it on your own VPN where you can guarantee there's no malicious network traffic, then it should have "NOT FOR CONSUMER USE" plastered all over it.

To me, this reads exactly like "Lol user error", except there's no "M" to "RTF" that ever said, for example, that its local web server stayed running after uninstallation and could take control of your camera, or that "E2E" in the Zoom docs doesn't mean the same thing as it means to the rest of the industry.
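
(For concreteness, here's a minimal sketch in Python of the class of bug: a persistent localhost helper that anything, including a random webpage, can poke. The port is the one named in the 2019 disclosure; the endpoint and parameters are approximations, not Zoom's actual API.)

    # Sketch of probing a persistent localhost "helper" web server, the class
    # of issue in the 2019 Zoom disclosure. Port 19421 is from that report;
    # the endpoint and query parameters are illustrative approximations.
    import urllib.request

    url = "http://localhost:19421/launch?action=join&confno=1234567890"
    try:
        # A silent GET like this -- which any webpage can fire off via an
        # <img> tag -- was reportedly enough to join a meeting with the
        # camera on, and the helper survived (and could undo) "uninstallation".
        urllib.request.urlopen(url, timeout=2)
        print("a local helper answered")
    except OSError:
        print("nothing listening on 19421 (good)")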

There's no responsibility being taken here. Taking responsibility would be "We fired all our 'security' people who told us we had best-of-breed security, and hired some actual security experts to re-architect our system to provide actual security for our users." What they did here is indistinguishable from "We're sorry we got caught!" except in verbosity.




> Zoom has had major security issues for years, and they've always brushed them off as not a big deal. This isn't an isolated incident.

But here's the operative question: were they wrong to set their priorities the way they did? In this crisis, they're wildly popular, and part of that popularity comes from optimizing for usability and advertising to close the deals that got their product in front of enough people to be a "household name" when everyone suddenly needed videoconferencing.

If people want security, GChat is built on Google's infrastructure, has almost no outstanding security issues, and has years of engineering behind making it a quality product. But users don't care enough about security for it to be the tool people are reaching for right now.

Business is the art of making tradeoffs to meet users halfway. And time and again, the product that thinks users need to be met halfway at "it's secure" gets trounced by the one that meets them halfway at "it's usable."

> We fired all our 'security' people who told us we had best-of-breed security

Why, in a crisis, would you start by firing the people who already know the inside of your application, warts and all?


> here's the operative question: were they wrong to set their priorities the way they did?

Yes. Let me ask this the other way, in a different context.

Say your company builds rapid-assembly prefab building components. You have built the business on being supposedly greener than the competition by using natural materials where possible. All of a sudden there is a massive surge in demand, and you find out that certain cost-cutting optimisations that used to be merely mildly beneficial now provide a marketing edge.

Does it matter that your fire-proofing is a naturally occurring material? Namely, asbestos?


1) Is there a better fire-proofing alternative available, one that will work as well and be as cost-effective to deploy?

2) Are we talking about 1990 (when the public actually cared, legal torts were likely, and it was a huge hassle to sell a property that was known to have asbestos) or 1890 (when in spite of evidence that asbestos may pose a health risk, industry was full-speed-ahead on it because, hey, everything poses a health risk, and lung cancer was of lower concern to the public than dying in a fire)?


Google is great at security. Privacy, ... not so great.


They've made mistakes in the past, but their privacy model is actually pretty good, as long as your risk assessment includes the carve-out "I'm comfortable with Google knowing a lot about me."

And if you aren't, there are plenty of alternatives. But unlike Google, they often don't have a security or privacy model to speak of because they haven't taken the lumps Google has in the past for messing up.


Maybe a blog post saying "Yes, we acknowledge that we deliberately ignored major portions of security practices and just did whatever got us market share fastest. We can't change the past, but we're going to clean house and do privacy and security for real from now on, and we hope you'll stick with us, or come back after we've fixed <pointing to all of this>."


That's pretty much what this blog post is, with just a little bit more PR speak.

And I'm honestly surprised that it's not totally watered down: it's not just PR speak and user blaming, but makes some clear points. The core defense is bad - that it was built for fewer users and different contexts before changes nothing - but it at least includes a "we will focus on that now, honestly." That's not bad already.


We could be reading the PR blurb from the company that engineered for security and privacy before getting the features users wanted in place, but we can't. That company isn't making a blog post, because it ran out of runway money, having failed to get a product into users' hands in time for anyone to care about its existence.


The reality of that makes my skin crawl.

Maybe the only way to get useful code that starts off secure is to start it open source? That way, even if it takes "forever", there's no profit motive or need to rush into adoption...

I'm trusting for-profit software less and less and less by the decade.


There are open-source videoconferencing solutions floating around on the Internet.

But they don't have the traction of the videochat-as-a-service options because those options have financial incentive to set up servers, configure them, solve those parts of the puzzle for users, make onboarding frictionless, etc.

I'm afraid I don't think open source would be a panacea for this problem, because if there's one thing we've observed from the world of open-source and online software, it's that most users adopting an open-source solution have to become their own sysadmin too, and a lot of otherwise-competent hackers are profoundly bad at the ever-moving arms race that is "hosting a secure software service online." Distributing the security maintenance burden doesn't make it easier to solve.

We could get there if, hypothetically, companies cared enough about security to demand that all the software running on (at least) the client machines and (ideally) the service-provider's servers was open-source so they could trust the security model via an audit by their own eyeballs. Then closed-source operations would lose out in the marketplace to open-source outfits because enterprise would only do business with the open-source ones.

They demonstrably do not care that much.


Very much agree. Look at email, a system with far fewer real-time performance demands, excellent fault and outage tolerance, and many excellent open-source mail transfer agents.

What percent of the internet user base runs their own email server? What percent of even news.yc readers do?


> Taking responsibility would be "We fired all our 'security' people who told us we had best-of-breed security,

Based on what we know (not much) it's equally likely that their actual security experts completely understood the current situation, but marketing or high-level C-suite people came up with all of this.

I can completely picture the conversation between security engineers and marketing about whether they can use the term "End-to-end encryption" because I've had very similar conversations before about (mis)use of technical terminology.

How far do you go if you're unable to convince them to change the terms? What if you escalate all the way up to the CEO and they don't agree... then what? Do you refuse to leave the CEO's office until they concede? Quit your job in protest? (What do you suppose that would accomplish?)
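
(For concreteness, a minimal sketch of the two things "encrypted" can mean, using Python's cryptography package; a shared symmetric key stands in here for the asymmetric key exchange a real E2E system would use.)

    # Two meanings of "encrypted": hop-by-hop vs. end-to-end.
    from cryptography.fernet import Fernet

    # Hop-by-hop ("encrypted in transit"): the provider's server holds a key,
    # so it can decrypt, read, or record anything passing through it.
    server_key = Fernet.generate_key()
    leg = Fernet(server_key).encrypt(b"hi bob")
    print(Fernet(server_key).decrypt(leg))        # the SERVER can read this

    # End-to-end: only the endpoints ever hold the key; the server relays
    # opaque ciphertext it has no way to decrypt.
    endpoint_key = Fernet.generate_key()          # known to Alice and Bob only
    relayed = Fernet(endpoint_key).encrypt(b"hi bob")
    print(Fernet(endpoint_key).decrypt(relayed))  # only BOB can read this

Calling the first one "end-to-end encryption" in the docs is exactly the misuse a security engineer would push back on.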


I'd be interested in hearing about the conversation over the installer.

Manager: Why does the installer require so much interaction with the user?
Dev: That's part of the OS's protection efforts.
Manager: Can we make it require less interaction?
Dev: Not without hacking together our own tool that, once installed, will let the computer run any script with root privileges. That's a bad idea.
Manager: If it means the user doesn't have to do anything, then do it.
Dev: But it is a bad, bad, bad thing.
Manager: Meh.

Or was it closer to:

Manager: Can't we do something?
Dev: Sure (with an evil grin), we can do something.
Manager: Great!!


I highly doubt, if they even had a dedicated "security team" at the time the platform was architected as such, that they would have told the rest of the company they "had best-of-breed security". They would have understood their shortcomings and communicated them up the chain of command. And firing them and hiring an outside team of "security experts" to re-architect their system wholesale would be a patently absurd course of action.

The form of responsibility-taking you're demanding is actually just business as usual: reactionary scapegoating.


> I highly doubt, if they even had a dedicated "security team"

I can only agree. From what I have seen of previous security vulnerabilities, it often seemed to come down to either outright negligence or intentional ignorance, because it's easier "that way."

I believe security has never been, and never will be, a top priority for Zoom. At least while they can get away with it, which they currently seem to be able to do.

Also, I have seen it more than once that a team originally had good intentions of making secure software (but not necessarily enough expertise), but due to frequent changes in priorities or wrong time estimates they end up with software which "works" but is internally broken, along with a promise from management that if they ship something like that soon, they will get time to fix the security issues in a few months. But they never get that time, and shoddy security becomes the norm. Following that, people with security expertise get demotivated and move on (either literally, by changing jobs, or metaphorically, by just accepting writing not-so-secure software).



