Zoom is the Windows of conferencing apps: It is the most popular one, so researchers actually look at it and say it's shit, but the alternatives aren't much better [1].
I disagree with that example. That bug exposed a server-side infrastructure password and has nothing to do with user safety. Of course, as an operator of servers on the Internet, one must be aware of the risk of bugs. This article talks about the security of the client application, which has been in the news recently. There has been little news about Zoom's infrastructure security, so we might as well guess about that.
Apart from that, the bug you mentioned was found and fixed by the community, and someone scanned all servers and notified the server operators. I'd say that this is about the most professional response to a vulnerability I've ever seen.
Here is another example: Tracking in the mobile apps for Jitsi Meet without the ability to opt-out (which the Jitsi developers apparently consider to be fine): https://github.com/jitsi/jitsi-meet/issues/5799
> which the Jitsi developers apparently consider to be fine
I expected the devs in the link to get caught saying something like "Yolo! Move fast and break things!"
Instead they had a pretty reasonable response considering the circumstances.
You don't have to use the app stores you don't like to use the service, they don't actually seek or store personal data, they don't use any data they do have for marketing, the use of those services falls under an identified GDPR exception, the other services they rely on for the service also have GDPR obligations with respect to any data they should collect, etc.
But even after all that, the devs realized they aren't professional lawyers. So what do they do? Ignore it all? No, they hire some. What did those lawyers say? Looks good, looks compliant.
So you've hired counsel, they say, "Good job, green light." Then someone on GitHub says, "But I have a different legal theory than the people you specifically hired to tell you how the law works!"
What do you tell someone in that situation? Fire your counsel and instead rely on advice from random strangers on the internet? Or... there are a lot of people with crazy theories on the internet.
I thought it was big of emcho, even while seeming pretty confident Alvar was mistaken, to still put him in touch with his legal team. Because, I'm not a professional GDPR expert; maybe the random internet stranger is right this time? But if Alvar's right, GitHub isn't really equipped to figure that out. This has to be a conversation with lawyers.
Backing up a step...
Zoom rolls their own crypto, routes calls through China, and has a mysterious 10 char hard limit on passwords. They're also massively more popular.
Zoom's CEO has been admirably up front about how they're fixing some of these issues, but I'm not sure why a speculative GDPR complaint about a much less common service should get the same prominent media coverage.
I’m on iOS though. I read through the Q&A and it sounds like the tracking is done because Crashlytics, something they integrate with, got bought out by Google. I don’t know how much solace that brings me, though.
Use the web app then, or use the Jitsi SDK for iOS or Android and build your own app. They don't even enable some of the tracking when you aren't using their servers and use your own.
Anyone who has published an app to any of the app stores recognizes the valuable information that services like Crashlytics provide, given the hundreds of ways an app can crash on users' devices.
You may not like how it is, but as the parent said, that's not an app issue, nor is the GitHub issue tracker the place to discuss it.
But honestly, the main reason people use Zoom is that it is by far the most reliable video conferencing app out there. Anyone who has used other video conferencing systems on uncontrolled networks knows the pain that Zoom seems to magically avoid.
Now, it seems lately others have been getting better, but I'm not really sure what the source of that is; when I've been pulled into Zoom calls over the last five years, they've been absolutely rock-solid.
Separate from whether it works or not, I will not install it unsandboxed on any endpoints, nor use it deliberately.
Their security architecture is ???, their excuse for using servers in the PRC to move encryption secrets makes no sense, and it honestly gives me the impression that at some level, somebody working on Zoom made that decision with the conscious intention of making secrets available to the PLA.
Yea, agreed. I've had tons of problems with Windows, and over the years I ended up using Zoom because it was rock solid compared to the competition[1]. The comparison seems unfair, perhaps too kind to windows[2].
[1]: This was several years ago, competition these days may be more on par with Zoom.
[2]: I imagine windows is becoming more stable these days. Again, my comment may be unfair to modern windows.
One reason it works so well is that it makes your computer wide open. If you dispense with security and privacy, of course everything just works that much more easily.
This is not a comparable example. It only relates to the possibility of conference content being intercepted. The linked article's concern is over every client machine which has the software installed being exploited.
This is not something possible (or at least anywhere near as likely) with browser-based conferencing. And Zoom really only seems to offer the browser as a last-ditch option; I'm certain the vast majority of users end up installing the software.
It's something which I think has received far too little attention amid the recent security focus on Zoom. How traffic is routed is a bit of a red herring if you ask me. The galling thing is how the web community put a lot of effort into figuring out a sensible security model for apps' access to webcams and microphones, and the first thing popular services do is lead people to completely circumvent that and grant permanent high-level privileges with seemingly little thought.
As someone who has recently adopted Zoom and who has to justify this decision quite a lot, my question is where are all the exploits? If any of this is easily exploitable there would be such a shitstorm about it. Considering the current usage everybody would know about it.
To me, these look like things that could be used for local escalation or MITM attacks. This is not good but frankly, for most of Zoom's use cases, it's not an issue. The only frightening thing is the turbojpeg.dll one. A PoC that leads to an RCE or even a crash would be devastating for Zoom, especially considering the number of edu setups that don't enforce passwords even now.
IDK, for me and the edu organization I'm responsible for, Zoom has been a great offering (especially considering the pricing they were able to offer by default for edu and after very little negotiation), but we are actively looking at Teams as a successor for the next semester. Zoom has had 3 killer features over Teams (virtual background, easy dial-in, no-effort guests) and all of them have gone away now with the recent Teams changes. If Teams finally gets consumer Skype calling figured out, Zoom will most likely be done in the edu field, because that's quite a big part of switching to Teams for an all-out integrated comms solution, especially since you can't use your Office 365 account for consumer Skype.
There are exploits though - for example, the lowest-barrier-to-exploit vulns, like "Zoom bombing", are being exploited quite often.
Others, like perhaps an RCE, are not being seen. This is for a lot of reasons.
* Many are being found by whitehats/researchers, so by the time they're made public an attacker is already playing catch-up - it can take days or weeks to build a good exploit chain, so starting from "A patch is out" or "The vuln is disclosed" is not encouraging.
* In general, exploitation of vulnerabilities is actually quite rare. Patching practices, mitigation strategies, etc, have radically improved over the last decade. It isn't that the attackers can't do it, but the majority of attacks will just phish you, install malware, and try to make money the simplest way possible.
Does that mean you accept the risk of running vulnerable software? These are not strong mitigating factors; they are mostly about risk profiling and motivation. So that decision is up to you.
The Jitsi alternative is open source, so it's much easier to take a look at its security. That link just proves that they're doing their best to fix security issues that the community finds. You can't prove the same with Zoom; it had to wait until its default of password-less rooms was exploited to the point of the media catching on for it to be changed.
And for a sister comment noting their usage of app analytics; all three services that they use are GDPR-compliant, and you can install a "libre" version via F-Droid for Android.
Very, very minor issues in comparison to Zoom's privacy & security issues.
I feel like this article is conflating things... it is absolutely possible to do SQL string concatenation safely. I've done it many times to work around aggressively-bad SQL APIs. Assuming SQL concatenation is automatically bad is the kind of thinking that makes me roll my eyes at security researchers.
> Building SQL queries from strings is possible safely, in the same way that C's memory model is safe.
But you still wouldn't say that every piece of software written in C is automatically insecure. You have to look into the actual case. Just grepping for SQL string concatenation or uses of sprintf doesn't say much.
The difference is that there are sometimes no real alternatives to doing memory management by hand - which means there are people who are really, really, really good at this.
With sql that's different, every sane sql client library supports some form of prepared statements out of the box and they are really simple to use. There is no reason whatsoever to do this by hand, except for people that don't know better.
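For comparison, this is about all a prepared/parameterized statement asks of you - a minimal sketch using Python's built-in sqlite3 module with a throwaway in-memory database and invented table names:

    import sqlite3

    conn = sqlite3.connect(":memory:")  # throwaway in-memory DB for the example
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

    user_input = "Robert'); DROP TABLE users;--"

    # The value is bound to the statement instead of being spliced into the SQL
    # text, so it can never be interpreted as SQL.
    conn.execute("INSERT INTO users (name, email) VALUES (?, ?)",
                 (user_input, "robert@example.com"))

    rows = conn.execute("SELECT name FROM users WHERE name = ?",
                        (user_input,)).fetchall()
    print(rows)  # [("Robert'); DROP TABLE users;--",)]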
I disagree that there is always an alternative to building strings for SQL statements. Yes, all client libraries support prepared statements. But none of them support placing prepared statement fields in any part of the SQL statement.
Prepared statements work fine as long as you just have simple SELECT statements with a few WHERE values as parameters. They completely fail if you need to do any advanced SQL with anything dynamic. Like optionally add more sophisticated calculated properties to the SELECT fields, conditionally JOIN in extra tables, use any of the DB engine's more advanced XML or JSON parsing features in a conditional way, support choosing <, > or =, etc.
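What can still be done in those cases, at least in my experience, is to whitelist the structural pieces and keep every value as a bound parameter - a rough sketch (all table, column and parameter names here are invented):

    # Whitelist the structural parts (operator, optional JOIN); keep every
    # user-supplied value as a bound parameter.
    ALLOWED_OPS = {"lt": "<", "gt": ">", "eq": "="}

    def build_user_query(op_key, value, include_orders):
        op = ALLOWED_OPS[op_key]           # raises KeyError on anything unexpected
        sql = "SELECT u.id, u.name FROM users u"
        if include_orders:                 # conditionally JOIN an extra table
            sql += " JOIN orders o ON o.user_id = u.id"
        sql += " WHERE u.age " + op + " ?" # the value itself stays a placeholder
        return sql, (value,)

    sql, params = build_user_query("gt", 30, include_orders=True)
    print(sql)     # SELECT u.id, u.name FROM users u JOIN orders o ... WHERE u.age > ?
    print(params)  # (30,)
    # cursor.execute(sql, params)  # hand both to any DB-API driver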
The real problem is doing ad hoc SQL string concatenation. In the situations you point out, one could (and should) write a simple formatting library that makes it easy to join string literals (the operators) with dynamic data, the same way one would write a basic prepare wrapper around a SQL driver that lacked prepared statements--a simple (and proper--not regex hack) format string parser that quotes and concatenates its arguments. Doing this ensures you'll end up structuring your code for easier auditing of code and data admixtures--there should literally be a single line in the entire code base calling the abominable "escape" routine for quoting and escaping special characters. Ironically, this is the type of thing that's really trivial in C because it's so simple to write a small state machine (while + switch loop, with a variable for escape state) for parsing the format string and building a new string character by character.
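Roughly the shape of what I mean, sketched in Python rather than C for brevity (the loop maps directly onto the while+switch version; the quoting rule shown is the standard double-the-single-quotes one, which you'd adapt to your dialect and driver):

    def quote_literal(value):
        """The single 'escape' entry point: numbers pass through,
        strings are quoted by doubling single quotes."""
        if isinstance(value, (int, float)):
            return str(value)
        return "'" + str(value).replace("'", "''") + "'"

    def sql_format(fmt, *args):
        """Tiny format-string parser: '%s' marks a value to be quoted,
        '%%' is a literal percent sign, everything else is copied verbatim."""
        out, it, i = [], iter(args), 0
        while i < len(fmt):
            ch = fmt[i]
            if ch == "%" and i + 1 < len(fmt):
                nxt = fmt[i + 1]
                if nxt == "s":
                    out.append(quote_literal(next(it)))
                    i += 2
                    continue
                if nxt == "%":
                    out.append("%")
                    i += 2
                    continue
            out.append(ch)
            i += 1
        return "".join(out)

    print(sql_format("SELECT * FROM t WHERE name = %s AND n > %s",
                     "O'Brien", 5))
    # SELECT * FROM t WHERE name = 'O''Brien' AND n > 5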
I'm not sure I would classify a SQL client library that doesn't support arrays as "sane". This may have been annoying in the past, but I would expect modern libraries to handle basic features like 'IN' automagically.
list = ['bar', 'baz', "Robert'); DROP TABLE Students;--"]
Foo.where(name: list)
# Foo Load (0.3ms)
# SELECT "foo".* FROM "foo"
# WHERE "foo"."name" IN (?, ?, ?)
# [["name", "bar"],
# ["name", "baz"],
# ["name", "Robert'); DROP TABLE Students;--"]]
For every person that gets SQL string concatenation right, 20 get it wrong. And there are simple and safe ways to write arbitrary SQL queries without using concatenation. Parameterized queries are available everywhere. There is a reason we tell everyone not to do it, and that reason is that it's dangerous and almost everyone screws it up. I have found your case to be the exception in 13 years as an infosec researcher and consultant.
I agree. SQL string concatenation that combines programmer-controlled strings with potentially-user-controlled numbers, for instance, is immune to SQL injection. And it’s a connection to a local SQLite database for session data and configuration info, so SQL injection wouldn’t really be able to do much (later the report says the database has history, logs, and “probably also sensitive data such as passwords” but doesn’t say what evidence there is that there’s sensitive data in the database).
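A concrete, made-up illustration of that first point: if the only dynamic piece is coerced to a number before concatenation, there is nothing left to inject (table and column names are invented):

    def history_query(conversation_id):
        # int() raises ValueError on anything that doesn't parse as a number,
        # so the concatenated value can never contain quotes or SQL keywords.
        return ("SELECT ts, body FROM messages WHERE conversation_id = "
                + str(int(conversation_id)))

    print(history_query(42))                    # fine
    print(history_query("7"))                   # fine, numeric string
    # history_query("7; DROP TABLE messages")   # raises ValueError instead of injecting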
The sprintf example acknowledges that it’s not clear if the strings involved are user-controlled. It’s definitely possible that the programmer knows all the strings that may be involved and can prove that buffer overflow is impossible.
There’s the complaint that Zoom crash reporting reads registry values that the application has permission to read. This is only problematic because it sends that information to Zoom’s crash reporting service; i.e., somebody other than the user. But if the program can’t write sensitive data to the registry, then there’s nothing sensitive to disclose.
I work at a bank, and we have strict rules about what data can be logged (and what data can never be logged), to avoid accidental information disclosure if somebody ever gets our logs (which, I assume, would be about as hard to do as getting somebody’s password database, which happens all the time).
So, while that’s not about remote code execution, I will agree that what is actually logged in this SQLite database could be important information. But I would expect a little more than “OMG! They’re logging to a SQLite database, and you could log bad things to a SQLite database!”
* Zoom uses SQLite to store message history and other remotely-controllable data
I've seen enough. I don't need to see the entire chain - which in this case would be that the message history queries specifically are built in an unsafe way - to know that Zoom is software that I absolutely should not trust, and that I do not want to have installed on any of my endpoints if I want the other data on them to remain secure.
Unfortunately, I do not have a choice. I am working with two telehealth providers who use Zoom. So the next best thing is to be as angry as possible about it. The hiring of Alex Stamos is a good move but we need to keep the pressure on.
Put another way, I work for an EU-based fintech. We have similar guidelines (plus a strict log rotation policy, partially due to GDPR but also for many other reasons), but I'm pretty sure our head of security would shit bricks if he found out that our logging framework had an RCE.
Any head of security would shit bricks if they discovered software being used across the company that had a remote code execution vulnerability, but the blog post doesn’t actually identify an RCE. It points to some code that hypothetically could be the basis for an RCE.
Zoom isn’t open source. The blog post doesn’t actually have the entire call chain for the potentially problematic code. It doesn’t determine if the strings have been sanitized before the string concatenation, or if they come from a known limited set, etc. It’s entirely possible that the concatenation is in the lowest levels of some enterprise-specific framework code that is guaranteed to only be called after relevant safety checks have been run.
We have a blog post that points to unconnected facts, says “I can imagine evil ways to connect these facts, but I’m not going to spend any more time determining if my concerns actually exist in the rest of the codebase.” We can decide how uncomfortable that makes each of us. I will say that SQLite is used in the most surprising places. I’m almost certain it’s used by whatever browser you’re using right now. If you refuse to use software that relies on SQLite until you can verify that every call site is safe, you’re going to lose a lot of sleep.
Of course, I’m influenced by my background. I still see stories about software installed without management knowledge, and while I remember working at places with lax enough policies that was possible, I haven’t had that kind of access on my work computers since before the Sony Pictures hack in 2014. I’m not in finance, but I’ve written software for people who are. In the US, financial compliance departments care about the software traders and financial advisers use for their jobs, and any software installed on the same computers (this was an issue at one of my jobs because our software generally wasn’t on the approved list, so our customers had to have it installed on a separate computer for unapproved software; and that was apparently common practice in the industry). I don’t lose any sleep when my kids use Zoom on their school-provided laptops, but I personally can’t use it for work purposes because my work laptop is locked down, and I expect it to be locked down.
Setting aside the security ramifications of getting SQL string concatenation wrong, your RDBMS query planner may be able to cache a parameterized query plan, whereas a query built by string concatenation may be seen as a new query to be compiled and optimized.
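The intuition in code (SQLite is used here only to make the snippet runnable; the plan-cache argument is about server RDBMSes such as SQL Server or Postgres, and the exact behaviour depends on the engine):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    cursor = conn.cursor()

    # Parameterized: the engine sees ONE statement text and can reuse a cached plan.
    for name in ("alice", "bob", "carol"):
        cursor.execute("SELECT id FROM users WHERE name = ?", (name,))

    # Concatenated: three distinct statement texts, each parsed and planned from
    # scratch (and, on SQL Server, each taking its own slot in the plan cache).
    for name in ("alice", "bob", "carol"):
        cursor.execute("SELECT id FROM users WHERE name = '" + name + "'")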
Maybe it's just Microsoft's SQL Server, but I've become extremely suspicious of this. I've run into far too many cases where, despite our DBA playing with cache-clearing strategies and statistics and the like, we found that queries would run fine in the unparameterized query console and would run with pathological performance in the actual application.
Every time it's "hey mr senior developer, I'm trying to reproduce that slow query but I can't, even though I'm doing the same thing as the application" and I'm like "you must be doing something wrong" and sure enough: no, it's the SQL server doing something wrong. And every time we found out "oh, running it un-parameterized makes it fast". So then I have to talk down the juniors who are like "well fine, I can fix it by running it unparameterized, let's just do that!".
So then we have to figure out how to trick it into generating a non-idiotic query plan without abandoning the security that parameterization provides.
... basically I'm deep into "the emperor has no clothes" on MS SQL server in general.
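For what it's worth, the usual suspect here is parameter sniffing, and the standard first things to try keep the parameters but change how the plan is chosen - a rough pyodbc sketch with a placeholder connection string and an invented table; no guarantee it fixes any particular slow query:

    import pyodbc

    # Placeholder connection string and table name; substitute your own.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"
    )
    cursor = conn.cursor()
    customer_id = 12345  # example value

    # Still parameterized, but OPTION (RECOMPILE) forces a plan for the actual
    # parameter values instead of reusing one sniffed from earlier, different values.
    cursor.execute(
        "SELECT * FROM orders WHERE customer_id = ? OPTION (RECOMPILE)",
        (customer_id,),
    )

    # Alternative hint: OPTION (OPTIMIZE FOR UNKNOWN) builds the plan from average
    # column statistics rather than the first parameter value it happened to see.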
This is like a window into my current problems. We've never used SQL Server, but a vendor for a client requires it. We are constantly running into this problem. Right now we have a query that takes 300ms when we back out the parameters, or ~12 minutes when we parameterize it.
It is extremely frustrating. Never ran into problems like this with Postgres or MySQL.
Security bloggers are all about attention - shiny thing. That's why there's a semi-annual story that gets some traction with a headline like "Experts from FooCorp warn that the honeymoon is over, viruses have arrived on MacOS!" Zoom is the new Apple for this shit.
Zoom has lots of issues, but the pile-on is just dumb. Everything has issues. Do you use a phone? Landlines are not encrypted, mobile is encrypted with defective security, and carriers have access to all metadata, all data, and provide CALEA access as a service.
So yeah, there's a risk that your sales call, or 5th grade english class, or other meeting could be compromised by some malicious third party. How does the probability/impact of that stack up against the probability that your salesman or 5th grader's grandmother will be infected and die from exposure?
I've worked and architected systems for people with very high confidentiality requirements. Guess what? They don't use internet-based remote collaboration systems to remotely collaborate. The only common system that comes to mind that you might be able to use in these scenarios is an on-premise version of Webex, assuming it still exists!
> Zoom has lots of issues, but the pile-on is just dumb. Everything has issues.
Why is reacting to known security vulnerabilities in the software you distribute so controversial? It's not rocket science to keep your openssl dependency up to date. I agree that there will always be unknown vulnerabilities that they introduce themselves, but if they knowingly ignore public vulnerabilities it's just negligence imho.
For the average user, a privacy/security concern is a possible/theoretical issue, while missing features are a very real, instantaneous issue.
Slack, Teams & co. flat-out ignored calls and screen sharing until now. They were sidelined, so-so features. Hell, on Slack you couldn't even see screen sharing from mobile, and you didn't even get a notification that another person was sharing. Never mind missing even the most basic annotation features.
No wonder Zoom eats all the pies now. Sure, they made a lot of mistakes, but it was a huge windfall for a small(ish) company. That said, if the others up their game, I'm happy to get rid of an additional app - but that's how the market works.
Just general video/audio performance too. Zoom is the only one I've used where everyone can have an open mic, have noise in the background (or even music), and the whole group can still have a good conversation.
This sort of dynamic is something I feel the Free Software/Open Source world often misses. Yes, security, openness and ideally freedom are desirable and in some cases essential, but if the software doesn't fundamentally work well at its primary task, for its intended audience, those benefits won't count for a lot.
I think Zoom certainly deserves the criticism that they're getting regarding security. But until the alternatives figure out things on the functionality front, I don't think that's a sufficient reason to kick Zoom to the curb.
I would say that the company I work for doesn't use Slack exactly for this reason. Screen sharing and group calls are not available in the free version, so the Slack trial didn't go beyond the first three users. Guess what, we're paying for Teams and Zoom now.
"Sure, they made a lot of mistakes" - is a strong understatement given how they have been treating the users. Just look at the history of posts about Zoom here.
Upd: These are not mistakes. They did all that intentionally, forgetting about security for the sake of convenience even when it went beyond reasonable, imho.
Upd2: Why downvotes? Some recent headlines from Hacker News:
- "Zoom rolled their own encryption scheme, transmit keys through servers in China"
They're succeeding because of their security issues.
Look at the problems they've had over the past month:
- "Zoombombing" is a thing because the default meeting configuration didn't require a password, making it easier for attendees—legitimate or not—to join.
- Issues with the installer were in part caused by shady workarounds they took to reduce friction during the installation.
And yet, they're doing just fine with their "UX first, security later" approach. So I guess I'm underscoring your point: we absolutely need more consumer protection.
Side-note: I wonder if Zoom is getting into trouble in Europe over any of this?
> Side-note: I wonder if Zoom is getting into trouble in Europe over any of this?
Some European organizations are banning it from business computers.
I only use it for semi-public discussions, and run it on a Windows 10 laptop that isn't used for anything else, so I'm not that bothered by the security issues.
They might, but it will be some time before privacy authorities build a case against Zoom and start fining them.
In the meanwhile, imho they are under so much pressure to behave that in the next few months might really make some progress in this field. I mean, right now Zoom is the most independently "audited" video conferencing app in the world and many newspapers and state attorneys are investigating [1].
"Zoombombing" is a thing because the ID space is ridiculously small. It's 8 digits! Just pump it up to UUID size and encode it as Crockford Base32 + checksum. Then you don't need a password because the ID space is too big to guess. We all learned this back in the days of link shortners, but Zoom somehow didn't?
The ID number can also be used by attendees calling in via phone, so it has to be short and numeric (back to the UX vs. security tension).
Another issue was that until recently the ID number was prominently displayed in the application window. Many people (including Boris Johnson) shared screenshots on social media with the ID included.
I think this isn’t quite the right take. Security’s priority has become quite a lot higher given we now have to run our lives, governments, militaries and entire companies online. A few months ago this wasn’t true, teleconferencing was a sideshow to the main event, with many interactions still happening in real life.
Now if I can’t have a private conversation online it’s much more important to me. A month or two ago it was important, but probably not as much as usability and reliability
I think it's becoming a pattern that people (users) don't care about what they can't see. And furthermore, people (users) overall don't seem to care about security until something goes wrong. Besides, a layman doesn't understand even the simplest terms in that article, so naturally how are they supposed to care?
Many traditional organisations (schools, enterprise, etc) that care for security, in my experience, did so for GDPR compliance and to the extent that they were compliant. A lot of businesses weren't even being malicious, just negligent.
So, on the whole, regular users don't seem to care, and since users don't hold businesses accountable for it, businesses don't really care.
NB: Users refers to any random person who isn't in the tech industry. Like, probably, your neighbour or the random person walking their dog.
Users didn't care in the past. The Yahoo toolbar lived merrily for years. Then came the hot business of "computer security tools". Nowadays, with everything (except Zoom) working in a browser, people have apparently forgotten that their computer still contains relevant data and that they still have to think about it (not only about the "hot" cloud). I think in the late PC era, Zoom would surely have been classified as malware by some of the more zealous programs out there, and whoops, there goes the APT-like user-friendliness...
We have been measuring and tracing Zoom traffic from the various client apps for the last couple of weeks.
One weird thing. It appears Zoom uses SCO Cloud [1] and HunTel Engineering Nebraska [2] to form sort of their own IXP? I have been a cloud architect for the last decade and haven't seen anything like this. The costs must be enormous if we are measuring correctly (no guarantee).
SCO Cloud though is quite the character. Apparently they are part of some group that has been trying to sue the Linux kernel for the last 20 years, until the case was put to rest in 2017 I think [3].
The SCO that birthed all the lawsuits is pretty long defunct. I doubt SCOcloud is at all related. Their site doesn't seem to match the MO of the old SCO.
As a quick note, no affiliation whatsoever on my part - I've had great success running online meetups on the LGPL project BigBlueButton. Hope it helps some members here with their pain point.
I did see the same things with the Linux binary. Especially funny to see these old (and unsupported) OpenSSL libraries. Isn't nearly every valley company today built around the assumption of having a development model which makes things like this highly unlikely?
Zoom also does weird OS-detection which seems to fail with Debian-testing, so I can't use screensharing on Wayland. Works fine with Jitsi Meet. They also don't support dual-screen mixed-scaling (4k laptop with 1080p external screen).
Feels like using Skype: worked for a while under Linux, then never got updated and became very unpredictable/clunky.
And the very funny thing is that they apparently don't do any compression at all on screen shares, so in one meeting I got a 2560x1440 feed (at 4 fps...). It was horribly slow on the VM I used (because the web interface was not working at that time). Also, I wonder what's stopping them (except for them wanting to get their APT on every machine) from giving you more than one stream in the browser...
> Especially funny to see these old (and unsupported) OpenSSL-libraries.
"Never upgrade a running system" is what I guess what was driving this... probably driven by "we don't have budget for maintenance/regression testing, only for new features".
Do you have a writeup of your findings? Not that the issues with the Windows version aren't enough, but as a user of the Linux version I would like to know what is wrong with it.
It was basically the same things as with the Windows version. I just did some very basic skimming through the binary with 'strings' and basically found the old OpenSSL statically linked (because you can't really expect a distro to have it there...) and some suspicious SQL statements. Additionally, someone decidedly non-native wrote a lot of the debug messages and even the startup script. And yes, the latter is interesting if the response to "you are an extended arm of the PLA!?!" is: "we are fully based in SV and employ American talent"...
Quite unfortunate that there is no alternative with the same great user experience as Zoom.
Linux, Windows, Mac: every device just works without a problem. Delay-free screen sharing (not like MS Teams/Webex), and even remote support is possible.
Fantastic product from a user perspective. I hope they will fix the issues, or that some other currently crappy solution will take over the user-experience-centric thinking.
[1] Just one example: https://github.com/jitsi/docker-jitsi-meet/blob/master/CHANG...