I agree. I'm just as skeptical as the next person about PR corporate-speak, but there's a clear action plan here that addresses the concerns. At least give them a chance to make good on their claims, right?
We've become such a cynical culture: once pitchforks are brandished they appear to be unsheathable; rather, they grow sharper and wilder as the chorus of internet echoes grows. While there is value in discouraging certain behaviors (and thus "making an example"), and in encouraging new ones (some level of noise was needed to warrant a response at all), surely there must be a path to redemption, an acknowledgement of misaligned incentives in lieu of the demonization of individuals. There are too many true enemies in the world (most of them processes, not people) to harbor such disgruntlement against a platform that is, by at least basic measures, doing good (providing a service that is obviously loved and used by many, and jobs and economic well-being to those who make and support it).
I personally have been wary of Zoom since the first reports of their apps doing shady things (and the forcefulness with which they attempt to make you use it), and I don't find it provides any value over a multitude of other services. But this wariness warrants a mere "meh, I'll go elsewhere," not a crucifixion. Yet as the frothiness grows around this current story (and it's really not much of a story), it becomes more difficult to see just what sort of blood sacrifice will satiate the mob.
I think that's unfortunately a climate that arose as a reaction to the sustained appalling behavior by FAANG and wannabe-FAANG cowboys.
When you have a handful of tech companies who systematically, unashamedly and deliberately abuse people's trust and privacy, it ruins the landscape for everyone. As it stands today, our trust has been betrayed so many times that a default assumption that the other actor is malicious and will do the wrong thing is almost always correct.
It's a shame, but this seems to be the case for every industry that is consumed by greed, and, almost as a rule, every successful industry will eventually be consumed by greed. It's a local minimum that our society in its current form cannot seem to avoid.
Well, again, I think these are misattributions. You're using language like "trust," "unashamedly," and "malicious" in circumstances where "behavior," "incentives," and "value" are more fitting. This framing is why the pitchforks get wielded with such ferocity: demonization and victimization in situations that require more careful scrutiny and nuance.
> It's a local minimum that our society in its current form cannot seem to avoid.
It can't avoid it because it is the result of incentives, not specific players. This is why it's so ineffective to brandish hostility towards individuals or even companies: others, up to and including yourself, would do the same things if put in the same positions, because that is what would be best for you (and there are plenty of rationalizations you can come up with to show why it's net good).
To eradicate this kind of thing we cannot be relegated to impotent rage, pointing mob-issued pitchforks at industries or companies or the individuals who operate them, or play the victim card and blame the industry for "betraying our trust." The "why" here is not an "industry consumed with greed"; it is a fundamental result of technological breakthroughs in an economic environment such as ours. When you have bad incentives you will continually get bad actors, and playing whack-a-mole with them will be a fruitless exercise. As a society, we have to address the systems, not the players, if we are to have any hope of true reform.
I agree that this is a systemic issue, and as long as the system stays the same the outcome is unlikely to change, hence why I wrote "it's a local minimum that our society in its current form cannot seem to avoid".
So a change in the system is required to fix the root cause, but I think some amount of pitchfork waving and torch igniting is not out of order.
"Everyone would do it with these incentives" is not an excuse for this behavior. And I say that fully admitting that I'm part of the system (although in a different industry), and yes, of course I'd do (and am doing) the same given the choice.
Perhaps the pitchforks show the individual the error of their ways, and society will change once enough individuals decide to make the change?
Maybe. I do agree it's warranted to call out bad behavior and impose some penalty on it as a deterrent. After all, this "feedback" has at least elicited a response here. But I worry that:
A) It misses the bigger picture and doesn't address the underlying cause, so the same thing will continue to happen over and over.
and
B) It feeds an outrage culture that permeates far too much of our online conversation, conversation that requires more nuance and careful dissection. Paired with the first, this leads to division and us-vs-them mentalities when, more than ever, "we" mentalities are needed.
Perhaps I'm overly sensitive to that second one because I'm more focused on systems, and because outrage culture, which is itself born of other misaligned incentives, is a large problem underlying other issues that I've tended to notice more in my circumstantial isolation. So in a sense I'm being hypocritical when I plead for a more measured and forgiving response, since I'm addressing individuals rather than the systems that caused them to react this way. Mostly I'm just thinking out loud, as are we all (we're thinking at each other rather than with each other, another issue, related to outrage culture).
Anyway, as I'm in danger of rambling incoherently here, I'll circle back to say I cautiously agree with you that some measure of pitchfork waving is warranted. But when the CEO makes a response like this, on its face an earnest effort to right past wrongs, can't we at least give them a chance to do so? Otherwise the pitchforks lose their meaning; it begins to look like they were out just to be out, and that we're more concerned with persecution than with actual redemption or resolution.
Well, yes, but now you're saying our trust is limited because other companies behave badly.
But we have a pretty good reason, beyond that, not to trust Zoom: THEY clearly never gave a shit about security, given that in a few weeks of people taking a slightly deeper look, pretty much every possible leak, bug, and problem you can imagine has cropped up.
How long did it take Microsoft to go from "no-shits-are-given-security" to "security-is-core"? 2 decades, something like that?
So yeah, sure, 90 days, that's... well, maybe the beginnings of a start.
Your entire reply feels overly cynical. No, nobody likes it when companies invade your privacy, a la F & G.
However, saying "it took another megacorp twenty years to fix their privacy problems, so we shouldn't trust $relativelysmallcompany for the next two decades" is not fair to $relativelysmallcompany and doesn't even consider the cultural change it probably took to get $megacorp to actually care about security and user privacy.
Personally, I don't really like Zoom. I use it, and it works okay, but there are a lot of little nitpicks I would like to see addressed: for instance, it'd be really nice to be able to adjust individual members' volume levels, or to mute them outright as a participant, instead of listening to a compressor that needs a new bearing in the background for an entire meeting because they're not using push-to-talk and the host just downloaded the client yesterday. I'm also more than willing to give a company time to fix underlying architecture problems and not demand fixes in the meantime.
I would want to be given the benefit of time to fix problems, wouldn't you?
It is not possible to extrapolate with certainty the quality of code from the bugs that remain in it when released.
“There’s a misspelling in the HTTP headers spec, so obviously this was written by amateurs.”
“Browser X has an RCE, so obviously they don’t care about security.”
These are obviously faulty lines of reasoning when stated about other scenarios, and the same applies here.
Has Zoom been found to have the same specific technical issue reoccurring over multiple releases, to the tune of “buffer overflow” or similar? If so, then that’s a trend to throw up warning flags about.
A series of bugs that share no commonality other than being bugs is, perhaps, not so much.
> At least give them a chance to make good on their claims, right?
I’d be much more willing to cut them some slack if this were their first offense, or if their previous mea culpas had shown evidence of change. Today, it feels like “This time for sure!”
Two things: first, Zoom has established a clear pattern of malfeasance that can no longer be forgiven by writing it off as incompetence. Second, the stability of the world now depends on tools like Zoom, and their bush-league, petty-criminal nonsense cannot be tolerated in a time of planetary emergency. Blood sacrifice is completely warranted here.
Did they actually establish a clear pattern of malfeasance? To me, a DevOps dude, it seems like they simply did not pay attention to security and privacy and focused on making features as easy to use as possible. Many security experts will tell you that security is all about trade-offs between convenience and privacy. Zoom went to market focused almost entirely on convenience, and now that the whole world is using the platform, all eyes are on their products.
It doesn't seem like anything they did warrants "blood sacrifice", and it doesn't seem like anything they did amounts to criminal negligence. Security incompetence? Yes, certainly, but let's be real: 99% of companies would fail under the same security scrutiny if they suddenly had 190 million more users on the same product overnight. Aren't you at least glad their CEO cares enough to address these issues? It's not like Facebook drastically addressed their users' privacy issues, even with intense scrutiny over the last few years. The situation with Zoom could be much, much worse. They definitely have more work to do, and we should keep holding them accountable, but since I am forced to use Zoom for work, I'm glad they're even pretending to take these issues seriously.
While I agree that we can't continually forgive them for intentional behavior, I think you need to at least acknowledge that what they were trying to do, albeit poorly, was to limit the interaction needed from the end-user so their product is seamless when someone clicks a link. To go from there to being a tool of worldwide necessity and critique in a matter of weeks is unfair, in my opinion. It wasn't incompetence so much as it was something that worked and wasn't a big issue when they were just a small link in a chain. It's important now, but that doesn't mean the response should be blood sacrifice. That's just as bad as the knee-jerk cancel culture that's everywhere.