> This really feels like the beginning of the end for Mozilla, sadly.
I really feel like every time Mozilla announces something, someone gets paid to leave comments like this around. I've seen many "beginning of the end" comments like this, and so far, it hasn't happened.
What I do see is a lot of bashing, hypocrisy, and excuses for why it's OK that you don't personally try to do better...
Even as someone who is still a Firefox user: the browser now has about half the market share of Edge... Absolutely nobody needs to be paid to write this kind of comment!
Honestly, the last 5-10 years have been a disaster for Firefox...
Perhaps not paid, but even if it's natural (I myself have been known to make a disparaging remark in their direction), I still suspect some level of manipulation. Why was I saying those things? Out of frustration, or because I'd heard something worrying and negative news sticks better than positive?
Sure, Firefox has had some issues, and nobody is denying that market share is an issue, but:
1) It has worked reliably for the past 10 years
2) Mozilla and Firefox have not disappeared; in fact, Mozilla has created a number of useful services worth paying for.
Meanwhile, I keep hearing these negative "the world is ending" comments regarding what amounts to a "force for good" in this world, and I have to wonder.
How many of the people making these comments recently switched to Chrome and are saying this as an excuse?
The vast majority of the people complaining are using something like Brave or just plain Chrome.
They aren't expressing genuine criticisms for the most part.
Tons of them literally work at Google.
Like, there's a poster a couple of threads over insisting "Brave is great, you just have to ignore the crypto shit and change a bunch of settings," and yet somehow Brave doesn't get regular 600-post threads about how it's "dead," how "it's the end," and how "I have never used Firefox in my life but I certainly won't now!"
It's absurd.
"Mozilla's CEO makes $6 million" says people who get very angry if you suggest we should pay the managerial class less of the worlds money and also never seem to complain about any other CEO making that money and don't say anything about how much the CEO of Brave makes or how much money Google as a whole sucks out of reality to do whatever they want with, including subsidizing a browser to kill any competition.
Firefox got big because every young tech nerd installed it on everyone's machine. A few years later, Google literally paid tons of installers to bundle Chrome and make it the default browser. And everyone here keeps insisting that people who never chose to use Firefox, and didn't even notice they now use Chrome, are somehow going to pay real money for Firefox?
Meanwhile, Opera is showing that the market doesn't give a shit about any of this "privacy" nonsense, and that the important features are things like "you can install a theme your favorite YouTuber made for shits and giggles" and "advertising to children."
You want browser engine diversity? Guess what, that's Firefox right now. There is nothing else. That's why I use Firefox. There's nowhere else to go.
I think when there's overblown criticism (criticism that doesn't take the positive things into account and seeks only to paint in a negative light), that's pretty clearly FUD.
Moreover, when there's a pattern of it occurring, for at least the past 5 years...
I'm not saying Firefox is perfect, and I have issues with its project leadership too, but it's nowhere near as bad as the (most vocal) critics like to claim.
Not at all. If you want or need a feature, it's not some "either my browser supports it or my OS does" dichotomy.
As a couple of parent comments stated, there's no technical reason a browser has to have a transformer embedded in it. There might be a business reason like "we made a dumb choice and don't have the manpower to fix it," but I doubt that's something they will accept, at least with a mission statement like theirs.
I've never had much luck with it either. A lot of the same pain points as the author.
As for it being used everywhere: sure, we had some bad SOAP stuff used everywhere at one time too, and that wasn't a good thing...
Regarding secrets etc.: on the one hand, yeah, there's not much to worry about from people with access to your machine reading stuff on your machine; but it's kind of dumb to encrypt things and then leave them exposed over a public, unsecured protocol. The solution would be: for apps that need security, don't use D-Bus.
If the API were less loosey-goosey (self-documenting, not so open-ended, less awkward), I'd agree it's fine for applications at the same level of trust to all access the same thing.
The problem is more random scripts off the internet using browser APIs to read stuff out of local storage containers. Forcing local containers to explicitly allow such access (and, yes, using a non-D-Bus protocol) would be the preferred method, while not requiring overly complex local authentication schemes...
Otherwise, the main change I would have made would be to explicitly allow applications to access the bus, rather than any random app having access simply by virtue of running in memory...
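To make that concrete, here's a minimal sketch of what "explicitly allowing applications" could look like: a local socket service that only answers clients presenting a pre-registered token. Everything here (the names, the token scheme, the socket path) is a hypothetical illustration, not how D-Bus actually works.

```typescript
// Hypothetical sketch: a local service that only talks to explicitly
// registered applications, instead of anything that happens to be running.
import * as net from "node:net";

// Tokens handed out when an application is explicitly granted access.
const ALLOWED_TOKENS = new Set(["app-keyring-91bc", "app-browser-7f3a"]);

const server = net.createServer((conn) => {
  conn.once("data", (chunk) => {
    const token = chunk.toString("utf8").trim();
    if (!ALLOWED_TOKENS.has(token)) {
      conn.end("DENIED\n"); // unknown local apps get nothing
      return;
    }
    conn.end("OK: access granted\n");
  });
});

// A Unix domain socket keeps this strictly local to the machine.
server.listen("/tmp/example-local-bus.sock");
```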
> Be ready for a blog post in ten years how they broke apart the monolith into loosely coupled components because it was too difficult to ship things with a large team and actually have it land in production without getting reverted to an unrelated issue.
Some of their "solutions" I kind of wonder how they plan on resolving this, like the black box "magic" queue service they subbed back in, or the fault tolerance problem.
That said, I do think that if you have a monolith that just needs to scale (a single service that has to send to many places), they are possibly taking the correct approach. You can design your code and architecture so that you can deploy "services" separately, in a fault-tolerant manner, but out of a monorepo instead of many independent repos.
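As a rough sketch of that idea (the names are illustrative, not Twilio's actual design): one codebase, with the processor a given deployment runs selected at startup, so each can be deployed, scaled, and fail independently.

```typescript
// One repo, independently deployable "processors": which one a deployment
// runs is picked by an environment variable at startup.
type Message = { id: string; body: string };

const processors: Record<string, (msg: Message) => Promise<void>> = {
  sms: async (msg) => console.log(`sms -> ${msg.body}`),
  email: async (msg) => console.log(`email -> ${msg.body}`),
};

async function main() {
  const role = process.env.PROCESSOR_ROLE ?? "sms";
  const handle = processors[role];
  if (!handle) throw new Error(`unknown processor role: ${role}`);

  // Stand-in for a queue consumer loop.
  const demo: Message[] = [{ id: "1", body: "hello" }];
  for (const msg of demo) {
    try {
      await handle(msg);
    } catch (err) {
      // Fault tolerance: one bad message shouldn't take the processor down.
      console.error(`failed to process ${msg.id}`, err);
    }
  }
}

main();
```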
They don't have a monolith: they have a service that has a restricted domain of responsibility matched to the team that runs it.
There is nothing magic about their queue service, and it seems correctly tuned to the complexity they've got to cover: yes, just like most queue implementations, it will get different types of messages (events). If anything, their previous implementation was too complex, which caused lots of waste.
With hindsight, they should have evolved their original architecture into exactly what they pivoted to now: better fault tolerance in "processors" of different types.
I would hope that my general rule of "only solve exactly the problem you have in front of you" would have avoided the approach they took, but engineers love to abstract things away, introduce indirection layers, and add accidental complexity that way. And ofc, "microservices great, me want microservices" too :)
Again, I am not saying this as a slight: I believe many of us have learned the limits of microservices by, well, living through them :) And now we tune our abstraction layers differently.
> They don't have a monolith: they have a service that has a restricted domain of responsibility matched to the team that runs it.
Except, for lack of a better definition, that is a monolith.
And there's nothing wrong with one, if that's what you need.
> I would hope that my general rule of "only solve exactly the problem you have in front of you"
True. That was the issue with everyone jumping on the microservice train: most of it was about solving problems nobody had.
When you really need an independent service, go build an independent service. Call them micro if you like (again, there's no good definition of what "microservice" or "monolith" actually mean).
There are better definitions. A monolith would be a full product (since we are talking about Twilio, their main offering) in a single, tightly coupled codebase that passes data mostly by directly calling the appropriate code paths.
Service-oriented architecture is where you decouple parts of the full architecture into independent subsystems that communicate over opaque interfaces, without the ability to break those boundaries.
A microservices architecture is SOA taken in the direction of keeping each service as small as possible, which can be very small indeed!
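A tiny contrast sketch of those definitions (hypothetical names; the HTTP endpoint is just one example of an opaque interface):

```typescript
// Monolith style: "billing" is just another module, called directly,
// with nothing stopping callers from reaching into its internals.
function chargeCustomerDirect(customerId: string, cents: number): boolean {
  return cents > 0; // stand-in for real billing logic
}

// SOA style: callers see only an opaque interface; the implementation can
// live in a separate subsystem reached over HTTP, a queue, etc.
interface BillingService {
  charge(customerId: string, cents: number): Promise<boolean>;
}

class HttpBillingClient implements BillingService {
  constructor(private baseUrl: string) {}

  async charge(customerId: string, cents: number): Promise<boolean> {
    const res = await fetch(`${this.baseUrl}/charge`, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ customerId, cents }),
    });
    return res.ok;
  }
}
```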
I think the focus on the state machine may be the problem. I don't know much about Prolog, or why it doesn't enjoy more status in the programming world, but I suspect that while it is good at representing states, it is not very useful for writing programs...
Case in point: the pong programs. Looking at the implementation versus a <50-line JS implementation, this looks more like an assembly language for state, not necessarily something that makes state more visible or readily apparent...
Having a nice dialect for a (formally provable?) state machine is welcome, but I'm not convinced that founding the language on state machines is the correct approach versus merely using a fluent library, e.g. https://stately.ai/docs/xstate
I'm not saying I'm correct, but it would be interesting to hear more of the philosophy behind why Nova, versus just a simplistic implementation of some card game rules...
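For comparison, here's a minimal declarative state machine in plain TypeScript, roughly the fluent-library style I mean (not xstate's actual API, and the pong states are just illustrative):

```typescript
// A tiny table-driven state machine: states and events are plain data,
// so the whole thing stays visible without a dedicated language.
type PongState = "serving" | "rallying" | "scored";
type PongEvent = "SERVE" | "MISS" | "RESET";

const transitions: Record<PongState, Partial<Record<PongEvent, PongState>>> = {
  serving:  { SERVE: "rallying" },
  rallying: { MISS: "scored" },
  scored:   { RESET: "serving" },
};

function step(state: PongState, event: PongEvent): PongState {
  // Events with no defined transition leave the state unchanged.
  return transitions[state][event] ?? state;
}

let state: PongState = "serving";
state = step(state, "SERVE"); // "rallying"
state = step(state, "MISS");  // "scored"
```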
Right here. And I think you're not quite getting it if you have to refer to "go on the internet and tell lies"...
Sure, plenty of people might be on "social media" and have some idea that people fib, but they aren't necessarily "surfing the internet" in general.
To them, saying "the internet tells lies" is comparable to saying "well, sometimes at the grocery store you buy poison instead of food." Yes, it can happen, but they aren't expecting to need a mass spectrometer and a full lab team to test for food safety... you know, to separate the snake-oil grocers from the "good" food vendors.
The danger is that the people most likely to try to use it are the people most likely to misunderstand or anthropomorphize it, without having the requisite technical background.
I.e. this is just not safe, period.
"I stuck it outside the sandbox because it told me how, and it murdered my dog!"
It seems like a somewhat inevitable result of trying to misapply this particular control to it...
> The bones he's picking are with the entire field of cryptography
Isn't that how you advance a field, though?
It has been a couple hundred years, but we used to think that disease was primarily caused by "bad humors".
Fields can and do advance. I'm not versed enough to say whether his criticisms are legitimate, but to me this doesn't sound like a problem so much as part of the process (and his article documents how some bureaucrats/illegitimate interests are blocking that advancement).
The "area administrator" being unable or unwilling to do basic math is worrying, and it undermines the idea that the standards being produced are worth anything, which is bad for the entire field.
If the standards are chock full of nonsense, then how does that reflect upon the field?
The standards people have problems with weren't run as open processes the way AES, SHA3, and MLKEM were. As for the rest of it: I don't know what to tell you. Sounds like a compelling argument if you think Daniel Bernstein is literally the most competent living cryptographer, or, alternately, if Bernstein and Schneier are the only cryptographers one can name.
In exactly what sense? Who is the "old guard" you're thinking of here? Peter Schwabe got his doctorate 16 years after Bernstein. Peikert got his 10 years after.
The people automatically updating and getting hit with the supply-chain attack aren't the ones scanning the code anyway, so I don't think this will impact them much.
If, instead, updates were explicitly put on cooldowns, with the option of manually updating sooner, there would be more eyeballs, not fewer, since people would be more likely to investigate patch notes and possibly even test in isolation...
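As a sketch of what a cooldown could look like in practice (the package, version, and threshold below are just examples): check how recently the candidate version was published, using the public npm registry metadata, before accepting it.

```typescript
// Skip any release younger than the cooldown window; anything older is
// still updated manually, with time for eyeballs in between.
const COOLDOWN_DAYS = 14;

async function isPastCooldown(pkg: string, version: string): Promise<boolean> {
  const res = await fetch(`https://registry.npmjs.org/${pkg}`);
  const meta = await res.json();
  const publishedAt = new Date(meta.time?.[version]);
  if (Number.isNaN(publishedAt.getTime())) return false; // unknown version
  const ageDays = (Date.now() - publishedAt.getTime()) / 86_400_000;
  return ageDays >= COOLDOWN_DAYS;
}

isPastCooldown("left-pad", "1.3.0").then((ok) =>
  console.log(ok ? "past cooldown: review notes, then update" : "too new: wait"),
);
```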