> React's biggest lasting contributions. Describing UI in a declarative way
I feel that AngularJS (v1) did that years before React, and maybe even other MVC frameworks before that.
I can't think of anything React brought in that earlier frameworks didn't already have in one form or another. Maybe hooks? Arguably the worst and least performant part of React.
It only popularized "HTML-in-JS" but that has always been possible via strings or file imports. Even template pre-compilation was part of Backbone or Ember (I don't remember which)
> I feel that AngularJS (v1) did that years before React, and maybe even other MVC frameworks before that.
Angular markup lives in templates which are used to generate and update the UI, while React markup lives in expressions which evaluate to values. This is a subtle but key difference. I would argue React isn't even an MVC framework (though it sorta pretended to be one for a while)
> I can't think of anything React brought in that earlier frameworks didn't already have in one form or another. Maybe hooks? Arguably the worst and least performant part of React.
The virtual DOM existed only in a library or two that you had to build around before React made it into a usable framework. The whole idea of "automatically figure out minimal DOM changes", especially things like reusing existing <input> elements so user input doesn't vanish and the element doesn't lose focus, was pretty much brand-new when it came out.
Best team I've ever worked with. 5k was on the larger side, sure, but technically difficult tasks often just can't be split into a series of smaller pull requests.
I am always amazed that many people with significant experience are so resistant to this idea. For what it's worth, my experience matches yours entirely: many significant changes can't be meaningfully committed in small chunks; they only make sense as one all-in-one change.
And even more so, I've often seen the opposite problem: people committing small chunks of a big feature that individually look good, but end up being a huge mess when the whole feature is available. I hate seeing PRs that add a field or method here and there (backwards compatible!) without actually using them for now, only to later find out that they've dispersed state for what should have been one operation over 5 different objects or something.
What changes can't be committed in <5k LOC? That's a shit ton of code. If you can't break that down into smaller shippable chunks there's probably something wrong, or you're building something extraordinarily complex.
It's definitely overall quicker to ship like this, but there are tradeoffs. You are effectively working independently from the rest of your team: there is no context sharing, and everything is delivered at once after a longer period of time.
We're performing atomistic simulations. The first version of the code stored each atom on the heap and had a vector full of pointers to the individual atoms. Obviously this would obliterate the cache, so I crafted a PR to simply store all the atoms in a single vector. On its own, that was a one-line change, but it was also a very fundamental change to the type system. Everything as simple as
Atom* linker = atoms[index];
linker->x += 1.57;
Suddenly had to be
Atom& linker = atoms[index];
linker.x += 1.57;
If I didn't make those corresponding changes, the code wouldn't type check and the build would fail. I think the final PR came out to about 17 kLOC.
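For concreteness, a minimal sketch of the before/after (assuming Atom is a plain struct with an x coordinate, as in the snippet above; everything else here is illustrative):

    #include <cstddef>
    #include <vector>

    struct Atom { double x, y, z; };  // illustrative: a plain value type

    // Before: each atom lived on the heap and the container held pointers,
    //   std::vector<Atom*> atoms;
    // After: atoms are stored contiguously by value, so traversal stays in cache.
    std::vector<Atom> atoms;

    void nudge(std::size_t index) {
        Atom& linker = atoms[index];  // reference instead of pointer
        linker.x += 1.57;             // '.' instead of '->' at every access site
    }

The container declaration is the one-line change; the -> to . churn at every access site is what balloons the diff to 17 kLOC.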
Obviously if you make a change to something like your type system it's going to generate a very large diff, but you also aren't going to review the full diff.
You're just going to make the change with find+replace or some other automation, then write in the PR description "I made this change to the type system". No one is actually reviewing 17k LOC.
Actually the example is pretty good for the kind of problem I'm talking about.
Let's imagine that instead of optimizing the pointers into in-place structs, we were taking the already-optimized program and adding support for dynamically allocated atoms, because of some new feature for dynamically adding/removing atoms.
We could of course split the value->pointer 17k-line change out into its own PR. But that PR is only a pessimization of the code. On its own, it makes no sense and should be rejected. It only makes sense if I know it will be followed by the other feature, and even then, I would have to see the specific changes being made to know if this pessimization is worth it.
And if it got committed to the main branch in preparation for other PRs that depend on it, the main branch would no longer be in a releasable state, since it would be crazy to release with a performance penalty and no feature gain.
So, the right way to push this is as a single PR with a 17k-changed-LoC commit + the commits that actually use it. Of course, people would only manually review the other changes, but that's easy to do even if it's all in a single PR. And anyone looking back at history would clearly see that the pessimization was only a part of the dynamically-allocated atom feature, not a crazy change that someone did.
You specifically mention "big feature" in your original comment, so it's confusing that you'd call this a good example of the kind of problem you're talking about.
This is a very different situation from large PRs containing feature code. I think most people would agree that one large PR is the correct approach in this kind of case.
Usually new features require modifications to old code, at least in my experience. If I came across as claiming it's likely for a feature to require 5k new lines of code, then I clearly communicated badly. But a feature coming with 5k lines of modified code, while rare, still happens several times a year in a large project.
This isn't what the OP is talking about. They're talking about a net-new feature that would span 5k lines. Your change is trivial compared to that, and frankly would earn approvals immediately without much thought (assuming the changes were already planned and talked about)
A new feature can easily involve the kind of modifications they're describing. It's pretty rare for a new feature to consist exclusively of new code, in my experience. And when it needs changes in a deep part of the stack, it can easily spiral into small modifications across thousands of lines of code.
In that case I also see "large" PRs, but I point the reviewer to "file X and then 4999 copies of file Y." When doing large behavior-preserving refactors, I submit standalone PRs and merge them quickly, because no one will review 1k identical changed lines (e.g. if I change the location of /lib/ imported 600 times in the project)
What if the refactoring makes the code harder to understand when looked at in isolation, but is necessary for the rest of the feature to work? Why submit it in a separate PR without the context of why it's necessary?
I've experienced this when changing the API of a primary memory allocator in a frequently updated codebase. Each call site that had to be updated was perilous, and it needed to be changed in bulk to avoid an endless war of attrition.
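To make the shape of that concrete, here's a hypothetical sketch of the kind of signature change I mean (the ArenaV1/ArenaV2 names and the added alignment parameter are invented for illustration; they're not the real API):

    #include <cstddef>
    #include <new>

    // Hypothetical "old" API: callers pass only a size.
    struct ArenaV1 {
        void* alloc(std::size_t size) { return ::operator new(size); }
    };

    // Hypothetical "new" API: callers must also pass an alignment, so every
    // call site in the codebase has to be updated in lock-step.
    struct ArenaV2 {
        void* alloc(std::size_t size, std::size_t align) {
            return ::operator new(size, std::align_val_t(align));
        }
    };

    // Old call site:  void* p = arena.alloc(sizeof(Foo));
    // New call site:  void* p = arena.alloc(sizeof(Foo), alignof(Foo));

Leaving half the call sites on the old signature isn't an option, which is why the change has to land in bulk.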
We have a small feature that was put together in a rush. It "works", but only correctly for some simple cases; otherwise there are bugs everywhere. There were very few tests. We must redo the whole thing, update the UX, and add tests. Tell me how we can achieve that without replacing all the existing code and adding new tests that cover every use case in the same change.
Why do you have to do it all at once? You can't improve the codebase piece by piece?
Shipping all the changes in a monster PR is usually not a good option. One big reason why is that you do not create any value until the whole thing ships. If you ship piece by piece you can create small amounts of value every time you ship.
Also, if it's a "small feature", why is it 5k+ LOC?
As a team, we sometimes decided to PR into a PR branch: not as meticulous as a PR into develop, but there'd still be eyes on the code entering the branch. Especially useful when there are dependencies and/or different disciplines contributing to the feature.
Seems logical, but as a user I can already choose the size of the content on the screen; I don't want you to choose it for me. So we all agreed that "16px text" is OK, and we started designing the world around that.
As a practical example: my iPhone lets me choose between "native" and "scaled up" sizing. This affects the whole OS, including the browser. Where does your "cm" measurement fit in that? Do you want to override my choice as the user?
Additionally, every browser lets me scale the content up and down between 50% and 200%, so "cm" is broken once more.
"pixels" haven't been "pixels" for a very long time, and that's ok, they're now just a logical unit.
I might be wrong, but they're more like a browser’s "service worker" than a Node app.[0] The fact that they have to manually add "compatibility" with Node modules like `utils`[1] seems to support this.
That's exactly right, and they even use the same API[1] for accepting and replying to HTTP requests, as well as supporting other service worker APIs like caches.
People have been talking about this for decades, but it doesn't seem to be very popular now, especially since MOSE appears to be working at the moment.
Or maybe they understood the joke but weren't amused, and prefer not to see unfunny jokes on this site - people may have different opinions from yours without the cause being that they didn't understand the joke :)
Sure. But my tinfoil hat is at the dry cleaner's, so I will let you use your imagination to come up with a scenario where ill will toward Mozilla benefits someone.
Funny that it actually happens in a lot of locations with tourists. Also, Chinese restaurants in the US often have a "real Chinese" menu and a regular menu in English.
But what you’re arguing doesn’t make sense. As a restaurant owner, I can change online menu prices on the fly. 4th of July? Don’t mind me if I +20% that bad boy.