The virtual DOM pattern has some parallels with functional programming. At first glance, functional programming looks like it sacrifices the performance you gain from mutation -- which is true to some extent. But when you look deeper, you'll find that the data structures of functional languages (persistent structures built on structural sharing) have been adapted to work very efficiently with functional programming patterns.
In the same way, when you look closer at the virtual DOM model, you'll find that its usage can be optimized very well. For example, when you use React with immutable data structures (e.g., Immutable.js), equality checks can short-circuit on reference identity, so comparisons that would otherwise be deep become very cheap; that lets React reduce re-renders to the minimum subtree required and issue the DOM updates for a given change as a batch.
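To make that concrete, here's a rough sketch of what structural sharing buys you, assuming Immutable.js: untouched branches keep their object identity, so "did this subtree change?" collapses into a cheap reference check.

    var Immutable = require('immutable');

    var state = Immutable.fromJS({
      header: { title: 'Dashboard' },
      rows: [{ id: 1, value: 'a' }, { id: 2, value: 'b' }]
    });

    // Change one nested value; only the nodes on that path are recreated.
    var next = state.setIn(['rows', 1, 'value'], 'c');

    next.get('header') === state.get('header');            // true  -> skip that subtree
    next.get('rows') === state.get('rows');                // false -> descend into it
    next.getIn(['rows', 0]) === state.getIn(['rows', 0]);  // true  -> skip this row
    next.getIn(['rows', 1]) === state.getIn(['rows', 1]);  // false -> re-render this row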
With React, you often end up with code that is even faster than manual DOM manipulation. Of course, performance varies with how you implement things; with either methodology you can end up with a faster or a slower product depending on how you program.
But at the end of the day, the performance gained from state mutation just doesn't seem worth the cost: code that is significantly harder to reason about, and a project whose complexity grows much faster with each line of logic as it scales.
And my feeling is that, once you factor in the time spent managing the extra complexity that mutation brings, the "performance per engineering hour" of a virtual DOM approach comes out ahead for many projects.
I think the main problem with your argument is that you assume that the world looks like a shallow tree. For UIs this may be, to a great extent, true. But for non-frontend code, the world looks more like a deep DAG (directed acyclic graph).
Also, suppose I have a list of thousands of elements. Now suppose one element is added to the list. React will still perform a comparison operation on all of those thousands of elements.
That's going to be a problem regardless. You shouldn't have so many elements on one page (can a user even parse through so many at once?). Use pagination or occlusion culling to show a few at a time instead.
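The occlusion-culling version is only a handful of lines of React. This is just a sketch (fixed row height, hypothetical VisibleList component and item shape), but it shows the idea: only the rows currently in the viewport ever reach the virtual DOM.

    var React = require('react');

    var ROW_HEIGHT = 24;  // assumed fixed row height, in px

    var VisibleList = React.createClass({
      getInitialState: function () {
        return { scrollTop: 0 };
      },
      onScroll: function (e) {
        this.setState({ scrollTop: e.target.scrollTop });
      },
      render: function () {
        var height = this.props.height;  // viewport height in px
        var start = Math.floor(this.state.scrollTop / ROW_HEIGHT);
        var count = Math.ceil(height / ROW_HEIGHT) + 1;
        var visible = this.props.items.slice(start, start + count);

        return (
          <div style={{ height: height, overflowY: 'auto' }} onScroll={this.onScroll}>
            {/* spacer keeps the scrollbar sized for the full list */}
            <div style={{ height: this.props.items.length * ROW_HEIGHT }}>
              <div style={{ transform: 'translateY(' + (start * ROW_HEIGHT) + 'px)' }}>
                {visible.map(function (item) {
                  return <div key={item.id} style={{ height: ROW_HEIGHT }}>{item.label}</div>;
                })}
              </div>
            </div>
          </div>
        );
      }
    });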
Premature optimization is the root of all evil. Using immutable objects means that the overhead of checking a component whose rendered output hasn't changed is a couple of function calls plus an object identity check. Modern JS engines can do that very, very fast.
And, if you find that it's still too slow, there are other strategies you can employ to fix it without having to turn your whole application into a stateful soup. Odds are, though, that this is not going to be your bottleneck.
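In code, that "couple of function calls plus an identity check" is roughly this (a sketch, assuming the row data is an Immutable.js value that gets replaced rather than mutated):

    var React = require('react');

    var Row = React.createClass({
      shouldComponentUpdate: function (nextProps) {
        // With immutable data, "nothing I render has changed" reduces to
        // a reference check on the props this component actually uses.
        return nextProps.row !== this.props.row;
      },
      render: function () {
        return <tr><td>{this.props.row.get('value')}</td></tr>;
      }
    });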
Eh, kind of. If you're rendering, for example, a Table with a lot of TableRows and change one of the values in the data array being passed to Table.props, you'd return true from Table.shouldComponentUpdate, and then each child TableRow would need to run its shouldComponentUpdate, even though only a single one really needs to update. So the argument GP is making is that it's more efficient to directly update that single DOM element rather than update the data and then perform the calculations to determine that we need to update that single DOM element.
True in theory, but as long as each TableRow implements the PureRenderMixin and the Table render itself is efficient, you're going to need a lot more than a thousand rows before React has any trouble hitting 60fps in my experience.
But if you can't meet both those conditions, that sort of thing definitely can get quite slow.
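For reference, meeting the first condition is basically one line per row component -- a sketch, assuming react-addons-pure-render-mixin and row objects that are replaced (not mutated) when they change:

    var React = require('react');
    var PureRenderMixin = require('react-addons-pure-render-mixin');

    var TableRow = React.createClass({
      mixins: [PureRenderMixin],  // shallow-compares props/state; unchanged rows bail out early
      render: function () {
        return <tr><td>{this.props.row.name}</td></tr>;
      }
    });

    var Table = React.createClass({
      render: function () {
        return (
          <table>
            <tbody>
              {this.props.rows.map(function (row) {
                return <TableRow key={row.id} row={row} />;
              })}
            </tbody>
          </table>
        );
      }
    });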
Are modern CPUs really going to choke on a few thousand comparisons?
My impression has always been that, for any constant-time operation, you're gonna need to get into the 100Ks or even the millions before you start noticing, but I don't have the data to back me up here :/
React is okay for normal form-based stuff, but it breaks down very quickly when you need frequent renders (less than ~200ms apart).
I like the abstraction: it's super easy, has a small API surface, and most of the time it's "fast enough". But it's no panacea. Often I have to throw D3 in to get "realtime" stuff done without blowing up the browser.
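The escape hatch usually looks something like this: let React own the container, hand the DOM node to D3 for the high-frequency updates, and tell React to never diff that subtree again. Rough sketch only -- it assumes a d3 v3-style API and a hypothetical getLatestPoints data source:

    var React = require('react');
    var d3 = require('d3');

    var Sparkline = React.createClass({
      componentDidMount: function () {
        this.svg = d3.select(this.refs.root)
          .append('svg')
          .attr('width', 200)
          .attr('height', 40);
        this.timer = setInterval(this.tick, 50);  // ~20 updates/sec, no React re-renders
      },
      componentWillUnmount: function () {
        clearInterval(this.timer);
      },
      tick: function () {
        var points = this.props.getLatestPoints();  // hypothetical data source
        var circles = this.svg.selectAll('circle').data(points);
        circles.enter().append('circle').attr('r', 2);
        circles.attr('cx', function (d) { return d.x; })
               .attr('cy', function (d) { return d.y; });
        circles.exit().remove();
      },
      shouldComponentUpdate: function () {
        return false;  // React mounts the container once and never touches it again
      },
      render: function () {
        return <div ref="root" />;
      }
    });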