
My first thought when looking at the benchmarks is that I find it strange that Backbone is faster than React. Not that I imagine Backbone to be slow, particularly, just that this article is about one of React's key features - the virtual DOM - and that's something which Backbone doesn't have. I'd expect to see React up there with Om, Mercury & Elm.

I've just had a look at the code: React is using a localStorage backend, while Backbone is using its models + collections with a localStorage extension... so I'd expect there to be at least some overhead there, but apparently not.

Does anyone have any quick thoughts on what might be happening here? I can't shake the feeling that these benchmarks might not be terribly useful.



> I'd expect to see React up there with Om, Mercury & Elm.

Not by default, because it's missing laziness and immutability: since just about everything in JavaScript is mutable, React can't prune the changeset, as developers could be modifying the application state behind its back in unknown ways (or worse, could be using state that isn't under React's control at all: neither props nor state, but globals and the like).

That is, for safety/correctness reasons the default `shouldComponentUpdate` is `function () { return true; }`. The result is that React has to re-render the whole virtual DOM tree (by default), and the only possible gain comes from diffing the old and new virtual trees before touching the real one.

Because Clojurescript and Elm are based on immutable structures they can take the opposite default (and if you're going behind their back and mutating stuff it's your problem).

Also, I'm not sure React defers rendering until animationFrame.

An optimized React application (or at least one which uses PureRenderMixin[0]) ought to have performance closer to the others' (Om is a bunch of abstractions over React, after all, so you should be able to get at least as good performance in pure React as you get in Om).

[0] http://facebook.github.io/react/docs/pure-render-mixin.html
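For illustration, a minimal sketch of that opt-in, assuming the react-with-addons build of this React era (the `Row` component is invented):

    // PureRenderMixin implements shouldComponentUpdate as a shallow
    // comparison of props and state, so unchanged subtrees are skipped.
    var Row = React.createClass({
      mixins: [React.addons.PureRenderMixin],
      render: function () {
        // Re-renders only when this.props.text changes (shallowly).
        return React.DOM.div(null, this.props.text);
      }
    });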


Without looking at the benchmark code:

* Development React is slower than production React. There are a bunch of extra checks all over the place along with a profiler [1].

[1] http://facebook.github.io/react/docs/perf.html

* Speed isn't the top priority of the framework, predictability is. There's a virtual event infrastructure and other browser normalization work going on. Om is using React under the hood and more or less represents the best case scenario.

* React isn't magically fast. The diff process still has to visit all the nodes in the virtual DOM, generate the edit list, and apply it, which is a substantial amount of overhead. I'm used to seeing React even or behind when benchmarked with a small number of nodes. The explanation I've seen is that these benchmarks aren't considered important, since the goal isn't to be as fast as possible but rather to never take longer than 16ms.

The trick behind most of the "React is fast" articles is that React is O(N_dom) instead of O(N_model) so if you can shrink the size of the output DOM, React goes faster. The Om line demonstrates this and doing screen-sized render over a sliding window of data in a huge data set (most grid/scrolling list demos) is another common example. There are perf knobs that probably aren't being turned here but if the app renders fast enough why would you waste your time turning them?
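To make the O(N_dom) point concrete, here's a hypothetical sliding-window sketch (ROW_HEIGHT, VISIBLE_ROWS, and the Grid component are assumptions for illustration, not from the benchmark):

    // Render cost tracks the visible slice, not the full data set: a
    // 100k-row model still produces only ~VISIBLE_ROWS DOM nodes.
    var ROW_HEIGHT = 30;    // assumed fixed row height, in px
    var VISIBLE_ROWS = 30;  // assumed window size

    var Grid = React.createClass({
      render: function () {
        var start = Math.floor(this.props.scrollTop / ROW_HEIGHT);
        var slice = this.props.rows.slice(start, start + VISIBLE_ROWS);
        return React.DOM.div(null, slice.map(function (row) {
          // Stable keys let React reuse row nodes as the window slides.
          return React.DOM.div({key: row.id, className: 'row'}, row.text);
        }));
      }
    });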


I saw some AngularJS vs React benchmarks a while back. I believe it was http://jsperf.com/angular-vs-react/5

I consistently got the result of Angular utterly and completely destroying React. Initially I blamed the virtual DOM approach, but after seeing other frameworks that use it outperform Angular by a huge margin, it seems to me that React simply isn't written to perform well on small DOM documents. (There might be a turning point at larger sizes, considering how bloated the Facebook pages it was designed for are.)


Our performance benchmarks suggest that application performance is most certainly better off with: (a) DOM reuse, (b) calculating expensive things only once, (c) reducing GC pressure, i.e. not discarding/recreating things, and (d) coordinating actions that may trigger reflow. This holds independently of what framework you are using.

My limited understanding of React is that it fails at (a), (b) and (c), and only limited measures can be applied to improve them. Re-creating the entire DOM on each update probably does not help. I have no information on whether (d) is possible with it.

I have been using Angular.dart for a while now, and it can be made to do all of these optimally.

Disclaimer: I'm working at Google.


(a) You can use the key attribute in order to get DOM reuse (see the sketch at the end of this comment). If you are looping over N keyed items, React is going to reuse the nodes and move them around.

(b) You can implement shouldComponentUpdate in order to have a quick way not to re-render a sub-tree if nothing changed.

(c) See (b), but we're also working on changing the internal representation of the virtual DOM to plain JS objects that can be reused[1]. We were super worried about GC, but it turns out that it hasn't been the bottleneck yet for our use cases.

(d) If you are writing pure React, all the actions are batched, and actually it's write-only: React almost never reads from the DOM. If you really need to read, you can do it in componentWillUpdate and write in componentDidUpdate. This will coordinate all the reads and writes properly.

A really important part of React is that by default this is reasonably fast, but most importantly, when you have bottlenecks, you can improve performance without having to do drastic architecture changes.

(1) You can implement shouldComponentUpdate at specific points and get a huge speedup. We've released a perf tool that tells you where the most impactful places to add it are[2]. If you are bold, you can go the route of using immutable data structures all over the place like Om or the Elm example, and then you're not going to have to worry about it.

(2) At any point in time, you can skip React and go back to raw DOM operations for performance critical components. This is what Atom is doing and the rest of their UI is pure React.

[1] https://github.com/reactjs/react-future/blob/master/01%20-%2... [2] http://facebook.github.io/react/docs/perf.html#perf.printwas...
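To make (a) concrete, a minimal sketch of keyed children (the TodoList component and item shape are made up for illustration):

    // With stable keys, reordering props.items moves the existing DOM
    // nodes around instead of destroying and recreating them.
    var TodoList = React.createClass({
      render: function () {
        return React.DOM.ul(null, this.props.items.map(function (item) {
          return React.DOM.li({key: item.id}, item.text);
        }));
      }
    });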


I am currently writing an implementation of React in Scala.js. It's inspired by React and its documentation, but I have not looked at the actual source code so far.

You seem to be an implementor, so two questions that might spare me looking at the source code :-)

1. How do you batch updates?

2. I am currently using an algorithm for longest increasing subsequences to avoid superfluous DOM insertions of children when diffing lists. I also make sure that the node containing the active element will not be removed from the tree during the diffing (if possible at all). Are you doing the same?


1. The boundaries are currently at the event loop. Whenever an event comes in, we dispatch it to React, and every time the user calls setState on a component, we mark it as dirty. At the end of that dispatch, we go from top to bottom and re-render elements.

It's possible to change the batching boundaries via "Batching Strategies", but we haven't exposed/documented it properly yet. If you are interested, you can look at the requestAnimationFrame batching strategy. https://github.com/petehunt/react-raf-batching

2. We cannot really use normal diff algorithms for lists because elements are stateful and there is no good way for React to properly guess identity between old and new. We're pushing this onto the developer via the `key` attribute.

See this article I wrote for a high-level overview of how the diff algorithm works: http://calendar.perfplanet.com/2013/diff/


Thanks a lot, very useful info. Yes, I know that because of state, "diff" is really not diffing but synchronizing "blueprints" with actual "components". Still, after the update the order of the existing children of a node might have changed, and it is possible to devise a simple, not too costly (n log n, where n is the number of children), and optimal strategy for rearranging the nodes.
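A hedged sketch of that strategy (my illustration, not anyone's actual code; longestIncreasingSubsequence is an assumed helper returning the set of kept indices):

    // Map each new child to its old position; children whose old positions
    // already form an increasing subsequence can stay put, so only the
    // rest need a DOM move. With a patience-style LIS this is n log n
    // overall and minimizes the number of moves. Assumes a pure reorder
    // (old and new lists contain the same keys).
    function nodesToMove(oldKeys, newKeys) {
      var oldPos = {};
      oldKeys.forEach(function (k, i) { oldPos[k] = i; });
      var positions = newKeys.map(function (k) { return oldPos[k]; });
      var keep = longestIncreasingSubsequence(positions); // assumed helper
      return newKeys.filter(function (k, i) { return !keep.has(i); });
    }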


If you think you can make it better in React, pull requests are more than welcome. For example, we didn't have batching when we open-sourced React; it was written by the community :)


1. We buffer calls to setState() and apply them all at once (they don't trigger re-renders), marking those components as dirty. Then we sort by depth in the hierarchy and reconcile them. Reconciling removes the dirty bit, so if we come across a node not marked as dirty we don't reconcile it (since it was already reconciled by one of its parents). There's a sketch of this flow below.

2. I don't think we spend a lot of time trying to make this super optimal, but `git grep ReactMultiChild` to see what we do.
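The sketch promised in (1), as a rough paraphrase with invented names (not React's actual source):

    // Buffer setState calls, mark owners dirty, then flush once per batch,
    // parents before children.
    var dirtyComponents = [];

    function enqueueSetState(component, partialState) {
      component._pendingStates.push(partialState); // buffered, no render yet
      if (!component._dirty) {
        component._dirty = true;
        dirtyComponents.push(component);
      }
    }

    function flushBatchedUpdates() {
      // Sort by depth so a parent reconciles before its children...
      dirtyComponents.sort(function (a, b) { return a._depth - b._depth; });
      dirtyComponents.forEach(function (c) {
        // ...and skip nodes whose dirty bit a parent already cleared.
        if (c._dirty) c.reconcile();
      });
      dirtyComponents.length = 0;
    }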


Thanks a lot! I grepped it but I cannot really figure out the strategy from the source code. Probably you are doing something similar to what I am doing.


While each use case is different, I'd like to clarify a few things in my OP.

DOM reuse is not the same thing as moving a DOM subtree to a different place. DOM reuse is e.g. taking an already-rendered table row, binding a new value to it, and modifying only the DOM properties in the complex DOM structure that actually did change. E.g. you modify only an Element.text deep in the first column, and a few other values in the other columns. Or maybe you need to do more, but all you do is the delta. You don't get that just by annotating a DOM structure with a key at row level, as that is closer to a hash of the DOM, not to mention the data-dependent event handlers.

Calculating the DOM (virtual or not) is expensive compared to not calculating it at all. Creating a virtual DOM structure and not using it afterward creates GC pressure compared to not creating it at all. We are talking about optimizations in the millisecond range. A large table with complex components inside will reveal the impact of these small things.

DOM coordination is not just making the DOM writes in one go. Complex components like to interact with each other, depending on their position and size on the page and on changes in the underlying structure. They read calculated style values and act upon those values, sometimes causing reflows. And if such things happen at scale, forced reflows may cripple performance, and coordinating such changes may be more crucial than the framework you are choosing.
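For instance (a generic DOM example, not tied to any framework discussed here; `rows` is an assumed array of elements), it's the interleaving of reads and writes that forces the reflows:

    // Interleaved: every offsetHeight read forces a synchronous layout,
    // because the write on the previous iteration invalidated it.
    rows.forEach(function (row) {
      row.style.height = (row.offsetHeight + 10) + 'px';
    });

    // Coordinated: all reads first, then all writes, so layout is
    // recomputed at most once.
    var heights = rows.map(function (row) { return row.offsetHeight; });
    rows.forEach(function (row, i) {
      row.style.height = (heights[i] + 10) + 'px';
    });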

I am sure that people who are familiar with React have their ways to get this stuff. I have looked at it, and I haven't seen it happen automatically, while with Angular.dart I get it without effort.


You get all of this for free with React.

DOM node reuse is perhaps the central theme of React, so it's odd that you bring this up as a criticism (see https://www.youtube.com/watch?v=1OeXsL5mr4g)

Calculating the virtual DOM does come with some processing and GC overhead, yes. But any system that tracks changes for you (data binding) comes with overhead and React makes the right set of tradeoffs for real apps (since it is a function of how large your render output is, not your underlying data model). React has about a 70% edge in CPU time on Angular in the "long list" class of benchmarks (which drops to a mere 40% with the Object.observe() performance unicorn). And steady state memory usage is almost always better with a virtual DOM approach since again it only tracks what you actually render which is usually smaller than your data model (https://www.youtube.com/watch?v=h3KksH8gfcQ).

DOM coordination boils down to non-interleaving of reads and writes to the DOM. React manages the writes for you which happen in one go. Components have a well-defined lifecycle which is also batched and are only allowed to read from the DOM during specific points in the lifecycle, which are coordinated system-wide. So out of the box if you follow the guidelines you will thrash the DOM much less (see http://blog.atom.io/2014/07/02/moving-atom-to-react.html)
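A small sketch of that lifecycle discipline (assuming the getDOMNode() API of this React era; MeasuredBox is invented for illustration):

    // Reads happen only after React's batched writes have been flushed.
    var MeasuredBox = React.createClass({
      getInitialState: function () {
        return {width: 0};
      },
      componentDidUpdate: function () {
        // A sanctioned read point: layout is settled here.
        var rect = this.getDOMNode().getBoundingClientRect();
        if (rect.width !== this.state.width) {
          this.setState({width: rect.width}); // queued for the next batch
        }
      },
      render: function () {
        return React.DOM.div(null, 'width: ' + this.state.width);
      }
    });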


On the DOM reuse: could you help me out? I'm sure if I watch all the videos I may be able to figure it out, but I'd be interested in a trivial example. Let's assume I have the following structure (additional cells and rows are omitted for cleaner display, please assume we have 1000 rows and 20 cols):

    <div class="row">
      <div class="cell">
        <div class="align-left">
          Value.
        </div>
      </div>
    </div>
I want to reach the following:

    <div class="row">
      <div class="cell">
        <div class="align-center">
          Value B.
        </div>
      </div>
    </div>
What do I need to do in React so that, on updating the underlying data, only the innermost element's class attribute and innerText change, and the rest of the DOM is kept intact?


Can't respond to you on React (though my impression is that the entire point of virtual DOM diffing is to do exactly what you're after), but can you justify in some way your HTML markup using <div class="row"> and <div class="cell"> instead of <tr> and <td>?


As I am working on large tables, I may have different goals than most UI developers. Diffing a huge structure is just a waste of time compared to not diffing. Don't re-calculate things that you already know, and in the case of a table, you know pretty much everything upfront.

On the HTML markup, there are many valid reasons you may want to use non-TABLE based tables:

- it allows better rendering control for infinite scrolling (DOM reuse, re-positioning, detached view for sticky header and column)

- it allows you to have real (CSS-stylable) row groups, or, if your structure is hierarchical, it gives you better control for creating a tree table (reduced rendering time if you expand a node and insert a bunch of rows in the middle).

- it allows you to have multiple grid systems inside the table (e.g. a detail row may use up the entire row, and it may have its own table inside, which you'd like to synchronize across multiple detail rows). I guess this latter benefit is just restating the fact that you do need to implement an independent grid system anyway :)


It's automatic:

http://jsfiddle.net/bD68B/

I tried to make the example as minimal as possible, so I don't show off a lot of the features (e.g. state, event handling), but I did use JSX, an optional syntax extension for function calls.


Thank you, this seems to do it for the innerText. Would it be too hard to apply it to the class attribute too? (I've tried to just copy the {} binding, but it doesn't work)


Here, I gave it a try: http://jsfiddle.net/bD68B/1/
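In case the fiddle disappears, here is roughly what it boils down to (my reconstruction, assuming the renderComponent API and JSX pragma of this React era; mountNode is an assumed DOM element):

    /** @jsx React.DOM */
    // Both the class and the text are ordinary props, so a re-render
    // diffs them in place and leaves the surrounding divs untouched.
    var Cell = React.createClass({
      render: function () {
        return <div className="row">
          <div className="cell">
            <div className={this.props.align}>{this.props.value}</div>
          </div>
        </div>;
      }
    });

    React.renderComponent(<Cell align="align-left" value="Value." />, mountNode);
    // Later: only the innermost class attribute and text node change.
    React.renderComponent(<Cell align="align-center" value="Value B." />, mountNode);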



Thank you both! I now have a much better understanding of how React works. I need to update the related performance benchmarks; it would be interesting to see how they compare side by side on our use cases.


Don't forget [PureRenderMixin][1]; it can give a big perf boost when used in the right places.

[1]: http://facebook.github.io/react/docs/pure-render-mixin.html


All of those points are precisely what React addresses.

The virtual DOM determines which mutations are necessary and performs only those, batched. It also reuses DOM nodes as appropriate. DOM nodes can even be keyed against data in the case that, between transitions, it isn't entirely clear which nodes correspond to which data.


React can handle all of your items just fine depending upon usage.

(a) Using the key property will give React a way to determine the likeness of elements.

(b) Don't calculate the expensive things at render time; do them when loading or modifying state.

(c) is related to (a), but I haven't run into large problems with this personally.

(d) React does batch changes to an extent, I believe.


None of the things you have guessed about React are true.


> Re-creating the entire DOM on each update probably does not help

Perhaps you meant virtual DOM here? (in any case, the actual DOM is not recreated on every update)


That benchmark confuses me; is it measuring the entire run of that script, i.e. is it measuring the setup (creating the class, inserting into the DOM) on every run? If so, that seems like the wrong way to go about testing performance.


It's worth noting that Om uses React internally[1]. React, like almost every tool out there, can work very well when used appropriately, or poorly when used inappropriately.

[1] http://swannodette.github.io/2013/12/17/the-future-of-javasc...


(Edit: posted this comment before I read the article, doh.)

There is certainly something wrong with the benchmark. Since Om is a layer on top of React, it is obvious that React itself cannot be inherently slower than Om. (Perhaps idiomatic React usage is slower than idiomatic Om usage for this case, though?)


See the article's discussion of immutability in Elm. Om has the same immutability property, so it configures React to take advantage of it, skipping vanilla React's property diffing.

Edit: rather, see masklinn's comment, which describes what actually happens. The point being, vanilla React does extra work to account for anything a developer might do, but allows Elm and Om, which impose more rigorous standards on their users, to override that behavior.


Vanilla React doesn't do property diffing by default; it re-renders the whole tree then diffs the whole virtual DOM, because it can't rely on data immutability or on component purity.

Since Om and Elm assume immutable inputs and pure components, they can skip rendering components altogether when the inputs have not changed (mentioned in TFA's "Making Virtual DOM Fast").

React can do that too, but it has to be opted into component by component, either by using PureRenderMixin or by implementing shouldComponentUpdate.
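The manual opt-in is tiny when the inputs are immutable; a sketch (Subtree is invented, and props.data is assumed to be an immutable value):

    // With immutable data, reference equality is enough to prove that
    // nothing changed, so whole subtrees can skip re-rendering.
    var Subtree = React.createClass({
      shouldComponentUpdate: function (nextProps) {
        return nextProps.data !== this.props.data; // identity, not deep equality
      },
      render: function () {
        return React.DOM.div(null, this.props.data.text);
      }
    });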


Gotcha—I'd assumed property diffing was the way to go because it hadn't occurred to me that folks would be writing impure components. Thanks!



