React's unfair and one-sided patent grant aside (personally, I will never build a real app with React unless that situation changes), it is not a complete MVC framework, and to my knowledge there is nothing that fills in the gaps to make something close to Ember feature-wise. So while React may end up faster, Ember will hopefully be quick enough that its programming model wins out.
What makes the patent grant unfair? I think saying "you can use our pretty powerful technology until you sue us" is a great way to defang the patent system.
It's unfair and one-sided because you cannot sue Facebook for infringing an unrelated patent, but if Facebook sues you for an unrelated patent, you will lose your rights if you try to assert (defensively) that their patent is invalid. In other words, you can never sue or even defend yourself against a Facebook lawsuit.
For a more balanced approach to patents, see the GPL, Mozilla Public License, or Apache 2.0 License.
He doesn't agree with your interpretation. The license is very clear in that everything a company would typically do to /defend/ against a patent suit brought by Facebook will terminate its patent rights on React.
Let me quote the second paragraph (line formatting added for readability); the (b) part is the broadest, I think:
The license granted hereunder will terminate, automatically and without notice,
for anyone that makes any claim (including by filing any lawsuit, assertion or
other action) alleging
(a) direct, indirect, or contributory infringement or
inducement to infringe any patent: (i) by Facebook or any of its subsidiaries or
affiliates, whether or not such claim is related to the Software, (ii) by any
party if such claim arises in whole or in part from any software, product or
service of Facebook or any of its subsidiaries or affiliates, whether or not
such claim is related to the Software, or (iii) by any party relating to the
Software;
or (b) that any right in any patent claim of Facebook is invalid or
unenforceable.
So why word it as if users cannot defend themselves against Facebook, if Facebook doesn't intend to be a claimant?
I read that discussion. The patent grant is very asymmetrical. Why should an entity using React be vulnerable to being sued by Facebook for unrelated patents and be unable to challenge those patents?
IANAL and I have nothing else to add to the discussion here other than adding the context that Facebook hates patent trolls and doesn't really compete on technology, so even if your interpretation is correct (which I don't believe it is) it'd be purely defensive anyway.
That's not just what it says. If it just said that, that would be fine.
It says making "any claim" against a Facebook patent will terminate your right to use the software. Facebook gets to protect its own software patents: "that any right in any patent claim of Facebook is invalid or unenforceable."
The other lines are okay, it seems.
I am not an IP law expert or a lawyer, but the writing seems pretty simple.
"Claim" has a specific meaning in the sense of the invention "claimed" by a patent. But Facebook's patent grant is clearly not using it in that context: "any claim (including by filing any lawsuit, assertion or other action)".
It's also worth noting that we're getting to the point where fastest doesn't really matter. Both Ember and React will be fast enough. Performance shouldn't be a deciding factor.
One of the motivations for this change was that a real-world application was too slow. As the original presentation says, choosing a technology and then finding out that it's limiting you is really rough when it happens to you.
Performance _does_ enable certain kinds of applications, and it does absolutely matter, even on the desktop.
Performance matters, but if Ember can have 50-60% of the speed of a performance-optimized React app without the associated cognitive overhead, I suspect many will see the trade-off as worth it.
Agreed, for sure. I wasn't trying to make a framework-specific statement, just disagreeing with the notion that performance doesn't matter at all. It may only be for certain kinds of applications, but having more options is always good.
I should have added the caveat that there will always be a handful of cases where you do need absolute top performance. However, I do think that for the vast majority of apps we'll end up at a point where performance isn't the deciding factor. It doesn't mean that we shouldn't keep improving performance, just that being the fastest doesn't matter so much if all the options are very fast.
I suppose I presumed a difference of 2x either way (worse/better) to justify picking one over the other -- given that going frameworkless is ultimately the most optimizable.
Do you need something to be "at least X ms fast", "faster than what company Y is doing", or "as fast as possible, no exceptions"?
Basically, if you treat render functions as "pure functional" functions, you can employ memoization and avoid recalculation. I hope this isn't the subject of patents, because it seems pretty basic.
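The memoization idea can be sketched in a few lines of plain JavaScript. Everything here (`memoizeRender`, `shallowEqual`, the vnode shape) is illustrative, not any particular framework's API:

```javascript
// Memoize a "pure" render function: if the props haven't changed,
// return the previously built tree instead of rebuilding it.
function memoizeRender(render) {
  let lastProps, lastResult;
  return function (props) {
    if (lastProps !== undefined && shallowEqual(lastProps, props)) {
      return lastResult; // skip recalculation entirely
    }
    lastProps = props;
    lastResult = render(props);
    return lastResult;
  };
}

function shallowEqual(a, b) {
  const ka = Object.keys(a), kb = Object.keys(b);
  return ka.length === kb.length && ka.every((k) => a[k] === b[k]);
}

// Usage: a render function that only describes output from its inputs.
let calls = 0;
const renderGreeting = memoizeRender(({ name }) => {
  calls++;
  return { tag: "h1", children: ["Hello, " + name] };
});

const a = renderGreeting({ name: "World" });
const b = renderGreeting({ name: "World" }); // cache hit: same object back
const c = renderGreeting({ name: "Ember" }); // props changed: re-render
```

This only works because the render function is pure; if it read mutable state from elsewhere, the cached result could be stale.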
That's awesome to hear! It takes an unbelievable amount of energy and sometimes frustrates us with how slow and deliberate we need to be with features.
I always have a problem with these kinds of frameworks. Every time I try to use one and want to do something more in-depth or fancy than the out-of-the-box functionality allows, I end up having to hunt down the way to extend X feature, and it often takes me more effort than just writing it in the root language would.
That is to say: the abstraction isn't as expressive as the root language, and when I need something more expressive, it's more troublesome than writing in the root language.
The rendering engine here does seem to perform admirably, and I have to congratulate them on that. Maybe I've just been burned a few too many times by this kind of framework.
I've built many apps using Ember and the time I save not writing boilerplate code or jquery spaghetti is well worth the time I spend understanding how to do more complicated things. Every time I get a bit frustrated with something seemingly complicated, I always realize it was better that way in the end.
Often I discover that I can do exactly what I wanted in just a few lines of code and writing it in pure javascript or jquery would have been many, many more lines of code and much less maintainable.
Developers smarter than I have struggled with these problems and developed optimal solutions that might not be easy to fully grok at first, but are always worth the effort to understand.
I didn't take his comment to be about a learning curve. I think the complaint is that the abstraction on top is not as expressive as the thing it abstracts. This means that when you break the abstraction you have the complexity of the thing you abstracted away on top of the abstraction layer itself, which means you might have been better off without the abstraction in the first place.
That's what I like about the whole WebComponents movement. Whether you use Polymer or X-tags, it's just new DOM elements, and they function just like regular DOM elements, and you can drop down to vanilla JS easily without throwing the productivity benefits of a framework out the window.
I have been following Ember since its 0.9 beta stage, and believe me, until it stabilized around 1.3 or 1.5 there was a lot of change and a huge learning curve. But now that I know exactly how Ember works, it is easy and fast to write Ember apps, since a lot of boilerplate can be avoided.
Yeah, I think what you just stated is an issue for any type of abstraction. You really just have to examine an interface in depth before you start writing code with it to see if it's capable of supporting the functionality you need.
Only a few months ago, while developing a REST API in Ruby, I used MongoMapper as my ORM/database access layer. It turned out it didn't support a few things I wanted to do, so I hacked my code into little (ugly) pieces just so I could keep using MongoMapper (I didn't have time for a rewrite). I think if I had just stuck with the native Ruby Mongo driver, development would have taken less time and my code wouldn't look like a steaming pile of shit!
I've been working on a large app using Ember, and this has been my experience over the last several months. I'm still excited about Ember's future, and hopeful our choice to use it will one day pay off. But for this first project, we would absolutely have been further along, and happier, had we chosen a less all-encompassing framework and built the required abstractions ourselves (for the very reasons you have pointed out).
Glimmer is a revolutionary improvement in Ember's rendering performance. I'm incredibly excited by the progress and promise being realized in Tom and Yehuda's PR.
For more context, here are a few quick slides from the EmberConf keynote:
Yes, you're right. It's my fault for not pitching in. I should have worked my way in to the core team and fixed it myself, rather than selecting one of the other performant frameworks available.
> the stability and reliability of Ember's process
I'm aware you're one of the people who finally got it done, kudos. But the chances of going back to Ember are nonexistent. This was my experience with Ember:
* 08/2013: Unusable performance when displaying list views with over 20 items without infinite scroll (unofficial and poorly supported, or roll-your-own). Promises that it would be resolved in the coming months.
* 08/2014: Still getting the runaround about when those "improvements" will arrive.
Only one of a dozen things which made working with Ember horrific. Apologies for being bitter, but it is what it is.
For the curious, here's a port of the Dbmonster demo to a simple Underscore template — the kind of base-level rendering strategy you might start with in a Backbone app. (And vintage 2009 technology.)
To head off the grumbling — it would also be easy to do a slightly less simple version that keeps the flickering, impossible-to-read popups open (putting redundant tooltip DOM into each table cell isn't how you'd actually write this) and the server names selectable ... but those particular "features" don't really seem relevant to this particular UI.
Unfortunately your example isn't a full reproduction, as both the Ember and React examples reuse existing DOM, which is important for selection state and the popover functionality.
It's pretty cool to see that Ember is still in the same ballpark, especially when you realize that Ember does a ton of stuff that Backbone doesn't do for you :)
Tangentially (yet somehow related), you could write [1]:
> The process of converting a string template into a fully compiled HTMLbars template function that emits DOM nodes is somewhat involved. The purpose of this document is to shed some light on the process and describe where in the HTMLbars codebase these steps take place.
or you could just write:
> OVERVIEW
If the parallel is not obvious, it feels like many JavaScript frameworks are written in this overly verbose style.
This demo is taking up an entire core of an i5 2500k.
To update a table. It barely updates once every two seconds on a three-year-old phone.
My terminal is faster than that and doesn't even use 1%. A native application is faster than that and barely uses 1%.
I know this is a nice advance for client-side rendering, but can we stop pretending it's in a good state?
(Although if your application is updating this many times per second, you might already have a problem.)
Regarding taking up an entire core: it's updating as fast as it possibly can; it's basically a stress test. In practice, updating this quickly isn't useful, so you would want to throttle it for production use.
It's reasonably quick on an iPhone 6 (though obviously slower than on a desktop). In actual use you'd obviously design things a bit differently. Again, this is a performance test, not a real application.
Back in college I used to see 486 Bloomberg terminals updating more complex tables in real time with no perceivable lag. It's a sad reflection on bloated HTML5 technology that it can only deliver this level of performance with CPUs that are literally thousands of times faster.
Do Bloomberg terminals use an off-the-shelf framework (vs. custom, proprietary code) and run on the open Web (vs. a private network)? What is easier for someone with no degree and little experience, programming a Bloomberg terminal or building an Ember app?
Neither using custom code nor running on the Web is a valid excuse for poor performance; in the given example, rendering in something other than a table (like, say, a custom canvas view) would already improve performance by miles. Running in a terminal (streamed via ssh) would also be faster.
I'm not making excuses, but comparing a custom solution like Bloomberg terminals to a versatile platform like the Web is not a fair comparison.
Despite the inherent performance disadvantages, people are stepping up and working on pushing Web applications to run faster. The efforts of the Ember team and others should be commended rather than ridiculed.
Are there demos and/or source code for the Angular and React implementations used in that presentation? I'd like to compare React to this new Ember implementation, because the latter, while better than the Ember demo in that presentation, is still noticeably sluggish on my machine.
>building virtual DOM nodes for static areas of your markup that will never change for every change requires many more allocations and increases GC pressure.
I'm not familiar with Ember, but why not just store the constant value in a variable to solve this problem? For example, in MithrilJS, you write templates in plain JavaScript, so I just stash large, static parts of the tree in variables and only rebuild vdom nodes for dynamic content. Simple.
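For illustration, here's that stashing technique with plain-object vnodes standing in for a library's virtual DOM nodes (the `h` helper and vnode shape are made up for this sketch, not Mithril's actual API):

```javascript
// Plain-object vnodes stand in for a virtual DOM library's nodes.
const h = (tag, children) => ({ tag, children });

// Large static chrome is built ONCE, outside the view function...
const header = h("header", [h("h1", ["Dashboard"]), h("nav", ["links"])]);

// ...so each render only allocates vnodes for the dynamic part.
function view(state) {
  return h("div", [
    header,                             // reused by reference every render
    h("p", ["Count: " + state.count]),  // rebuilt every render
  ]);
}

const first = view({ count: 1 });
const second = view({ count: 2 });
// The static subtree is referentially identical across renders, so a
// diff that checks identity first can skip it without descending into it.
```

The payoff depends on the diff short-circuiting on identical references, which is a common vdom optimization; without that check, the stashed subtree still gets walked.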
I believe they're saying that a virtual DOM is created, and updates that would cause a re-render are first applied to the virtual DOM (thus avoiding the overhead of a render). The virtual and actual DOM are then compared to see which parts have changed and need to be re-rendered, which results in a net saving of CPU time.
There is no constant value to store here. Perhaps you are referring to the templates, which are static and are already stored in memory?
You appear to have been downvoted; while I don't know whether your potential solution is right or wrong, your downvoters should have been polite enough to explain their reasoning.
> virtual DOM nodes for static areas of your markup
refers to React's virtual DOM implementation, not Ember's.
> I'm not familiar with Ember, but why not just store the constant value in a variable to solve this problem?
Ember is a complex framework. Suggesting a "solution" to challenges with an acknowledged lack of context, and adding "Simple", shows fairly poor attitude.
> I just stash large, static parts of the tree in variables and only rebuild vdom nodes for dynamic content
This sounds pretty much like what is already described in the PR.
Again, still confused, and still don't know what is being suggested.
It's pretty cool that a year ago, React was far and away the best choice for new JS development for reasons that seemed to be architectural (no dependence on the DOM/server-renderable, isomorphic, intentionally minimal DOM modification), and in the intervening time, the Ember guys have taken the good ideas from React and brought them to Ember. Congratulations! It's great to see the best ideas lifting all boats.
Looks like Glimmer's smarts about distinguishing the static and dynamic parts of a template should be applicable to JST, HAML, and the rest as well. The dynamic parts are clearly marked with template tags, and local (changed) variables would be easy to scan for within the dynamic parts of the code. This would probably mean that the template engine should decompose the template into smaller bits and provide metadata by which the view can map DOM fragments to template fragments (DocumentFragment, DOMNode, DOMAttribute, TextNode) and related model attributes. Attribute-level change events could then either directly expire the relevant fragments, or the view's onChange/render function would skip repainting the unchanged parts and use the appropriate (previously decomposed) fragments of the template function to render changes.
Interesting it's using Handlebars - I wonder how it compares to the diffing done in Ractive, which also uses a similar syntax (http://www.ractivejs.org/).
I see a lot of talk and comparison to React, but nothing about Meteor's Blaze engine and its HTMLJS virtual DOM diffing approach, which seems even more comparable.
Yes. They should. Currently DOM works in an imperative, immediate way. Kind of like a Basic program or OpenGL immediate mode. After a write to a property you are guaranteed to get the same value back after you read the property. The dependent properties are also guaranteed to be updated immediately. Surprisingly, this imperative way of programming is actually not efficient at all because subsequent writes and reads may cause reflow.
To prevent this the programming model should be changed. There are two ways that I can imagine.
- Introduce "DOM batching mode". In this mode remove the immediate mode guarantees. If you specify an element width you are no longer guaranteed to read it back until the layout occurs. So store your intermediate element width somewhere if you want to use it. Of course you don't need to specify batching mode for all the DOM tree. Just the majority of it that doesn't require custom layout.
- IIUC the majority of times that you need to perform multiple reads and writes of the DOM properties is due to special layout requirements. In some cases CSS layout may not be enough. There should be an API that allows to specify custom layout strategy for a parent DOM element. JavaScript should be fast enough. The additional benefit is that we would no longer need to wait for e.g. Flexbox adoption. Just roll your own.
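A batching mode along those lines can be sketched as a read/write queue in userland JavaScript. This is a hand-rolled illustration, not a browser API (libraries such as fastdom take a similar approach):

```javascript
// Queue all measurements (reads) and mutations (writes), then flush
// all reads together and all writes together, so reads and writes
// never interleave and trigger repeated reflows.
function createBatcher() {
  const reads = [], writes = [];
  return {
    read(fn) { reads.push(fn); },
    write(fn) { writes.push(fn); },
    // In a browser you'd call flush() from requestAnimationFrame;
    // here it's manual so the example runs anywhere.
    flush() {
      while (reads.length) reads.shift()();   // all reads: one layout
      while (writes.length) writes.shift()(); // all writes: one reflow
    },
  };
}

// Usage with a stand-in "element" (a plain object, not a real node):
const el = { width: 100 };
const batcher = createBatcher();
const log = [];

batcher.write(() => { el.width = 200; log.push("write"); });
batcher.read(() => { log.push("read:" + el.width); });
batcher.flush();
```

Note that because writes are deferred, the queued read observes the pre-write value (100, not 200). That's exactly the relaxed guarantee proposed above: store your intended width yourself if you need it before the next flush.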
It is obvious that we are trying to turn HTML into a GUI framework. So let's do it properly.
The problems we are facing with DOM have already been solved by multiple game engines and GUI frameworks.
> Introduce "DOM batching mode". In this mode remove the immediate mode guarantees. If you specify an element width you are no longer guaranteed to read it back until the layout occurs. So store your intermediate element width somewhere if you want to use it. Of course you don't need to specify batching mode for all the DOM tree. Just the majority of it that doesn't require custom layout.
Some sort of DOM-like buffer[1] that you could render into and then "flush"/insert, maybe?
You'd think that with a proper background rendering thread, they could get away with a retained scene graph maintained via dirty bits. But obviously, I'm missing something: what is it about the DOM that makes changes so expensive that they have to be batched via a virtual one?
Can't it batch layout calculations like is typically done in a retained scene graph? You don't do the layout calculations on each change as they occur!
Is it an artifact of the DOM API? In WPF, they have to maintain two sizes because of this: a set size (if specified) and a layout computed size that is filled when layout computations are done in batch. This adds some complexity (e.g. ActualWidth is not always equal of width, and so on), but the perf is pretty good.
> Dirty checking is slower than observables because you must poll the data at a regular interval and check all of the values in the data structure recursively.
You don't have to poll your dirty bits! When you dirty something, you put it into a dirty list/set. You only re-render if your dirty list/set is non-empty, cleaning deeper elements before shallow elements, and it's quite optimal.
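That scheme can be sketched in a few lines (hypothetical names, no particular framework's API; deeper elements are cleaned first, as described above):

```javascript
// Dirty tracking without polling: marking a component dirty adds it
// to a set; rendering only happens when the set is non-empty, and the
// set is drained deepest-first per the scheme described above.
function createRenderer(renderFn) {
  const dirty = new Set();
  return {
    markDirty(component) { dirty.add(component); },
    flush() {
      if (dirty.size === 0) return 0;  // nothing dirty: zero work done
      const batch = [...dirty].sort((a, b) => b.depth - a.depth);
      dirty.clear();
      batch.forEach(renderFn);
      return batch.length;
    },
  };
}

// Usage: two components dirtied out of order, rendered deepest-first.
const rendered = [];
const r = createRenderer((c) => rendered.push(c.name));
r.markDirty({ name: "leaf", depth: 3 });
r.markDirty({ name: "root", depth: 0 });

const count = r.flush(); // drains the set
const again = r.flush(); // set is empty: no polling, no work
```

The key contrast with interval-based dirty checking is the empty-set fast path: when nothing has been marked, flush does no traversal at all.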
> A virtual DOM is nice because it lets us write our code as if we were re-rendering the entire scene.
Totally: they are basically turning a retained model into a not-so-slow immediate model, which is a nice programming abstraction, but it is not a performance win over an efficient retained model.
> DOM operations are very expensive because modifying the DOM will also apply and calculate CSS styles, layouts. The saved time from unnecessary DOM modification can be longer than the time spent diffing the virtual DOM.
So layout calculations in normal DOM aren't incremental, but are made incremental in virtual DOM? Assuming this isn't related to batching, it sounds like the concrete DOM is just a bad implementation? Or does the virtual DOM avoid doing layout calculations at all and somehow magically fixes the layout when things change?
I am not as smart as a lot of the guys here, but what exactly are Glimmer's additional optimizations beyond a virtual DOM?
And what exactly does this mean: "the programming model is equivalent to render every time, but we take advantage of the declarative nature of Ember's APIs to reduce work"?
Ember's templates allow the framework to determine which portions of the DOM will never change, so it only needs to analyze the portions that might.
<div class="container">    <-- this won't change
  <h1>Hello World</h1>     <-- this won't change
  <div>{{name}}</div>      <-- this might change
</div>
Wow, thanks a lot! Now I understand the sentence.
But I think this may now push us to use more Handlebars. Manipulations in didInsertElement may be affected as well, like updating classes, which I sometimes prefer doing in hooks like click and didInsertElement.
Additionally, rather than having a "virtual DOM", we build a tree of just the dynamic data. This is diffed more or less the way a virtual DOM is diffed.
But where it gets interesting is when it comes to actual DOM interaction. To create DOM, we use document fragments + cloneNode, but for granular updates we utilize property/attribute/textContent updates. When used correctly, this combination turns out to be very fast.
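The clone-then-patch approach can be illustrated with stand-in objects that mimic the two DOM features involved, `cloneNode` and direct `textContent` writes (plain objects so the sketch runs anywhere; real code would use actual DOM nodes and fragments):

```javascript
// Minimal stand-in for a DOM node supporting deep cloning and
// direct textContent assignment (not a real DOM implementation).
function makeNode(tag, textContent = "") {
  return {
    tag, textContent, children: [],
    cloneNode(deep) {
      const copy = makeNode(this.tag, this.textContent);
      if (deep) copy.children = this.children.map((c) => c.cloneNode(true));
      return copy;
    },
  };
}

// Build the template fragment once...
const template = makeNode("div");
template.children.push(makeNode("h1", "Hello World"));
template.children.push(makeNode("span", "")); // dynamic slot

// ...then each instance clones it and keeps a direct reference to
// its dynamic node, so updates never touch the static parts.
function instantiate(name) {
  const root = template.cloneNode(true);
  const slot = root.children[1];
  slot.textContent = name;
  return { root, update: (n) => { slot.textContent = n; } };
}

const inst = instantiate("Tom");
inst.update("Yehuda"); // granular: a single textContent write
```

Cloning a pre-built fragment avoids re-parsing markup per instance, and holding references to the dynamic nodes means updates skip any tree traversal.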
As a bonus, we are typically able to utilize the browser's built-in XSS protection and sanitization (or just the lack of parsing) rather than having to implement this slowly in JavaScript ourselves.
Ultimately, I am extremely happy with how the various front-end frameworks keep pushing the envelope: getting faster, easier to use, and more secure. Regardless of the framework, the ecosystem moving forward benefits end users the most.
Thanks Stef. This explains a lot. It's great that Ember responded in the best way after many started pointing out Ember's performance lag. I was a little skeptical when you announced this in December, but now I am looking forward to the release.
This is irrelevant to the changes from the PR. Manual DOM updates are not managed by HTMLBars anyway, regardless of the rendering algorithm.
That said, I think that binding classes (like `class={{foo}}`) and updating it through HTMLBars is a safer way to do it, comparing to direct DOM manipulation.
I feel like the techniques React uses to deal with state and performance are likely to be addressed in a simpler way by solutions that work better with the existing stack. I'm not an Ember user but I like the way they've approached this, without re-inventing the DOM wheel.
So what happens with all those JQuery plugins that manipulate the DOM directly in didInsertElement?
I'm guessing Glimmer doesn't have a clue how to optimize that use case. So a lot of components need to be rewritten from scratch in Ember-style?
Well, if you need the performance increase, then yes. Otherwise your options are to (maybe) tweak your current implementation so it continues to work, or rewrite the functionality to be compatible with the view layer. This is the same trade-off you make with React, since it's basically the same technology.
You shouldn't be both manually manipulating the DOM _and_ using Handlebars mustaches for the same element. Since there wouldn't be mustaches, Glimmer would ignore these cases. In general, you should only be using jQuery plugins for special cases not covered by Ember anyway.
I don't know if this is exactly the same app, but it looks like it: http://youtu.be/z5e7kWSHWTg?t=5m07s (Ember is on the left, then Angular, then React)
The usual DOM-diffing algorithm compares every single thing: "has this div changed? Has the class changed? Has the <p> changed?"
This uses the knowledge of Handlebars to make the diffing algorithm smarter: you don't need to check if the <div> or its class has changed, it never will. You don't need to check if the <p>'s contents have changed, it never will. This means less to diff, which means more speed.
This is the advantage of using a declarative syntax for templating: this analysis can be done entirely at compile time.
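As a rough sketch of what that compile-time knowledge buys you: a "compiled" template that records only its dynamic bindings needs to re-check just those values on update (illustrative data structures, not Glimmer's internals):

```javascript
// A "compiled" template: the static markup is fixed at compile time
// and never re-checked; only the dynamic bindings are recorded.
const compiled = {
  staticHtml: '<div class="container"><h1>Hello World</h1><div>',
  bindings: [{ path: "name" }], // the one slot the compiler found
};

function createInstance(tmpl, data) {
  const values = tmpl.bindings.map((b) => data[b.path]);
  return { tmpl, values, checks: 0 };
}

// Update re-checks only the dynamic slots, not every node in the tree.
function update(inst, data) {
  let changed = 0;
  inst.tmpl.bindings.forEach((b, i) => {
    inst.checks++; // one comparison per dynamic slot, nothing more
    if (inst.values[i] !== data[b.path]) {
      inst.values[i] = data[b.path]; // a real engine writes to the DOM here
      changed++;
    }
  });
  return changed;
}

const inst = createInstance(compiled, { name: "World" });
const changedA = update(inst, { name: "World" }); // unchanged: no writes
const changedB = update(inst, { name: "Ember" }); // one slot changed
```

The work per update scales with the number of dynamic bindings rather than the size of the template, which is the speedup described above.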
I imagine we'll get something soon. Ember is having its developer conference this week, and they were planning on announcing this and, I believe, something related to a server-side trampoline of sorts.
So how does this compare with Ionic/Angular on mobile? I recently made an app with Ionic, but the 1.0 migration caused gestures to no longer work. Can Glimmer run well on mobile?
I'll go ahead and plug myself here only because it's very relevant, but if you want to play with a full-stack for building mobile apps using virtual DOM, check out http://reapp.io
My try with it was anecdotal, but the demo [1] (which is a pretty brutal demo) ran at a fair speed on my phone: about 3 updates per second.
Currently, most complex JS apps have performance issues on Android devices (some problems were found in how Android processes JS), but this should go a long way toward mitigating that problem while it's worked on by Google.
What kind of DOM model is Ember using, like React.DOM? I don't think Handlebars creates DOM objects. Great to see how Ember accepted the change and implemented it.
Handlebars doesn't, but HTMLBars does indeed [generate][1] DOM objects from Handlebars AST - it's basically another compile step. Check out section 2 and 3.
I see that HTMLBars is building document fragments rather than a single DOM tree. It would be interesting to check whether all these document fragments are independent of each other, like thousands of HTMLBars views inserted directly into the Ember application as siblings rather than as a tree. There will be a lot more memory involved than when you use a single DOM tree, won't there?
Interesting. It has been 10 days since Yehuda made that comment, and we're only now hearing about it. Or was there earlier discussion on this that I might have missed?
Emblem compiles to Handlebars so it will gain all of the benefits of Glimmer. FastBoot is only concerned with the initial render which is done on the server. Glimmer is concerned with updates so it will not directly affect FastBoot but will work with it.
Reuse Constant Value Types like ReactElement: https://github.com/facebook/react/issues/3226
Tagging ReactElements: https://github.com/facebook/react/issues/3227
Inline ReactElements: https://github.com/facebook/react/issues/3228