Am I the only one that kind of thinks most websites should just be static pages? Like, I get pretty irritated when I go to read a blog post on Medium or wherever and it loads a header and a blank page, and then loads a bunch of javascript (mostly tracking and analytics frameworks), and finally goes out and gets the actual content. And then if I scroll down, it has to load some more garbage from Disqus or something.
PLEASE JUST GIVE ME A STATIC PAGE WITH YOUR CONTENT.
I really do not care if the comments don't refresh live.
So, you read an article about how easy server-side rendering is with React, and your first impulse is to rant about client-side rendering? Did you not read the post or did you just feel an overwhelming urge to posture?
> The article just says it's easy. It doesn't provide any concrete examples.
It doesn't provide code, but it points out that you can render to a string. It is literally that easy — you can just call React.renderComponentToString instead of React.renderComponent, though in practice there are Node libraries that make this even easier.
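For the curious, a minimal sketch of what that looks like (not from the article; the Hello component is made up, and the exact API varies a bit by React version):

    var React = require('react');

    // Hypothetical component, just for illustration.
    var Hello = React.createClass({
      render: function() {
        return React.DOM.div(null, 'Hello, ' + this.props.name);
      }
    });

    // On the server: produce a plain HTML string to embed in the response.
    // (Older React releases passed the markup to a callback instead of
    // returning it; check the docs for your version.)
    var html = React.renderComponentToString(Hello({name: 'HN'}));

    // In the browser: the usual call, which mounts the live component.
    React.renderComponent(Hello({name: 'HN'}), document.getElementById('app'));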
> I imagine it would be easy in Node to do this but what about Ruby, Python etc.
Well, it's a JavaScript library. If you can run JavaScript, you can render React templates. For example, Instagram is a Django app that runs a Node subprocess to render templates.
I think the author largely agrees with your beefs, since he's proposing this as a client-side and server-side rendering solution.
> I've thought for a long time (and blogged about it previously) that the ideal solution would fully render the markup on the server, deliver it to the client so that it can be shown to the user instantly. Then it would asynchronously load some Javascript that would attach to the rendered markup, and invisibly promote the page into a full app that can render its own markup.
That is, everything is rendered server-side for the reasons you outline, but that rendering logic also lives on the client should something need to update. This post doesn't detail exactly how to do all that, but it sounds like it's scheduled as a future topic.
My understanding is that since React has a virtual DOM, it can just render everything on the server with Node.JS, instead of evaluating it in a browser.
Initial rendering should definitely be on the server, but his premise of rendering content on the server side and then async-loading it in is flawed: he adds latency, and the DOM still has to parse a string at the end of the day and put the nodes on the display tree. So the only bit he offloaded was a bunch of string concatenation (assuming he used a sane templating system), at the cost of latency. Bad trade-off IMO.
The downside to his reasoning is that he is tying himself to a single back end (Node). If he used something like Mustache he could use any back end, but Mustache implies parsing templates on the client. Google's Closure Templates compile to JavaScript functions, and I am sure something similar exists for Mustache; that would seem very sensible.
Converting a large string to a DOM element is not expensive; it is when that DOM element is added to the actual display tree that things go a bit awry. This is to be expected: redrawing any GUI costs a lot.
React is an excellent library and I am glad people use it, none of the ideas are radical however.
The important thing with this technique is you can render before any JS has been downloaded, parsed and executed, which is a huge amount of time relative to HTML parse and paint.
fwiw, React isn't tied to Node: I've used it with Django via PyExecJS (which just uses raw JavaScriptCore).
I started with this line:
>initial rendering should definitely be on the server
This technique is called pre-rendering, and yes, I am amazed people don't do it.
"PyExecJS (which just uses raw JavaScriptCore)."
This seems like it would be pretty slow. Or can you cache the compiled execution function, so you don't have to parse the JS multiple times?
Our back end is JVM, so we decided on Closure Templates: on startup we compile our templates on the server and then pass the rendering functions around, pumping data into them to render initial pages.
At the same time, we compile the same templates into JavaScript functions, which are included in our app via the Closure dependency system to render on the client.
This has the advantage of not having to parse and "compile" a template at the point of delivery.
This is why I think React is an excellent library: it allows anybody to do this, and it brings the notion into the minds of developers who jump easily onto bandwagons.
I also think that the React team are well on their way to building a library which would fare excellently under all sorts of static analysis.
Our approach is going to make JS interop performant with our stack (which is not JVM). We don't have a concrete plan at this time. You would need more than JSX to make this work though (JSX is just sugar).
I get your gripe, but there's a good use case for this stuff in building dynamic web applications. Just because a technology is misused doesn't mean the technology is to blame.
Browsers aren't general purpose applications. They can't fill every application role on the computer. The more "web dev" tries to push them in that direction, the slower, more difficult to develop and maintain, and generally poor they will be, and then the "dynamic web applications" that run on them will suffer.
What, for instance? Your comments seem heavy on condemnation but light on substance. Sure, there are certain domain-specific tools such as Photoshop or 3ds Max or Ableton that wouldn't work in the browser, but for 99% of day-to-day business applications, the user generally does not need to do anything that the browser cannot.
The fact is, the web world is evolving at an extremely fast rate, and leaving the native world in the dust.
I don't argue that the browser cannot be made to do something. I merely point out that making it do "everything" will only end up in disaster for browser creators and maintainers, and for app developers alike.
The "web world" hasn't even caught up to the native world yet. It may be evolving very rapidly, faster than native, even, but it's still behind.
For content sites, you are correct. But what about an in-browser app, like Trello? Obviously in those cases JavaScript significantly improves the experience. So it's just a case-by-case scenario. But I agree with the general point you are trying to make, and I work as a front-end dev who builds a lot of JS-heavy web pages/apps for a living.
Actually, it gave up on performing _everything_ client-side, and now renders the first page server-side, and then handles subsequent rendering client-side.
It found a performant happy medium, which is something I hope React will enable a lot more people to do with a lot less work.
It doesn't. Subsequent requests are also rendered server-side and are passed down as HTML embedded in JSON that is splatted into the DOM. It isn't a lot of work to do this.
To a large degree I agree with your sentiment. I have never had to manage a site with thousands of pages but I would assume even then any template changes would not be that hard to render and then upload to a CDN.
Not all use cases are equal. Static pages might work great for static content, but what about something like GitHub? Wouldn't it be nice to see the issues list refresh live when somebody else closes one?
I find such stuff annoying, at least using it everywhere just because you can. Imagine you're reading hackernews, and halfway through reading a comment, it gets deleted. Or, say, you want to compare a page with the changed page one hour later -- easy, just open it in a new tab (and maybe do a hard refresh), right? Well, if everything is "real-time", that's not even possible.
Yes, there are use cases for it. But doing it just because you can, because it's supposedly "cool", is also hurting other use cases.
Why not deliver an RSS/Atom feed with the page, and a standard feed reader within HTML that could present the content, polling at specified intervals?
In the same way that HTML finally standardized on HTML5 semantic page markup (not that HTML1.0 was grossly inadequate), and we're finally getting integrated video / multimedia support (though lacking the ever crucial "off" switch), once you dig through all the bullshit and hype, most of Web 2.0 corresponds to "we can update the page you're looking at while you're looking at it".
Some client-side capabilities to sort and filter content would finish off about 90%+ of the use cases. Good enough.
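Something like that could be sketched in a few lines of plain JS (the feed URL and element id here are made up, and a real reader would also need to handle Atom entries and errors):

    // Hypothetical minimal in-page feed reader: poll an RSS feed and
    // re-render a plain list of item titles.
    function pollFeed(url, intervalMs) {
      function refresh() {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url);
        xhr.onload = function() {
          var doc = new DOMParser().parseFromString(xhr.responseText, 'text/xml');
          var items = doc.querySelectorAll('item');
          var list = document.getElementById('feed');  // made-up element id
          list.innerHTML = '';
          Array.prototype.forEach.call(items, function(item) {
            var li = document.createElement('li');
            li.textContent = item.querySelector('title').textContent;
            list.appendChild(li);
          });
        };
        xhr.send();
      }
      refresh();
      setInterval(refresh, intervalMs);
    }

    pollFeed('/newest.rss', 5 * 60 * 1000);  // e.g. poll every five minutes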
And you'd end up with far fewer monstrosities such as Facebook, Gmail, G+, etc., in which a simple content stream has a client-side footprint of 1-2 GB. SRSLY?
Yes, the standard problem of having users upgrade their client software to support the featureset remains, but there are still huge numbers of what are effectively Web 1.0 sites (Craigslist comes to mind) that are phenomenally useful and successful.
I've had a very late and passionate come-to-Jesus love affair with RSS/Atom feeds. Taking Craigslist, for example, I can get an RSS feed of any given category search:
Remember "Web agents", you know "a personal online assistant that would scour the Internet for you"? Well, this is it.
Combine that feed with rsstail and multitail, and you can track items of interest in a console window. Find something particularly useful? Fire off an email alert to yourself.
I've got Chromium running on my Thinkpad T520i, sucking down about a gig and a half of RAM. newsbeuter tailing 85 feeds is around 300 MB (a single Chromium browser tab), and rsstail weighs in at about 145 MB resident. Since they're only intermittently active, they swap out with very little performance overhead.
The existing Web design model is a fucking trainwreck waiting to happen. The browser-as-app-platform metaphor has its advantages, mostly in rapid development, though that's also a weakness (users HATE change), and it's a bastardization of two competing uses (content vs. app).
Much of the design and feature set serves advertisers and NSA water-carriers far more effectively than it does users. Sadly, that's where the funding comes from, so it's no real surprise. Be careful what you incent for, you'll get it.
With the amount of inane FUD you're spreading I can't work out if you're genuine or a troll. Your text-only/image-minimal feed reader is using less RAM than a browser designed to load hundreds of images and get the most out of the RAM available to it? Shock and horror! And this "train wreck" you're referring to... I don't see it. I see happy clients that can pay once and get an application that is usable on any platform, from anywhere in the world, is as fast as a native application, and can be seamlessly scaled up to meet demand. Where is the downside in that? Game over, purely native apps, you lost the war (except for specific things that need to operate locally). Sure there are native phone applications, but they are essentially thin clients for web applications anyway.
You're absolutely correct that browser-based applications offer some very compelling advantages: they're run-everywhere (or at least, many wheres), they're rapid-deploy, they're particularly well-suited to data presentation and interaction tasks (which constitutes a large fraction of all apps), and rather importantly for small development shops, they offer a compelling path out of legacy support hell. These are all true.
So are my critiques. In particular, that the Web alternatives aren't as fast or light as native apps.
Purely native apps haven't "lost the war". They are the battleground for the most part on mobile devices, as you note, though you omit the observation that these typically operate via an API, which is in fact the direction I'm leaning as a hybrid browser/app model. Fully native apps retain crucial advantages in many spaces.
I'm curious as to what aspects of this you consider to be FUD? I'm reporting actual memory utilization for apps doing comparable tasks, at least as far as rendering raw information is concerned. If you want to consider graphical presentation, I've also got a number of PDF files open using xpdf and evince, both of which perform full graphical rendering including images. The maximum RSS for these is 14 MB, and the 20 or so processes running here have far less system impact than my chromium child processes.
The train wreck I'm describing is precisely that. Light websites are generally not an issue, but the full-fledged app instances, again, Google+, Gmail, and the like, cause wild amounts of swapping and instability, to the point that I make minimal use of them, and where possible find alternatives. The RSS/Atom readers I mention aren't doing the full work of a Web browser, but that's precisely the point: for keeping me informed of an information stream, they're far more than sufficient, and require far fewer resources. I can open an item in a console-mode browser (and yes, that's old-school and an acquired taste), or pop over to a browser and read the item. It's far less overhead than keeping the stream in my browser at all times, and as I noted, the RSS readers offer hooks to perform other local actions if I choose.
I've written recently of the frustrations I'm increasingly having with browsers in general: they serve neither the needs of application users nor of content readers particularly well. Quoting myself:
"It's neither a good reading environment -- for that you'd want something like Readability, Pocket, Instapaper, or an eBook management tool such as Moon+Reader, Kindle, or (bad as it is) Calibre -- nor a decent applications environment: it's bloated, crash-prone, slow, full of security holes, and underfeatured relative to native applications.
"However in both cases the browser's ability to load and display or run arbitrary content makes it convenient."
I see a few possible directions things could head:
⚫ Continue down the current path. This is the course of least resistance.
⚫ I don't know where the HTML working group(s) are headed, but continuing the pattern of HTML5 of offering highly semantic markup and leaning in the direction of an API model of HTML rather than a designer model (or in addition to) could be useful.
⚫ We now have <header> <article> <aside> and <video>. How about a <stream> as I described, and perhaps native <plot> features which could present graphics based on realtime data updates? Again, a huge class of applications now essentially consists of "poll regularly for new data, update stream, present graphics, respond to user inputs".
⚫ A content-oriented browser which strips out virtually all distractions, and provides vastly improved content management and referencing capabilities (see: Zotero). Readability, Instapaper, Pocket, etc. approach this. I presently have a local "unstyled.css" stylesheet which I apply to many sites. It works best on bare-naked pages (without any native styling or table/frame-based page layout) but works pretty well on most minimally-styled pages. And it is, for my purposes at least, almost always a huge improvement over native presentation. In other cases I've extensively modified how pages present themselves. See: http://www.reddit.com/r/dredmorbius/comments/1tniu3/user_sit...
⚫ An application-development platform based on (mostly) standard APIs and an HTML/SHTML transport back-end. This would be the area you're most interested in.
I'm not convinced this is the way things should go, but it seems it would resolve some of the present tensions in Web development and use.
Kids these days, not around in the 90s when all web pages were static and every link refreshed the entire page.
There's a happy middle ground between heavyweight javascript SPAs and static pages.
Frankly I love it when I interact with a web page and it doesn't have to reload the whole page just to update a single element, article, etc that I requested.
But neither am I a fan of overly heavy front-end JS apps, like how Twitter used to be [1], or how Quora seems to be, for example. Especially since I'm a chronic browser-tab abuser, and JS-heavy pages bring my browser to a halt or make it chug.
So yeah, aim for that sweet spot happy middle ground when possible.
But not for all use cases. A completely blank static document will have the best possible performance characteristics, but the lowest possible utility. Writing needlessly slow things is stupid, but the blind pursuit of performance is missing the point.
Completely agree with your point. It doesn't mean that we don't want/need dynamic user interfaces, of course. But the Web was designed to be mostly static; that's why the best experiences tend to come from static web sites. Let's just recall what the acronym HTTP stands for: Hypertext Transfer Protocol!
>Am I the only one that kind of thinks most websites should just be static pages?
No, the majority of people agree. And contrary to what javascript happy dumbasses keep repeating, the majority of new development is absolutely not doing everything client side. It is sad that the web is so fad driven, but this stupid fad will pass just like flash intros and spinning under construction animated gifs.
As long as enterprise continues to shift to a no-install web application based IT infrastructure, fad driven client based "web dev" will continue to increase in frequency and prominence. If (when?) people realize that the browser actually can't do everything this might change, but I wouldn't hold my breath. It's too easy to save a buck now by minimizing application development and support costs (browser-based web app vs. native) and let the poor idiot who takes over management five years from now worry about the fallout.
This is ridiculous. Building business applications as web applications has many advantages over native apps, such as portability, network accessibility, speed of development, ease of development, and speed. The sheer fact that you can hand off processing to the server and not block your entire UI makes the majority of web apps far better than the majority of poorly coded WinForms garbage I've used.
Building an application by rendering static webpages is just a horrible kludge. How did we even get here? How are we still even considering that the godawful turd that is string concatenation of DOM descriptors, polished by a layer of templating languages, is actually a decent way of writing applications?
Do you seriously believe this?
The virtual DOM is probably the biggest elephant in the room. It raises the question: why the hell isn't the REAL DOM like that?
If you have to pose a really terrible strawman to argue for client side templates, you are basically conceding the argument. It is even worse given that moving it to the client doesn't change anything you complained about.
Haha, this got parodied by the horse. Anyway, I use Angular.js. It tries to suggest a reasonable path forward for evolving the DOM into something which is usable for applications (resulting in Web Components, the Shadow DOM, etc.).
But yes I realized I'm arguing about the wrong thing - about web applications, not (most) websites. Whoops.
But you can also use your existing Backbone Views (if they have some nice, say, formatting or data-munging-for-display logic in them) and simply replace your render function with React.
The next step you can take is replacing your HTML templates with React's JSX templates, if you'd like...
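A rough sketch of the first suggestion, keeping the Backbone View as the shell and letting React own the markup inside it (ItemSummary and the model fields here are made up):

    // Rough sketch only; ItemSummary is a hypothetical React component.
    var ItemView = Backbone.View.extend({
      initialize: function() {
        this.listenTo(this.model, 'change', this.render);
      },

      // The Backbone View stays the outer shell, but React owns the markup
      // inside this.el instead of a string template.
      render: function() {
        React.renderComponent(
          ItemSummary({
            title: this.model.get('title'),
            score: this.model.get('score')
          }),
          this.el
        );
        return this;
      },

      remove: function() {
        React.unmountComponentAtNode(this.el);  // clean up React's tree
        return Backbone.View.prototype.remove.call(this);
      }
    });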
Thanks for replying and for your work on CoffeeScript and Backbone. I might give this a try as I love Backbone's Model layer. I think I like the second approach you suggested better. That might almost turn the Backbone.Views into ViewModels, right?
I'm still up in the air on how I feel about the whole JSX thing but I'm willing to give it a try.
Wouldn't this make it even more difficult for search engines to index your pages?
Also, I'm not sure we want a full rendering on the server. That will make the page appear to have a longer loading time rather than the other way around. Unless I'm misunderstanding what you're trying to say.
It does sound interesting though. I'm looking forward to your follow up posts.
> Wouldn't this make it even more difficult for search engines to index your pages?
If the string rendering is your initial page[0], why would it be difficult for crawlers to index pages?
> Also, I'm not sure we want a full rendering on the server. That will make the page appear to have a longer loading time
Servers tend to be beefy and have caches up the ass. Serving a pre-rendered "home" has been found time and again to be faster than generating it from the raw data on the client, and definitely gives the impression of faster loading.
Various companies (in one famous case, Twitter) have found that server rendering beats client rendering for initial rendering speed. React is designed so that if you care about server rendering, you can have the best of both worlds: using Node you can render a component to HTML on the server, then pick up the prepopulated DOM in client-side JS once your page loads. Since the server sends down plain HTML, search engines are happy too.
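A hedged end-to-end sketch of that flow. Express on Node is an assumption, Page, loadItems and bundle.js are made-up names, and it assumes a React version where renderComponentToString returns the string:

    var express = require('express');
    var React = require('react');
    var Page = require('./Page');        // hypothetical shared component
    var loadItems = require('./items');  // hypothetical data loader

    // Server: render the component to markup, embed it (plus its data) in
    // the page, and ship it down as plain HTML.
    var app = express();
    app.get('/', function(req, res) {
      var items = loadItems();
      var markup = React.renderComponentToString(Page({items: items}));
      res.send(
        '<!doctype html><html><body>' +
        '<div id="app">' + markup + '</div>' +
        '<script>window.__ITEMS__ = ' + JSON.stringify(items) + ';</script>' +
        '<script src="/bundle.js"></script>' +
        '</body></html>');
    });
    app.listen(3000);

    // In bundle.js, on the client: render into the same container. React sees
    // the markup it generated on the server and attaches event handlers to it
    // rather than throwing the DOM away and re-rendering from scratch.
    React.renderComponent(Page({items: window.__ITEMS__}),
                          document.getElementById('app'));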
People have used goog.ui.Component with Soy Templates to do this for a while. You can call the goog.soy functions in your base component's createDom() so that decorateInternal() is always dealing with the same tree, whether it was rendered on the server or in the browser.
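Roughly the pattern being described, sketched from memory; myapp.templates.card stands in for a compiled Soy template, and the component is trimmed down to the relevant method:

    goog.require('goog.soy');
    goog.require('goog.ui.Component');

    // Hypothetical component wrapping a compiled Soy template.
    var Card = function(data) {
      goog.ui.Component.call(this);
      this.data_ = data;
    };
    goog.inherits(Card, goog.ui.Component);

    // Client-side rendering path: build the element from the same Soy
    // template the server used, so decorateInternal() always sees the same
    // tree regardless of where it was produced.
    Card.prototype.createDom = function() {
      this.setElementInternal(
          goog.soy.renderAsElement(myapp.templates.card, this.data_));
    };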
Exactly this. Serving fully formed html and picking it up by a client-side framework is great for crawlers and also great for performance. In addition to React, you should check out Rendr (https://github.com/airbnb/rendr) - it's a really cool library for doing this via server-side rendering of Backbone views.
Downloaded Rendr, fired up the first example (simple_00); opening the links (Users and Repos) was painfully slow. With server MVC frameworks there is at least an indication from the browser that the page you clicked on is currently loading. With client-side MVC frameworks pages usually load instantly and data arrives shortly after. In Rendr -- maybe it's just a terrible example -- you wait 2+ seconds for a page to load and there is zero indication that the page is loading.
Can someone explain to me what "isomorphic" means in the tagline for the linked Director library[1]? I have no idea how isomorphism is supposed to play into all of this?
It means that it can be used on both client and server. It's a term coined by (I believe) the Nodejitsu people in this article [0]. Airbnb also used the term to describe their Rendr library [1], which aims to allow exactly that: rendering on both client and server.
> You can choose to use this, but after getting over my initial distaste for it ("Ack! Who got markup in my code!"), I could never go back to not using it.
Or you could just use HTMLbars/Handlebars. Seems like "JSX" is just a more complicated version of a Mustache-esque logic-less template.
The big difference is that Handlebars always produces strings that your browser has to parse as HTML. Handlebars also doesn't know how to mutate between states, meaning if you add a class 10 levels deep in the template the best you can do is re-render the entire thing.
JSX + React produce functions that return React's representation of the DOM, their "virtual DOM", and React knows how to make small mutations based on state. If a change in state only needs to add a class 10 levels deep in the hierarchy, that is the only change that happens in the real DOM; it doesn't have to re-render the entire template.
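A made-up example of what that looks like in practice: only the span's className differs between renders, so that one attribute is the only thing React touches in the real DOM.

    // Hypothetical component; toggling state changes a single className.
    var Row = React.createClass({
      getInitialState: function() {
        return {selected: false};
      },
      toggle: function() {
        this.setState({selected: !this.state.selected});
      },
      render: function() {
        return (
          <div onClick={this.toggle}>
            <span className={this.state.selected ? 'row selected' : 'row'}>
              {this.props.label}
            </span>
          </div>
        );
      }
    });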
Thanks for the demo link; I hadn't heard of HTMLBars.
It makes for a good demo I suppose, but I prefer React in a couple places: there's no separate template, style construction is done with an object and not a string (constructing that style string for HTMLBars seems error prone), and the connection between the values on the Ember object and how they will be used in the DOM is guessed only by naming convention. You could use `this.set('hotdogs', count % 255)` and reference 'hotdogs' in your template in place of 'color'.
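For comparison, a paraphrased (not actual) React version of that kind of demo, with the style built as a plain object rather than a concatenated string:

    // Hypothetical counter whose text color is computed with ordinary JS.
    var Counter = React.createClass({
      getInitialState: function() { return {count: 0}; },
      componentDidMount: function() {
        this.interval = setInterval(function() {
          this.setState({count: this.state.count + 1});
        }.bind(this), 100);
      },
      componentWillUnmount: function() { clearInterval(this.interval); },
      render: function() {
        return (
          <div style={{color: 'rgb(' + (this.state.count % 255) + ',0,0)'}}>
            {this.state.count}
          </div>
        );
      }
    });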
There is a very nuanced set of performance (mem and cpu) and usability tradeoffs you make when adopting a dirty checking vs change tracking system. I'm working on coming up with a reasonable talk about it.
Most of React's novelty, I think, comes from the virtual DOM. Your view spits out a DOM tree using the React.DOM.* objects. JSX allows you to generate the appropriate calls to those constructors by writing something that more closely resembles HTML. You can't just use Handlebars, etc.
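Roughly, the JSX transform just rewrites the markup-looking syntax into those constructor calls:

    // This JSX:
    var link = <a className="story" href="http://example.com">Example</a>;

    // compiles to roughly this plain JS:
    var link = React.DOM.a({className: 'story', href: 'http://example.com'},
                           'Example');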
Also worth a look: FruitMachine (on GitHub), from the Financial Times' graphics lab. They say it handles multiple pages better than React. They also claim to have had the React concept first.
Fwiw the Lift web framework has been diffing a server-side virtual DOM back to the client-side DOM via Ajax since 2007. And I wouldn't be surprised if the concept existed elsewhere even before that. Claims to be the first to have had the idea should be taken with a grain of salt.
I've gotta agree with the author on this one. Upon first glance at the example code, JSX looked really ugly and unnecessary. As I started to use it, I really came to appreciate it in the context of React. JSX syntax allows you to do React-y things, like call child components and pass in props, in a very concise way.
Being slightly more complicated is a feature, IMO. I always get the impression that the logicless templates end up as quite logicful languages once you add all the control-flow directives. But it's all very ad hoc, so some simple things end up being tricky to do, like passing parameters to a subtemplate or giving different CSS classes to the even and odd rows. If you just use a regular programming language from the start, these are just a matter of calling a subroutine or using an if statement, instead of being something you need to go to Stack Overflow to figure out.
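For example, the even/odd-rows case in plain JS with JSX (all names made up): no template directive to look up, just a map with an index.

    // Hypothetical helper returning table rows with alternating classes.
    function renderRows(rows) {
      return rows.map(function(row, i) {
        return (
          <tr key={row.id} className={i % 2 === 0 ? 'even' : 'odd'}>
            <td>{row.name}</td>
          </tr>
        );
      });
    }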
I come from an Ember.js background and its router / nested layout management is the best I have used. I like react, and have been diving into OM as well in my free time, but I'm not sure of the best way to approach routing / page transitions.
When I start an app, I do not think from the bottom up, which is the react way. I want to start with the login page and redirects, or get the major page transitions down. I'll be interested in seeing how you approach that.
I think you could break it down into several benefits.
1) Performance -- React, with its virtual DOM and the new idea of treating the DOM as an expensive remote rendering pipeline, will give you a huge performance boost.
2) Composability & Modularity -- While React only serves as the "view", there's huge potential for really decoupling your application.
3) Library, not framework -- The notion that React is simply a library, without any magic or complexity, is awesome. While I'm not a huge fan of stitching an app together out of a thousand super-small libraries, I enjoy not being locked in to a particular solution, especially when new (potential) solutions are being presented regularly. I think modularity will win in the end. If React doesn't serve your needs in 6 months, then you can easily (though it still involves work) switch it out for something better. But if you use something like Ember.js, you're tied into its ecosystem and framework.
I'm just reading about React now, but it seems it might be appropriate when you have a lot of stuff that needs to be reused.
Then again, Knockout would also be pretty good at this, so I dunno. There are a lot of criteria for evaluating a framework... development speed, maintainability, simplicity, functionality. And a lot of it is better evaluated experientially rather than theoretically.
I think the biggest reason to use React is how it handles the DOM. In most framework views, DOM access is slow and can be a bottleneck. React addresses that problem by using a virtual DOM. In order to make a change to the real DOM, React computes a diff between the newly computed virtual DOM and the previously computed virtual DOM, and then it applies the diff to the real DOM. The virtual DOM computation is triggered explicitly, hence React has much better DOM rendering performance.
Everything you can do in React, you can do with other frameworks (Angular's directives and isolated scopes for example), but React is a lot faster and easier to learn, and definitely worth looking into.
The lack of understanding and amount of hatred against web applications (different from web pages) is both sickening and rejuvenating. It is sad because the article is fairly clear, it describes the future of most computing (since most "human" computing will happen on the web) and it really shouldn't be hard to understand. But I am glad that most of the top commenters here instead focus on hating "webdevs". It means these people will stay on the desktop and won't create clones of desktop applications in the browser (see google docs for a missed opportunity of a better office system). History will show which way things shifted, no need to flame about it here now.
I am learning how to make desktop apps with React.js + Backbone + PouchDB and Brackets-Shell, just for fun and learning purposes; so far it has been a very exciting experience.
You can probably skip explaining client.js (which seems to be stuff one can learn from React's docs) and just dive in to server.js and the npm packages that you wrote that this app uses.
I've been uninterested in React but really like the idea of server-side rendering for js-heavy apps. Besides Rendr, do you know of any other similar libraries/frameworks?
Can anybody explain to me how to use React.js without its "guard" feature? I need my app to throw the error natively and not wrapped by the framework.
Can you be more specific with the problem you're seeing? An example jsfiddle/jsbin would help. If you're referring to the fact that React sometimes catches and rethrows exceptions making it harder to debug, that's something I'm planning to try to fix soon. In the meantime you can use the blue stop sign in Chrome's Sources panel to catch the error as soon as it's thrown.
There aren't any library features for TDD per se, but standard JavaScript testing practices work on React objects (barring some weirdness around autobound methods).
God, I am sick of the passive-aggressive "Finally, we have something GOOD for X!" titles that disparage everything that already exists for X.
I saw a reddit article yesterday about "Finally a way of doing X that doesn't suck" despite there already being libraries to do X.
This casual dismissal and disparaging of existing work is the kind of thing that causes people to give up on stuff (Why the Lucky Stiff, for example).
It also causes me to be instantly antagonistic towards said new library/feature. It raises the bar that I expect them to reach. "Oh? You are the ONLY good way to do something? Prove it."
You are making the same sort of evidence-less dismissal that you accuse the OP of (whereas the OP actually explains the reasoning behind his assertion).
If you want to question the post's premise, at least point to some of these libraries that you believe are on par with react. Personally, I'm not aware of any that have nearly as compelling a model with regard to performance, composability, and minimization of mutable state. But if they exist, I'd love to know about them.
There's no dismissal in the GP's post. They were merely stating that pretending that some project X is an island is off-putting (and it never is one), and that by claiming that it is, you put some readers already familiar with the field in a defensive mood rather than ready to read about something new.
Honestly, articles like that get more attention and discussion on HN.
After the first person comments about how they're sick of articles that are titled like that, a real discussion starts among the other comments which is the goal.
If the author named it: "React: A great way to do this and that". It'd get 5 upvotes, 10 minutes in the primetime and die off.
It's cool, you don't have to use JSX; you can just use the React.DOM functions. Or build a backend for whatever your preferred template system is, as long as that backend ends up creating nodes of React.DOM objects. Or you can go with Om[0] and use EDN[1], Enlive-style[2] or hiccup-style[3] templating.
Now if you want to physically separate the view logic and the corresponding markup generation, that's more debatable: they're extremely strongly coupled (and in fairly small chunks ideally) so you often can't trivially change one without the other, and thus keeping them together makes logical sense. See Pete Hunt's presentation which lumpypua linked, it tries to make that point fairly nicely.
Separation of concerns is a good goal, and your reaction is popular with people first seeing React (including me, several months ago!).
But consider that usually JS to control a view is inextricably linked to the underlying HTML and you need to modify both whenever making any change regardless -- since that's the case, you might as well combine all of the code and markup for the view in one place. Work to separate concerns, not programming languages.
Yes, yes you do. Pete Hunt's talk introducing React makes a really persuasive argument that in large frontend applications, having something markup-esque along with your JS is the best way to reduce component coupling and increase cohesion.
You don't have to use it - it's a convenience provided for designers (and arguably developers who have realised that templating provides a false separation of concerns). Om notably (https://github.com/swannodette/om) ignores it.
I still don't get why you would want to write in an inferior language such as JS on the server side. C'mon, it's broken by design. There are a lot of better alternatives.
The only reason to use JS on the server side is that you don't know better.