Let's say you have a website, and let's say that website serves a lot of traffic. Now let's say you relegate Rails to acting as just your API, because clients are fast now and you want to do some new cool stuff with JavaScript; also, you're convinced your site is more like an "application". So you build this system and it works really well, except for one thing: the first time you load the page it takes a second or two, minimum, to get the page loaded. That's OK at first, but really soon it becomes annoying.

So you do everything you can with your client, caching whatever possible. You get the load time down to something like 1s. That's better, but still not great. Some static sites can spit back pages in like 200ms (across the network!!), and that 800ms difference is crucial to the overall feel and snappiness of your "application". Eventually you give up: you keep the parts of the site that work well as a client-side app and push the rendering back to the server, because even though the server is a long way from the client, and even though clients are fast, you can directly control how quickly the server builds a page and you can optimize that to your heart's content. Now you've gotten your page loads way, way down, to something like 400ms. You're doing great, people love you, and you've saved the day.

I've fabricated the numbers, but this story is based on truth, because you know who you are? Twitter, Alex's employer.
That's not to say that Rails doesn't make a great API, or that Alex doesn't have a point. There certainly is a place for frontend applications built on frameworks like Spine, Backbone or Ember (all great projects). Those types of applications have their advantages in some cases. But it's prudent to be pragmatic and recognize that the times when you truly need to build a client-driven application are few, and for the other times Rails is still great at serving up HTML.
Why does a client-side app hand-wavingly take a second or two minimum to load, compared to the 400ms server-side-rendered one?
Exactly the same work is taking place on the server, apart from template processing. It makes no sense: if your API is going to take two seconds to build a response, so will your server-side-rendered one.
There's no real overhead to DOM insertion and simple template output. If you're doing crazy stuff, it's going to take a long time wherever you do it.
Admittedly it's still very tricky to get it right at the moment. I also don't really agree with the author's statement about desktop experiences in a browser. I'm beginning to believe it's impossible, especially without the native UX of the OS. I don't think we ever will have that experience as long as there's an address bar, back buttons and all the other cruft at the top of the screen. I think users may always see it as a web page and expect it to behave like a web page. Hopefully I'm wrong.
Good question. I should be clear and state that when I say 1-2s page load times, I mean the time from when one requests the page to when one can see it. In general API requests are fast; what is slow is always having to deliver a large JS bootstrap across the network (JS files, templates, etc.) and then having to make the relevant API requests to produce the UI. Caching is generally touted as a good solution to this problem, but in my experience it tends to be insufficient. Yahoo did a (now older) study showing that 40-60% of visitors arrive at your site with an empty cache (http://www.yuiblog.com/blog/2007/01/04/performance-research-...). Another solution is to pre-fetch data for the UI and deliver it with the bootstrap, but it's still difficult to speed up a ~1 or ~2 MB download. This is what really ends up killing the experience.
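To make the pre-fetch idea concrete, here's a minimal sketch (not Twitter's actual code): the client uses data embedded in the initial page when it's there and only falls back to an API call when it isn't. The window.__bootstrapData global, the /api/timeline endpoint and the #timeline element are all hypothetical names.

    // Hypothetical: the server-rendered page embeds the data the first screen
    // needs, e.g. <script>window.__bootstrapData = { timeline: [...] };</script>,
    // so the client can paint immediately instead of waiting on another round trip.
    function getInitialTimeline(callback) {
      if (window.__bootstrapData && window.__bootstrapData.timeline) {
        callback(window.__bootstrapData.timeline);   // data arrived with the page
        return;
      }
      // Cold path: fall back to the normal API request (hypothetical endpoint).
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/api/timeline');
      xhr.onload = function () { callback(JSON.parse(xhr.responseText)); };
      xhr.send();
    }

    getInitialTimeline(function (tweets) {
      document.getElementById('timeline').textContent =
        tweets.map(function (t) { return t.text; }).join('\n');
    });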
Only pull the bare minimum needed to render the initial page on the first request, and load everything else you need, when you need it, asynchronously in the background. There is no way the 'bootstrap' needs to be 1-2 MB. There is absolutely no reason to load every single bit of JavaScript you could ever need on the first page load, just like you don't load every image on your site on the first request.
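As a rough illustration of that lazy-loading approach (a sketch only; the loadScript helper, the '/js/compose.js' path and openComposeBox are invented, not anyone's real setup):

    // Load a feature's script the first time it's needed instead of shipping it
    // in the initial bundle.
    var loadedScripts = {};

    function loadScript(src, onLoad) {
      if (loadedScripts[src]) { onLoad(); return; }
      var s = document.createElement('script');
      s.src = src;
      s.onload = function () { loadedScripts[src] = true; onLoad(); };
      document.head.appendChild(s);
    }

    // Only fetch the compose code when the user actually clicks "Compose".
    document.getElementById('compose-button').addEventListener('click', function () {
      // openComposeBox is assumed to be defined by the lazily loaded compose.js.
      loadScript('/js/compose.js', function () { window.openComposeBox(); });
    });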
It's really easy to armchair-engineer and just declare bad design. What's harder is to actually build a system that scales well in the very unusual circumstances that sites like Twitter, Google and Facebook find themselves in.
Let's take your solution: it's really difficult to build a development environment that allows a large team to work efficiently if production deployment isn't based on essentially a single bootstrap file. When you break your files apart into smaller chunks, you are asking your developers to understand the intricacies of asynchronously loading each file, and you impose dependency management on everyone. This is a huge problem at Twitter's scale.
I do agree it's still very hard, and that many of the 'it's just an API' posts are wishful thinking for the next year or two.
However, Twitter is a perfect example of how not to do it: they inadvertently shot themselves in the foot with the hashbangs, so they never know from the first request what's actually being asked for. That's partly a result of HTML5 history being so slow to come out, and partly Google encouraging the # stuff in the first place. But everyone's still learning; full-page, postback-less JS applications are still a pretty new field.
Twitter were a very early trailblazer. As much as people hate it, kudos to them for having the balls to try it and let us all learn from their mistakes. I've got a project that's a horrible mix between postback and postback-less stuff. It's almost DailyWTF-worthy, but I don't regret doing it, as it's getting me closer to doing it right. And there are people using it right now, and it works, as much as I cringe about it.
For example, if you look at the 1.5MB JS file they send when I make an anonymous request, it's got every possible action I could ever do, regardless of whether I'm on a tweet page, the main page, a profile page, etc. It doesn't matter if I'm signed in or not: they're all in there as templates. And then for some reason they whack in a load of compressed jQuery stuff, etc. It has how to sign up, how to add people to follow, the congratulations screen for when you make your first few follows, etc.
That, to me, is just nuts. Why is there no split in the JS dependent on the kind of user they're going to be? On reflection, they probably think that too.
With regard to development environments, I do think you should force your programmers to understand your infrastructure. Give new programmers a couple of simple examples to play with on how to manage dependencies and load files, and that's that. In my view it's no different to saying 'we use these parts of C++, not these' or 'this is our coding style, you must follow it or your check-ins will be rejected'. Or 'this is how the industry this app is for works, these are the general business rules most of them have, this is the general workflow'.
New programmers in your organisation always have to pick up some contextual information over time. It's your job to train them in the most relevant key information asap. I think far too often it's just a case of 'you're a programmer, you're smart, let's just throw you in at the deep end and see if you can swim. Oh, why did you swim into the shark pen? Silly n00b.' Very relevant: http://news.ycombinator.com/item?id=3736800
One reason would be that if you use a compiled language server-side, it could be faster than JavaScript. Another reason would be that rendering pages is usually lighter weight: you may already have a lot of objects in memory on the server, especially if you're using ORM-layer caching, etc. And, as I understand it, a framework like Backbone has a slight delay in configuring itself.
And even in Firefox and Chrome. In fact, I'd ask most of these web application developers to try browsing the modern web on a not-so-modern computer: my 1GHz Athlon with 1GB of RAM… Even Twitter is dead slow. DOM manipulation is slow, CSS transitions are slow, infinite scrolling is a memory hog, fake smooth scrolling (disregarding the browser's native scrolling) is slow… everything is just a pain.
> also, you're convinced your site is more like an "application"
I think that's the big thing. I randomly arrive at tweets all the time; I open Gmail once or twice a day and leave it open. Even if you are a heavy Twitter user, you are still going to end up loading the page multiple times organically from browsing, even if you leave a main Twitter tab open.
The lines are blurring between dynamic content driven websites and web applications, but they are still there. IMO Twitter (and Gawker) got distracted by the new hotness, even if it wasn't appropriate for what they actually are.
No, I'm not twitter. Since when is their incompetence a metric for the capabilities of a technology stack?
You could hand them a top10 supercluster and they'd still failwhale it.
Remember, this is the same company that failwhaled for the better part of 2 years(?) on a pubsub app (one of the most researched and understood areas in computing, cf. telco industry, financial industry).
I don't fully understand your point. My quote you're using is meant to be rhetorical. So you're not Twitter... are you implying that you could build an all JS application and avoid all of the problems that Twitter ran into at Twitter's scale? If so, I'd love to see you execute your ideas successfully and explain them so we could all learn from you.
Yes, I'm implying that. I have personally done it for smaller apps. Google and others are demonstrating it with Gmail et al., at a significantly higher complexity than Twitter.
It's nonsensical to conclude "Twitter can't do it so it isn't possible".
I maintain twitter.com is slow because twitter is incompetent or doesn't care about their product.
When you look at what the page actually does then there's no reason why it has to be a 2MB download. There's also no reason why any javascript (other than a couple hundred bytes) needs to be loaded synchronously. There's no reason they can't serve direct tweet links as static HTML and upgrade the page post-load or when the user starts interacting with it. There's no reason the page needs to flicker for 2-3 seconds post-load before becoming usable.
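For what it's worth, here's a bare-bones sketch of that "static HTML first, upgrade post-load" idea; the .tweet-permalink selector and the '/js/tweet-page.js' bundle are made-up names, and real code would handle much more:

    // The tweet permalink arrives as plain server-rendered HTML, and the
    // interactive layer is only pulled in later, once the user shows intent.
    document.addEventListener('DOMContentLoaded', function () {
      var page = document.querySelector('.tweet-permalink');
      if (!page) return;

      var upgraded = false;
      function upgrade() {
        if (upgraded) return;
        upgraded = true;
        var s = document.createElement('script');
        s.src = '/js/tweet-page.js';   // the enhancement code, loaded lazily
        document.head.appendChild(s);
      }

      // The content is already readable; enhancement waits for interaction.
      page.addEventListener('mouseover', upgrade);
      page.addEventListener('touchstart', upgrade);
    });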
Isn't "they could serve direct links as static HTML" the whole point of this discussion?
For me the point of the discussion was the claim that the first-load performance of a fat-client app is inherently terrible.
This is false.
It is a straightforward optimization problem. Twitter didn't care to optimize.
I don't think it is under debate that the overall responsiveness (after first load) of a client-side app is head and shoulders above anything you can achieve in the request/response paradigm. Network latency is real; AJAX can mitigate it, but instant response is only possible when you don't hit the server.
So all we're talking about here is the specific (important) case of the initial page-load. As said above, that case has tons of optimization potential, up to the point where it's near indistinguishable from a regular HTML page-load.
Gmail is not a good example to support your case.
This may be subjective. Yes, GMail is slow. But it loads faster than twitter for me, despite being significantly more complex. I also imagine google has less incentive to optimize the first-load because unlike twitter users rarely follow deep-links to gmail.
Now imagine Twitter applied even just the little optimization that Google has done to their much smaller app; the latency problem would probably not exist. And if that's not enough, I can only repeat: it's a valid approach to serve static HTML for direct tweet links (or just for everything) and upgrade it asynchronously.
My point is that it's very possible to optimize this problem away where it matters (deep links to tweets). I've done it myself a couple times. It's nasty gruntwork, involving endless Firebug sessions. Crying Foul and "this is not possible" is a lazy cop-out.
You're making a lot of assumptions that are unfounded.
What you're suggesting is usually called progressive enhancement. Yeah it's great, but sadly when you are using a system like Spine, Backbone or Ember, it's not just a matter of "endless Firebug sessions" to resolve the initial bootstrap problem.
No one claims that this is not possible, nor is anyone "crying foul". On the contrary I know first hand how much of this work was done at Twitter and how difficult it is. But as you stand by your petulant claim that "twitter.com is slow because twitter is incompetent or doesn't care about their product." I stand by mine, that there are tradeoffs when embracing one style of development over another.
Feel free to disagree with me but back it up with experience, not false assumptions, blanket statements and the denigration of many good people.
> but sadly when you are using a system like Spine, Backbone or Ember, it's not just a matter of "endless Firebug sessions" to resolve the initial bootstrap problem.
Oh, what else is the matter then?
> back it up
You mean like you just backed up your claim of Spine/Backbone/Ember having some mythical, unspecified problem that prevents bootstrap optimization?
You're assuming your app has to be served as one large chunk of stuff. There are many JS libraries for asynchronously loading additional content, and it is simple enough to use one of these (or implement your own) so that your app is served as one small bootstrap and a bunch of additional content that is loaded as needed.
You say "static sites can spit back pages in like 200ms (across the network!!)". This time is entirely dominated by network latency. There is no way you're getting data to the client in 200ms if they're based in Australia and you're in, say, east coast US. With the client-side model you can push your app's bootstrap to a CDN to get ~200ms latency just about anywhere. You can render the UI and show a "Loading data" spinner to hide the latency of the initial request to your application server. This gives the user the appearance of a snappy site even though network latency might be 1s or more. You can't do this if you go the server-side route.
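A rough sketch of what that looks like in practice (jQuery is assumed to be on the page, and /api/timeline and #content are hypothetical): the shell and spinner render instantly from the CDN-served bootstrap, and the data fills in whenever the app server responds.

    $(function () {
      // Paint the chrome and a spinner immediately.
      $('#content').html('<div class="spinner">Loading data…</div>');

      $.getJSON('/api/timeline', function (tweets) {
        var html = $.map(tweets, function (t) {
          // Escape the tweet text before inserting it into the page.
          return '<p class="tweet">' + $('<div>').text(t.text).html() + '</p>';
        }).join('');
        $('#content').html(html);   // latency stays hidden behind the spinner
      });
    });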
I'm not making any assumptions; I'm telling a story about what I watched happen first hand. As you say, you can split up your assets, and that does help significantly. You can also put stuff on CDNs, which helps significantly too. Believe me, Twitter engineers are super smart, and they've done all of this. But at the end of the day, if your business model depends on people looking at and interacting with your content, or if you just want your site to be super fast, having a loading spinner while you wait for the rest of your application to download in the background isn't going to cut it.
Twitter did not think Rails was particularly adequate at serving up anything (for their extreme needs), so they dumped it for the JVM.
Charles Nutter is doing genius work with JRuby, btw; that's where Ruby has a future with the enterprise: on the JVM. Twitter dumping Rails for Scala was a major win for Odersky & friends, and a big hit to Rails (though Ruby/Rails continues to innovate; nothing has changed there).
Not sure why client-side applications take a second or two to render a page, but if that is the case, cached HTML on a front-end server would be the way to go; just avoid hitting the application server entirely...
Yes, you're correct: it was not an either-or proposition. I'm sorry, they dropped them both.
Seriously, I'm curious: what public-facing, or any-facing, components does Twitter use that are written in Ruby and/or Rails?
A Google search for "twitter rails" brings up the usual "got dumped" threads. A similar search, but this time for "twitter scala", brings up as its first result a "Scala School" for Twitter engineers, followed by a bunch of threads on the Twitter + Scala marriage.
Don't worry, Scala has its own issues (Yammer, for example, ditched Scala for Java due to, ironically enough, scalability issues).
The backend (i.e. Rails) still does almost everything it used to do: validations, access control, session management, data crunching, and everything else that you can't blindly trust a client to do. The real difference with client-side applications is that instead of stitching together view templates and sending back HTML documents, JSON objects are sent back for the client to represent.
More (sometimes duplicate) stuff is added to the client - things like client-side validations, and all the business logic and template code for the purposes of presentation. But no one in their right mind would let their backend blindly consume whatever it gets and persist it without question.
There's still plenty of responsibility for the backend.
Nowhere was this implied. The article says that Rails makes excellent RESTful APIs. Of course your API should do validations, access control, etc. But Rails doesn't have to do (much of) the HTML, given client-side MVC/MVVM libraries like Backbone and KnockoutJS.
Leave the rendering of the app to the client. Leave the ins and outs of the data to the API (driven by Rails, Node/Express/Railway, Django, what-have-you).
That's definitely applicable to mobile and web apps.
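As a concrete (if simplified) picture of that split, here's a minimal Backbone sketch: the backend only exposes a JSON resource (a hypothetical /api/posts), and the client owns all of the rendering. Backbone, Underscore and jQuery are assumed to be loaded, and the #posts element is a placeholder list.

    var Posts = Backbone.Collection.extend({
      url: '/api/posts'              // the backend only has to serve JSON here
    });

    var PostListView = Backbone.View.extend({
      el: '#posts',
      template: _.template('<li><%- title %></li>'),   // escaped interpolation
      render: function () {
        this.$el.html(this.collection.map(function (post) {
          return this.template(post.toJSON());
        }, this).join(''));
        return this;
      }
    });

    var posts = new Posts();
    var view = new PostListView({ collection: posts });
    posts.fetch({ success: function () { view.render(); } });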
Right. I guess what I meant is that the backend is still responsible for everything it's always done except for generating HTML views. The rest still applies.
So, tell me again why we should be using HyperText Transfer Protocol for doing something that has nothing to do with transferring hypertext?
It would make more sense just to pipe JSON through IRC since it's much less verbose and was designed to be symmetrical (and session based) from the start.
The caveat in moving state to the client is that it's a huge perceptual shift for developers, with a steep learning curve.
From my perspective we're coming full circle back to client/server desktop apps... only instead of C++, we're doing it with js inside a browser container... I've done ActiveX controls and Flash components... It's not that much of a stretch.
Agree, although I'm not fully convinced the web is fit for "web apps", or even that "web apps" make sense. At least not with current technology.
Developing apps to run in the browser still implies a lot of redundancy: you are forced to work with solutions that weren't created for interactive applications in the first place (HTTP, the DOM), limited to one language designed by committee (JavaScript), and code reuse is minimal. Everything just feels hackish (at best) compared to native frameworks (e.g., Apple's Cocoa), and in the end it's hard to achieve good UX and compatibility.
I would rather see more people deploying native applications with great UX backed by REST APIs than shoe-horning apps into browsers (which breaks the web).
1) Developers are worried that a user won't try their app if they have to install something on their computer.
2) Compatibility: support anything that can render HTML, although this may no longer be the case with the number of Chrome-only apps I see.
3) Firewalls, you can get out on port 80 pretty much anywhere, that random port number you decided to use for your app not so much.
4) A few years back, web programming was seen as the "easy" way to get into development, since writing relatively limited PHP was a lot easier than wrangling C++ and the Windows SDK. Therefore web developers reached critical mass.
Java applets and Flash did a reasonable job with 1 and 2, but they seem to be some of the most hated parts of the web. I think part of this may be because they are seen as "too powerful".
People want the web to be lightweight.
The problem with Java applets and Flash is that they are a broken model inside another broken model - they shoe-horn their own VMs to run applications inside a browser. It's the ultimate hack.
The web, as it was originally envisioned, makes perfect sense: HTTP, URIs and hypertext to provide navigable content, period. On the other hand, building interactive applications by manipulating the DOM while reinventing UI patterns and widgets over and over again seems like a hack that grew to enormous proportions.
I feel the reason "web apps" are so popular is that they allow small startups to develop more-or-less cross-compatible products faster and with fewer resources than developing a web service and then hiring four engineers: one for a Windows client, one for a Mac client, one for an iOS client and one for an Android client. The fact that browsers all follow more-or-less the same standards and all run JavaScript turned them into the "write once, run everywhere" platform that Java failed to deliver, but in my opinion it's still (very) far from ideal.
Frankly, I'm wondering how nice a GUI you can write inside Inferno. Just have everyone install an Inferno client and serve your app as a mountable filesystem containing the VM code to JIT as a binary file.
Java applets and Flash aren't hated because they're powerful. It's because they're perceived as slow. Java applets used to make my entire browser freeze up for a few seconds when they loaded. Before installing AdBlock, I would regularly see Flash ads chugging along at 100% on one of my CPU cores.
It's a shame that your post is disappearing into the gray background - I feel something similar every time I have to use Javascript & HTML for a rich interface.
Every time that happens, I wish I was using GTK or QT instead.
This is a recurring cycle in application development. We've gone from mainframes and minicomputers serving plain text to dumb terminals, to programs running on personal computers accessing applications and data on servers, to web servers serving structured text to 'dumb' browsers, to powerful in-browser runtime engines accessing applications and data over the web.
It is another iteration of that cycle, only this time we have something to lose, namely the web. We are regressing to client/server with a bunch of ever-changing single-site APIs and shoddy client code third parties can't readily fix, and these are destroying the world-wide web of repurposable content in open formats at stable addresses.
Aren't we just embracing the difference between a site and an API? It's hard to do both well at the same URL. The API provides the repurposable content in an open format, and the site itself is free to experiment with different presentations. Is that so bad?
Before devs started experimenting with client-side rendering, all sites' content was amenable to the same set of tools. Now there are more and more services with broken frontends and unique APIs which are incompatible with everything else and aren't even stable—when you can rev your own client js instantly, you don't know or care whether any other clients broke. I'm stuck using only one client (your js) that works at all and I can't fix it, which is almost everything that client/server got wrong the first time around. It's not impossible to carefully implement a stable form-compatible API in common with a bunch of similar sites, but I don't see it happening.
It's the prisoner's dilemma: yeah, everyone on the web would be better off if businesses weren't doing this, but individual businesses will continue to do it as long as it's in their best interest to retain very tight control of and limit third-party access to the data and meta-data they capture from users.
I'm not sure how or even if the infrastructure of a distributed system like the web could be engineered so as to prevent this kind of situation. Perhaps the solution is to build in a system of financial incentives -- not unlike what the Bitcoin folks have done to solve the Byzantine Generals problem. It's an interesting problem.
For usability, response time is everything. So for the moment, if people want to build richer apps then they will have to be asynchronous. But do you think there will ever come a day when you can assume the network is always fast, and we will be able to go back around the circle to the mainframe again?
We have ways to transfer only data (versus data + markup + DOM triggers) on partial requests, which is even less taxing than rendering a full static page.
Is it really "less taxing"? An app page that polls every few seconds to get back an empty JSON block is going to be eating up the network radio on a smartphone constantly, versus just getting back a 120k HTML page and a few CSS/image requests one time, which may then sit there and be read (or have a form filled out) for the next few minutes.
I just submitted a talk on this very same notion, "all of this has happened before, and all of this is happening again", in regards to the rebirth of client/server.
It shows up in mobile apps, browser-based JS apps, etc.
Now that native App-centric ecosystems exist, the client-server model is more pronounced. Supporting these different clients forces this perception shift to a more client agnostic server back end.
This feels very conciliatory towards Rails, like that of a boss praising the employee he just demoted.
I'm bearish on Rails because its maintainers don't want it to just be an API. The fight to become the best backend API is much different than the fight to become the best HTML server. Rails isn't even participating in that fight. Rails has a lot of cruft that isn't needed in an API; Sinatra or Express feel much better for that.
Rails became popular because with it developers had the ability to cut down on soul-sucking activities in their day-to-day jobs, like building yet another authorization system, or yet another admin. That some people ran with it and scaled it to its limits is only because they fell in love with its ease of getting things done.
That's its main advantage. When it comes to getting things done, there's no better alternative.
> The fight to become the best backend API is much different than the fight to become the best HTML server.
Basically the best backend API is the one that is the fastest / that scales better both horizontally and vertically / that's more malleable to new developments and protocols (e.g. SPDY, Websockets) / that has the least overhead. From that point of view, if you're running on top of a VM that has a GIL, or that doesn't support real thread-based concurrency, then the "battle" is already lost.
So you see, there is no fight. That fight was won long ago by the JVM and the frameworks that run on top of it. Every big website on the web right now (except Microsoft stuff) runs either on top of the JVM or on custom-baked solutions written in C/C++.
That didn't stop people from building cool stuff on top of PHP, or Rails, or Django, or every other platform that brought instant gratification, but that's another point entirely and we are talking about "battles" here.
> Rails became popular because with it developers had the ability to cut down on soul-sucking activities in their day to day jobs, like building yet another authorization system, or yet another admin.
I beg your pardon, but... what?! Out of the box, Rails has none of this. Rails is almost a meta-framework. Honestly, taking your two examples: Devise is horribly complicated, and the half-dozen admin frameworks that had a hard time crossing the 3.x line (which is not encouraging for the future) are either generators, too incomplete to be useful outside of a hello world, or frameworks in themselves. I defy anyone beginning with Rails to set up Devise and an admin system in less than half a day, let alone an hour. I'm not trying to pick a fight, but anecdotally, compare this to Django, where setting up both auth and admin is easily under 30 minutes for a newcomer, with ample possibilities left ahead.
So Rails helps a lot in various areas (like resource routing, respond_to/with), but it still ends up being soul-sucking in numerous others where you have to either delve into needlessly complicated stuff or implement it yourself.
"but it still ends up being soul-sucking in numerous others where you have to either delve into needlessly complicated stuff or implement it yourself."
This is a false equivalence, IMO. With Rails I spend far less time on incidental complexity than I do with anything else I've tried.
Wikipedia and CL largely serve static files, so it's no biggie what the application server is; the front-end proxy (httpd, nginx, lighttpd, etc.) does the heavy lifting.

WordPress is an interesting one; I wonder if they aren't doing some C++ pre-compilation à la Facebook's HipHop? Maybe not: WP is pretty light code-wise (compared to the slow, bloated dog that is Drupal), and for the most part personal blogging sites are not handling Twitter-level bandwidth, last I checked.
@bad_user is a cool thinker, @icebraining; there's no hot-headedness to be found. He's just stating the facts with regard to industry trends for enterprise-level applications: it's an M$ and JVM world.
Can you define big? I've worked on some big sites that used MRI ruby for APIs and we served a ton of traffic with strict SLAs for a max of 250ms at the 99th percentile and things like that.
oh, Twitter (Scala/Java), American Airlines (Java), Facebook (on HipHop in C++), Stackoverflow (C#), ESPN (Java)
I think "big" there is a code word for enterprise.
The big sites you have worked on are comparatively small if Ruby is backing the show. That's not to take away from the ton of traffic that you guys were able to serve; it's just that there are few enterprise level Ruby backed sites running these days.
Github, I believe is one, but lately I've been getting the "unicorn is angry" icon when viewing repositories, so I wonder about scalability issues. For the record, I have never, ever seen a "unicorn is angry" icon on Twitter. Maybe switching to the JVM got rid of all the magic ;-)
That's not a clear definition. If I had to guess, you're implying that "big" means a top-100 site. Enterprise is not the same thing as big, at all. They may co-occur often, but they are orthogonal.
"The big sites you have worked on are comparatively small if Ruby is backing the show."
That's a big assumption. You have no idea who I am or what I've worked on. One of the sites I worked on was yellowpages.com. That's a top-1000 site, but even that doesn't tell the full story: when I was there, we were serving ads for most requests to Bing Maps. Do you consider Bing Maps comparatively small?

I currently work for Disney, which runs espn.go.com. The person sitting next to me right now worked on espn.go.com before she transferred to my group. I can assure you that espn.go.com could easily be served with Ruby instead of Java.

I'm questioning a couple of specific claims made by bad_user: that all big sites use the JVM, C/C++ or .NET. I think that is false, or the definition of "big" is so narrow as to be meaningless for 99.9% of programmers. I'm also questioning the claim that the best backend is the one that is the fastest. I'd argue that the best backend is the one that is fast enough and the cheapest; wouldn't you agree? As I mentioned before, I was working on an API that served requests for Bing Maps with a very tight SLA. We ran it on MRI Ruby, and we met the SLA.
Here's something that people often forget: you can put things like Varnish in front of your API. This doesn't work for everyone, but if your API is easily cached, then you shouldn't have any problems scaling it even if you're using a language like MRI Ruby, which has a GIL.
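To illustrate (shown in Express, mentioned elsewhere in this thread, rather than Ruby, and with an invented endpoint, payload and TTL): as long as the API emits cache-friendly headers, Varnish or a CDN can absorb the repeat reads and the app server only sees the misses.

    var express = require('express');
    var app = express();

    app.get('/api/trends', function (req, res) {
      // public + max-age lets Varnish (or a CDN) answer repeat requests
      // without touching the application server at all for 60 seconds.
      res.set('Cache-Control', 'public, max-age=60');
      res.json({ trends: ['#rails', '#backbone'] });   // placeholder payload
    });

    app.listen(3000);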
One thing I find interesting is the balance of the Rails model and the Backbone model. On a new project, I generally have fat Rails models and skinny Backbone models. On legacy projects, I put most of the logic into Backbone (since I'd rather not tamper with a legacy backend) making liberal use of parse() and toJSON() in order to get the model the way I want to use it.
Rails/Backbone will work well for the time being, but I wonder which way the balance will swing. Will we have fat Rails models or fat Backbone models? If it's fat Backbone models, then maybe direct database queries are the solution, cutting out Rails entirely. CouchDB and others already support this and could make it even simpler in the future.
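A small sketch of the parse()/toJSON() approach being described, with invented legacy field names and URL: adapt an awkward legacy payload into the shape the client wants, and convert it back on save, without touching the backend.

    var LegacyUser = Backbone.Model.extend({
      urlRoot: '/legacy/users',

      // The legacy API returns { user_full_name: "...", user_mail: "..." };
      // reshape it into what the client-side code wants to work with.
      parse: function (response) {
        return { name: response.user_full_name, email: response.user_mail };
      },

      // Convert back to the shape the legacy backend expects when saving.
      toJSON: function () {
        return { user_full_name: this.get('name'), user_mail: this.get('email') };
      }
    });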
The problem with fat client-side models is that it allows malicious users to potentially break your code. Inject bad things. Re-assign users. Etc.

We'll always need some sort of server-side, last-mile, tamper-proof validation. So given that, I'm not sure what the benefit of duplicating it on the client side is.
You duplicate validation logic on the client side in order to provide rapid feedback to the user. But it's a huge mistake to not let the backend have the final say.
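Something like the following is all the client-side half usually amounts to (the form ids and the 140-character rule are just examples); the server still re-validates everything on submit.

    function validateTweet(text) {
      if (!text || text.trim() === '') return 'Tweet cannot be empty';
      if (text.length > 140) return 'Tweet is too long';
      return null;   // looks fine locally; the backend still re-checks everything
    }

    document.getElementById('tweet-form').addEventListener('submit', function (e) {
      var error = validateTweet(document.getElementById('tweet-text').value);
      if (error) {
        e.preventDefault();   // instant feedback, no round trip
        document.getElementById('tweet-error').textContent = error;
      }
      // If it passes, the request goes through and the server validates again.
    });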
This is how I feel about Flask.
I've been using it with Backbone.js and they make a great combo: Flask API serving JSON, and the rest of the logic on the client.
Is there any solution like this that bundles an easy mobile library/SDK? I was asking for something like a semi-complete open-source backend solution here (http://news.ycombinator.com/item?id=3735387), but unfortunately I got no responses.
I believe Twitter and Foursquare are doing this too. Their Scala-based appservers are API endpoints, serving JSON to the javascript/Backbone.js frontend.
I tried this approach and it's clearly attractive, but it also has some disadvantages. I decided to switch back to rendering static HTML in Django (not a Rails guy), with just a little JS on top for now. These are my main reasons:
- Debugging on every browser was quite painful. I tried to support IE7+ as well. If server-side code works, it works for everyone.

- Frameworks on the client side are not as developed as the server-side ones. Also, I am reluctant to introduce more client-side libraries than necessary, so I would end up coding a lot of stuff that I can otherwise get for free in Django.
- The user will need to wait for the JS to load and execute until they see something on the first pageload (at least in my implementation).
- If there is a bug in the JS it's harder to get it logged.
Twitter is a perfect example of a "thick client". It's slow, bloated and buggy. When one typo is made in the JavaScript "bundle", the whole site implodes, leaving the user with simply the navigation bar. When JavaScript is turned off, most functionality is lost. I couldn't even log out after disabling JavaScript because "/logout" pointed to a hash.
JavaScript being enabled is not an axiom. The client is a very unstable platform when juxtaposed with the server. A lightweight client facilitates flexibility.
I'd love to know what people think the current "state of the art" stack is for rich and highly scalable web applications, something that has been proven to work great in production by big projects. Is it nginx + Rails + Backbone? When I think of fast rich web apps, Facebook and StackOverflow come to mind, and as far as I remember, neither use the above-mentioned tools (not that it means they're bad in any way).
Just a couple of years ago Rails-generated HTML pages were super hot, and now people are almost insulted if you don't have a fairly complex Javascript framework running the show.
So what's the future for Rails? If you talk to the likes of 37Signals and GitHub, it's pjax and server side rendering. This involves fetching a partial of HTML from the server with Ajax, updating the page and changing the URL with HTML5 pushState.
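Stripped of all the details, the pjax idea looks roughly like this (the #main container and the data-pjax attribute are illustrative; the real pjax library also handles popstate, caching and fallbacks):

    function pjax(url) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url);
      xhr.setRequestHeader('X-PJAX', 'true');   // ask the server for just the partial
      xhr.onload = function () {
        document.querySelector('#main').innerHTML = xhr.responseText;
        history.pushState({ pjax: true }, '', url);   // URL changes, no full reload
      };
      xhr.send();
    }

    // Route internal links through pjax instead of full page loads.
    document.addEventListener('click', function (e) {
      var link = e.target.closest && e.target.closest('a[data-pjax]');
      if (!link) return;
      e.preventDefault();
      pjax(link.getAttribute('href'));
    });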
I really hope not. Rails is awesome and I love it to bits, but I will hate having to use a framework that is massively invested in Ajax/pjax. The reason for this is simple: UX. Companies don't seem to understand that there's more to the world than just America and America's top-notch Internet [1]. A lot of web traffic comes from places where the connectivity is simply shit, and interfaces like Quora, Gmail, Twitter et al., which have so much functionality with client-side JavaScript as a prerequisite, make it seriously difficult to have an enjoyable experience more often than not. If you've ever tried to access Google Analytics on a 1 Mbps shared connection in a relatively populated office, you'll know what I mean.
Yes of course Ajax and the interactivity and convenience it offers are awesome, but they come at a massive expense, and this is something that will be very saddening if Rails starts considering it as a major part of its core sooner rather than later.
[1] Alright so it's not just America, but my point stands.
Thing is, AJAX should help with this stuff. No more wasteful page reloads where 90% of the HTML is the same, etc. Granted, we don't see that all the time, but the theory is sound.
Quite the contrary: pjax reduces bandwidth and, properly implemented, reduces the total number of requests (since once the HTML, CSS, JS and images are downloaded, you just hit the server for the content that varies: usually the page's main content area and title).
can't get much leaner than that. IE 10 supports pushState(), so the future is looking bright.
I'm all-in now; give the poor ole' JVM beast a break, it might OOME on me if I actually put it to work...
A bit off topic, but: I find that either Rails + pjax or Clojure/Noir + pjax really hits a sweet spot of ease of coding, not too much Javascript, and an AJAXy user experience. Worth spending a little time experimenting with.
Let's talk about context switches here: OMG! It's going to be sooo hard for developers to switch from a model where all the logic happens on the server and the client is just a dumb display, to a model where all the logic happens on the client and the server just serves as a coordinator.
Oh wait. That already happened, and it was called the switch from mainframes to PCs (or whatever the fuck you want to call it). The point is: developers adapted, and got over it, and you will too.
The question I (and everyone in this community) have (or should have) is - are there strong technical foundations for this switch? Is Javascript the right client-side interpreted language to support every conceivable application over the next 10 years? Is HTML the right display/templating language?
My answer: no, not at all. But it's hard as hell to fight the momentum of an unstoppable force. Even with an immovable object.
I didn't notice the 'kudos' button, but went to look at it and accidentally sent one. Makes me wonder how many of them were accidentally sent by people hovering to see what it was... Aside from the hover state triggering an irreversible action, it's a nice little interaction.
My only gripe with the shift to client-side MVC frameworks and such (which I otherwise like!) is that it often makes handling security (from an authorization perspective) a bit difficult. At the end of the day, I cannot put my authorization-based presentation logic in the frontend, since it can be easily manipulated.
I don't think anyone is advocating this kind of development. In general you can't trust anything the client tells you, so to release information to a client that has not provided proper authentication credentials to the server is a mistake.
The major reason I'm excited about this shift is the potential for higher scalability out of the box. Instead of rendering pages, the server just serves static assets and data. Instead of using a central server to render for each user, the work is distributed out to the computing power of all the clients.
Yes, agreed: generate static everything on the first request; i.e., hit the application server just once for live data and generate static file(s) representing the request on the front-end server (httpd, nginx).

That leaves authentication and CRUD operations for the application server to work on; the rest lives in memory (via Google's mod_pagespeed, for example) on the front-end server.

I'm taking this approach as I like speed, and I don't yet trust the JVM to handle tons of live requests without OOMEing on me while I'm out surfing ;-)
Only if the "Web" for you consists of mostly static content.

For others, there is this interesting thing called Web Applications.

How do you gracefully degrade a game built with all the cool stuff HTML5 and friends offer?
No, I'm not talking about static content, I am talking about dynamic content, which is exactly what Rails is there for. If you only have static content you don't even need Rails, just have the web server deliver it.
I was not talking about "Web Applications" either, which, FYI, are not the same thing as dynamic content. I realize that he mentions in the post that he's mostly talking about Web Applications, but then he's just stating the obvious and the post title should be "For Web Applications, Rails is just an API".
MVC for server-side APIs is still useful because you still need separation of concerns, validations, access control, session management, data crunching, etc on the server.