Let's say you have a website, and let's say that website serves a lot of traffic. Now let's say you relegate Rails to acting as just your API, because clients are fast now and you want to do some new cool stuff with JavaScript; also, you're convinced your site is more like an "application". So you build this system and it works really well, except for one thing: the first time you load the page, it takes a second or two minimum to get the page loaded. That's OK at first, but it soon becomes annoying. So you do everything you can with your client, caching whatever possible. You get the load time down to something like 1s. That's better, but still not great. Some static sites can spit back pages in like 200ms (across the network!!), and that 800ms difference is crucial to the overall feel and snappiness of your "application". Eventually you give up: you keep the parts of the site that work well as a client-side app and push the rendering back to the server, because even though the server is a long way from the client, and even though clients are fast, you can directly control how quickly the server builds a page and optimize that to your heart's content. Now you've gotten your page loads way, way down to like 400ms. You're doing great, people love you, and you've saved the day. I've fabricated the numbers, but this story is based on truth, because you know who you are? You're Twitter, Alex's employer.
That's not to say that Rails doesn't make a great API, or that Alex doesn't have a point. There is certainly a place for frontend applications built on frameworks like Spine, Backbone or Ember (all great projects), and those types of applications have their advantages in some cases. But it's prudent to be pragmatic and to recognize that the times when you truly need to build a client-driven application are few, and that for the other times Rails is still great at serving up HTML.
Why does a client-side app, hand-wavingly, take a second or two minimum to load compared to the 400ms server-side rendered one?
Exactly the same work takes place on the server apart from template processing. It makes no sense. If your API takes two seconds to build a response, so will your server-side rendered page.
There's no real overhead to DOM insertion and simple template output. If you're doing crazy stuff, it's going to take a long time wherever you do it.
Admittedly, it's still very tricky to get right at the moment. I also don't really agree with the author's statement about desktop experiences in a browser. I'm beginning to believe it's impossible, especially without the native UX of the OS. I don't think we'll ever have that experience as long as there's an address bar, back buttons and all the other cruft at the top of the screen. I think users may always see it as a web page and expect it to behave like one. Hopefully I'm wrong.
Good question. I should be clear and state that when I say 1-2s page load times, I mean the time from when one requests the page to when one can see it. In general, API requests are fast; what is slow is having to always deliver a large JS bootstrap across the network (JS files, templates, etc.) and then having to make the relevant API requests to produce the UI. Caching is generally touted as a good solution to this problem, but in my experience it tends to be insufficient. Yahoo did a (now somewhat old) study showing that 40-60% of visitors arrive at your site with an empty cache (http://www.yuiblog.com/blog/2007/01/04/performance-research-...). Another solution is to pre-fetch data for the UI and deliver it with the bootstrap, but it's still difficult to speed up a ~1-2 MB download. This is what really ends up killing the experience.
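For the curious, the pre-fetch idea looks roughly like this; a minimal sketch, assuming the server inlines the first screen's data into the HTML (window.__initialData and the endpoint are made-up names, not anyone's actual API):

    // If the server inlined the data, render immediately; otherwise
    // fall back to the usual API round trip.
    function getInitialData(callback) {
      if (window.__initialData) {
        // Inlined in the initial HTML payload: no extra request needed.
        callback(window.__initialData);
      } else {
        // No inlined payload (e.g. after client-side navigation).
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/api/timeline.json'); // illustrative endpoint
        xhr.onload = function () {
          callback(JSON.parse(xhr.responseText));
        };
        xhr.send();
      }
    }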
Only pull the bare minimum needed to render the initial page on the first request; load everything else, when you need it, asynchronously in the background. There is no way the 'bootstrap' needs to be 1-2 MB. There is absolutely no reason to load every single bit of JavaScript you could ever need on the first page load, just as you don't load every image on your site on the first request.
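Something like this; an illustrative sketch, where the helper and file name are made up rather than any particular library's API:

    // Inject a <script> tag to pull non-critical code in the
    // background after the initial page is usable.
    function loadScript(src, onLoad) {
      var s = document.createElement('script');
      s.src = src;
      s.async = true;
      s.onload = onLoad;
      document.head.appendChild(s);
    }

    // Ship only what the first screen needs, then fetch the rest:
    loadScript('/assets/profile-editor.js', function () {
      // The profile editor is now available if the user opens it.
    });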
It's really easy to armchair-engineer and declare something bad design. What's harder is to actually build a system that scales well in the very unusual circumstances that sites like Twitter, Google and Facebook find themselves in.
Let's take your solution: it's really difficult to build a development environment that allows a large team to work efficiently unless it's based on having essentially a single bootstrap file when deployed in production. When you break your files apart into smaller chunks, you are asking your developers to understand the intricacies of asynchronously loading each file, and you impose dependency management on everyone. This is a huge problem at Twitter's scale.
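To make that burden concrete: with an AMD-style loader such as RequireJS, every file has to declare its dependencies so chunks can be fetched lazily. The module names and api helper below are hypothetical:

    // Every module declares what it needs; the loader fetches it.
    define(['timeline/view', 'common/api'], function (TimelineView, api) {
      return function showTimeline(el) {
        api.get('/1/statuses/home_timeline.json', function (items) {
          new TimelineView({ el: el, items: items }).render();
        });
      };
    });
    // Every developer now has to get these declarations (and their
    // load-order implications) right, in every file.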
I do agree that it's still very hard, and that many of the 'it's just an API' posts are wishful thinking for the next year or two.
However, Twitter is a perfect example of how not to do it: they inadvertently shot themselves in the foot with hashbangs, so the server never knows from the first request what's actually being asked for. That's partly a mix of HTML5 history being so slow to arrive and Google encouraging the #! stuff in the first place. But everyone's still learning; full-page, postback-less JS applications are still a pretty new field.
Twitter was a very early trailblazer. As much as people hate it, kudos to them for having the balls to try it and let us all learn from their mistakes. I've got a project that's a horrible mix of postback and postback-less stuff. It's almost DailyWTF-worthy, but I don't regret doing it, as it's getting closer to getting it done right. And there are people using it right now, and it works, as much as I cringe about it.
For example, if you look at the 1.5 MB JS file they send when I make an anonymous request, it's got every possible action I could ever take, regardless of whether you're on a tweet page, the main page, a profile page, etc. It doesn't matter whether I'm signed in or not; it's all in there as templates. And then for some reason they whack in a load of compressed jQuery stuff, etc. It has the sign-up flow, the people-to-follow suggestions, the congratulations screen for your first few follows, and so on.
That, to me, is just nuts. Why is there no split in the JS dependent on the kind of user you're going to be? On reflection, they probably think that too.
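Something as simple as this would be a start; a sketch where the file names and sign-in check are made up for illustration:

    // Pick the bundle by the kind of user instead of shipping every
    // template to everyone.
    var isSignedIn = document.cookie.indexOf('auth_token=') !== -1;
    var script = document.createElement('script');
    script.async = true;
    script.src = isSignedIn
      ? '/assets/app-signed-in.js'   // composer, settings, follow UI...
      : '/assets/app-anonymous.js';  // tweet pages, sign-up flow...
    document.head.appendChild(script);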
With regard to development environments, I do think you should force your programmers to understand your infrastructure. Give new programmers a couple of simple examples to play with showing how to manage dependencies and load files, and that's that. In my view it's no different from saying 'we use these parts of C++, not those', or 'this is our coding style; follow it or your check-ins will be rejected', or 'this is how the industry this app is for works, these are the general business rules most of its players have, this is the general workflow'.
New programmers in your organisation always have to pick up some contextual information over time. It's your job to train them in the most relevant key information ASAP. I think far too often it's just a case of 'you're a programmer, you're smart; let's throw you in at the deep end and see if you can swim. Oh, why did you swim into the shark pen? Silly n00b.' Very relevant: http://news.ycombinator.com/item?id=3736800
One reason would be that if you use a compiled language server-side, it could be faster than JavaScript. Another would be that rendering pages server-side is usually lighter weight: you may already have a lot of objects in memory on the server, especially if you're using ORM-layer caching, etc. And, as I understand it, a framework like Backbone has a slight delay while configuring itself.
And even in Firefox and Chrome. In fact, I'd ask most of these web application developers to try browsing the modern web on a relatively modest computer: my 1 GHz Athlon with 1 GB of RAM… Even Twitter is dead slow. DOM manipulation is slow, CSS transitions are slow, infinite scrolling is a memory hog, fake smooth scrolling (disregarding the browser's native scrolling) is slow… everything is just a pain.
> also, you're convinced your site is more like an "application"
I think that's the big thing. I randomly arrive at tweets all the time; I open Gmail once or twice a day and leave it open. Even if you are a heavy Twitter user, you are still going to end up loading the page multiple times organically from browsing, even if you leave a main Twitter tab open.
The lines between dynamic, content-driven websites and web applications are blurring, but they are still there. IMO Twitter (and Gawker) got distracted by the new hotness, even when it wasn't appropriate for what they actually are.
No, I'm not Twitter. Since when is their incompetence a metric for the capabilities of a technology stack?
You could hand them a top-10 supercluster and they'd still failwhale it.
Remember, this is the same company that failwhaled for the better part of 2 years(?) on a pub/sub app (one of the most researched and well-understood areas in computing; cf. the telco and financial industries).
I don't fully understand your point. The quote of mine you're using is meant to be rhetorical. So you're not Twitter... are you implying that you could build an all-JS application and avoid all of the problems Twitter ran into, at Twitter's scale? If so, I'd love to see you execute your ideas successfully and explain them so we can all learn from you.
Yes, I'm implying that. I have personally done it for smaller apps, and Google and others are demonstrating it with Gmail et al., at a significantly higher complexity than Twitter's.
It's nonsensical to conclude "Twitter can't do it so it isn't possible".
I maintain twitter.com is slow because twitter is incompetent or doesn't care about their product.
When you look at what the page actually does, there's no reason it has to be a 2 MB download. There's also no reason any JavaScript (other than a couple hundred bytes) needs to be loaded synchronously. There's no reason they can't serve direct tweet links as static HTML and upgrade the page post-load, or when the user starts interacting with it. And there's no reason the page needs to flicker for 2-3 seconds post-load before becoming usable.
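A sketch of that upgrade-post-load approach, assuming the server sends a complete, readable tweet page; the IDs and classes here are illustrative:

    // The page works without JS; this script only adds behaviour.
    document.addEventListener('DOMContentLoaded', function () {
      var tweet = document.getElementById('tweet');
      var reply = tweet && tweet.querySelector('.reply-button');
      if (!reply) return; // page stays fully readable either way
      reply.addEventListener('click', function () {
        // Reveal a reply box that was also rendered server-side.
        tweet.querySelector('.reply-box').style.display = 'block';
      });
    });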
Isn't "they could serve direct links as static HTML" the whole point of this discussion?
For me the point of the discussion was the claim that the first-load performance of a fat-client app is inherently terrible.
This is false.
It is a straightforward optimization problem. Twitter didn't care to optimize.
I don't think it's under debate that the overall responsiveness (after first load) of a client-side app is head and shoulders above anything you can achieve in the request/response paradigm. Network latency is real; AJAX can mitigate it, but instant response is only possible when you don't hit the server.
So all we're talking about here is the specific (important) case of the initial page load. As said above, that case has tons of optimization potential, up to the point where it's nearly indistinguishable from a regular HTML page load.
Gmail is not a good example to support your case.
This may be subjective. Yes, Gmail is slow. But it loads faster than Twitter for me, despite being significantly more complex. I also imagine Google has less incentive to optimize the first load because, unlike Twitter, users rarely follow deep links to Gmail.
Now imagine Twitter applied only the little optimization that Google has to their much smaller app; the latency problem would probably not exist. And if that's not enough, I can only repeat: it's a valid approach to serve static HTML for direct tweet links (or just for everything) and upgrade it asynchronously.
My point is that it's very possible to optimize this problem away where it matters (deep links to tweets). I've done it myself a couple of times. It's nasty gruntwork, involving endless Firebug sessions. Crying foul and saying "this is not possible" is a lazy cop-out.
You're making a lot of assumptions that are unfounded.
What you're suggesting is usually called progressive enhancement. Yeah, it's great, but sadly, when you are using a system like Spine, Backbone or Ember, it's not just a matter of "endless Firebug sessions" to resolve the initial bootstrap problem.
No one claims that this is not possible, nor is anyone "crying foul". On the contrary I know first hand how much of this work was done at Twitter and how difficult it is. But as you stand by your petulant claim that "twitter.com is slow because twitter is incompetent or doesn't care about their product." I stand by mine, that there are tradeoffs when embracing one style of development over another.
Feel free to disagree with me but back it up with experience, not false assumptions, blanket statements and the denigration of many good people.
> but sadly when you are using a system like Spine, Backbone or Ember, it's not just a matter of "endless Firebug sessions" to resolve the initial bootstrap problem.
Oh, what else is the matter then?
> back it up
You mean like you just backed up your claim of Spine/Backbone/Ember having some mythical, unspecified problem that prevents bootstrap optimization?
You're assuming your app has to be served as one large chunk of stuff. There are many JS libraries for asynchronously loading additional content, and it is simple enough to use one of these (or implement your own) so that your app is served as one small bootstrap and a bunch of additional content that is loaded as needed.
You say "static sites can spit back pages in like 200ms (across the network!!)". That time is entirely dominated by network latency. There is no way you're getting data to the client in 200ms if they're in Australia and you're on, say, the US East Coast. With the client-side model you can push your app's bootstrap to a CDN and get ~200ms latency just about anywhere. You can render the UI and show a "Loading data" spinner to hide the latency of the initial request to your application server. This gives the user the appearance of a snappy site even though network latency might be 1s or more. You can't do this if you go the server-side route.
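A sketch of that spinner approach: the CDN-served bootstrap paints the chrome immediately, then fills in data when the (slower) application server responds. The endpoint and renderItem template here are made up:

    function renderItem(item) {
      return '<p>' + item.text + '</p>'; // stand-in for a real template
    }

    // Paint the shell right away so the page feels instant.
    var content = document.getElementById('content');
    content.innerHTML = '<div class="spinner">Loading data...</div>';

    // Fill in the real data when it arrives.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/timeline.json');
    xhr.onload = function () {
      content.innerHTML = JSON.parse(xhr.responseText)
        .map(renderItem).join('');
    };
    xhr.send();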
I'm not making any assumptions; I'm telling a story about what I watched happen first hand. As you say, you can split up your assets; that helps significantly. You can also put stuff on CDNs; that helps significantly too. Believe me, Twitter's engineers are super smart and they've done all of this. But at the end of the day, if your business model depends on people looking at and interacting with your content, or if you just want your site to be super fast, showing a loading spinner while you download the rest of your application in the background isn't going to cut it.
Twitter did not think Rails was particularly adequate at serving up anything (for their extreme needs), so they dumped it for the JVM.
Charles Nutter is doing genius work with JRuby, btw; that's where Ruby has a future in the enterprise: on the JVM. Twitter dumping Rails for Scala was a major win for Odersky & friends and a big hit to Rails (though Ruby/Rails continues to innovate; nothing has changed there).
Not sure why client-side applications take a second or two to render a page, but if that's the case, cached HTML on a front-end server would be the way to go: just avoid hitting the application server entirely...
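A toy sketch of that front-end cache in Node; a real setup would use something like Varnish or nginx, and the upstream host here is made up:

    var http = require('http');
    var cache = {}; // path -> { body, expires }

    http.createServer(function (req, res) {
      var hit = cache[req.url];
      if (hit && hit.expires > Date.now()) {
        res.end(hit.body); // cache hit: app server never touched
        return;
      }
      // Cache miss: ask the application server, keep the HTML for 60s.
      http.get({ host: 'app.internal', path: req.url }, function (up) {
        var body = '';
        up.setEncoding('utf8');
        up.on('data', function (chunk) { body += chunk; });
        up.on('end', function () {
          cache[req.url] = { body: body, expires: Date.now() + 60000 };
          res.end(body);
        });
      });
    }).listen(8080);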
Yes, you're correct; it was not an either-or proposition. I'm sorry, they dropped them both.
Seriously, I'm curious: what public-facing (or any-facing) components does Twitter use that are written in Ruby and/or Rails?
A Google search for "twitter rails" brings up the usual 'got dumped' threads. A similar search for "twitter scala" brings up, as its first result, "Scala School" for Twitter engineers, followed by a bunch of threads on the Twitter + Scala marriage.
Don't worry, Scala has its own issues (Yammer, for example, ditched Scala for Java due to, ironically enough, scalability issues).