Cool article, thanks for the insight into your stack. One thing that threw me out of the gate, though, was lumping Backbone.js in with Meteor & Ember.
Backbone at its heart is just a way of providing some structure to your JS models. It doesn't have anywhere near the level of complexity that Ember, Meteor, and some of the other frameworks have, and it still doesn't provide anything for server-side persistence, synchronization, access control, etc., so transitioning away from Rails to Backbone wouldn't really be an option. :)
Big thanks for dropping the link to cache_digests too, not sure how I've managed to miss out on that so far!
Congrats on getting your Rails site to load faster, but it's still slow.
Honestly, what you're doing at this point is delaying the big rewrite. That's not a bad thing, but it will still be necessary at some point to offload your real-time features to systems that are intended for those kinds of things from the ground up.
Rails was fundamentally designed to optimize for programmer productivity over speed of execution, with the idea that people generally cost more than hardware. Unfortunately, once you get to serious scale, the hardware needed to keep things quick starts costing a lot, operations become a big headache, and stuff just doesn't work.
It's going to take a seismic shift for Rails to suddenly outperform lightweight 'real-time' frameworks running on faster scripting languages. We need to be honest and aware of both the benefits and the limitations of our tools.
Can I ask that all product blogs provide a link to the actual site? Not everyone knows what LayerVault is, and it would be nice to browse to the site without having to manipulate the address bar.
Here is a recent video of DHH discussing some of the techniques and issues mentioned in this article, such as Backbone vs. Rails, PJAX, and Russian doll caching:
Really interesting article, but the final line seems slightly disingenuous: "Building a realtime web application does not require you to begin with a realtime framework."
Rails is used for CRUD operations, but Node/Socket.IO/jQuery are your 'realtime' framework. It's just simple, proprietary (and not in a bad way), and does exactly the job you need.
Very true. I see some folks assume that to get that realtime performance, you need to start with a "realtime" framework.
We probably will hit a ceiling with Rails (once we decide to take the time to get from 500ms down to 50ms, perhaps) that we might not run into using something built for the job. LayerVault wasn't built to be realtime first and foremost.
As soon as the author wrote "socket.io" the gears in my brain ground to a stop.
Use observers, Pusher, and a client-side MVC. Don't serialize records deeply; your client-side controller or framework should handle the updates nicely. IMO, use `after_commit` (http://rails-bestpractices.com/posts/695-use-after_commit). Just let that data flow, brawhlings.
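For what it's worth, a minimal sketch of that pattern, assuming the pusher gem; the Document model, channel, and event names are made up:

```ruby
class Document < ActiveRecord::Base
  # after_commit fires only once the transaction is actually written,
  # so clients are never told about records they can't fetch yet
  after_commit :push_change, on: :update

  private

  def push_change
    # keep the payload shallow -- just enough for the client-side
    # controller/framework to decide what to update or re-fetch
    Pusher.trigger("documents", "changed", id: id, updated_at: updated_at)
  end
end
```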
Why oh why is Pusher better than a WebSocket here? Especially since Pusher is a third-party service and it's probably using (or is able to use) WebSockets as a transport for that push anyway?
Sure, you can use Server-Sent Events, but a WebSocket would be sufficient too (and probably works in more places, no less).
Let me clarify: you can use Pusher, or something like firehose.io.
Pusher uses a WebSocket. Pusher is cheaper when compared against running our own dumb pub/sub server.
Rails 4 will support SSE; however, we're not riding that edge, as there were existing solutions to the problem.
I think this covers most realtime problems without having to code that much, let alone run a secondary server/service to handle the problem. We outsourced it; we're happy.
Actually, SSE is a better fit here: it works in many more browsers than WebSockets when you count the existence of SSE polyfills, and you don't need the bidirectionality of WebSockets here.
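To make that concrete, here's roughly what the Rails 4 approach mentioned above (ActionController::Live) looks like; the controller name and the Redis channel are just illustrative, and it assumes the redis gem:

```ruby
class EventsController < ApplicationController
  include ActionController::Live

  def index
    response.headers["Content-Type"] = "text/event-stream"
    sse = SSE.new(response.stream, event: "change")

    # bridge whatever pub/sub layer you already have; Redis is just an example
    redis = Redis.new
    redis.subscribe("changes") do |on|
      on.message { |_channel, message| sse.write(message) }
    end
  rescue IOError
    # the client went away
  ensure
    sse.close
  end
end
```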
Nice post with lots of cool stuff. Any particular reason you're using delayed_job for async jobs + memcache for caching and not resque/redis? Not saying anything bad about the former, just wondering if there are pros/cons you considered when choosing between the two?
When I was first building LayerVault around this time last year, the simplicity of delayed_job appealed to me. Not having to set up and maintain a Redis installation was a bonus. The simple `.delay` syntax sealed the deal for me.
That being said, the day when we need to separate out our queueing system will come. But delayed_job/MySQL has brought us much further than I'd thought possible.
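For anyone unfamiliar, the `.delay` syntax mentioned above is about as simple as it gets; the model and method names here are invented for the example:

```ruby
class Design < ActiveRecord::Base
  def generate_preview
    # expensive image processing...
  end
end

design = Design.find(1)

# Inline -- blocks the web request while it runs:
design.generate_preview

# With delayed_job -- the call is serialized into the delayed_jobs table
# (plain MySQL here, no Redis) and picked up by a worker later:
design.delay.generate_preview
```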
I think this is a fantastic illustration of how it's more important to get something 'pretty good' out in front of customers, even if you could have used the latest/coolest/hippest things. You're showing how you can go back and improve it later, when you're clearer on what needs doing.
You made two points that sound contradictory without further explanation; I hope you don't mind providing more insight. First, you mentioned that you want to stick with Rails without duplicating views, etc. Second, you mentioned that you're using Socket.IO (i.e., Node.js as opposed to Ruby/Rails).
How are you integrating Socket.IO with your Rails app? Do you have a separate Node.js server that is just glossed over in your post, or do you use something like execjs to integrate Socket.IO directly into your Rails app?
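Not the author, but the common pattern I've seen is a small separate Node process that owns the Socket.IO connections, with Rails just publishing events to it over something like Redis pub/sub. A rough sketch of the Rails side only; all names below are hypothetical:

```ruby
REDIS = Redis.new

class CommentsController < ApplicationController
  def create
    comment = Comment.create!(comment_params)
    # a separate Node/Socket.IO process subscribes to this channel and
    # re-broadcasts the payload to connected browsers
    REDIS.publish("updates", { type: "comment", id: comment.id }.to_json)
    redirect_to comment
  end
end
```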