Set up pre-commit and post-commit hooks, integrate with your run-of-the-mill CI suite or testing framework, hook up with deployment scripts - that's pretty much it.
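For what it's worth, a post-commit hook along those lines only needs to be a tiny script. Here's a rough sketch (run_tests.py and deploy.py are placeholder names for whatever your project actually uses):

    #!/usr/bin/env python
    # Rough sketch of a .git/hooks/post-commit hook: run the test suite
    # and, only if it passes, hand off to a deployment script.
    # "run_tests.py" and "deploy.py" are placeholders, not real tools.
    import subprocess
    import sys

    def main():
        # Run the project's test suite; a non-zero exit code means failure.
        status = subprocess.call([sys.executable, "run_tests.py"])
        if status != 0:
            sys.stderr.write("Tests failed; skipping deployment.\n")
            sys.exit(status)

        # Tests passed: kick off whatever deployment script you use.
        sys.exit(subprocess.call([sys.executable, "deploy.py"]))

    if __name__ == "__main__":
        main()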
It's not groundbreaking, but it's something I've slowly been getting better and better at. With Python at least, it seems to be a puzzle where you have to know each piece to make it all work.
I do something similar, with one major difference: I like to rely on environment variables to set some of the config parameters that the author sets in a separate production branch. I usually keep these environment variables set to development values on my laptop and production values on servers. Low ceremony, but it works for me.
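Concretely, something like this is what I mean: a config module that falls back to development values unless the server exports its own (the variable names here are made up):

    # Sketch of the environment-variable approach: development defaults on
    # the laptop, overridden by exported values on the servers.
    # Variable names like APP_DB_URL are examples, not a convention.
    import os

    DEBUG = os.environ.get("APP_DEBUG", "1") == "1"            # defaults to dev
    DATABASE_URL = os.environ.get("APP_DB_URL", "sqlite:///dev.db")
    SECRET_KEY = os.environ.get("APP_SECRET_KEY", "dev-only-not-secret")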
Yep, that's how I do it too. I've got to say that using Capistrano's default deploy.rb seems over-designed for my tastes. I mean, if I'm deploying code from a version control system, why would I want to have a second version control system to track releases?
For me, I want to have a second version control system to track releases. For my personal projects, I deploy as soon as I have something interesting, which might be once a week or might be five times in an hour, and I deploy straight from the HEAD of master. If I deploy something prematurely, a symlink repoint and a restart can be done via Cap, or manually, or whatever.
Making Git also control my deploys would mean either tagging release commits or rebasing onto a production branch, both of which (for me, at least) would impede releasing as often as I can.
I've got 'tag, push, rebase, restart' and 'oshit, rollback, restart' as ordinary rake tasks, so there isn't much cognitive load there. I've bothered with Cap in the past when I've been pushing to more than one server, but it's not that much more effort to just do "ssh in a loop" and avoid a dependency.
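The "ssh in a loop" part really is about this much code. A sketch in Python rather than rake, with made-up host names, paths, and restart command:

    # Rough equivalent of "ssh in a loop": on each server, repoint a
    # 'current' symlink at the previous release and restart the app.
    # Hosts, paths, and the restart command are all assumptions.
    import subprocess

    HOSTS = ["app1.example.com", "app2.example.com"]
    ROLLBACK = (
        "cd /var/www/myapp && "
        "ln -sfn releases/previous current && "
        "sudo /etc/init.d/myapp restart"
    )

    for host in HOSTS:
        subprocess.check_call(["ssh", host, ROLLBACK])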
To those who are doing this, I have a question: how distributed is your production environment? How are deploying the app, starting/stopping background services, etc. handled across multiple machines?
I use Fabric (http://fabfile.org) for deployment, and though I don't have multiple servers, it's really easy to add more servers to a fabfile (a Python script). Deployment is easy and flexible; for example, I use something like this:
    fab release pull update touch
The above operates on the release version, pulls in changes from the repo (hg in my case), updates to tip (or any tag I want), removes any *.pyc files and recompiles, and finally touches the WSGI file. All of these (release, pull, update, touch) are custom Python functions written with the Fabric library.
If I want to restart Apache, I use this (it could have been combined with the previous example and it would still work):
    fab apache_restart
Fabric is a joy to use, makes it very easy to write powerful deployment scripts, and has good documentation: highly recommended.
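Putting the pieces above together, a minimal Fabric 1.x-style fabfile might look roughly like this; the host name, paths, and what 'release' actually does are guesses on my part, not the poster's real setup:

    # fabfile.py - sketch of the tasks described above (Fabric 1.x API).
    from fabric.api import cd, env, run, sudo

    env.hosts = ["myserver.example.com"]   # add more hosts here as needed

    APP_DIR = "/srv/myapp"

    def release():
        # Placeholder: whatever "work on the release version" means for
        # your project, e.g. switching to a release branch.
        with cd(APP_DIR):
            run("hg update release")

    def pull():
        with cd(APP_DIR):
            run("hg pull")

    def update():
        with cd(APP_DIR):
            run("hg update tip")
            run("find . -name '*.pyc' -delete")
            run("python -m compileall -q .")

    def touch():
        # mod_wsgi reloads the app when the .wsgi file's mtime changes
        run("touch %s/app.wsgi" % APP_DIR)

    def apache_restart():
        sudo("/etc/init.d/apache2 restart")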
I run it on a single server. That said, I imagine that in a distributed environment (where you have a couple of machines, not tens or hundreds) you could do something like this on each machine you have (probably not in parallel, though; if something goes wrong, you don't want to mess up all the servers at once).
If someone has a link to a good article/book/video about best practices for this, please post it.
There's a lot of talk about how someone did massively scalable systems or deployments (and it's really interesting, with lots of cool engineering problems and solutions there), but a real-world "here's what we've figured out works really well for us at a smaller-scale distributed operation" would be really useful too.
You sound like a Python guy, but that's exactly how Heroku works. It totally sold me on the idea of just pushing to your production/staging repo to update a server.
I imagine setting it up yourself would be a bit of work, but it sounds like you have continuous integration built in? That's cool.
Well, it's not strictly continuous; I manually run the script when I want to make a deployment. But I make it a point for my (main) master to be the same code as what is in production. So I work in the smallest possible changes (one feature, or one bugfix), push to the main repo, and run the deployment script.
Right, CI was the wrong term there; I just meant that you're using your automated testing to make sure your deployments won't crash.
This really is the way to go for developing software; deployment should be painless. I would like to see some sort of drop-in tool you could put on your own server without having to configure all the post-commit hooks yourself. Although that's probably a pipe dream.
Knowing the clowns who run Git, I would be hard-pressed to rely on them for a service of this magnitude. For a hobby web application, though, sure, I'd try it.
We also set up pre-commit git hooks that run PHPUnit and QUnit, so our branches are always stable. We also included CodeSniffer in the pre-commit hook to enforce adherence to the strict HTML specification.
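A pre-commit hook chaining checks like those can be sketched as below; the exact phpunit, QUnit-runner, and phpcs invocations are placeholders for whatever your project actually uses:

    #!/usr/bin/env python
    # Sketch of a .git/hooks/pre-commit hook that runs several checks in
    # sequence and aborts the commit if any of them fails. The commands
    # and the "MyStandard" sniff name are placeholders.
    import subprocess
    import sys

    CHECKS = [
        ["phpunit"],                                   # PHP unit tests
        ["node", "run_qunit.js"],                      # hypothetical QUnit runner
        ["phpcs", "--standard=MyStandard", "src/"],    # CodeSniffer
    ]

    for cmd in CHECKS:
        if subprocess.call(cmd) != 0:
            sys.stderr.write("pre-commit check failed: %s\n" % " ".join(cmd))
            sys.exit(1)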