At my last company, I built debs for our software (which was only distributed on our hardware, and customers didn't interact at the system level at all), and I just wrote a shell script to make them manually.
Just take your data and control directories and the debian-binary file, and wrap them up in an ar archive. You'll have to learn how to write a control file by hand, but that's easy. The cool thing is that building the packages manually is distro-agnostic. No need for a VM or anything. Most of our stuff was in Java or Python, so even though our hardware ran Ubuntu, I'd develop and build the packages on my Arch machine before pushing to the device for testing.
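For the curious, the by-hand build really is only a handful of commands. A rough sketch with made-up file names (this is just the on-disk format; dpkg-deb -b does the same thing for you):
echo "2.0" > debian-binary
tar czf control.tar.gz -C control .   # the control file, plus any maintainer scripts
tar czf data.tar.gz -C data .         # the files to install, rooted at /
# Member order matters: debian-binary first, then control, then data
ar rc my-package_1.0_amd64.deb debian-binary control.tar.gz data.tar.gz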
We eventually automated things some more, storing the data and control directories in SVN, with a simple GUI running on the device (we did not distribute this tool to customers) to pull down a package from SVN, build it, and install it on the device, using something based on the SVN revision number as the package version.
That led to a pretty good workflow. We'd make a change, commit it into the dev branch of our packages repo, then go over to the device and push a button to pull it down.
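The version-stamping part boils down to something like this (a sketch only; the paths and version scheme here are placeholders, not the actual tool):
REV=$(svnversion -n my_package)       # working-copy revision, e.g. 1234
sed -i "s/^Version:.*/Version: 1.0.${REV}/" my_package/DEBIAN/control
dpkg -b my_package my-package_1.0.${REV}_all.deb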
Quote:
The openSUSE Build Service is the public instance of the Open Build Service (OBS), used for development of the openSUSE distribution and to offer packages from the same sources for Fedora, Debian, Ubuntu, SUSE Linux Enterprise and other distributions.
This guide looks quite nice; there are certainly tools I've not used before.
One thing that's good about Debian packages (and I'm sure many other formats): it's really easy to get something that works; everything else is optional extras. For example (if I remember correctly; I've not used Debian for a while):
# Set up directory structure:
mkdir -p my_package/DEBIAN
mkdir -p my_package/usr/bin
# Copy in the files you want the package to contain
cp my_binary my_package/usr/bin/
# Add metadata: dpkg reads it from DEBIAN/control
cat > my_package/DEBIAN/control <<EOF
Package: my-package
Version: 2
Architecture: amd64
Maintainer: Me <me@example.com>
Description: my_binary, packaged by hand
EOF
# Build, install and use
dpkg -b my_package
sudo dpkg -i my_package.deb
my_binary
Of course, such a package is never going to be accepted into Debian's repos, but that's not the point: it makes you less likely to say "screw it, I'll just dump stuff into /opt manually".
There's also the excellent "checkinstall" program, which does a pretty good job of turning autoconf-style source repos into decent packages.
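For reference, a typical checkinstall run on an autoconf-style tree looks something like this (the package name and version are placeholders):
./configure --prefix=/usr
make
sudo checkinstall --pkgname=mytool --pkgversion=1.0   # runs "make install" and records the installed files into a package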
With any of these approaches, the package manager keeps track of the files so they can all be removed in the future (just don't write crazy destructive pre/post scripts!).
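That bookkeeping is queryable with dpkg itself (using the package name from the control file above):
dpkg -L my-package        # list every file the package installed
sudo dpkg -r my-package   # remove them all again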
My one question: does all of this apply to Ubuntu as well, or at least most of it, and if not, why not? I would assume (bad to do so) that it should work; the only issue I can see is that you'd want to build each project's package on its respective platform / OS version.
Yes. Many Ubuntu packages come directly from Debian, which is Ubuntu's upstream, so building Ubuntu packages is very similar. In fact, the preferred route is to get the package included in Debian first and let Ubuntu pick it up downstream (some packages do go directly into Ubuntu for various reasons).
It uses bundler locally to wrap all your dependencies into a single package, so you don't have to care about packaging each individual gem, and supplies an `activate` script like Python's virtualenv. Something else it does, which I've not seen elsewhere, is scan your binary gems for library dependencies, so you get all the deps (say, libxml and libxslt) installed when you apt-get install your app's package.
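Conceptually, that kind of scan boils down to walking the compiled gems with ldd and asking dpkg which packages own the shared libraries they link against. A rough sketch, not the tool's actual code, assuming Bundler's vendor/bundle layout:
find vendor/bundle -name '*.so' -print0 \
  | xargs -0 ldd 2>/dev/null \
  | awk '/=> \//{print $3}' | sort -u \
  | xargs dpkg -S 2>/dev/null \
  | cut -d: -f1 | sort -u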
I used to use checkinstall on Slackware to make packages: you'd do "./configure; make; checkinstall" and it made you a package to install with the system tools. It did deb packages and rpm too, IIRC.
Edit: I should add that I love me some Debian packages. At my previous $JOB, I converted the monolithic codebase we had into roughly 50 Debian packages and deployed it from our own package repo. I set up Puppet on top of this to deploy changes to the servers. The biggest upside of this was that we had to write no custom code to manage dependencies, etc. The old broken shell deploy script went away, and we were able to move from SVN to git without having to update much (except how we generated debian/changelog). The biggest PITA when I was doing packaging was writing proper init scripts. If the daemon is run with an interpreter (#!/usr/bin/python), you have to update the part of the init script that checks if the process is running. Also, compiling .py to .pyc files gave me some trouble, though that has now been greatly improved.
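To make that init-script gotcha concrete: when the daemon script starts with #!/usr/bin/python, the kernel sees the running process as /usr/bin/python, not the script, so a stop/status check based on --exec never matches. A minimal sketch (the daemon name and pidfile are made up):
# Match on the pidfile plus the interpreter's process name instead of --exec:
start-stop-daemon --stop --quiet --pidfile /var/run/mydaemon.pid --name python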