Hacker News

> Maintenance-wise and $$-wise, two VPS boxes with Cloudflare (even with an enterprise account) setup is usually cheaper than an ordinary K8s setup.

How do you feel about the benefits of using containers?

And if you do use containers, how do you feel about the benefits of orchestrating them, instead of running something like Docker Compose? E.g. having overlay networks and being able to deploy a new version of your software on multiple nodes simultaneously?

I've found that even when you don't want K8s, something like Docker Swarm or HashiCorp Nomad may still have benefits for you, depending on what you're trying to do. Swarm is basically feature-complete, boring, and essentially just adds multi-node capabilities on top of Docker Compose. There's nothing to install apart from Docker and nothing to configure beyond executing a cluster init/join command.
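To make that concrete, here's a minimal sketch of such a stack file (the service name and image are just placeholders I picked for illustration): the same Compose-style file gains multi-node behaviour once you've run `docker swarm init` on one node, `docker swarm join` on the rest, and deployed it with `docker stack deploy -c stack.yml mystack`.

```yaml
# stack.yml - hypothetical minimal example; "web" and the image are placeholders.
version: "3.8"

services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    deploy:
      replicas: 2          # spread two replicas across the cluster's nodes
      update_config:
        order: start-first # start the new task before stopping the old one

networks:
  default:
    driver: overlay        # the multi-node overlay networking mentioned above
```

Note that the `deploy:` section is (mostly) only honoured when the file is run against a Swarm cluster; plain `docker compose up` largely ignores it.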



My tool of choice is usually Ansible, so, from the Ansible point of view, deploying to a container, VPS, bare server, or a cluster, looks pretty much the same.

I intentionally didn't mention Swarm, which IMHO is "the next iteration" of your containerized setup: Swarm is a boring, just-works tool if you need to spread multiple containers across nodes, add some networking between them, and have everything remain manageable by a normal person. And boring is good :)


> My tool of choice is usually Ansible, so, from the Ansible point of view, deploying to a container, VPS, bare server, or a cluster, looks pretty much the same.

I agree that Ansible is an excellent tool! Though personally, I enjoy establishing a clear boundary between the "infrastructure" and "app" parts of any setup - the former ensuring that the OS has whatever runtimes, user accounts, groups, folders, services, etc. any given application needs in order to run, whereas the latter is whatever apps are actually running on the server.

So I'd use Ansible for most of the former, set up a container cluster, and then use something like https://docs.ansible.com/ansible/latest/collections/communit... to manage the actual application deployments. Why? To make the apps more throwaway and limit the fallout of bad configurations/deployments, as well as to make it easier to share the base Ansible playbooks for what constitutes a production-ready server setup across projects.

Of course, you could just as well use Ansible to install something like a JDK/Tomcat and manage its configuration, which worked well for me in the past. Personally, though, that didn't scale quite as well as running Java/Node/Ruby/Python/PHP in containers and approaching them with Ansible almost like black boxes (e.g. copy a bunch of files, like secrets, into these directories, then deploy some arbitrary YAML against the Docker Swarm cluster), which was surprisingly easy to do.
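As a sketch of that "black box" approach (the hosts group, paths, stack name, and file names here are my own illustrative assumptions, not the exact setup described above), such an Ansible task list might copy secrets into place and then deploy a stack file against a Swarm manager using the `community.docker.docker_stack` module:

```yaml
# deploy-app.yml - hypothetical playbook sketch; all names are placeholders.
- hosts: swarm_managers
  tasks:
    - name: Copy secrets and the stack definition to the manager node
      ansible.builtin.copy:
        src: "{{ item }}"
        dest: /opt/myapp/
        mode: "0600"
      loop:
        - files/app-secrets.env
        - files/stack.yml

    - name: Deploy (or update) the stack against the Swarm cluster
      community.docker.docker_stack:
        name: myapp
        compose:
          - /opt/myapp/stack.yml
        state: present
```

The app stays a black box to Ansible: nothing in the playbook knows what runtime is inside the containers, so the same pattern works for Java, Node, Python, or anything else.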


> And if you do use containers, how do you feel about the benefits of orchestrating them

Why would I need that?

A container is just a fancy executable. No one ever talked about "executable orchestration". Container orchestration does not need to be a thing.

If you need anything more complex than `docker run` you are either extremely large and this whole discussion is irrelevant, or you need to question your life choices.


> A container is just a fancy executable.

Not just that, but containers also have a lot of knowledge and tooling around them for running services more painlessly. I suggest that you familiarize yourself with this excellent site: https://12factor.net/

> No one ever talked about "executable orchestration".

Actually, thousands of collective developer-years have gone into developing Java EE and OSGi, and into figuring out how to run "containers" inside Tomcat, GlassFish and many other web servers - these were essentially executables or modules containing application logic that had to be run.

Both in the Java ecosystem and many others, it was eventually decided that approaches like this (as well as going in the opposite direction and shipping whole VMs or VM images, a la Vagrant) don't really work that nicely; containers were the middle ground that people settled on - unified bundles of almost everything (sans kernel) that a given application needs to run.

Of course, other interesting projects like Flatpak, AppImage, snaps, and even functions-as-a-service all concern themselves with how a piece of executable code should be run, especially in the case of the latter. And then there are systems that attempt to distribute work across any number of worker nodes - just look at the entire HPC industry.

Thus, I believe that one can definitely say that a plethora of options for running things has been explored and will continue to be something that people research and iterate upon in the future!

> Container orchestration does not need to be a thing.

That's a lot like saying that OpenRC, systemd or other init systems shouldn't be a thing. Of course people are going to want to explore standardized options for figuring out how to organize their software that should be running! Even more so when you're running it across multiple nodes and the risks posed by human error are great!

Just look at what happened to Knight Capital because of human error: https://dougseven.com/2014/04/17/knightmare-a-devops-caution...

> If you need anything more complex than `docker run` you are either extremely large and this whole discussion is irrelevant, or you need to question your life choices.

I believe that this is an unnecessarily dismissive tone and a misrepresentation of what scales of work might benefit from container orchestration.

Not all of it needs to be as complex as Kubernetes. Frankly, in many cases the likes of Docker Swarm will suffice, much like you might run Docker Compose locally so that you don't have to muck about with 5-10 separate `docker run` commands just to launch all of the dependencies of an app that you're developing, or some software that someone else has written and that you'd like to use.
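For instance, instead of half a dozen `docker run` invocations for an app's local dependencies, a single Compose file (the services and versions below are just examples I picked) launches and wires them all up with one `docker compose up -d`:

```yaml
# docker-compose.yml - example local-development dependencies;
# the services, images and versions are placeholders.
version: "3.8"

services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: dev-only-password  # never use in prod
    volumes:
      - db-data:/var/lib/postgresql/data    # persist data across restarts

  cache:
    image: redis:7

  queue:
    image: rabbitmq:3-management
    ports:
      - "15672:15672"   # management UI

volumes:
  db-data:
```

Compose also gives you a shared network and stable DNS names (`db`, `cache`, `queue`) for free, which the equivalent pile of `docker run` commands would need extra flags to replicate.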

But if you're running all of your software in prod on a single node, don't have any horizontally scaled services, or prefer to do everything manually, such as rolling out updates on a per-service basis, then the benefits you'll get from containers will indeed mostly concern their runtimes, configuration, logging, resource limits and similar qualities, rather than any degree of automation.



