
Do you have any examples of what LXC does better than docker? I'm very new to the whole containerization thing but I've already come across a couple of the issues you've mentioned.



Shameless copy-paste from a well-written piece by Flockport:

Docker restricts the container to a single process only. The default Docker base image OS template is not designed to support multiple applications, processes or services like init, cron, syslog, ssh, etc.

As we saw earlier, this introduces a certain amount of complexity for day-to-day usage. Since current architectures, applications and services are designed to operate in normal multi-process OS environments, you need to find a Docker way to do things or use tools that support Docker.

Take a simple application like WordPress. You would need to build 3 containers that consume services from each other: a PHP container, an Nginx container and a MySQL container, plus 2 separate containers for persistent data for the MySQL DB and the WordPress files. Then configure the WordPress files to be available to both the PHP-FPM and Nginx containers with the right permissions, and, to make things more exciting, figure out a way to make these talk to each other over the local network, without proper control of networking, with IPs randomly assigned by the Docker daemon! And we have not yet figured out the cron and email that WordPress needs for account management. Phew!

This is a can of worms and a recipe for brittleness. It is a lot of work that you would not even have to think about with OS containers. It adds an unbelievable amount of complexity and fragility to basic deployment, and now hacks, workarounds and entire layers are being developed to manage that complexity. This cannot be the most efficient way to use containers.

Can you build all 3 in one container? You can, but then why not simply use LXC, which is designed for multiple processes and is simpler to use? To run multiple processes in Docker you need a shell script or a separate process manager like runit or supervisor. But this is considered an 'anti-pattern' by the Docker ecosystem, and the whole architecture of Docker is built around single-process containers.
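For context, the supervisor route mentioned above looks roughly like this: a minimal `supervisord.conf` sketch, where the program names and command paths are illustrative assumptions, not anything from the original piece:

```ini
; minimal supervisord.conf sketch for a multi-process Docker container
; nodaemon keeps supervisord in the foreground so it can serve as PID 1
[supervisord]
nodaemon=true

; each [program:...] section is one managed child process
[program:php-fpm]
command=/usr/sbin/php-fpm --nodaemonize

[program:cron]
command=/usr/sbin/cron -f
```

The Dockerfile would then end with something like `CMD ["supervisord", "-c", "/etc/supervisord.conf"]`, which is exactly the extra layer the Docker ecosystem considers an anti-pattern.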

Docker separates container storage from the application; you mount persistent data with bind mounts to the host (data volumes) or bind mounts to containers (data volume containers).

This is one of the most baffling decisions: by bind-mounting data to the host you are eliminating one of the biggest features of containers for end users, easy mobility of containers across hosts. Probably as a concession, Docker gives you data volume containers, which are bind mounts to a normal container and are portable, but this is yet another layer of complexity, and it reflects just how much Docker is driven by the PaaS-provider use case of app instances.


> Docker restricts the container to a single process only.

This is definitely not true. I'm running syslogd inside a container (next to the actual process) without any trouble.
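A sketch of how that can be set up, assuming a Debian base and a placeholder `myapp` binary (both are illustrative, not taken from the comment):

```dockerfile
# Sketch: run syslogd next to the main process in one container.
# busybox syslogd backgrounds itself, so the shell can then exec the app.
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y busybox-syslogd
COPY myapp /usr/local/bin/myapp
# exec makes myapp replace the shell, so it receives container signals
CMD ["sh", "-c", "syslogd && exec /usr/local/bin/myapp"]
```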

> ssh

I'll take `kubectl exec` over SSH any time, because it's a much more sensible way to handle credentials. It also does not require an always-running daemon inside the container, which reduces the TCB and the memory footprint.

> Take a simple application like WordPress. You would need to build 3 containers that consume services from each other.

It's not required, but it's a good practice to take advantage of the capabilities of your container orchestration software of choice.

> a MySQL container plus [...] separate containers for persistent data for the Mysql DB

Why would you need a separate container for data? The thing you're looking for is a "volume" (in the simplest case just a bind-mount from the host into the container, as you even explain further down).
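In compose syntax, a named volume replaces the data-volume container entirely. A minimal sketch (the volume name `dbdata` is illustrative):

```yaml
# docker-compose v2 sketch: a named volume instead of a data-volume container
version: "2"
services:
    db:
        image: mysql
        volumes:
            - dbdata:/var/lib/mysql
# declaring the volume at the top level lets Docker manage its storage
volumes:
    dbdata:
```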


Distributed storage is still a big issue for sure. There are some options, but none are ideal. One option is to map to host and use NFS to share across hosts. Another option is to use something like Convoy or Flocker, which come with their own complexities and limitations. Hopefully more progress is made on this front.

As for the WordPress app and the other issues mentioned, it's actually very simple:

    nginx:
        build: ./nginx/
        ports:
            - "80:80"
        volumes_from: 
            - php-fpm
        links:
            - php-fpm
    php-fpm:
        build: ./php-fpm/
        volumes: 
            - ${WORDPRESS_DIR}:/var/www/wordpress
        links:
            - db
    db:
        image: mysql
        environment:
            MYSQL_DATABASE: wordpress
            MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
        volumes:
            - /data/mydb:/var/lib/mysql
This isn't a "production" config, but one wouldn't look that much different. The real beauty is that I found this compose file with a simple search and very easily made minor tweaks (e.g. not publicly exposing the mysql ports).

You might run into permissions issues with host-mounted volumes, but I have not. Normally I prefer named volumes (docker-compose v2) and regularly back the volumes up to S3 using Convoy, or a simple sidecar container with a mysqldump script.
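A sidecar along those lines might look like this sketch; the `backup.sh` script (wrapping mysqldump plus an S3 upload) is a hypothetical placeholder, not a real published image or file:

```yaml
# docker-compose v2 sketch: db plus a backup sidecar sharing a named volume
version: "2"
services:
    db:
        image: mysql
        volumes:
            - dbdata:/var/lib/mysql
    backup:
        image: mysql                       # reuses the mysql client tools
        volumes:
            - ./backup.sh:/backup.sh       # hypothetical mysqldump + S3 script
        entrypoint: ["sh", "/backup.sh"]   # override the image's default entrypoint
volumes:
    dbdata:
```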


This is interesting. I'd been considering mounting drives for persistence of stateful data from containers.

Let's say I want to run a WordPress hosting service. In my ideal world, I deploy an "immutable" container for each customer, i.e. everyone gets an identical container with WordPress, Nginx, MySQL etc. So what to do with state, like configs and the MySQL data files? I'm thinking of mounting a drive at the same point inside each container, e.g. /mnt/data/ and /mnt/config/ or similar.

This way the containers can all be identical at time of deployment, and I can manage the volumes that attach to those mount points using some dedicated tool/process.

This is all still on the drawing board... but what you've said here seems to suggest this approach should work. Or have I optimistically misinterpreted what you've said? :)


Yes, that's a pretty good approach. Just organize the configs in a directory structure on your host and mount them as volumes (along with any other necessary volumes, e.g. for uploaded media). There are more advanced methods like using Consul/etcd, but only go that route if you're ready to invest a lot of time and need the benefits.
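Concretely, the per-customer layout described above could be sketched like this; the image name and host paths are illustrative assumptions:

```yaml
# sketch: identical image per customer, state mounted from per-customer
# host directories at the same mount points inside every container
version: "2"
services:
    wordpress-alice:
        image: my-wordpress-bundle                       # hypothetical all-in-one image
        volumes:
            - /srv/customers/alice/config:/mnt/config    # configs
            - /srv/customers/alice/data:/mnt/data        # MySQL data, uploads
```

Deploying a new customer is then just the same service block with a different host directory, which keeps the image itself fully identical across customers.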


In your example -- assuming 20 different blogs/customers -- you'd be running 20 separate instances of MySQL (plus 20 nginx instances plus 20 php-fpm instances plus ...)?

Now, let me first say that I haven't come anywhere close to even touching containers and most of what I know about them came from this HN thread so please forgive me if I'm missing something...

I, personally, would rather only have a single MySQL instance -- or, in reality, say, a few of them (for redundancy) -- and just give each customer their own separate database on this single instance.

With regard to containerization, why is all of this duplication (and waste of resources?) seemingly preferred?


You're quite right, of course.

In my scenario, I want to provide a package for easy download and deployment. Each customer will indeed run their own mysql db, if they choose to self-host the containerised software.

I plan to offer a paid hosting service, where I'll rent bare metal in a data centre, onto which I'll install management and orchestration tools of my choosing.

An identical container for any environment is my ideal, since this will make maintenance, testing, development etc simpler. Consequently each customer hosted in my data centre will, in effect, get their own mysql instance.

This way the identical software in each container will be dumb, and expect an identical situation wherever it's installed.

Now, in reality, I may do something clever under the hood with all those mysql instances, I just haven't worked out what yet :)

Actually it will probably be Postgres, but I'll use whatever db is most suited.

So yes, some duplication and wasted disk space, but that's a trade off for simplified development, testing, support, debugging, backups, etc.


In this case, a single mysql instance with individual databases may indeed be the best approach. It'd be very easy to launch a mysql container and have each wordpress container talk to it. I use Rancher for orchestration, and it automatically assigns an addressable hostname to each container/service, so I'd just pass that to each wordpress container. Or you could expose a port and use a normal IP + port.
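The shared-instance setup could be sketched roughly like this, relying on the official `mysql` and `wordpress` images (the service and database names are illustrative, and the orchestrator is assumed to resolve the `db` service name as a hostname, as compose networks and Rancher both do):

```yaml
# sketch: one shared mysql service, one wordpress container per customer,
# each pointing at its own database on the shared instance
version: "2"
services:
    db:
        image: mysql
        environment:
            MYSQL_ALLOW_EMPTY_PASSWORD: "yes"   # demo only, not for production
    blog-a:
        image: wordpress
        environment:
            WORDPRESS_DB_HOST: db
            WORDPRESS_DB_NAME: blog_a
    blog-b:
        image: wordpress
        environment:
            WORDPRESS_DB_HOST: db
            WORDPRESS_DB_NAME: blog_b
```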

The duplication is preferred because you can take that stack and launch it on any machine with Docker with one command. Database and all. Usually that's great, but it'd be very inefficient in this case.


> Docker restricts the container to a single process only.

No, there is only a single process treated as init in the container, but you can spawn off multiple child processes.

> The default docker baseimage OS template is not designed to support multiple applications, processes or services like init, cron, syslog, ssh etc.

If you want init, cron, syslog, ssh, and your app(s) all rolled up into one, you want a VM, not a container.


> No, there is only a single process treated as init in the container, but you can spawn off multiple child processes.

It was extremely clear that the person who wrote the text you are replying to understands this, as they specifically cover this fact with respect to using a service-management daemon: you are just being pedantic about the wording to complain about this :/.

> If you want init, cron, syslog, ssh, and your app(s) all rolled up into one, you want a VM, not a container.

No: a virtual machine would burn a ton of performance as it would also come with its own kernel. The entire premise here is to be able to share the kernel but split the userspace in a sane way.


You mean the way the parent('s quote) needlessly broke the application down into single-process containers, and then breezes past "actually, you can" in order to spruik LXC instead, because 'multi-process'? Or the way the parent complains about not having init, but then says you can use something like runit?

I don't particularly like Docker and led my company's exodus from it, but the parent is being very slanted in their wording.



