
Does anyone run docker on a production server? Can it be used for better isolation and security? I am new to docker, so apologies if this turns out to be a naive question.


At present you should not view Docker as a significant security enhancement. It makes processes less visible to each other, and you can use it as part of a strategy to isolate network traffic, but the kernel attack surface is unchanged.


> At present you should not view Docker as a significant security enhancement. It makes processes less visible to each other, and you can use it as part of a strategy to isolate network traffic, but the kernel attack surface is unchanged

How is process isolation not a significant security enhancement?


Processes already should be running as different uids so they can't actually do anything to each other, so not being in the same namespace makes no real difference. There is slightly better information hiding - you can't see the command lines of other processes or other info from /proc - but no "significant" enhancement.
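
That hiding is easy to see for yourself, by the way (assuming a reasonably recent util-linux): inside a fresh PID namespace, ps only sees itself:

    sudo unshare --fork --pid --mount-proc ps aux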


I disagree (or I'm misinformed). Containerized processes aren't just unable to see other processes; they don't see anything of the host system: the filesystem, ethernet adapters, memory usage, CPU usage, etc. Namespaces can hide a lot of information as far as I know.


The OP was asking about process isolation in particular. You still get to see memory usage and CPU usage globally. Not being able to see the host filesystem is not a huge security benefit; again, your pid should not have any significant access anyway, so this is not much of a security enhancement.


>Not being able to see the host filesystem is not a huge security benefit; again, your pid should not have any significant access anyway, so this is not much of a security enhancement.

Nope, it very much is. The fact that "your pid should not have any significant access anyway" doesn't mean that making that certain, and very easy, via namespacing is not a security enhancement.

Perhaps you mean something else by "security enhancement" than what others here mean. You seem to mean: "extra security that couldn't be achieved by totally finely tuned apps running on the host with all the proper pids and permissions".

Whereas by "security enhancement" people mean: "achieving the same level of security as finely tuned apps running on the host with all the proper pids and permissions, with much greater EASE, and without having to repeat the whole fine tuning for each new app I add".


The point is, nice as it may be, it's still pretty new, and not specifically a security product. It's not appropriate to rely on it as a significant part of your security plan for your business.

But still, layers and all that.


Remind me never to use any service you ever create.


Processes already should be running as different uids

Docker makes that way easier.

If I want 3 instances of nginx running for different projects, I don't really want to set up 3 nginx users (nginx1, nginx2, nginx3).

With Docker, I just start the container and it's isolated from everything else.
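
Something like this (ports and paths are just examples):

    docker run -d --name site1 -p 8081:80 -v /srv/site1:/usr/share/nginx/html nginx
    docker run -d --name site2 -p 8082:80 -v /srv/site2:/usr/share/nginx/html nginx
    docker run -d --name site3 -p 8083:80 -v /srv/site3:/usr/share/nginx/html nginx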


In Unix & Linux, you don't need to set up users; you can just run the processes under different uids (most process managers support this). Adding a user to /etc/passwd is only needed if you want them to have a username and password.


I had no idea this was possible - googling "process under different uid" doesn't yield anything obvious. Any hints on how to do this at a standard Linux command prompt?


This is one way:

    # sudo -u "#10000" -g "#10000" id
    uid=10000 gid=10000 groups=10000


"I had no idea this was possible - googling "process under different uid" doesn't yield anything obvious.""

I am looking at the name of this website and I see that this website is named "hacker news".


>I am looking at the name of this website and I see that this website is named "hacker news"

As in "hackers"? People, that is, from all ages, that weren't necessarily born knowing everything, and are not afraid to ask around when they don't know how to do something?

If so, then this is the wrong website for this kind of snark.


So unimpressed with this arrogant, insecure behavior. See someone learning something, slap them down.


It's not so hard. I do this myself; I just ran:

    adduser one; adduser two; adduser three
I have about ten UIDs all running their own chrooted copy of thttpd, and then I have a nodejs proxy to route incoming traffic to each instance (which listens on localhost:XXX - where XXX is the UID of the user for neatness).
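
One of those instances looks roughly like this (paths and the exact flags are from memory - check thttpd's man page):

    # run as user "two", chrooted to its docroot, listening on localhost at a port matching the UID
    sudo thttpd -d /home/two/www -r -u two -h 127.0.0.1 -p 1002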


The processes still have unfettered access to an interface that exposes a few million lines of C code running in ring 0.


> How is process isolation not a significant security enhancement?

Significant compared to multiple threads running in the same OS process; not significant compared to a fully virtualized OS.

You also don't need Docker to launch multiple processes, you can do it on any modern general purpose operating system.


I'm running it on a public server, where Docker separates: serving a website with nginx, running some node processes (proxied by nginx), running postfix, and running git. Note that git is also kept separate from the host system's ssh.

It has helped us in situations where we wanted to try new configurations of nginx and email without fiddling with the production processes. Being able to revert changes has also come in handy.

It also helped us be more flexible with versions. We're more confident about running a very recent version of nginx, which lets us make use of new features instead of waiting for Debian to ship packages. There's also less hassle with dependencies, although the different Docker instances need to be upgraded like any other VM.

That said, when we set things up we were missing some features (connecting Docker instances to each other). We had to fiddle to get things the way we wanted, and we're still missing some features that we have made workarounds for. I wouldn't recommend running it in production. However, for us it's worth the effort, especially in the long run when Docker can handle all of our use cases.


That's only a potential use for Docker.

Docker is awesome for several other reasons, one of which is "shipping."

Google doesn't use Docker (that I know of), but it does use containers for a lot of its development.


> Google doesn't use Docker (that I know of), but it does use containers for a lot of its development.

https://github.com/google/lmctfy

Interest in it seems to have died down.


> Interest in it seems to have died down.

Not really. AFAIK, they are right now preparing the next release, and there will be a lot of activity on lmctfy in 2014.


Docker author here.

I have met with the lmctfy team; they are indeed awesome and doing very cool work, in particular around providing a higher-level, more ops-friendly interface to cgroups for resource limitation - one that emphasizes application profiles and SLAs over tweaking dozens of individual knobs.

I really want to make this available as a docker backend, and they seemed to like the idea - something was said about Go bindings possibly coming soon :)


But not the work. The lmctfy team is pretty awesome.


What do you mean by shipping?


Shipping applications. Also, building.

Let's say I have a web app, and there's a production server which runs it in Docker. Cool! A little port forwarding with nginx or your favorite flavor of X and it's good to go.

When you want to upgrade, you run your integration tests on the new container. To "ship", you just, well, change the port forward once you confirm it's good.

If you want to move it to an entirely new server - easy. Just pull your image and create your container, then change your IP information to point to the new server.

Docker is for containing things - not necessarily for security, but to make a build self-contained: it has no dependencies outside the container, other than being able to run Docker.
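
A minimal sketch of that flow (image names, tags, and ports are made up):

    # build and start the new version next to the old one
    docker build -t myapp:v2 .
    docker run -d --name myapp_v2 -p 8082:80 myapp:v2
    # run your integration tests against :8082, then flip the nginx upstream
    # from :8081 to :8082 and retire the old container
    docker stop myapp_v1 && docker rm myapp_v1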


Can you explain that further? I don't get it at all.

No application is that simple. You need services that persist between deployments. Like a database.

So you make the database run in its own container and have the containers talk.

Now you want to upgrade the database. How do you do that?

So you have some persistent storage that can be attached to the containers.

Now you create a new container that has all the exact knowledge (in the form of ad-hoc scripts?) and whose job is to take the attached volume, upgrade the data there, and then run the new version of the database?
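
Roughly, I imagine that looks something like this (image names and paths made up):

    # data lives on the host, outside any container
    docker run -d --name db_old -v /srv/dbdata:/var/lib/postgresql/data postgres
    # "upgrade": stop the old container, then run a one-off migration container
    # and/or the new version against the same data directory
    docker stop db_old
    docker run -d --name db_new -v /srv/dbdata:/var/lib/postgresql/data postgres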

How is that more convenient or in any way better than chef or puppet?


Data migrations are tricky no matter what technology you apply to automate them.

Also, there is no real animosity or forced choice between Chef/Puppet and Docker. While there's overlap, there's nothing preventing you from taking the best of both technologies and integrating them. In fact, there are projects (like deis.io) which attempt to do just that.

I have written about how to do basic data migrations with Docker. You can find the link in my profile.


First off, sorry - I didn't mean to imply that there's any animosity or conflict between Chef/Puppet and Docker. I don't think there is any?

Also, I completely agree that data migrations are tricky; that's why I need a tool that helps me there as much as possible.

I should have asked: given that I'm familiar with (or can learn) Chef/Puppet, what advantage do I get from (also) using Docker? Or what advantage will I get down the road from (also) using it?

For example, limiting resources comes to mind: giving the processes in the container different IO priorities, memory allowances, etc. But that's all just cgroup settings; I can do that for individual processes without creating a container (via Docker).
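
For example, with plain cgroups and the libcgroup tools (cgcreate/cgexec), no container needed - names and limits here are just illustrative:

    # cap a process at 512MB and give it a lower CPU share
    sudo cgcreate -g memory,cpu:/myapp
    echo 536870912 | sudo tee /sys/fs/cgroup/memory/myapp/memory.limit_in_bytes
    echo 256 | sudo tee /sys/fs/cgroup/cpu/myapp/cpu.shares
    sudo cgexec -g memory,cpu:/myapp ./run-my-app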

I just don't see any situation where a Docker container is more useful than a Chef/Puppet recipe. And for any "complicated" setup, I feel the advanced features of Chef/Puppet, which let me fine-tune the setup for each deployment, outweigh Docker's ease of use.

Yet there's a huge amount of buzz around Docker and containerization in general. What am I missing?


> Yet there's a huge amount of buzz around Docker and containerization in general. What am I missing?

Chef/Puppet don't solve the problem of running multiple apps with conflicting dependencies on the same machine. A Docker image is kind of like a more efficient virtual machine in that it isolates containers from each other. Maybe you're running 15 containers on one machine, each running a different version of Rails, or whatever.

Chef/Puppet let you automate the setup of a machine so you can duplicate it; a Docker image basically is the machine, and you just copy it around. Like VMs, containers are their own little worlds (for the most part).
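
Concretely, the "copy it around" part is just build, push, pull (the registry name is illustrative):

    docker build -t registry.example.com/myapp:1.2 .
    docker push registry.example.com/myapp:1.2
    # on any other docker host:
    docker pull registry.example.com/myapp:1.2
    docker run -d registry.example.com/myapp:1.2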

That's my understanding anyway.


Ansible author here (http://github.com/ansible/ansible).

My view on this, basically, is that the role of automation tools in the Docker realm is going to be exactly like it is with most of the people who like to 'treat cloud like cloud' -- i.e. immutable services.

The config management tools -- whether Ansible, Puppet, Chef, whatever -- are a great way to define the nature of your container and have a more efficient description of it.

You'll also continue to use management software to set up the underlying environment.

And you might use some to orchestrate controls on top, but the set of management services for running Docker at wider scale is still growing and very new.

I'm keeping an eye on things like shipyard but expecting we'll see some more companies emerge in this space that provide management software on top.

Is Docker right for everyone? Probably not. However, I like how it is sort of (in a way) taking the lightweight Vagrant-style model and showing an avenue by which software developed that way can be brought into production, and the filesystem stuff is pretty clever.


I'd like to see more innovation around Ansible + Docker; that could be particularly compelling. Do you have any ideas on what that could look like?


> Yet there's a huge amount of buzz around Docker and containerization in general. What am I missing?

Hi, I'm the creator of Docker. You are not the only one asking this question :) Here's a discussion on the topic between me and the author of a fairly critical blog post titled "docker vs reality: 0-1", which should give you an idea of his state of mind :) I left a comment at the end.

http://www.krisbuytaert.be/blog/docker-vs-reality-0-1


Thanks for this link, Solomon!


I think they can co-exist: the market and the use cases are really broad and there's a lot of space for a wide variety of tooling.

I wrote a blog post about Docker and Configuration Management that elaborates on this:

http://kartar.net/2013/11/docker-and-configuration-managemen...

And I wrote a blog post talking about using Puppet and Docker that talks about how they might interact:

http://kartar.net/2013/12/building-puppet-apps-inside-docker...


Yet there's a huge amount of buzz around Docker and containerization in general. What am I missing?

Dependency management without having the high(er) cost of full VMs.


This is a good point. You can keep the database storage in another container, or linked in the root filesystem.
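
The "storage in another container" route is usually a data-only container, something like this (names are illustrative):

    # a container that exists only to own the data volume
    docker run --name dbdata -v /var/lib/postgresql/data busybox true
    # the actual database mounts that volume and can be replaced freely
    docker run -d --name db --volumes-from dbdata postgres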

How do you usually upgrade the database? You can do it the same way here.

Not saying it really solves all the problems, just that it solves some of them.

Lots of people go years without upgrading the database that "shipped" with their application! As well, I know of many enterprise applications that literally ship with an entire _server_ as their method of going to production. You literally buy a server! Containers seem better than that.

They definitely aren't to be used for everything. I wouldn't use them in your situation at all - but they work well for many other things.


I've been running into this line of thought quite a bit as I explore containers, but nowhere is it addressed how contained applications talk to each other. An app seldom lives on its own - it will integrate with databases, make API calls into other systems, etc. How are those configured reliably and correctly in such a transient environment? Are they also containers? If so, how are they discovered? Dynamic configuration generation, or does the application have to be aware of how to discover them at runtime?

What about in a dev environment? It seems that configuration on a single local host would look vastly different, in spite of the same code running.


If you read through the redis service example it might answer some of your questions:

http://docs.docker.io/en/latest/examples/running_redis_servi...

It shows how a container can be "linked" to another container with an alias, and that alias then causes environment variables within the container to point to the correct IP address and port:

    DB_NAME=/violet_wolf/db
    DB_PORT_6379_TCP_PORT=6379
    DB_PORT=tcp://172.17.0.33:6379
    DB_PORT_6379_TCP=tcp://172.17.0.33:6379
    DB_PORT_6379_TCP_ADDR=172.17.0.33
    DB_PORT_6379_TCP_PROTO=tcp
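
For reference, the setup that produces variables like those is roughly this (image and container names are illustrative, and flag syntax may differ slightly between Docker versions):

    # start a named redis container, then link it into another container under the alias "db"
    docker run -d --name redis yourname/redis
    docker run -i -t --link redis:db --name web ubuntu /bin/bash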


Cool, thanks for the link.


Tried that; Docker was more of a nuisance than a help, so I've stayed with bare lxc tools.

https://news.ycombinator.com/item?id=6959864



