Ask HN: Is it just me or why does Docker suck so much?
97 points by antocv on Aug 12, 2014 | 84 comments
For all the massive hype about Docker, I am hugely disappointed.

Not being able to change /etc/hosts or /etc/resolv.conf of a container, ugh. Requiring some really ugly hacks just to actually provide real "containment" of an entire application's environment - "uh yeah, except hosts and resolv, can't do that".

The command syntax lies: docker rmi can untag an image without really removing it, and who came up with the name rmi? docker images already exists; docker images -rm someId would be sane.

The biggest flaw, though, is that it's a pain in the ass to set up a private repository and actually use it.

Isn't there some saner alternative, like lxc with images and sharing?




I've been waiting for the hype to die down a bit and for the project to stabilise before properly playing with it but, from the outside looking in, I must admit I struggle to see how it's gaining as much attention as it is.

I can see the advantage for dev boxes where a developer might want to set up a load of containers on their machine to emulate a staging or production environment. But I don't really understand why you'd want to base your entire production infrastructure on it.

What's wrong with setting up kvm "gold" images for your various server types (db server, redis instance, ha proxy server etc.) and then just dd'ing those images to host machines and using ansible/puppet/chef to do any final role configuration on first boot? At least that way you've got all the security and flexibility a proper vm implies with not much more admin overhead than if you'd used docker.


The principal difference is that the virtual machine contains everything, whereas you can spawn a separate container for each component of your infrastructure.

- It's easy to restart a container if it crashes, and containers can be tested individually before being pushed to production.

- An application will not mess with the configuration of another app (that's solving the problem of virtualenv, rvm and apt incompatibilities).

- The application is not tied to a physical machine; a container can connect to another machine instead of the local one just by changing the routing or environment variables, if everything is done properly.

It's the UNIX philosophy applied to applications: everything should be as small as possible, do only one task, and do it properly. I think that's part of Docker's popularity.

However like most things, it's not really magic and it must be designed and maintained properly to get all these benefits.


What's wrong with that is that now your application--and, by extension, your developers--have to care about a lot more stuff and need to have their testing surface expanded appropriately. The potential interactions and therefore potential failures between, say, ops's cron jobs, the wire-up of stuff like logstash, changes to your chef configs, etc. mean there's a ton more testing to be done. In a Docker world, the amount of stuff running in a context that the application can see is minimal and it's all related directly to the application in question. Your inputs and outputs can be clearly defined, and this lets you present a stable API to the outside world (via volumes and links) while, just as or even more importantly, providing a known-good, stripped-down environment for the application to run on. (chroot jails are a fine solution too, but I do prefer the relative handholding and easier application independence of Docker and AuFS.)

The other problem with your suggestion -- dd'ing and then running Chef or whatever -- is boot time. I can spin up an AMI ready to deploy Docker apps in under sixty seconds. We work in AWS, so I can't speak to the actual time of deploying a physical server, but I can download and deploy a Docker image from an AWS-hosted private registry in ten seconds. Chef won't even get its act together in thirty seconds, let alone do your stuff; between downloading a full VM disk image, the startup time for that virtual machine, and the time to run chef, you've wasted so much time.

Here at Localytics, we're trying to transition to an environment that is very comfortable with automatic scaling and can do so fast--seconds, rather than minutes--and Docker is a big part of that effort. Right now we deploy to AWS autoscaling groups (with AMIs defined via Packer). I'm pushing hard for us to use a clusterization system like Mesos/Deimos because it'll let us save money by introducing safe, sane multi-tenancy--and it'll let us react to changes in system load in mere seconds.

(obligatory: we're hiring. we work on fun stuff. eropple@localytics.com)


I've been playing around with docker, so I will lend my two cents. If kvm gold images work for you, then by all means use them!

What I like about docker: The layered filesystem [0] is easy to work with and seems pretty smart. I can easily tweak an image, push it to docker hub and pull it back down. Only the delta is being pushed and pulled, not the whole filesystem. This seems like a pretty big win to me! Your db server and redis instance can share the same base system image. This makes updates and redeploying the images much faster!
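A rough sketch of what that looks like in practice (the image name and Dockerfile here are made up; the point is just that layers the registry already knows about are skipped):

  # Dockerfile starts with "FROM ubuntu:14.04" plus a few layers of your own
  docker build -t myuser/webapp .
  docker push myuser/webapp    # only the new layers get uploaded; the ubuntu base is already on the hub
  docker pull myuser/webapp    # on a host that already has ubuntu:14.04, only the delta comes down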

Also the containers are much lighter weight to run than a vm. You don't need a beefy machine to run a few containers. I'm under the impression that one can run 20-30 containers on a modern laptop, but haven't verified this for myself.

[0] http://docs.docker.com/terms/layer/


>What's wrong with setting up kvm "gold" images for your various server types (db server, redis instance, ha proxy server etc.) and then just dd'ing those images to host machines and using ansible/puppet/chef to do any final role configuration on first boot?

You're leaving out the other attractive aspects of Docker/containers: performance, lower cpu utilization, instant spin up, less disk space, etc. Those factors lead to higher density of images on servers. Puppet/Chef + hypervisors can't compete with those particular factors.

In summary:

Containers have less isolation, but can be more densely packed. VMs have more isolation, but are less densely packed.

The 2 technologies have different tradeoffs and economics.


I set up Docker for our server deployments here at lever.co. We deploy our application multiple times a day. We test our changes in our staging environment first, and it's important that we deploy the exact same code to production that we tested. It's also important that we can roll back any changes we make.

An application image is the right way to do that (with compiled dependencies and whatnot). And we could make an OS image every time we deploy code, but making a 600MB Ubuntu image to deploy a few lines of code change is ridiculous.

There are certainly lots of things that Docker could do better, but I haven't seen any tools that let me deploy so conveniently, easily & reliably.


You can create a "base" image of your application and update only the code. Part of our fabric deploy script:

  import datetime
  from fabric import api

  # env_vars is defined elsewhere in the script
  with api.settings(**env_vars):
    api.run('docker run baseimage sh -c "git pull && cp /src/webapp/settings/paths_docker.py /src/webapp/settings/paths.py"')
    d = datetime.datetime.now().strftime('%Y-%m-%d,%H-%M')
    # new image from last running container with updates
    api.run('docker commit $(docker ps -lq) baseimage:%s' % d)
    api.run('docker stop $(docker ps -aq)')
    api.run('docker run -d -p 127.0.0.1:8073:8083 baseimage:%s sh runindocker.sh' % d)


Doing the KVM plus ansible/puppet/chef approach has the overhead of making testing harder.

I have Docker containers that I rebuild on every test/dev run on some applications, because it is so fast. It means I know the Dockerfile is 100% up to date with my dependencies etc. By the time I'm done testing the application, I'm done testing the container, and it's ready to deploy.

If you need the added security of KVM or similar to isolate your app components, nothing stops you from isolating your Docker containers in a very minimal KVM - even just one container per VM if you prefer - and still getting the containerization benefits (which to me are more about having "portable", deployable, known-fixed units with a repeatable build process than about how it is virtualised; the virtualisation/isolation is a bonus - we could have used Docker without it by, as mentioned, just running one container per VM).


I like the single container on top of a very bare VM approach. That seems like the best of both worlds, and (thanks to everyone's replies) I'm beginning to better understand the advantages for testing and deployment that Docker brings.

Looks like it's time to get my hands dirty and experience first hand what all the fuss is about.


That works if you have physical access to the machine/vm. What about EC2? Linode? Digital Ocean? etc.


Fair point. I guess for serious stuff I just find it bizarre that you'd want to run a load of containers in a VM (rather than just spinning up additional droplets or Linode instances). For non 'weekend project' stuff, surely it's cheaper and more efficient to lease a dedicated server and stick your own VMs on that?


Isn't it just as much about deployment though? The thing about docker is that deploying your new app basically means stopping your running instance and starting a new one. You actually don't even NEED to stop the current running one if you have something above it that just knows which instances / ports to route traffic to.

I think docker is about ease of deployment. You still use the same server; you just run a different command, as opposed to creating a new image and spinning up said image.

You can imagine roll back / migration is pretty easy in this case.
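A minimal sketch of that kind of cut-over (names and ports are placeholders):

  docker run -d --name webapp_v2 -p 8081:8080 webapp:v2   # bring the new version up next to the old one
  # repoint the proxy/load balancer at :8081, then retire the old container
  docker stop webapp_v1 && docker rm webapp_v1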


At Localytics, we wouldn't need to lease a dedicated server. We'd need to lease dozens. And we'd need to have the ability to spin up new hardware to accommodate more load in a minute or two.

You don't get that with physical hardware unless you want to overpay. We could overpay for depreciating plant assets or we could overpay for variable costs that we can more directly control. The latter makes sense to us.


Ok, that makes sense. For your workload (large and elastic), I guess I can see the advantage of using docker to quickly provision a newly created vm (assuming the vm's role is completely provided for by just that one container).


Totally. As mentioned in my other comment, it can also let us deploy a bunch of applications on the same virtualized node really quickly via Mesos or Flynn or whatever.

That said, I use Docker at home too[1] because it does make thinking about things easier. I dump my blog in a thin container not because I urgently desire security (though with Wordpress I kind of do worry...), but because it lets me develop and deploy using the same tools.

[1] - http://edcanhack.com/2014/07/docker-web-proxy-with-ssl-suppo...


It almost certainly is cheaper, but it's more than some are willing to learn to do, I guess.


All of those have simple REST APIs to set up/tear down instances. Libraries exist for popular languages like node, ruby, java, etc.


Setting up a private repository is a PITA? It's one command:

  docker run -p 5000:5000 registry
Want to back it with S3 or another storage provider? It's still one command:

  docker run -e SETTINGS_FLAVOR=s3 -e AWS_BUCKET=acme-docker -e STORAGE_PATH=/registry -e AWS_KEY=AKIAHSHB43HS3J92MXZ -e AWS_SECRET=xdDowwlK7TJajV1Y7EoOZrmuPEJlHYcNP2k4j49T -e SEARCH_BACKEND=sqlalchemy -p 5000:5000 registry
What's such a pain in the ass about this?


Did you just post your S3 secret key?


No, it's an example from the readme: https://github.com/docker/docker-registry#quick-start


It's faster to type the question than it was to look it up, like you did. Thanks for spending the time doing so!


In fact, it comes from the Docker GitHub page https://github.com/docker/docker-registry/tree/master


We should not be too harsh with people who inadvertently post their secret key. They may be very nice, well-meaning, smart people who, you know, make a mistake and post their key. Maybe it is in an environment file, and they post it to their public github. Could happen. Maybe they forgot and went to sleep, and found out their key got jacked. Maybe -- hypothetical here -- someone ran up $1000s of dollars producing bitcoins on this innocent person's account who made one mistake. One tiny mistake. You never know.


Not sure you are talking to me, but my comment wasn't harsh at all. Just trying to make sure he noticed and fixed it in case it was his real key.


No, I was talking about a "friend" of mine who got himself in trouble.


According to google, this isn't the first time that key was posted. It's here[1] too.

[1]: https://pypi.python.org/pypi/docker-registry/0.7.3


I was wondering, too..


Yes, I have run that command; I have the Docker registry running. Then I wanted to clean up that port, so I put nginx in front and let it proxy_pass... and have a single-name alias for the host that serves the registry so it's used by all the machines.

But then you see the hostname is confused for a username by Docker. Argh, crap, so it can't be clean; it at least has to have an FQDN or a port number.

It's stuff like this, man. The documentation should just say it: "we don't want you to use any registry other than our docker.io; it's possible, but nah". Another issue is: how would you set a private repo to be used by default, hm? And never hit docker.io?


+1.

The private registry story is horrible in docker. You have to tag your image with your registry's FQDN, which is an absolutely braindead idea. It's nice to default to docker.io in the absence of options, but I should really be able to do "docker pull --registry myregistry.local ubuntu:latest" and have it work. Instead I have to do "docker pull myregistry.local/ubuntu", which pulls the image and tags it as "myregistry.local/ubuntu". Great, now my registry FQDN is in my image tag. For any decent automation you now have to re-tag it without the "myregistry.local" so you don't depend on your registry FQDN everywhere. But then you'd better remember to re-tag it with your registry before you push!
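A concrete sketch of the dance (registry host, port, and image names are just examples):

  docker pull ubuntu:latest
  docker tag ubuntu:latest myregistry.local:5000/ubuntu:latest   # the registry FQDN(:port) has to live in the tag
  docker push myregistry.local:5000/ubuntu:latest
  # and on the consuming host:
  docker pull myregistry.local:5000/ubuntu:latest
  docker tag myregistry.local:5000/ubuntu:latest ubuntu:latest   # re-tag so the rest of the tooling never sees the FQDN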

In our case we wrote an HDFS driver for the registry so we could store images everywhere, and a service discovery layer to discover registry endpoints (using a load balancer just plain didn't work.) It's an unholy nightmare (but at least we've automated it) to continually re-tag images to go to the endpoint you want.


Libvirt has support for lxc these days, if memory serves. I'd recommend it - docker just seems heavily marketed.

What you're mentioning with hosts/resolv etc is a problem that has been "solved" with tools like etcd and zookeeper as someone else mentioned.

I tried docker with a couple of things, and found that it is an environment that (at the time I experienced it, maybe six months ago) was so unhelpful as to appear completely broken. It isn't for systems administrators, or anyone who knows how to do things the unix way; it's for developers who can't be bothered to learn how to do things sensibly. Half the unix ecosystem has been reimplemented, probably not that well, by people who didn't know it existed in the first place. That's my conclusion so far.

prepares to be flamed


I'm with you on the re-inventing the wheel thing. But: so? It happens, over and over again, pretty much everywhere. Heck, a significantly large portion of technology we see here on HN these days is, to put it bluntly, a lot of re-invention.

But this is really a normal aspect of a healthy technological ecosphere. Kids grow up, they get interested in a subject, they ignore all the prior art, and they get on with doing things that they think are interesting - including fixing 'what's broke' (which often translates to 'what's not well-known')... all technology culture suffers this factor. Why complain: it's a principal driver of the state of the art, because only the good technology survives this onslaught. If it's known about in the first place, it rarely gets re-invented.


libvirt has had LXC support since time immemorial (2-3 years at least). Unfortunately, it is only really partial support and IIRC its abstractions don't work very well with LXC. Like the other responders, I also evaluated it then decided to avoid it. I have a spider-sense that libvirt was a project by a large Linux company that kind of failed to win traction and is slowly being deprecated.


libvirt seems horribly over-engineered to me. I can't stand it. One of the great appeals of Docker to me is the combination of simplicity, and the index/registry.

As someone managing hundreds of VMs, and who's been doing Linux sys-admin stuff for 20 years, Docker is the best thing that's happened for a very long time.


We can make it better.

Systemd's containers just need a registry of tarballed images.

Or lxc expanded with a registry / an easy-to-copy root fs.


Maybe Docker helps the cloud provider be more efficient, but as far as the actual Docker container vs. the Debian VPS we deploy to goes, we haven't found any advantage. We ran into pain points like you describe and quickly dismissed Docker.

Unless you are building the next Heroku or infrastructure as a service I would be hesitant to recommend Docker.



Are you trying to say that their service is implemented using Docker?


I'm offering them as an alternative to Docker.


Discussion about the issue is here: https://github.com/docker/docker/issues/2267

That said:

With a DHCP server, you get this warning when you try to edit /etc/resolv.conf: "Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN"

Docker works in a similar fashion to assign IPs so this shouldn't be surprising.

http://askubuntu.com/questions/475764/docker-io-dns-doesnt-w... http://docs.docker.com/installation/ubuntulinux/

You are supposed to modify /etc/default/docker and use a consistent group of DNS servers per host. It's simple and it works, honestly.

  DOCKER_OPTS="--dns 8.8.8.8"

Can you tell I disagree? ;)

/etc/hosts shouldn't need modification if you control your dns server...since you can just place whatever you need there.

As for alternatives...

Docker is popular because all the alternatives are a much larger pain in the ass to manage.

I'd suggest, if you dislike their private repo system, you just use git to manage the files for each docker image and create it locally on the host. [e.g. git clone, cd, docker build .]

I honestly find that works well enough, and it means I don't have to maintain more than a single gitlab instance for my projects.
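Spelled out, the whole flow is roughly this (the repo URL and image name are placeholders):

  git clone git@gitlab.example.com:ops/webapp-image.git
  cd webapp-image
  docker build -t webapp:latest .
  docker run -d webapp:latest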


All the other alternatives are a pain in the ass to use?

Pardon me, but like op I found docker to be quite unusable, and LXC a breeze to use.

Why go with the bad abstraction when you can have the real thing? LXC has everything you want, with zero obscuring "magic", which you don't need anyway.

LXC is definitely recommended. For max benefits you probably want to couple it with a fancy filesystem like btrfs, but it's by no means required.


Some problems I currently have with docker.

1. Dockerfiles are too static

You cannot start the docker build process with a variable (e.g. a var that holds a specific branch to check out) in your Dockerfiles.

2. Managing the build process, starting, and linking of multiple images/containers

I started with bash scripts, then switched to a tool called fig. Even though fig keeps whole setups in a simple config file, I cannot use it because it does not wait for the DB container to be ready to accept connections before starting a container that links to it. So I'm back to writing bash scripts (roughly like the wait loop sketched after this list).

3. Replacing running containers

Restarted containers get a new IP, so all the linking does not work anymore. I had to set up a DNS server and wrap the restarting of containers in bash scripts again.
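Roughly the kind of wait loop this turns into (image and container names are placeholders; it assumes the DB container runs Postgres on 5432):

  docker run -d --name db postgres
  # poll until the DB actually accepts connections before starting anything linked to it
  DB_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' db)
  until nc -z "$DB_IP" 5432; do sleep 1; done
  docker run -d --name web --link db:db webapp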

I had no big issues creating a private registry after the SSL certificate was created correctly.


A lot of tips for DNS and Docker were discussed two weeks ago: https://news.ycombinator.com/item?id=8107574

- serf, consul, etcd, dnsmasq, zookeeper, rubydns, skydock


Docker certainly has its pain points but I've actually really enjoyed working it into our build and deployment process.

The command syntax is sometimes a bit clunky but they are making regular improvements.

Not to repeat lgbr, but the registry is pretty easy to get running. I had some problems initially, but it was mostly because I didn't really understand how to use a container all that well. That sounds somewhat silly, but it ultimately was true.

Finally, the hype is ultimately a good thing. There's a lot of focus on the project right now, which (hopefully) means we can expect a good deal of improvement and stability in the near future.


The private registry is easy to get running, but actually using it is very, very clunky.

I understand they want the public index to "always work", but the requirement to tag images with an FQDN for docker to even try to use a separate registry breaks a lot of very common use cases, such as transparent caching and mirroring of images.


Docker is good for running application containers, not for virtualizing an OS. Therefore, playing with system configuration files such as /etc/hosts or /etc/resolv.conf is not something that you want to do in Docker. You should use LXC instead.

I agree that Docker sucks sometimes, but most of the time it's because an existing application is not built in the way that Docker expects. I am optimistic that more and more applications will be designed for container environments.


After having used docker for "real" things for the past 8 months or so, I definitely agree with you that it kinda sucks.

Docker's strengths come from the workflow you get to use when you use it... "Run /bin/bash in ubuntu" and it just works. For developers that's great. For a backend that does the heavy lifting for you when you're developing a lot of operations automation (like a PaaS), it starts to break down.

Just some of the things I've come across:

* Running a private registry is awkward. You have to tag images with the FQDN of your registry as a prefix (which is braindead) for it to "detect" that it's supposed to use your registry to push the image. "Tags" as an abstraction shouldn't work that way... they should be independent of where you want to store them.

* Pushes and pulls, even over LAN (hell, even to localhost) are god-awful slow. I don't know whether they're doing some naive I/O where they're only sending a byte at a time, or what, but it's much, much, much slower than a cURL download to the same endpoint. Plus if you're using devmapper then there's a nice 10-second pause between each layer that downloads. btrfs and aufs are better but good luck getting those into a CentOS 6 install. This is a major drawback because if you want to use docker as a mesos containerizer or otherwise for tasks that require fast startup time on a machine that hasn't pulled your image yet (i.e. a PaaS), you have to wait far too long for the image to download. Tarballs extracted into a read-only chroot/namespace are faster and simpler.

* Docker makes a huge horrible mess of your storage. In the devmapper world (where we're stuck if we're using centos) containers take up tons of space (not just the images, but the containers themselves) and you have to be incredibly diligent about "docker rm" when you're done (see the cleanup sketch after this list). You can't do "docker run --rm" when using "-d" either, since the flags conflict.

* In a similar vein, images are way bigger than they ought to be (my dockerfile should have spit out maybe 10 megs tops, why is this layer 800MB?)

* The docker daemon. I hate using a client/server model for docker. Why can't the lxc/libcontainer run command be a child process of my docker run command? Why does docker run have to talk to a daemon that then runs my container? It breaks a lot of expectations for things like systemd and mesos. Now we have to jump through hoops to get our container in the same cgroup as the script running docker run. It also becomes a single point of failure... if the docker daemon crashes, so do all of your containers. They "fix" this by forwarding signals from the run command to the underlying container, but it's all a huge horrible hack when they should just abandon client/server and decentralize it. (It's all the same binary anyway.)
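The cleanup mentioned above ends up being something like this (a rough sketch; the dangling filter may not exist on older daemons, in which case grepping `docker images` for "<none>" does the same job):

  docker rm $(docker ps -a -q)                              # removes stopped containers (running ones just error out)
  docker rmi $(docker images -q --filter dangling=true)     # removes untagged leftover image layers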

The other issues we've run into have mostly just been bugs that have been fixed over time. Things like containers just not getting network connections at all any more (nc -vz shows them as listening but no data gets sent or received), changing the "ADD /tarball.tgz" behavior repeatedly throughout releases, random docker daemon hangs, etc.

As we use docker for more and more serious things, we end up getting an odd suspicion that we're outgrowing it. We're sticking with it for now because we don't have the time to develop an alternative, but I really wish it was faster and more mature.


Yeah, docker is god damn slow. The layer file system is a great idea, but its implementation sucks. When you push an image whose base image is already pushed, you will see tons of:

  Image already pushed, skipping
  Image already pushed, skipping
  Image already pushed, skipping
  ...

This is really stupid; it apparently cannot just compare the list of image layers and push or pull only the missing parts. And the push and pull operations are slow as hell. It's really painful to use it in production. It slows down your whole deployment process, and it eventually becomes the bottleneck. It's really funny that they picked Go, a language that advertises performance, but failed to make a very basic task work efficiently.


I ran into dm issues too, something with grsecurity stopping a bruteforce.

Oh, and that "ADD this /that" will chown 0:0 the target. Come on.


Hi, the reason ADD applies a `chown 0:0` is to avoid applying the uid/gid of the files on the source system, which could be anything and would introduce a side effect in your application's build.

There is a pull request for letting you set the destination uid/gid deterministically, which is the right way to do it.

Just a reminder that we accept bug reports and patches :)


And /etc/hosts, /etc/resolv.conf are now writable.

  docker run -d registry

`docker rmi` can be tricky, but it can and does actually remove an image. The problem is if there is an image with multiple tags at the same commit (layer)... in this case it just untags until there are no other tags pointing to that commit. Alternatively, you can use the ID, I believe that always removes as expected.
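A quick illustration (the image name and tags are made up):

  docker tag webapp:1.2 webapp:latest   # two tags now point at the same image
  docker rmi webapp:1.2                 # only untags; webapp:latest still references the layers
  docker rmi webapp:latest              # last tag removed, so the image itself is deleted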


Q: Why do you want to change /etc/hosts and /etc/resolv.conf (besides hacking DNS for testing purposes)?


Some of us use the hosts file to write config.

e.g.

    10.0.12.34 mysql_host
This allows some degree of flexibility:

1. Boot up new instances quickly, as long as the hosts file is correct.

2. Don't have to hard-code actual mysql server IPs.

3. Make mysql master/slave failover much easier.


I think most folks doing devops end up using things like zookeeper, consul, etc. to perform the above, as opposed to hosts files.


Remember you may have to deploy third-party processes as well as apps whose code you control. Most existing software uses OS-standard APIs for doing things like resolving hosts, and you can't just point it at a path in zk. That means running a DNS server or configuring /etc/hosts.


Tiny/small sites don't require heavy lifters like zookeeper; however, they do require some degree of clustering and sometimes migration.


Regarding number two, you'll just link your containers together with docker. See https://docs.docker.com/userguide/dockerlinks/
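For example (image and alias names are placeholders):

  docker run -d --name mysql_host mysql
  docker run -d --link mysql_host:mysql_host webapp
  # inside the webapp container, /etc/hosts gets an entry for "mysql_host",
  # plus environment variables like MYSQL_HOST_PORT_3306_TCP_ADDR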

It seems that most of the issues the OP has are mostly due to misunderstanding or lack of knowledge.


Umm, so, how do you deal with Docker when you have more than one physical machine? It seems half the stuff it does just doesn't work then, and you have to pile on further abstractions anyway, using Docker just as a simple container management system (of which we have working ones already) and container build system (of which there are far better systems available).


What if the mysql server isn't in a docker container or even on the same machine?


Then you should be (ideally) using a service discovery solution or (less ideally) wiring up with environment variables. This is (one reason) why CoreOS uses etcd.


Then just use environment variables (the ENV keyword in a Dockerfile, or the --env param on the docker run command).
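For instance (the variable name and address are placeholders):

  docker run -d -e MYSQL_HOST=10.0.12.34 webapp   # the app reads $MYSQL_HOST instead of a hosts entry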


The problem with ENV is that you can't change the server IP on the fly; you have to restart the web server to apply the new ENV.


To run unit tests which rely on certain hosts to resolve to 127.0.0.1.


> To run unit tests which rely on certain hosts to resolve to 127.0.0.1.

If you are unit testing DNS infrastructure, shouldn't you mock DNS things?


I'm not unit testing DNS infrastructure. I'm unit testing an Apache module. I send requests to different virtual hosts on 127.0.0.1:80, and checking whether the responses are what I expect.


That sounds sorta like an integration test; a unit test wouldn't talk to DNS.


Because it's used by the applications I'd like to contain fully, you know, for all the purposes containers are supposed to be awesome for?

In this scenario I don't control the DNS server, and the app really likes aliases instead of IPs.


You know, on the project I'm working on, I thought about using the hosts file as a method of configuring the behaviour of a server: "mycomp-eventserver xxx.xxx.xxx.xxx".

In the end, we just created configuration files for it, as more often than not there are other factors that need to be included.

That said, maybe these requirements are being set by applications that are not yours to mess around with.

In which case, may god have mercy on your soul.


You might want to try systemd-nspawn (http://www.freedesktop.org/software/systemd/man/systemd-nspa...). It is much more bare bones than Docker, but in some use cases that might actually be an advantage.


I like systemd-nspawn much more than docker. Thanks.

But Ubuntu is still anti-systemd, garh.



I've found docker to be a revolutionary abstraction. By itself, it's not a silver bullet. But combined with the right tools, it completely shifts how you think about devops.

I recommend looking into Quay for private image hosting and building, and CoreOS as an environment to run in production.


This seems to suggest that the /etc/hosts, /etc/resolv.conf problem is being fixed: https://github.com/docker/docker/pull/5129


It may just be you, but given Docker's youth and novelty, you should expect some frustration. If everything about it seems like a pain in the ass, it might be the wrong tool for your use case.


I've adopted Docker in my main dev workflow and I'm super happy. And the 4 or 5 co-workers I showed how it works have since adopted it too.

I think it's very possible it's just you.


Would you mind sharing that workflow? I'm interested in how this is done, since I have failed to do it on several occasions.


I have failed to adopt it as well, and am still using Vagrant, but as my desktop is Linux, vagrant is a little bit of overkill.

The primary stopper for me is run vs. start, I think - and persistence of the most recent changes I made in an environment.


`run` creates a container from an image and tries to start it. If it doesn't start you have a stopped container in `docker ps -a`. If it does start, you have a started container in `docker ps`. You could stop a started container if you wanted, and later start it again without having to use `run` all over again.

Sometimes running fails to start the container because there's a problem. You'll then have a stopped container.
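A sketch of that lifecycle (names are placeholders):

  docker run -d --name web webapp   # create a new container from the image and start it
  docker stop web                   # stops it; its filesystem changes are kept
  docker ps -a                      # the stopped container still shows up here
  docker start web                  # same container, same writable layer - no new `run` needed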


I'm interested to hear everybody's take on how Docker compares/competes/integrates with virtualization (e.g. VMware, KVM, Hyper-V).


It's not the same thing, containment and virtualization.

A bit like comparing apples to oranges. Security-wise, KVM beats Docker.

I like lxc and systemd's machinectl containers. But Docker disappointed me. It fails to be a full container.


https://github.com/pannon/iocage. What? You asked for an alternative.


The conspiracy theorist in me says this is someone at Docker looking for competition.


If no one else steps up to it, I'll do it myself.


The truth is there isn't that much work to do anymore, so the tech employee convinces their boss's boss that some hyped-up tech will make them even more efficient. After a minimum of 3 months to integrate the new tech into the workflow, that same tech employee will ADHD his way into some other tech seen at Velocity and leave the internally wiki-documented mess for some poor sap to maintain.


I personally would say "It's just you."

Docker has been huge for us. We can run the same containers locally that we run in production. Dockerfiles are SIMPLE and easy for people to create and understand.

Fig has really made using docker for local dev rather pleasant.

Are there some hiccups here and there? Yes. The project is young and they are actively trying to smooth over a lot of issues and pain points.

I feel like most people who dislike docker have not actually tried it or used it. That could just be a wrong opinion though.



