Docker Now Part Of Red Hat OpenShift (techcrunch.com)
106 points by KenCochrane on Sept 19, 2013 | 63 comments



Are you shitting me? Docker is a horrible little Go binary that doesn't even clean up after itself properly when you CTRL+C it. It's barely documented, it does barely anything except some macro combinations of tar+wget+lxc with a schmoozy web site on top.

Who is pushing this thing? Why does anyone even care about it? Does anyone even know what a Linux container is?


I think you could adjust your tone to be more HN-suitable, but I don't think this is worth getting upset about: it sounds like Docker made a lot more changes than did OpenShift. It's good positioning by DotCloud, but it's not likely to have any real consequences IMHO.

Kudos to the dotCloud guys for being willing to drop docker's weirder dependencies (e.g. AUFS)


Eh, AUFS isn't that odd these days. I built a chroot build utility for an open-source project on the back of it and it's super easy to work with. Haven't dealt with it in a couple of years, but I can't imagine it somehow got all kinds of crazy difficult to deal with or implement.


I haven't been able to find a clear reason as to why AUFS seems to have lost support from major distros - although the fact that it requires patching VFS in the kernel and isn't "just a module" doesn't help.


Yeah. That part is true. I think you are probably onto the reason there. For RHEL distributions at least, all I've seen included are the FUSE-based unioning filesystems, which is sad.


Don't think that's it. OverlayFS was accepted and also requires patching VFS.


If overlayFS is in-tree, then it's a possibility to be backported by Red Hat. From the discussions on the GitHub issue tracker, there's a definite problem with custom kernels vs RHEL supported w/ contract.


My understanding is that it's been unable to get into the Linux kernel because it's unstable? I agree it's a great tool if it works :-)


Docker maintainer here. I love aufs, it's been awesome for us at dotCloud. It is definitely stable in production (at least it has been for us). I really, really wish we didn't have to remove it as the default. And I don't really know why it's not being merged, exactly. I'm sure there are good reasons but they are not immediately obvious to mere mortals like me.

In any case, aufs will come back as an optional storage backend in a future version. It still has a few advantages over device-mapper - increased memory sharing for example.

Just my 2c.


Interesting! What was the response when you tried to get it merged into the kernel?


I had nothing to do with the effort to get it merged. But Junjiro Okajima (the author of aufs) seems to have tried a few times, and obviously it hasn't worked. At the same time, my observation as a "real world implementor" is that it works great and no obvious alternative is available.

Again I'm not an expert in "kernel politics" (you could even say I know nothing about them).


Union filesystems in general have had a storied and troubled past in the Linux kernel's political landscape. Valerie Aurora (formerly Valerie Henson), a former redhat kernel developer who worked on Union filesystems tried quite valiantly (no pun intended!) to get the unionfs patchset merged and ultimately left redhat to pursue personal things.

Somewhat recently there was some traction with a version of her patches that looked somewhat close to being merged, but it might have also been dropped.

http://lwn.net/Articles/482779/


Well, I imagine given the current love of Docker on this site you're going to be down-voted with a quickness, but from what I've seen and done with it so far I really don't see the great benefit of it over standard LXC. Essentially it's a neat wrapper and has some cool API features, but I don't see the benefit beyond that.

Honestly if I was/am going to hitch my cart to LXC I'd probably be going the libvirt-lxc route so I'd be dealing with something of an actual standard.

Edit: I suppose one reason people might prefer Docker over libvirt would be no XML blegh, but it's honestly not that bad.


It seems like the core of Docker is little more than a helper script that should be part of libvirt or LXC proper (which btw, already has a system for building containers from a template), but that would involve less horn-blowing and web design than it would submitting boring, well-tested old patches upstream.


DotCloud have done a wonderful job of popularizing the idea of LXC containers with Docker. I think the idea of LXC containers as software-distribution mechanism is a great one, and I credit Docker with getting that idea into my mind.

It is great to see that many of Docker's ideas are being implemented upstream; e.g. LXC added support for BtrFS snapshots shortly after Docker launched. Sounds like OpenShift is considering which Docker ideas belong in their platform.

It can sometimes be hard to work with upstream projects, who often have a different world-view. I do like the idea of releasing a "sacrificial lamb" project that is a demonstration of your ideas, even if long-term all the ideas belong upstream (i.e. LXC in the case of Docker).


I think jQuery is a good analogy. One day jquery might be unnecessary, when all browsers everywhere implement all the high-level API goodness. In the meantime... :)


A great analogy. Hopefully - just like jQuery - you can get Microsoft on board with LXC as well :-)


We're on HackerNews. "it's honestly not that bad"... the point of hacking is taking things that are bad or "not so bad" and making them better.

This argument to me achieves only the opposite, it makes me never want to consider whatever option it defends.


Here's what it looks like to create a container in libvirt-lxc: http://libvirt.org/drvlxc.html

It is pretty dang clear to me how to create it.
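For anyone who doesn't want to click through, it's roughly along these lines - a minimal sketch abridged from that page, so exact fields and defaults may differ by libvirt version:

    <!-- sh.xml: a minimal container definition -->
    <domain type='lxc'>
      <name>sh</name>
      <memory>500000</memory>
      <os>
        <type>exe</type>
        <init>/bin/sh</init>
      </os>
      <devices>
        <console type='pty'/>
      </devices>
    </domain>

    # define and start it with the standard virsh tooling
    virsh -c lxc:/// define sh.xml
    virsh -c lxc:/// start sh
    virsh -c lxc:/// console sh

A bit of XML, yes, but no custom tooling beyond libvirt itself.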


omg does this work even with Ruby and Go!?! :O Do you have a tutorial???! The fonts suck so bad. Omg this is not even web scale


Why yes, it actually does have Ruby bindings:

http://libvirt.org/ruby/


Here's my honest experience. Half a year ago, I used to think like that too. Then I tried everything vanilla. I used naked lxc. I used those lxc standard commands like lxc-start. I dedicated a partition as btrfs for lxc because I needed a lot of nodes based on the same base image. Then I wrote scripts to automate everything. It had been a fun experience. But it just stopped there. None of what I did was general enough to be reused. Then 3 months later, I tried docker again. I started to appreciate it. It achieved the same but with much less effort. Documentation is not the best, but the community is really responsive. I dumped my own scripts and never looked at them again.
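To give a flavor of the difference, here's roughly the kind of thing I ended up scripting versus the Docker equivalent (a sketch from memory - template names, flags and images are illustrative, not exactly what I ran):

    # by hand: base container on the btrfs partition, then snapshot-clone nodes from it
    lxc-create -n base -t ubuntu -B btrfs
    lxc-clone -s -o base -n node1
    lxc-start -n node1 -d

    # with docker: the image, copy-on-write layer and lifecycle are handled for you
    docker run -d ubuntu /bin/bash

Multiply that by networking, cleanup and image management and you quickly end up with a pile of site-specific scripts.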


Have they solved the issue of non-deterministic commands (e.g. apt-get update), which could mean a Docker container behaving differently?


> Does anyone even know what a Linux container is?

Judging by the number of times "when will Docker support Windows containers?" has been asked... no.


Windows is not quite there yet but... who knows :)

- Process isolation: http://msdn.microsoft.com/en-us/library/2bh4z9hs.aspx

- Network isolation: http://msdn.microsoft.com/en-us/library/windows/apps/hh77053...

- Disks:

-- Mounting volumes: http://msdn.microsoft.com/en-us/library/windows/desktop/aa36...

-- NTFS provides something like copy-on-write (part of a union file system) via "Single Instance Storage" http://en.wikipedia.org/wiki/NTFS , http://www.neowin.net/news/microsoft-details-how-windows-8-w...

-- VHD "differencing disks": http://msdn.microsoft.com/en-us/magazine/dd569754.aspx, http://technet.microsoft.com/en-us/library/cc720381(v=ws.10)...

...or just use colinux: http://www.colinux.org/ "that allows it to run cooperatively alongside another operating system on a single machine. For instance, it allows one to freely run Linux on Windows 2000/XP/Vista/7, without using a commercial PC virtualization software such as VMware, in a way which is much more optimal than using any general purpose PC virtualization software. In its current condition, it allows us to run the KNOPPIX Japanese Edition on Windows."


What is a package manager except a macro combination of tar+wget+a dependency graph?


[deleted]


Docker maintainer here. This is a common misconception, and I find it disappointing. On the one hand we glorify simple designs and the unix philosophy of doing one thing well... only to trivialize and look down upon projects which apply this philosophy. It's so simple... it's not "real technology"! No wonder so many projects are so complicated - that's how you prove to your peers that you're a real man!

Now to address the initial comment: docker has different goals from lxc. There's a summary of the differences here: http://stackoverflow.com/questions/17989306/what-does-docker...


I deleted my comment before you replied, because while I tried not to make it rude, I decided there's no reason to add another negative voice to this thread.

The fact is that Docker is very useful, but it really is simplistic from a technical perspective. It's a front-end that leverages the very impressive and very hard work done to create Linux, LXC, Aufs, etc. Those guys deserve 98% of the credit that Docker is getting, which I admit does irritate me to some degree.

Yes, Docker is a high level abstraction that could use OpenVZ or something else, but if it used OpenVZ I'd just say that the OpenVZ guys deserve most of the credit -- having done most of the hard work.

Docker is the kind of thing that companies create internally all the time, the way you did at dotCloud. It's very neat that you open sourced it, and that it has brought attention to another way of doing application deployment, but that doesn't make it a revolutionary technology.


The revolution isn't in the technology. You're right; Docker is just a standardization of the use of these other components, for people to rally around. But the rallying-around is the revolutionary part. If everyone builds their own Docker, we can't interoperate at a "container" level. If everyone settles on one container format, we can.

In my opinion, Docker-the-implementation is only interesting insofar as it increases the network-effect of Docker-the-container-format. :)


Likewise, it irritates me when people put something down for popularizing an idea without too much technology. This "who needs marketing? all we need is code!" attitude is exactly why desktop Linux never took off. Popularization and marketing by themselves are very important, I'd wager at least as important as the code. What good does your code do if few people know about it or how to use it? The Docker guys are doing LXC, Linux, Aufs etc a huge favor.


I think you could argue that Docker (and similar projects sure to come) are merely the inevitable result of creating technology that's extremely useful and publicly available. No one marketed Xen, OpenVZ, or KVM. They run on millions of servers today. Certainly no one confuses those technologies with the many wrappers for them.

And I'm not trying to hate on Docker either. I think it's cool and I'm sure the guys that work on it are nice. But I must agree with the root comment (though not his tone). From a purely technical perspective it's not very interesting or irreplaceable, the way that OpenVZ or lxc are.


> No one marketed Xen, OpenVZ, or KVM

Those are not very good examples. Xen was very heavily and skillfully marketed by its corporate sponsor, XenSource, and after that by various companies with a financial stake in the project.

OpenVZ was the flagship technology of Virtuozzo, and is now at the heart of Parallels' commercial offering.

KVM was the heart of Qumranet's offering, which was then acquired by Red Hat.

So all three of these examples are factually wrong: all three technologies were heavily marketed, both externally (getting customers to use it) and internally (getting other businesses to support it, contribute to it, and lobbying to get it merged upstream). Arguably none of these projects would be so widely adopted without the marketing efforts of their corporate sponsors.

Better examples might be vserver or qemu, which are not directly linked to corporate sponsors (that I know of).


> From a purely technical perspective it's not very interesting or irreplaceable, the way that OpenVZ or lxc are.

Obviously I am biased since I work on Docker. But I completely disagree. I believe Docker has the potential to be quite important and quite irreplaceable (in other words: useful!), although in a very different way than openvz and lxc. If I didn't believe that, I wouldn't be spending so much energy working on it. And if you don't understand why developing Docker is a lot of work - actual engineering work, not marketing, although that is important too - well one of us is overestimating his understanding of the topic :)


My guess is the overestimation is mostly on my side, but a bit on your side :-)

I created a simple deployment system using auto-generated and versioned OpenVZ templates, achieving something pretty similar to what Docker buys you. It's pretty trivial to do that much, which I think can fairly be said to be the core of what Docker is.

I realize that Docker is trying to do a lot more than that, and those things probably will end up being very useful and valuable. I'm sure there's a lot of hard technical work involved in making them happen.

And there's a reason I deleted my original comment. I truly don't have any desire to call your baby ugly. I'm sure you guys are putting in a lot of effort and I truly am excited that it exists. Infrastructure software written in Go automatically earns my vote.

Anyway, good luck. Don't let the^Wus bastards get you down!


Thanks - it's good to see it's still possible to disagree constructively on HN :)

Yes, I think it's fair to say that a skilled sysadmin can assemble a system quite similar to the core of docker. In fact many, many sysadmins have. The result is an ocean of DIY container management tools, each incompatible with the other, and each not quite generic or polished enough to be made usable by others because, well, sysadmins have real work to do :)

So the question really is: is it valuable to federate efforts so that instead of 1000 incompatible and unpolished tools, we get a small number of tools which are more polished and more interoperable? I believe the answer is yes, because it allows the use of containers not just as a site-specific deployment tool, but as a mechanism for code distribution and re-use. We can make containers as useful as libraries! That is not possible unless we agree on some sort of standard "call convention".

Now, you may agree with this but answer "but these tools already exist: look at lxc, openvz and libvirt", and there I respectfully disagree. If those tools were sufficient, sysadmins and developers would just use them directly, instead of each writing elaborate abstractions over them. These tools were not designed to use containers as units of software distribution and re-use. They were designed to use containers as lightweight servers. Basically like a VM but faster. Those are useful tools - but they serve a different purpose than Docker.


I think this is actually one of the best formulas for successful products: the productization of systems that lots of companies are already forced to individually build themselves.

What probably colors my perception of Docker's triviality is the initial version that I checked out. Having just checked out the latest version now, I can see a ton of work has been done.

The first version of Docker probably isn't much more than what an ambitious internal project might look like. The latest version is exactly what you'd get if a company put real effort into it.

And I actually completely agree about creating high level interfaces. I'm not someone who argued that Dropbox shouldn't exist because I personally know how to use rsync quite well.


"This "who needs marketing? all we need is code!" code is exactly why desktop Linux never took off."

No - it absolutely isn't. I cannot emphasise that more. It isn't at all why "desktop linux" didn't "take off".

Let's not open up the whole desktop linux thing, but a desktop linux that had "taken off" is almost certainly not a desktop linux that I'd want to be using. As it is desktop linux works seriously well for very many people.


Hi there,

> Docker [...] doesn't [...] clean up after itself properly when you CTRL+C it.

Sorry about that. It looks like nobody reported this issue yet. Would you mind filing a bug report on the github repo? Feel free to join the #docker channel on Freenode and we'll be happy to help you out.

> Does anyone even know what a Linux container is?

Yes.


This is not the best analogy, but I liken Docker to a high level language. I am not one of those hackers that started programming when I was a fetus, nor did I write my own LISP before I could walk. I started off with Basic in the early 90s (at the ripe old age of 17). I got frustrated with some speed and moved to C. I got frustrated again and moved to Assembly (as an aside I then did my first network enabled program and realized I could write in Basic and it would still be fast enough).

How does this relate to Docker? I knew nothing about containers and was first introduced to them by Docker. So I started off using Docker, seeing what I could do with it. When I got frustrated, I started looking at the core components of Docker, which led me to lxc.

So yeah - it has bugs. But it is opening up a "whole new world" to some people.


> I liken Docker to a high level language

I like this: Using LXC directly is like using a (pointer, length) struct in C. Using Docker is like using an array-slice in Go. They're not that far apart, but they're on different levels of abstraction, and come with different guarantees (even if those guarantees might not, in practice, be enforced yet.)


There was a time that you could have said that github was just "a schmoozy web site on top" of git.


It still is. ;)


Redhat trying to ride docker hype I guess


This was an awesome collaboration. Over 15 senior contributors at Red Hat working with us at Docker.




Will you continue supporting Ubuntu / Non-redhat kernel versions as well as previously? Or does this portend a move like what happened with Gluster?


No, Docker is not becoming Red Hat-specific. From a technical standpoint this announcement means two things:

1) Docker 0.7 will run on vanilla kernels out of the box. This means virtually all distros will be supported. It also means wider support for hosting providers which don't allow custom kernels (Google Compute Engine for example).

2) Future versions of Docker will optionally support some of the technology used by Red Hat - most prominently libvirt-lxc and selinux.

The more places you can use Docker, the more useful it is :) So we have no intention of locking it into a single distro or paas.


I'm assuming by SELinux you mean the work on top of SELinux for virtual machines and containers with libvirtd, named sVirt[1]?

[1] https://fedoraproject.org/wiki/Features/SVirt_Mandatory_Acce...


That seems like the most probable path, although I can't speak for the people making that contribution.

What I mean, regardless of how we actually implement, is having an elegant way to deploy containers in environments where the sysadmin relies on SELinux contexts and labels to implement security.


Yes, we will still support Ubuntu, and other non-redhat kernel versions.

This is basically just expanding our current abilities to include areas where we didn't have great support before.

The goal is to have Docker running on all linux platforms, so that everyone can use it regardless of their distro choice.


What do you mean by "what happened with Gluster"? Every project tends to be better packaged for some OSes/distros than others, based on the developers' own expertise. GlusterFS was a bit more rpm-friendly than deb-friendly since long before the acquisition. That hasn't gotten much better, unfortunately, but neither has it gotten significantly worse. There's enough software in the world that barely even runs on any distro other than Ubuntu. Is it really worth complaining every time that's not the case?


It's a slightly different case than what's presented here: my previous company was paying for Gluster support - when Red Hat purchased Gluster, they created a "storage product" and refused to support it unless we used "Red Hat Storage Server" on Red Hat and not Ubuntu. That was inconvenient.


Ah, I see, you were talking about Gluster (the company) rather than GlusterFS (the project). Got it. Yeah, some of the stuff about Red Hat Storage and commercial support and "recommended configurations" is not my cup of tea either, and I'm on the GlusterFS development team at Red Hat. The upstream community project has a lot of potential that can be hard to reconcile with a cautious "what can we support with minimal training" mindset.


Any info on the switch from Aufs? What is the new solution, and how does it compare?


We're using device-mapper thin provisioning technology. Same copy-on-write capabilities, but more compatible with upstream kernel versions.


Cool. Are there any open issues where those of us who are interested can follow the details?

My main question is whether this will require users to create a fixed-size filesystem for each container up front, like you would have to do if you were using LVM snapshots directly.


Each container will have a "fixed-size filesystem", but:

- it will be thinly provisioned (i.e. it can be 10G or 100G but still use only a few MB on disk if it's essentially empty, like a sparse file),

- it can be grown easily.

On the one hand, it's a bit less convenient because you have to care about the disk usage.

On the other hand, it's great because a single container can't eat up all your precious disk space (and if you want to run some public/semi-public stuff that's quasi mandatory).

If you want to check the current code, you can look here: https://github.com/alexlarsson/docker/tree/device-mapper3


No, the goal is for the current user experience to remain the same. By default docker will create a sparse file, loop-mount it, and use that for all containers. There is some magic in thin provisioning which allows for dynamically resizing when needed. We will add more details in the docs and mailing list in the coming weeks.
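For the curious, the moving parts underneath look roughly like this if you were to wire it up by hand (a sketch of the same approach, not Docker's actual code - file names, sizes and pool parameters are illustrative):

    # sparse backing files: large apparent size, almost nothing used on disk
    truncate -s 100G data
    truncate -s 2G metadata

    # attach them to loop devices
    DATA=$(losetup -f --show data)
    META=$(losetup -f --show metadata)

    # one thin pool backs everything; each container gets its own thin volume from it
    dmsetup create docker-pool \
        --table "0 $(blockdev --getsz $DATA) thin-pool $META $DATA 128 32768"

Docker does the equivalent of this setup automatically, so users never have to touch dmsetup themselves.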

The best place to follow progress on this is to track docker-dev (https://groups.google.com/forum/#!forum/docker-dev) and the #docker-dev irc channel.


This is good news, docker is getting more serious. :-) Thank god they got rid of aufs!


Why "thank god"? I'm not trying to be snarky or rude, I'm wondering what's so bad about it?


See antocv's message, that's what I meant more or less.

I don't know how good aufs is (nor do I care really), but if it's not in mainline/upstream then it'll never be taken seriously.

If you ask me to start building custom kernels to support your stuff, then I'm not going to like you. :-)


There isn't much bad that I see about it, but it isn't supported by most vanilla kernels, so for example to get it on ArchLinux I spent a little time getting it into a kernel, only to find out later that I'd forgotten my VirtualBox modules. Hassle.

I think the idea of unionfs should be in the kernel.





