Hacker News

As I understand it, containers are just a set of concepts and kernel features put together to provide an abstraction that's not that different from virtual machines for common use cases.


It is a way to create a reasonably isolated environment without the VM overhead. On Linux it is usually called a container and implemented in terms of clone() and cgroups, BSDs have jails, Solaris has zones, and on Plan 9 it is a trivial concept. What is special about the Linux and Plan 9 cases is that the implementation is built from generic primitives that are not strictly tied to this use case.

As a side note: I’m somewhat bewildered by the docker-controlled-by-kubelet-on-systemd architectures, as there is a huge overlap in what these layers do and how they are implemented.


I'm not an avid container user, but I think at least part of the popularity for certain dev/ops/CI use cases is how easy it is to get stuff in and out of them using bind mounts and the like. For example, having a run environment that's totally different from a build environment. The only way to do this with conventional VMs is to access them over some in-band protocol like SSH (or to re-build the run rootfs and reboot that machine, which is typically extremely slow).




