Hyper Is Docker Done the Right Way (thenewstack.io)
98 points by mrmrcoleman on Nov 4, 2016 | 36 comments



I like the initial idea very much. As far as I can see, this is a wrapper over QEMU controlled by QMP [1][2]: they provide a kernel, an initrd, and an image, and they configure a machine profile (memory, vCPUs, bus, devices, etc.); see the sketch below. There are also interfaces for libvirt, Xen, and other virtualization technologies.

This also looks similar to what Docker is now doing on Windows with Hyper-V.

My question here is: where does the speed improvement come from, relative to a classical VM approach?

[1] https://github.com/hyperhq/runv/blob/master/hypervisor/qemu/...

[2] https://github.com/hyperhq/runv/blob/master/hypervisor/qemu/...
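
For anyone curious, the launch conceptually boils down to something like the following. (My own sketch in Go, not runv's actual code; the paths, sizes, and socket location are illustrative.)

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Roughly the machine profile runv configures: memory, vCPUs, the
        // hypervisor-side kernel/initrd, plus a QMP socket through which
        // the running VM is controlled. All paths and values here are
        // illustrative.
        cmd := exec.Command("qemu-system-x86_64",
            "-machine", "pc,accel=kvm,usb=off",
            "-cpu", "host",
            "-smp", "1", // vCPUs
            "-m", "128", // memory, in MB
            "-kernel", "/var/lib/hyper/kernel", // kernel shipped by Hyper, not by the image
            "-initrd", "/var/lib/hyper/hyper-initrd.img",
            "-append", "console=ttyS0 panic=1",
            "-qmp", "unix:/run/vm/qmp.sock,server,nowait", // control channel
            "-nographic",
        )
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }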


The most significant improvement comes from BootFromTemplate, which improves speed and saves memory. The boot sequence and guest kernel have also been optimized.
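
The template idea is in the spirit of QEMU's VM templating: boot one template VM, pause it, and save its device state; every new VM then starts from that snapshot, with its RAM mapped copy-on-write from the template's memory file, so the boot work happens once and read-only pages (kernel, init) stay shared across VMs. A sketch of launching a VM from such a template (illustrative paths and flags, not runv's exact invocation):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Start a new VM from a saved template instead of booting it.
        // Its RAM is a private copy-on-write mapping (share=off) of the
        // template's memory file, so unmodified pages, such as the kernel
        // and init, remain shared across all VMs, and the guest resumes
        // in milliseconds rather than going through a boot sequence.
        cmd := exec.Command("qemu-system-x86_64",
            "-machine", "pc,accel=kvm",
            "-m", "128",
            "-object", "memory-backend-file,id=mem0,size=128M,"+
                "mem-path=/run/template/memory,share=off",
            "-numa", "node,memdev=mem0",
            "-incoming", "exec:cat /run/template/state", // saved device state
            "-nographic",
        )
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }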


How has the guest kernel been optimized? Is it just a recompiled vanilla kernel with all the unnecessary parts stripped out?

What do you use for storage, and how is the host's storage accessed from the guest container? virtfs?

Doesn't that add a bottleneck/performance problem? Even if you're using virtfs, that's several layers of translation: from VFS/virtual block device (what kind do you use?)/PCI bus to VFS/block device/PCI, and so on.


How is this different from just using rkt with the KVM stage1? (Which, to my understanding, does basically the same thing.)


His final comments on K8s & GKE match the conclusion I came to while migrating my personal website to k8s: http://www.eggie5.com/82-rails-docker-app-deployment-kuberne...


The value proposition (difference) of Hyper is that while other CaaS try to manage your VM cluster, Hyper makes the cluster just go away.


Hey eggie5, you mean the point he makes about scale?


Other than using the Docker format / API, how is this not just a variant VM deployment strategy?

If every container is running its own kernel on a hypervisor, doesn't this eliminate one of the key benefits of a true Container system, the kernel/memory overhead?


With HyperContainer, we make the performance/overhead of a virtualized container similar to a Linux container's: a 130ms launch time, and sharing of the read-only parts of memory (the kernel and init). As a result, it can be used as a more secure container.


Can you summarize how you're doing this, e.g. the similarities and differences from Linux containers?


Interesting. I'll look into this further. Thank you for the insight.


Agree with teilo here. Pragmatically speaking, we also run into the size of the images needed for each container (think Windows down the road). The ease of use is certainly there, but at scale there's a cost down the line...


What overhead?


The benefit of a container over a full VM is meant to be that the container scheme has just one kernel running.

This scheme would effectively be just one container per VM, thus negating the benefit.

(Edit: So.. it would appear that the benefit is that they're read-only-sharing the kernel between VMs)


That is the traditional thinking indeed, but not all VMs are born equal. Check out the underlying tech here: https://docs.hypercontainer.io/


In terms of a public cloud service, you (the customer) don't care. You pay for the amount of memory you use; whether there is one kernel or many is irrelevant.


It is absolutely relevant: If your container is running its own kernel, then it is consuming memory for that kernel, not to mention the CPU overhead of the hosted kernel. Additionally, every container must also boot a complete OS. More overhead. This is why traditional containers use a shared kernel with process group isolation. You pay for what you use. A traditional container only uses what it needs for the app itself, and it starts in a fraction of the time because it doesn't have to boot anything. Performance and cost may be acceptable regardless, but that's not the point. Containers are more efficient.

Based upon responses from Hyper, they appear to address these concerns in a manner I have never heard before. I will certainly be looking into their core technology.


The value proposition of Hyper is that the overhead of the guest kernel is reduced to the point of being insignificant once you run any substantial app inside a HyperContainer; meanwhile, you gain the benefits of kernel isolation and of ditching VM cluster management altogether.


They don't boot a "complete OS"; they boot the kernel and a process, and that's it. It's not the kind of "complete OS" where systemd pulls in Firefox.


But you don't own the server, so you don't pay for the overhead (if any).


I'm sorry, but you don't know what you are talking about. You pay for what you use. I don't own Amazon's servers either, but I pay for every bit of memory and CPU time that my EC2 instances use, and that includes the memory and CPU time consumed by my instance's kernel.


OK, I'm a bit lost too. So your concern is the memory consumed by the VM kernel? The tradeoff is against the ops overhead of managing the VM instances yourself.


> "CoreOS, DigitalOcean, and Docker are sponsors of The New Stack."

Interesting and somewhat funny


Another funny thing is that Docker just got slapped in the face by "Docker in Production: A History of Failure" (https://news.ycombinator.com/item?id=12872304).


I wouldn't consider that a slap in the face. It's a rant; it's not the first about Docker and it won't be the last. Docker seems to be doing just fine despite that post and many like it. Though the author has some good points, it's riddled with statements that range from dubious to factually false.


Maybe I'm late to the party, but this looks almost exactly like what VMware did last year, with their vSphere Integrated Containers.


VMware has a similar project, but HyperContainer was open-sourced in May last year, the same as Intel Clear Linux, and predates the VMware project.


Is this not what ECS aims to do?


The difference between Hyper and ECS is that ECS (like Docker Cloud and GCE) requires that you provision and manage your own Docker server cluster.

With Hyper this is abstracted away, which has a few benefits. Hyper also runs on HyperContainers (https://docs.hypercontainer.io/), which provide secure (VM-like) multi-tenancy with container agility.


But at some point doesn't one need to think at the granularity level of clusters? In other words, there are natural limits to how big the hosting environment can grow, at which point performance and efficiency with respect to network topology and other host-system factors (in the broadest senses, e.g. at the level of data centers and the tech-pieces from which they're built) must be considered.

For reasons of convenience, in the case of some (many?) applications, perhaps those considerations can be ignored and the "container fabric" can simply be thought of as perfectly homogeneous; is that the idea? And if one grows past the point where that simplifying assumption holds, then it's time to switch to an architecture where clustering is explicitly addressed?


Broadly yes.

You could apply a similar argument to Digital Ocean or other providers. It is feasible that you could outgrow any provider, but in practice they would be able to scale before you hit that problem.

Is that what you meant or did I misunderstand?


In that sense, should we care whether the EC2 instances in an ECS cluster are located on the same server, or in the same rack?


It would depend on "how things are wired together". Now, it's great if they are "wired together" in such a way that one need not give it any thought. But whether we're talking EC2 instances or containers, at some point one has to think about it, e.g. two instances talking to each other, one being on the East Coast USA and the other in the Midwest, or West Coast, vs. their both being in the same datacenter. That's at the extremes, for sure, but maybe even intra-dc clustering has to be considered explicitly for certain applications? Maybe not?


The point is that you should stop thinking about the VM cluster and instead think about the container (application) cluster, aka microservices.


Excuse me for my ignorance, but what does "container agility" mean?


He was talking about the boot speed, and the push/pull image workflow.



