
>I'm almost willing to bet money VM has been used in production for longer than you've been alive. And of course machines had MMUs back then: They built a special one for the IBM System/360 model 40, to support the CP-40 research project, and the IBM System/360 model 67, which supported CP-67, came with one standard. IBM, being IBM, called them DAT boxes, because heaven and all the freaking angels forfend that IBM should ever use the same terminology as anyone else...

You clearly know more about the details :) However, my point is that modern requirements aren't the same as those of the past, particularly with regard to security; workloads, use cases, etc. are generally very different too.

>The difference between this and Windows Me is that we knew Windows Me was a bodge from day one. Windows 2000 was supposed to kill the Windows 95 lineage. (What's a "Windows 2000"? Exactly.)

You said "The best argument is this: We've done it" - the point of these examples is that, yes, we've done many things, so it's not a very good argument. If you want to skip over ME, then take 3.1 - it used cooperative multitasking. Arguably that might be more efficient than pre-emptive multitasking (I'm not saying it is, only that somebody _could_ argue it), and it was good enough for the time, but the fact that we've done it doesn't mean we should do it.

This applies even more to security - for many years little to no effort was made towards hardening software. We live in a world where that just cannot happen any longer.

I'm not saying, by the way, that past use doesn't have value or demonstrate the usefulness of the approach - it might - just that the fact it was done before doesn't _necessarily_ mean it's a good idea now.

>Anyway, the hypervisor design concept came from people who'd seen what we'd now call a modern OS; in this case, CTSS, the Compatible Time-Sharing System (Compatible with a FORTRAN batch system which ran in the background...). They weren't coming from ignorance, but from the idea that CTSS didn't go far enough: CTSS was a pun, in that it mixed the ideas of providing abstractions and the idea of providing isolation and security into the same binary. The hypervisor concept is conceptually cleaner, and the article gives evidence it's more efficient as well.

Interesting. Not sure the article demonstrates that, though. It does suggest performance penalties, some serious, due to the abstractions of a modern OS. I'd want to look more closely at these before I believe the penalties are THAT severe, other than in the case of networking, where it seems more obvious the problem would arise.

>You missed my point. My point was that it works acceptably fast (yes, it does, I've used it, and you won't convince me my perceptions are wrong) even though it's operating in the worst possible context: In a userspace process on an OS kernel, where everything it does involves multiple layers of function call indirection and probably a few context switches. Compared to that, getting a stripped-down unikernel written in OCaml to be performant has got to be relatively easy.

I raised the performance issue because this seems to be the main selling point of a unikernel, but now we're losing performance because it's acceptable? OK, fine, but I think a 'normal' kernel in most cases has acceptable performance penalties too. This is something that really requires a lot of data, and maybe OCaml is nearly as fast anyway (I hear great things about it), but I just wanted to point out the contradiction.

>First: Only the unikernel would be written in OCaml. The hypervisor would have to be written in C and assembly.

>Second: I never said Linux performs terribly. Linux performs quite well for what it is. It's just that what it is imposes inherent performance penalties.

Absolutely, and agreed there are inevitable perf penalties (as the article describes well).

>Third: Although the article focused on performance, the main reason I support hypervisors is security. Security means simplicity. Security means invisibility. Security means comprehensibility, which means separation of concerns. Hypervisors provide all of those to a greater extent than modern OSes do.

I really find it hard to believe that security is wonderfully provided for in a unikernel - you have a hypervisor, yes, but if you get code execution in the application running in the unikernel, you have access to the whole virtual system without restriction. I'd bet on CPU-enforced isolation over software any day of the week; even memory-safe languages have bugs, and so do hypervisors.

I may have made incorrect assumptions here so feel free to correct me. I'm certainly not hostile to unikernels, either!



> I'd bet on CPU-enforced isolation over software any day of the week, even memory safe languages have bugs, and so do hypervisors.

... and so do CPUs! I do like CPU protections as long as they are dirt-simple, but it really scares me sometimes how complicated CPUs and chipsets are getting with their "advanced" security features. When an exploitable flaw is found and malware survives OS/firmware reinstalls, it will be a mess.


Yeah, this frightens me too :) Things like rowhammer [0] are surprising in this regard. Nothing can be trusted, but you can have a little more faith in some things.

[0]: https://en.wikipedia.org/wiki/Row_hammer


> I really find it hard to believe that security is really wonderfully provided for in a unikernel

I think you've missed the point: The hypervisor provides security. The unikernel does not. Everything in the same unikernel guest is in the same security domain, meaning everything in the same unikernel guest trusts everything else in that unikernel.

The hypervisor is the correct level to provide security because that's all it does. It exists to securely multiplex hardware, to allow multiple unikernel guests to run on the computer at the same time without having to be aware of each other. The hypervisor is, ideally, invisible, in that its only "API" is intercepting hardware access attempts and doing its magic at that point; it provides no abstractions of any kind, so it can be as simple as possible. You can't hit what you can't see, and you can't exploit code which isn't there.

The hypervisor of course uses all of the hardware tricks the CPU offers to provide isolation. Modern virtualization hardware fits the bill just fine here.

It's therefore possible for unikernels to establish and enforce their own security policy, just as you can run Linux as a guest under Xen. It's just that it shouldn't be necessary in a proper hypervisor/unikernel setup, because everything running in the same guest should trust everything else and only needs to worry about information coming in from the outside world.


Sure, but what about private state in the unikernel? If I exploit it, I have access to everything inside it without restriction; I could even change the way the drivers work, totally transparently to the rest of the software.

Is the idea that a single unikernel is equivalent to a single process? Surely we're getting into realms of serious performance issues if that's the case?

I do take your point that less going on means there is less to attack, and what you're saying is very interesting, don't get me wrong :) I'm just trying to understand it.

There have been hypervisor exploits, but of course far fewer than Linux/Windows/Mac escalations.


> Is the idea that a single unikernel is equivalent to a single process?

Yes.

A unikernel is equivalent to a process in a more traditional system. We usually don't secure parts of a process against other parts of the same process. We just start more processes.

> Surely we're getting into realms of serious performance issues if that's the case?

Why?


> Why?

Are you suggesting that running an entire virtualised kernel in place of a process is not going to introduce a performance penalty?

There might also be latencies introduced in IPC.


If you ran a virtualized version of an entire traditional kernel, you'd have a hard time with performance. So that's not what you would be doing.

Go read the old exokernel papers (see https://en.wikipedia.org/wiki/Exokernel#Bibliography, especially http://pdos.csail.mit.edu/exo/theses/engler/thesis.ps). They got nice performance improvements out of running their equivalent of unikernels, precisely because those can cut through all the layers of one-size-fits-all abstraction.

They also address IPC.

(This reminds me, I should go and re-read how they actually did IPC.)



