Xen on Raspberry Pi 4 (xenproject.org)
113 points by posix_me_less on Oct 4, 2020 | 27 comments


I reach for LXC if I need to contain something inside its own OS, on the condition that the contained OS is running the same kernel as the host.

As I understand it, Xen would allow me to contain OSs that aren't even Linux. What is available for the Raspberry Pi's hardware that I should try?

Is there a compelling advantage to using Xen to contain a Linux guest, versus LXC?


Xen is not a container system; compare it to QEMU/KVM instead.

I would not host untrusted customers in Docker containers or LXC, but I would in Xen HVM any day.


> I would [...] host untrusted customers in [...] Xen HVM any day.

Intel doesn't recommend running untrusted code even in a separate VM, according to their official explanation of Spectre/Meltdown. At least we can do it on RPi now.


> At least we can do it on RPi now.

Couldn't we already do it on an RPi with Linux KVM + QEMU?


Not to mention that if you give the untrusted VM a network interface, it can be the equivalent of plugging an untrusted computer into your network.


Huh? You can't make a VLAN or use a firewall?


The A72 has speculative execution, so I don't think that's strictly true.


Security is one advantage. If you're running a server with multiple containers and you're concerned about attackers taking over one of them, you can isolate them more from each other and from the rest of the system using VMs.

(Addendum: I'm not that experienced when it comes to container tech, and I've heard you can isolate your containers pretty well these days. Virtualization has also changed over time and isn't as "isolated" as it once was; for example, virtualized guests often execute directly on the host's CPU. I'd definitely look into this more before deciding on one approach.)


> What is available for Raspberry pi’s hardware, that I should try?

Windows runs on the pi4 (and now has x86 emulation, IIRC).

There are more than a few things that only run on Windows.


> Windows runs on the pi4 (and now has x86 emulation, IIRC).

That's new to me. Are the RPi 4 and Windows' x86 emulation performant enough to run popular Windows-only apps like the Adobe suite?


Microsoft Surface units in non-US markets have been available in ARM variants for some time now. They were sold under a different market name, but I can't recall what they're called just now.


I wonder if Hyper-V is supported...


Can someone point me to some literature that explains the benefits of using something like Xen over KVM/qemu? I’m mostly experienced with Linux systems, so KVM is generally what I reach for.


It really depends on your context.

On the x86 side, Xen has two unique advantages over KVM in terms of security.

The first is the mature XenProject Security Response process [1]. All known Xen-related security issues, even DoSes, are announced and documented, so it's easy to find out when you need to patch your software. If you're a cloud provider, or make software with Xen as a component, you can be notified under embargo before the public announcement, so you can have your cloud patched / have a software patch tested and ready to download on the day the announcement goes public.

KVM doesn't have an equivalent process. Many high-profile KVM issues are handled under embargo on a mailing list, but 1) many are not, and 2) only distros are allowed to be on the list.

The second advantage on the x86 side is a set of additional defense-in-depth security measures, including driver domains and device model stub domains. Driver domains allow you to run device drivers in a completely separate VM; so (for instance) a privilege escalation in iptables would allow an attacker to control only the bridge and the network device, as opposed to being able to take over the whole system. Similarly, device model stub domains run QEMU (or the emulator of your choice) in a separate VM, which means that if there's a privilege escalation bug in QEMU, you've just broken into Yet Another VM; whereas in KVM you're now inside a Linux host process. KVM processes inside Linux can be restricted with things like sVirt, but it's fundamentally more difficult to isolate a process than a VM. (See the config sketch below.)
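
To make that concrete, here's a minimal xl guest-config sketch showing the two knobs involved (the guest name, bridge, and the "netdom" driver domain are hypothetical): device_model_stubdomain_override moves the QEMU device model into its own stub domain, and backend= on the vif points the network backend at a separate driver domain instead of dom0.

    # /etc/xen/guest.cfg -- illustrative sketch, not a drop-in config
    name = "guest1"
    type = "hvm"
    memory = 2048
    vcpus = 2

    # Run the QEMU device model in its own stub domain instead of dom0
    device_model_stubdomain_override = 1

    # Use "netdom" (a hypothetical driver domain that owns the NIC) as
    # the network backend, rather than dom0
    vif = [ 'bridge=br0,backend=netdom' ]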

These are some of the reasons why QubesOS [2] and OpenXT [3] both rely on Xen.

On the embedded side, the distinctive feature Xen has over KVM is that it's a microkernel-style hypervisor. This leads to a couple of advantages.

First, Xen itself boots in less than a second, and using the "dom0less" feature, can direct-boot any number of other domains from the same initrd [4]. This means that if none of your VMs are Linux, you don't need to run Linux at all -- you can boot up all of your VMs and have them up and running in hundreds of milliseconds; or, if you need a single VM up and running quickly, you can start that one along with dom0, and start your other ones from dom0.
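
For the curious, dom0less domains are described to Xen in device tree nodes under /chosen; a rough sketch (addresses, sizes, and names are illustrative, based on the Xen dom0less docs) looks something like this. Xen then constructs and boots domU1 directly at boot, with no dom0 involvement:

    chosen {
        domU1 {
            compatible = "xen,domain";
            #address-cells = <0x2>;
            #size-cells = <0x2>;
            memory = <0x0 0x20000>;   /* in KB, so 128MB */
            cpus = <1>;

            module@48000000 {
                compatible = "multiboot,kernel";
                reg = <0x0 0x48000000 0x0 0x1000000>;
                bootargs = "console=ttyAMA0 root=/dev/ram0";
            };
        };
    };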

Secondly, Xen is small enough to be safety certified. This is possible for a microkernel-style hypervisor like Xen, particularly with the "dom0less" direct-boot feature, in a way that would be impossible for KVM, since you'd have to certify not only the relevant parts of the Linux kernel, but also all of the userspace that runs to start your other VMs.

This is why Xen has been making significant inroads into the embedded space. It's been put on rockets [5], and was chosen by ARM to be part of their Automotive Reference Platform [6].

If you just want to run the occasional VM on your x86 desktop, then KVM is likely to be a better bet: There won't be a significant performance difference, and it's less effort to set up.

But if you're making a product in which you want to embed virtualization, Xen has a lot of advantages. This to me is actually why Xen for RPi is so interesting: 44% of RPi sales are for industrial use cases, and this port expands the market both for Xen and RPi.

Obviously there are lots of other strengths and weaknesses, but that should give you an idea.

[1] https://xenproject.org/developers/security-policy/
[2] https://www.qubes-os.org/
[3] https://openxt.org/
[4] https://xenproject.org/2019/12/16/true-static-partitioning-w...
[5] https://www.embedded-computing.com/guest-blogs/the-final-fro...
[6] https://www.youtube.com/watch?v=boh4nqPAk50


While you make good points about the potential defense-in-depth aspects of Xen, the reality is that features like stub domains and dom0less have been very experimental and aren't enabled by default or used in production. They were first introduced about 10 years ago and are still tricky to set up. Porting and testing device drivers in stubdomains is also hard (from my own experience).

sVirt for KVM, on the other hand, is enabled by default on Red Hat-like distros, providing a reasonable MAC policy and a locked-down device model without requiring the admin to do anything. I like Xen's design, but I can't help feeling that safer defaults trump theoretical configurations.
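
You can see that confinement on a stock RHEL/Fedora host: each guest's QEMU process runs in the svirt_t domain with its own MCS category pair, so compromising one QEMU can't touch another guest's resources (output abridged and illustrative):

    $ ps -eZ | grep qemu
    system_u:system_r:svirt_t:s0:c147,c880 ... /usr/libexec/qemu-kvm -name guest1 ...
    system_u:system_r:svirt_t:s0:c12,c504  ... /usr/libexec/qemu-kvm -name guest2 ...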


Did you mean "stub domains and driver domains"? I'm pretty sure dom0less was used in production before it even made it into the main Xen tree. :-)

As I said, using Xen from a plain vanilla distro is more difficult to set up. There are a couple of reasons for this, one of them being simply that Red Hat's main product is itself a distro, so the setup work for their own product translates directly into making KVM easy to set up on other distros.

The companies with products shipping Xen, on the other hand, primarily ship fully-integrated products. Citrix Hypervisor (formerly XenServer) and XCP-ng are "virtualization appliances" in which everything is integrated. OpenXT and QubesOS are the same way. Citrix Hypervisor / XCP-ng don't use driver domains, but they do have custom QEMU deprivileging for device models. OpenXT and QubesOS do use driver domains (as I understand it). If you install one of these, you get a secure setup by default.

Fundamentally, none of these organizations would benefit directly from making it easier to set up driver domains on plain vanilla distros; and so making it easier to use driver domains on a plain vanilla distro never gets to the top of their engineers' priority queue. If you're interested in using Xen on a server fleet, I would definitely recommend going with XCP-ng or Citrix Hypervisor; if you want a secure desktop, definitely go with QubesOS or OpenXT. On the other hand, if having your fleet / desktop based on a vanilla Linux distro is a priority for you, then KVM might be a better bet at the moment.

But as always, "patches accepted". :-)


Thanks for the interesting reply. These are fair points, and I hadn't considered the Citrix products. If you read this, I do have another question: is there a way to disable PV guest support in Xen? IIRC this code represented a good portion of Xen's CVE list, and it's an attack surface we no longer need on modern CPUs. In theory you can just never run a PV guest, but the code would still be present?


TLDR: Yes, you can disable PV entirely on x86 systems; on ARM systems, there never was PV.

x86, as you say, has "classic" Xen PV, which doesn't require virtualization extensions. It also has "HVM", which includes full system emulation (motherboard, etc.); and "PVH", which is basically what PV would be if it were designed today: it takes advantage of hardware support when that makes sense, and paravirtualizes when that makes sense. PVH doesn't require a device model to be running at all, but also isn't susceptible to the PV XSAs. There's also a mode called "shim" mode, which allows you to run "classic PV" kernels inside PVH mode.

Xen now has Kconfig support, and you can disable PV mode entirely when building Xen. You can run dom0 in PVH mode, and then run all of your guests in either PVH or HVM mode.
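
Roughly, that looks like this (a sketch; CONFIG_PV / CONFIG_HVM are from Xen's x86 Kconfig, and dom0=pvh is a Xen command-line option):

    # When building Xen (e.g. via `make -C xen menuconfig`), drop classic PV:
    #   CONFIG_PV is not set
    #   CONFIG_HVM=y

    # Then boot with dom0 in PVH mode by adding this to Xen's command line:
    #   dom0=pvh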

The ARM port was made after ARM had virtualization extensions, so it never had a "classic PV" mode; nor does it require any device model whatsoever. There's only one guest mode, which corresponds roughly to the "PVH" mode above.


And that's the right thing to do! Xen offers no practical advantages anymore; that may have been different historically.


In the past (4+ years ago) I read that Xen was more performant than KVM due to tighter integration between the host and guest kernels.

Is that no longer the case? Is paravirtualization no longer necessary, or does KVM do paravirtualization now?


KVM has supported paravirtualized devices (virtio) for years now. Some project history: https://lwn.net/Articles/705160/
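
As a concrete example, a KVM guest on an ARM host typically uses virtio for disk and network; an illustrative invocation (the kernel and image file names are made up) might look like:

    qemu-system-aarch64 -M virt -enable-kvm -cpu host -m 1024 \
        -kernel Image -append "root=/dev/vda console=ttyAMA0" \
        -drive file=guest.img,format=raw,if=virtio \
        -netdev user,id=net0 -device virtio-net-pci,netdev=net0 \
        -nographic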


> According to the RPi Foundation, over 35 million have been sold, with 44% of these sold into industry


This was the thing that most surprised me: We tend to think of RPi as aimed at hobbyists, but it turns out that it's actually at a really good price / feature point for a lot of industrial applications.


Has anyone tried comparing Xen's performance on the Pi against KVM? Any reason to choose the former over the latter?



Is it possible to pass hardware peripherals to a virtual machine with this?


I've done that with Xen before, though I haven't tried it on a Pi yet. (Sketches below.)
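
For reference, on x86 Xen that's PCI passthrough in the xl config; on ARM boards like the Pi, Xen supports device tree ("platform device") passthrough instead. Illustrative sketches only (the BDF and device path are hypothetical, and a real ARM setup also needs a partial device tree plus iomem/irq grants):

    # x86: hand a PCI device to the guest by BDF
    pci = [ '0000:01:00.0' ]

    # ARM: pass through a platform device described by a partial device tree
    device_tree = "passthrough.dtb"
    dtdev = [ '/soc/serial@7e201000' ]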



