The NEMU folks did a talk at KVM Forum last year -- the impression I got was that their intent was to use NEMU as a testbed and demo platform for what a slimmed-down version of QEMU could look like, and then, as they established workable approaches, to propose/submit them upstream to QEMU piecemeal.
On the upstream end, one of the features that landed in 4.0 was a 'Kconfig' system that hopefully will make it easier to build slimline versions of QEMU which don't compile in the kitchen sink.
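The rough idea (just a sketch from memory -- the exact file names and CONFIG_* symbols may differ from what actually shipped in 4.0) is that each target gets a small config of CONFIG_FOO=y lines selecting which device models get compiled in, so a virtio-only build could look something like:

    # hypothetical default-configs/x86_64-softmmu.mak for a minimal cloud build
    # (symbol names are illustrative -- the real ones live in hw/*/Kconfig)
    CONFIG_PCI=y
    CONFIG_Q35=y
    CONFIG_VIRTIO_PCI=y
    CONFIG_VIRTIO_BLK=y
    CONFIG_VIRTIO_NET=y
    CONFIG_SERIAL=y
    # nothing selects VGA, IDE, floppy, etc., so those device models
    # never get built into the binary

combined with a configure switch along the lines of --without-default-devices so the default device set doesn't pull everything back in.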
Can you provide a link to it? Google found http://bitblaze.cs.berkeley.edu/temu.html which seems like the wrong thing (it's stuff added to qemu, and it's from 2008).
Maybe https://bellard.org/tinyemu/ ? Seems like the right thing, but I don't think it's based on qemu. Anyone tried it with production workloads? It does seem like it should work - it does KVM and has virtio devices. Although at first glance I don't see either multiple-CPU support or live migration support - not having multiple-CPU support would basically rule it out for real workloads.
> Modern guest operating systems that host cloud workloads run on virtual hardware platforms that do not require any legacy hardware. Additionally, modern CPUs used in data centers have advanced virtualization features that have eliminated the need for most CPU emulation.
> There currently are no open source hypervisor solutions with a clear and narrow focus on running cloud specific workloads on modern CPUs. All available solutions have evolved over time and try to be fairly generic. They attempt to support a wide range of virtual hardware architectures and run on hardware that has varying degrees of hardware virtualization support. This results in a need to provide a large set of legacy platforms and device models requiring CPU, device and platform emulation. As a consequence they are built on top of large and complex code bases.
> NEMU, on the other hand, aims to leverage KVM and be narrowly focused on exclusively running modern, cloud native workloads on top of a limited set of hardware architectures and platforms. It assumes fairly recent CPUs and KVM, allowing for the elimination of most emulation logic.
> This will allow for a smaller code base, lower complexity and a reduced attack surface compared to existing solutions. It also gives more space for providing cloud specific optimizations and building a more performant hypervisor for the cloud. Reducing the size and complexity of the code allows for easier review, fuzz testing, modularization and future innovation.
Sounds like what OpenBSD has done with LibreSSL as a slimmed down fork of OpenSSL. Or even OpenBSD itself as a slimmed down fork of NetBSD. Makes sense to me.
but like, how much further along would hard things like video/network drivers be if instead of managing 3 separate "everything else", it was just 1?
think of the time and effort that goes into documenting all of their code, compiling, getting hosting set up - bootloaders, different filesystems, different libc implementations, different command line tool implementations.
Eh, there are a number of strong personalities involved in all three projects, and they all have their own priorities. So some kind of split was a natural outcome.
As a consequence, all three projects have their own focus (formed in part by the existence of the other projects), and that's good too.
but in the open source software industry, where most people are volunteering, it makes sense to try to be more efficient and not have 100 Linux distributions and 5 BSD flavors
or at least, it makes sense to me
but I guess contributors are free to spend their time however they please
OpenBSD and NetBSD have different priorities; NetBSD wants to run on every device possible, while OpenBSD wants to be simple and secure. It'd be more efficient to fork off and let them focus on those priorities (and cross-pollinate where it actually makes sense to cross-pollinate) than to expect those often-conflicting priorities to always have to be balanced throughout the entirety of development.
Same goes for Linux distros (albeit to a lesser extent, since "compatibility with software written for bigger distros" tends to be an implicit design goal for smaller distros), or for illumos distros (OpenIndiana is very different from SmartOS).
> The rust-vmm project came to life in December 2018 when Amazon, Google, Intel, and Red Hat employees started talking about the best way of sharing virtualization packages. More contributors have joined this initiative along the way... The goal of rust-vmm is to enable the community to create custom VMMs that import just the required building blocks for their use case.
gvisor is a fundamentally different thing from Firecracker (or the role QEMU or NEMU play when used with KVM). There's a pretty good summary here: https://gvisor.dev/docs/architecture_guide/
Glad you mentioned this, I came here to say something similar. The readme on the NEMU project definitely echoes a lot of the Firecracker project. Hopefully they both continue to push the boundaries of lightweight virtualization.
Calling this “hypervisor for the cloud” is a bit disingenuous. There is no way it would compete with the existing commercial solutions out there by VMWare/Pivotal, Joyent, and Red Hat. Where is the HA capability? No way to monitor cluster health. No k8s support either. This is just a fork of Qemu.
I think the backstory for this project is that QEMU is a full-featured virtualization project: it can emulate many machines, including full software emulation (i.e., not using hardware virtualization features at all, just interpreting machine code), it can do userspace emulation (running a binary for a different CPU on the host kernel, instead of emulating an entire machine), it supports all sorts of hardware, it supports features needed for debugging OS development, and so on.

Most of that isn't useful in the cloud. In the cloud you've got hardware virtualization, almost always on x86-64 but occasionally on ARM64, and you're virtualizing the same platform as the host. You're typically running an OS that can have custom drivers for your virtualization platform; you're not running old/historic OS images that need to run faithfully. In fact you generally want custom drivers for your virtualization platform because they're higher-performance, and you wouldn't want to emulate particular models of real-world network cards/disks/etc. in the first place - and in QEMU, those turn out to be huge attack surfaces.

In the cloud you generally don't need a local graphical display, and often don't need a graphics driver at all, since you're logging into machines over SSH (and if you need early boot debug output, you can get it over a virtualized serial port). You don't need a floppy disk or a parallel port. And you probably don't need the standard PC bootup process where the machine starts in 16-bit mode for compatibility with old bootloaders, which themselves switch to 32-bit and then to 64-bit: the hypervisor can just start EFI directly in 64-bit mode.
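To make the contrast concrete, you can already ask stock QEMU for roughly that kind of guest from the command line -- just a sketch, with placeholder image names and a placeholder OVMF firmware path: KVM acceleration, virtio disk and network, no VGA, serial console on stdio, and 64-bit EFI firmware loaded directly:

    qemu-system-x86_64 \
      -machine q35,accel=kvm -cpu host -smp 2 -m 2G \
      -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
      -drive file=disk.qcow2,if=virtio,format=qcow2 \
      -netdev user,id=net0 -device virtio-net-pci,netdev=net0 \
      -vga none -nographic

The difference is that with stock QEMU all the legacy device and emulation code is still compiled in and reachable even if you never ask for it, which is exactly the attack surface and complexity this kind of project wants gone.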
NEMU only does hardware virtualization on x86-64 machines and only exposes an emulated "machine" type that has the various virtualization-optimized devices and no emulation of real-world devices.
Maybe "production workloads" would be a better descriptor here than "cloud."
It's not clear why this needs to exist, as opposed to working with upstream to produce a single tree that can be built with a slimline profile enabled.