> Replacing a sort algorithm in the FreeBSD kernel has improved its boot speed by a factor of 100 or more…
I don't think that's correct. While the optimized sorting algorithm is 100x faster than the old one, sorting was only a small part of the boot process (2ms), so the overall boot speedup is only about 7%.
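To make the arithmetic concrete, here is an Amdahl's-law-style estimate. Only the 2ms and 7% figures come from the comment above; the ~30ms total boot time is my assumption, chosen because it makes those two figures consistent:

```python
# Illustrative estimate: speeding up one small part of a process by 100x
# only helps in proportion to that part's share of the total.
total_ms = 30.0    # assumed total boot time (not from the comment)
sort_ms = 2.0      # time spent sorting (from the comment)
speedup = 100.0    # how much faster the new sort is

new_total_ms = total_ms - sort_ms + sort_ms / speedup
overall_gain = (total_ms / new_total_ms - 1) * 100
print(f"overall boot speedup: {overall_gain:.1f}%")  # about 7%
```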
If you fired a bullet at 450m/s at a FreeBSD machine sitting 50 feet away, it would have enough time to boot up and perform operations before being struck and destroyed by the bullet.
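For the curious, the flight time works out to a few tens of milliseconds; it's just a unit conversion:

```python
# Bullet flight time over 50 feet at 450 m/s.
distance_m = 50 * 0.3048              # 50 feet in metres
flight_time_ms = distance_m / 450.0 * 1000
print(f"{flight_time_ms:.1f} ms of flight time")  # ~33.9 ms
```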
For ICBM test warheads they have a system that replaces the nuclear pit with a sensor package that measures the incoming shockfront, does signal processing and gets out a radio signal before the entire thing is crushed.
So if folks truly cared, would it be possible to design a computer system that boots up, in a barebones fashion, then does meaningful work and transmits it, all in under a millisecond?
Certainly, and I imagine that there are plenty that we interact with on a regular basis. For example, door security sensors (like for Ring) will be off nearly all the time. When someone opens the door, they will power up, send a notification, and go dormant again. I don't know if this will happen within a single millisecond, but it certainly could if there was a design need.
For many simple embedded systems that plug into the wall, the "boot" process will happen in under a millisecond. But it will take considerably longer than that for the voltage levels to stabilize.
The real thing which differentiates these from a PC or phone is that they are not running what we would typically call an OS. They may be running an RTOS (a very simplified OS), but for many of them, as soon as they have stable power, they are off to the races (just like the home computers of the 80s: plug in and go).
Because I understand electronics just enough to know that sending a usable signal in under a millisecond from cold is incredibly tough. That's barely enough time for voltages to stabilize in even quite high end systems.
Like I said, link a microcontroller that can meet that. 'Usable work' can be as little as adding two half-precision floating point numbers together, and sending it can be any possible method.
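For what it's worth, the "usable work" half of that challenge is trivial once a core is running. Here's what adding two half-precision numbers and packing the result for transmission looks like; shown in Python via `struct`'s IEEE 754 binary16 `'e'` format rather than actual MCU firmware:

```python
import struct

# Pack two half-precision floats as a MCU might receive them over a wire,
# add them, and re-pack the 2-byte result, ready to transmit.
payload = struct.pack('<ee', 1.5, 2.25)     # 4 bytes: two binary16 values
x, y = struct.unpack('<ee', payload)
result = struct.pack('<e', x + y)           # 2 bytes: the binary16 sum
print(struct.unpack('<e', result)[0])       # 3.75 (exactly representable)
```

The values 1.5, 2.25, and 3.75 are all exactly representable in binary16, so no rounding muddies the example.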
I'm sure Colin's results can be reproduced, but it will take some effort. Colin has been doing a lot of work so that Firecracker can boot FreeBSD -- see https://www.daemonology.net/blog/2022-10-18-FreeBSD-Firecrac... for an introduction -- and that is not yet all available in Firecracker "out of the box."
That's good. FreeBSD was pretty damn fast already, even on old hardware.
But in the cloud it still takes several more seconds to download the container full of 200 megs of javascript shite from the local docker repo and fire it up before being able to service requests...
Spinning up additional "servers" in the cloud when you receive many incoming requests. If requests are held in a queue waiting for the new server to start, a fast boot process will reduce the latency for those requests.
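As a rough illustration (all numbers here are made up), the boot time shows up directly in the latency of requests that arrive during a scale-up, because a queued request waits for the new server's boot plus its own service time:

```python
# Hypothetical numbers: a queued request's latency during scale-up is
# roughly the new server's boot time plus the request's service time.
service_ms = 5
for boot_ms in (2000, 25):   # slow-booting VM vs. fast microVM
    print(f"boot {boot_ms} ms -> queued request latency ~{boot_ms + service_ms} ms")
```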
> Running FreeBSD as if it’s a “process” on Linux is interesting in a way but - who does this?
Cloud providers run VMs of different customers on the same physical machine. Hypervisors like Firecracker minimize the attack surface (smaller risk of local privilege escalation) and VM overhead (run more instances on one machine).
Ah alright. I’ve seen this done with an Erlang VM on Xen where Erlang is the “OS”. To me that seems a little more natural. But Unix as an “API” can work too though the processes are not as lightweight as Erlang’s.
> Hmm, actually I’m not sure I even understand the whole “serverless” thing that much to be honest.
A whole bunch of folks have given up on shared operating systems at a security, access control, data governance, and maintainability (reproducibility) level. At least if your goal is to create software for some purpose other than building server infrastructure.
Some of them have decided that if you’re going to want things like database servers and ssh bastions and VPN servers and cache servers etc. to live on dedicated machines or VMs for various good reasons anyway… and you let something else trigger your custom code as needed, as distinct processes with maybe a little hot caching, like in old school PHP or CGI or Inetd (remember that?!), why, now it’s looking an awful lot like you don’t really have a reason to manage a server for that at all, if someone else can provide some service to trigger your code under certain circumstances, if your code and your new unit of process isolation (a whole damn OS) can start up fast enough.
Now if you pay for managed versions of all those other 3rd party software packages you use, so you don’t need to hire a couple PostgreSQL experts to make upgrades anything but nail-biting, for example—congrats, you’ve fully reached “serverless” in the “cloud”.
Probably one or more people are working on that. But Linux and FreeBSD exist and have had a whole lot of testing hours put into them, both per se and to test various libraries and programs using them as their OS, so for now, that’s the safe option.
Serverless is a relatively new marketing term, old by tech world standards, whereby you "don't need infrastructure". Your code gets executed in a sort of virtual machine "small enough" to boot up on demand and run your specific code. You don't need to manage dependencies or OS configs. You define an "entry point" in your chosen programming language and the VM executes it on demand. The VM stays active for a set time. Beyond that time a new request requires a "cold start" - this can be slow if the OS's boot time is slow.
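A sketch of what that "entry point" typically looks like. The shape below mirrors the AWS Lambda Python handler convention; the event fields are hypothetical:

```python
# Minimal serverless-style handler: the platform calls this function once
# per request; there is no server loop anywhere in the user's code.
def handler(event, context=None):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}

# The platform would invoke it like this on each incoming request:
print(handler({"name": "FreeBSD"}))
```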
Also this is meant to make things easier and cheaper. But it is anything but easier and anything but cheaper compared to non-serverless. It also leads to captive customers. But it drives a whole industry of training, consulting and selling AWS services.
Besides what sibling said, they mention a more general use case towards the end:
> We can see a lot of potential uses for microVMs, not just in cloud scenarios. The ability to run a single program built for one OS on top of a totally different OS, without the overhead of running a full emulated environment all the time, could be very handy in all kinds of situations.
Yes, it's a summary of that, overall, and I linked to that post in my article. That blog post is by the developer who did the work. Of course it came first: it is the source material!
Did you not even notice that I linked to it and recommended reading it?!
Yes, and sorry if that sounded too confrontational. I didn't take a lot of new things from this and I know many people here read the comments first, and in this case I prefer the source material.
* no matter if you provide links to explain everything, many won't follow any of 'em
* but you _must_ provide citations for every fact or claim
What I try to do is boil down long stuff to shorter punchier summaries, and explain harder tech stuff so non-experts can follow it.
If the reasons for something, or what it means, or why it matters, are not very obvious, I try to explain the whys and wherefores.
But when it's simple, some will complain there's not enough. If it's not simple, some will complain it's too hard. No matter how well referenced, some will argue. Even if it's my direct personal experience, some will tell me I am wrong.
The headline has got a lot of asterisks. On a server that's already running, they can open a new software-only VM in 25ms. So it's more like saying they launch a new application in 25ms. It still sounds great though.