Hacker News
FreeBSD can now boot in 25 milliseconds (theregister.com)
92 points by rodrigo975 on Aug 30, 2023 | hide | past | favorite | 46 comments


> Replacing a sort algorithm in the FreeBSD kernel has improved its boot speed by a factor of 100 or more…

I don't think that's correct. While the optimized sorting algorithm is about 100x faster than the old one, the sort was only a small part of the boot process (~2 ms), so the overall boot speedup is only about 7%.
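(A rough back-of-the-envelope check; the ~27 ms pre-change boot time is an assumption consistent with the ~2 ms figure and the 25 ms result:)

    # Amdahl's-law style check: a 100x win on a ~2 ms phase of a ~27 ms boot.
    old_boot_ms = 27.0               # assumed boot time before the change
    sort_old_ms = 2.0                # approximate time spent in the old sort
    sort_new_ms = sort_old_ms / 100  # the sort itself is ~100x faster
    new_boot_ms = old_boot_ms - sort_old_ms + sort_new_ms
    print(new_boot_ms)                                 # ~25.02 ms
    print((old_boot_ms - new_boot_ms) / old_boot_ms)   # ~0.073, i.e. ~7% overall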


If you fired a bullet at 450m/s at a FreeBSD machine sitting 50 feet away, it would have enough time to boot up and perform operations before being struck and destroyed by the bullet.


For ICBM test warheads they have a system that replaces the nuclear pit with a sensor package that measures the incoming shock front, does signal processing, and gets a radio signal out before the entire thing is crushed.


So if folks truly cared, it would be possible to design a computer system to boot up in a barebones fashion, do meaningful work, and transmit the result, all in under a millisecond?


Certainly, and I imagine that there are plenty that we interact with on a regular basis. For example, door security sensors (like for Ring) will be off nearly all the time. When someone opens the door, they will power up, send a notification, and go dormant again. I don't know if this will happen within a single millisecond, but it certainly could if there was a design need.

For many simple embedded systems that plug into the wall, the "boot" process will happen in under a millisecond. But it will take considerably longer than that for the voltage levels to stabilize.

The real thing which differentiates these from a PC or phone is that they are not running what we would typically call an OS. They may be running an RTOS (a very simplified OS), but for many of them, as soon as they have stable power, they are off to the races (Just like the home-computers of the 80s. Just plug in and go.)


I see you've never worked with microcontrollers.


Okay, then link a microcontroller that can do that in under 1 millisecond.


A 6502 takes 7 cycles from reset to fetch the reset vector and start running user code.

Of course the definition of "usable" work can be argued over, but even a C64 released in the 80s will run your code in less than 10 µs.

1ms is an enormous amount of time. I'm really curious why you would think otherwise
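(For scale, a rough sketch using the usual textbook figures; the ~1 MHz clock is approximately the C64's:)

    # 6502: ~7 clock cycles from reset until it fetches the reset vector and
    # starts executing; at ~1 MHz that's ~7 microseconds.
    cycles = 7
    clock_hz = 1_000_000        # ~1 MHz (the C64 runs ~0.985-1.023 MHz by region)
    reset_us = cycles / clock_hz * 1e6
    print(reset_us)             # ~7 µs to the first fetched instruction
    print(1000 / reset_us)      # a full millisecond is ~143x longer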


Because I understand electronics just enough to know that sending a usable signal in under a millisecond from cold is incredibly tough. That's barely enough time for voltages to stabilize in even quite high end systems.

Like I said, link a microcontroller that can meet that. 'Usable work' can be as little as adding two half-precision floating-point numbers together, and sending it can be done by any possible method.


Does this have a name? A keyword I could search for? A wikipedia page?


Try "The Computer Designed To Die In Microseconds"

https://www.youtube.com/watch?v=FYdAT0v4DHs (Scott Manley)

https://www.lanl.gov/orgs/padwp/pdfs/2nwj2-03.pdf


High Explosive Radio Telemetry


it’s giving UDP energy


With an optical muzzle flash companion to wake it / circumvent the speed of sound, it could even complain about being shot at! https://ieeexplore.ieee.org/document/6685317


Or fire back if mounted on an autonomous drone


50 feet is about 15 meters

450 m/s is 1620 km/h
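(Quick arithmetic sketch using those figures:)

    # Flight time of a 450 m/s bullet over 50 feet vs. the 25 ms boot.
    distance_m = 50 * 0.3048    # 50 feet ≈ 15.24 m
    speed_mps = 450.0
    flight_ms = distance_m / speed_mps * 1000
    print(flight_ms)            # ~33.9 ms in the air
    print(flight_ms - 25.0)     # ~8.9 ms to spare after a 25 ms boot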


now that's a relief


> I believe Linux is at 75-80 ms for the same environment where I have FreeBSD booting in 25 ms.

From the article: Colin Percival on how Linux does in the same environment. Impressive (though it would be good to see other benchmarks to corroborate Colin's).

(Which is an HN comment actually: https://news.ycombinator.com/item?id=37205578.)


I'm sure Colin's results can be reproduced, but it will take some effort. Colin has been doing a lot of work so that Firecracker can boot FreeBSD -- see https://www.daemonology.net/blog/2022-10-18-FreeBSD-Firecrac... for an introduction -- and that is not yet all available in Firecracker "out of the box."
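For a flavour of what driving Firecracker looks like, here is a rough sketch of the REST calls it accepts over its Unix socket (the socket path and image names are placeholders, and booting FreeBSD this way still needs Colin's not-yet-stock patches):

    import json, socket

    API_SOCK = "/tmp/firecracker.socket"   # whatever was passed to firecracker --api-sock

    def fc_put(path, body):
        # Firecracker exposes a small REST API over a Unix domain socket;
        # this sends one PUT request and returns the raw HTTP response.
        payload = json.dumps(body)
        req = ("PUT {} HTTP/1.1\r\nHost: localhost\r\n"
               "Content-Type: application/json\r\n"
               "Content-Length: {}\r\n\r\n{}").format(path, len(payload), payload)
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(API_SOCK)
            s.sendall(req.encode())
            return s.recv(4096).decode()

    fc_put("/machine-config", {"vcpu_count": 1, "mem_size_mib": 128})
    fc_put("/boot-source", {"kernel_image_path": "./freebsd-kernel.bin"})   # placeholder image
    fc_put("/drives/rootfs", {
        "drive_id": "rootfs",
        "path_on_host": "./freebsd-rootfs.img",                             # placeholder image
        "is_root_device": True,
        "is_read_only": False,
    })
    fc_put("/actions", {"action_type": "InstanceStart"})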


That's 48x faster than I can type boot.


That's good. FreeBSD was pretty damn fast already, even on old hardware.

But in the cloud it still takes several more seconds to download the container full of 200 megs of javascript shite from the local docker repo and fire it up before being able to service requests...


Having a tough time understanding the impacts of this work. Fast boot times matter if you boot often. So what’s the use case here?

Hmm, actually I’m not sure I even understand the whole “serverless” thing that much to be honest.

Running FreeBSD as if it’s a “process” on Linux is interesting in a way but - who does this?


> So what’s the use case here?

Spinning up additional "servers" in the cloud when you receive many incoming requests. If requests are held in a queue waiting for the new server to start, a fast boot process will reduce the latency for those requests.

> Running FreeBSD as if it’s a “process” on Linux is interesting in a way but - who does this?

Cloud providers run VMs of different customers on the same physical machine. Hypervisors like Firecracker minimize the attack surface (smaller risk of local privilege escalation) and VM overhead (run more instances on one machine).
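(A toy model of how boot time feeds into request latency; the numbers are made up purely for illustration:)

    # Requests that land while no instance is warm pay the boot time on top of
    # normal handling; a fast boot shrinks both the average and the worst case.
    cold_start_fraction = 0.01      # say 1% of requests hit a cold instance
    handle_ms = 5.0                 # illustrative warm handling time
    for boot_ms in (25, 2000):      # fast microVM boot vs. a slow conventional boot
        avg_ms = handle_ms + cold_start_fraction * boot_ms
        worst_ms = handle_ms + boot_ms
        print(boot_ms, round(avg_ms, 2), worst_ms)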


Ah alright. I’ve seen this done with an Erlang VM on Xen where Erlang is the “OS”. To me that seems a little more natural. But Unix as an “API” can work too though the processes are not as lightweight as Erlang’s.


It was done in the context of running FreeBSD under the Firecracker platform [1]

[1] https://news.ycombinator.com/item?id=33243529


> Hmm, actually I’m not sure I even understand the whole “serverless” thing that much to be honest.

A whole bunch of folks have given up on shared operating systems at a security, access control, data governance, and maintainability (reproducibility) level. At least if your goal is to create software for some purpose other than building server infrastructure.

Some of them have decided that if you're going to want things like database servers and ssh bastions and VPN servers and cache servers etc. to live on dedicated machines or VMs for various good reasons anyway… and you let something else trigger your custom code as needed, as distinct processes with maybe a little hot caching, like in old-school PHP or CGI or inetd (remember that?!), why, now it's looking an awful lot like you don't really have a reason to manage a server for that at all, if someone else can provide some service to trigger your code under certain circumstances, if your code and your new unit of process isolation (a whole damn OS) can start up fast enough.

Now if you pay for managed versions of all those other 3rd party software packages you use, so you don’t need to hire a couple PostgreSQL experts to make upgrades anything but nail-biting, for example—congrats, you’ve fully reached “serverless” in the “cloud”.


So, why reuse a time-sharing, multi-user OS to run your apps? Don't you just need a multi-core-capable POSIX environment?

Arguably POSIX doesn't really matter anymore does it? It's Linux or it doesn't matter in a lot of cases (sadly).

Seems like overkill. Sometimes the optimal solutions are just the available ones, sure, but is no one working on a POSIX exokernel, something IncludeOS-like?


Probably one or more people are working on that. But Linux and FreeBSD exist and have had a whole lot of testing hours put into them, both per se and to test various libraries and programs using them as their OS, so for now, that’s the safe option.


Serverless is a relatively new marketing term (though old by tech-world standards) whereby you "don't need infrastructure". Your code gets executed in virtual machines "small enough" to boot up on demand and run your specific code. You don't need to manage dependencies or OS configs. You define an "entry point" in your chosen programming language and the VM executes it on demand. The VM stays active for a set time; beyond that, a new request requires a "cold start", which can be slow if the OS's boot time is slow.

Also, this is meant to make things easier and cheaper, but it is anything but easier and anything but cheaper compared to non-serverless. It also leads to captive customers. But it drives a whole industry of training, consulting, and selling AWS services.
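For the "entry point" idea, here is a minimal AWS-Lambda-style handler sketch (the event shape and names are illustrative, not any particular provider's exact contract):

    import json

    # The platform imports this module and calls the handler once per incoming
    # event; there is no server loop for you to write or manage.
    def handler(event, context):
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": "hello, " + name}),
        }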


Besides what sibling said, they mention a more general use case towards the end:

> We can see a lot of potential uses for microVMs, not just in cloud scenarios. The ability to run a single program built for one OS on top of a totally different OS, without the overhead of running a full emulated environment all the time, could be very handy in all kinds of situations.


All I know about turtles is that they should be made out of turtles, all the way down.


Seems to be a summary/rehash of https://www.usenix.org/publications/loginonline/freebsd-fire... - which I guess came first, and is a better article.


I wrote the article.

Yes, it's a summary of that, overall, and I linked to that post in my article. That blog post is by the developer who did the work. Of course it came first: it is the source material!

Did you not even notice that I linked to it and recommended reading it?!


Yes, and sorry if that sounded too confrontational. I didn't take a lot of new things from this and I know many people here read the comments first, and in this case I prefer the source material.


Fair enough.

It is a hard line to walk as a journo.

I have discovered, the hard way, that:

* _many_ people can't skim

* but they don't know that they can't

* so they misunderstand and blame the writer

* no matter if you provide links to explain everything, many won't follow any of 'em

* but you _must_ provide citations for every fact or claim

What I try to do is boil down long stuff to shorter punchier summaries, and explain harder tech stuff so non-experts can follow it.

If the reasons for something, or what it means, or why it matters, are not very obvious, I try to explain the whys and wherefores.

But when it's simple, some will complain there's not enough. If it's not simple, some will complain it's too hard. No matter how well referenced, some will argue. Even if it's my direct personal experience, some will tell me I am wrong.


The headline has got a lot of asterisks. On a server that's already running, they can open a new software-only VM in 25ms. So it's more like saying they launch a new application in 25ms. It still sounds great though.


How did bubblesort end up in FreeBSD? Can my laptop boot in 25 milliseconds now?


It was discussed[1] in the previous thread on the topic.

Essentially: a limited stack, input known to be bounded to a reasonable size, and low implementation complexity.

And no, AFAIK, as your PC has to wait for hardware to initialize and such. This is for booting VMs.

[1]: https://news.ycombinator.com/item?id=37205552
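(To see why swapping the sort alone can be a huge win on that step, a rough sketch comparing an O(n²) bubble sort with the built-in O(n log n) sort; the input size is only a guess at the right order of magnitude, and the constants obviously differ from the C in the kernel:)

    import random, timeit

    def bubble_sort(items):
        # Classic O(n^2) bubble sort: fine for tiny inputs, painful once the
        # list grows into the thousands.
        a = list(items)
        n = len(a)
        for i in range(n):
            for j in range(n - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
        return a

    data = [random.random() for _ in range(1000)]
    t_bubble = timeit.timeit(lambda: bubble_sort(data), number=10)
    t_builtin = timeit.timeit(lambda: sorted(data), number=10)
    print("ratio: %.0fx" % (t_bubble / t_builtin))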


I'm fairly sure you can find your answer in this thread from a couple of weeks ago. [0]

[0] https://news.ycombinator.com/item?id=36002574


First line of the article: "Replacing a sort algorithm in the FreeBSD kernel has improved its boot speed by a factor of 100 or more"

This is obviously not true.


It's a bit of clunky writing, but in proper English "its boot speed" relates to the algorithm and not the FreeBSD kernel.


Still doesn't really make sense, as the original algorithm is not 100x faster. The replacement is.


Well, now that you mention it I can see it... Feels very unintuitive to me.


If only Windows optimized boot as well; it often takes ~10 seconds even on the highest-end machines.


See, all that leetcode practice finally pays off /s.

In all seriousness, this is for VM booting, right? Not hardware booting.


I wrote the article.

(I did not write the headline.)

Primary sources are some HN threads, which I linked to, so I am surprised to see it here. That's why I didn't post it.

It's for one very specific VM with one very specific role, which I carefully explained in the article.



