Systemd mounted efivarfs read-write, allowing motherboard bricking via 'rm'
Essentially, systemd defaulted to a configuration where the computer's motherboard could be permanently destroyed by removing a 'file' from the command line. The bug reporter argued that this was unduly dangerous, but the systemd developers thought that systemd was working as intended.
> but the developers thought that it was working as intended
Really? Is that evidenced by Lennart's response to this, which stated "The ability to hose a system is certainly reason enought to make sure it's well protected and only writable to root."[1]? I think it implies the opposite.
It happens rarely, but actually I totally agree with Lennart on this one. Maybe not for the very same ultimate reasons, but nevertheless, I agree.
Being able to brick hardware through a very often-used action (unlinking a filesystem entry) throws us back into the times when one could damage display devices beyond repair by feeding them scan frequencies outside their operational range, or destroy hard disks by slamming the heads into a parking position outside of the mechanical range.
We left those days behind us some 20 years ago: Devices got smart enough to detect potentially dangerous inputs and execute failsafe behaviour. It's just reasonable to expect this from system firmware.
When talking about (U)EFI variables we're not talking about firmware updates, which are kind of a special action (and even for firmware updates it's unacceptable that a corrupted update bricks a system¹). Manipulating (U)EFI variables is considered a perfectly normal day-to-day operation and the OS should not have to care about sanity checks and validity at boot time. (U)EFI is the owner and interpreter of these variables, so it is absolutely reasonable to expect the firmware to have safeguards and failsafe values in place.
IMHO (U)EFI is a big mess, a bloated mishap of a system bootstrap loader. And I'm totally against trying to work around all the br0kenness at higher levels. The more often systems brick due to the very fundamentals of (U)EFI being so misguided, the sooner we'll move on to something that's not overengineered.
----
¹: Just to make the point: When we developed the bootstrap loader for our swept laser product we implemented several safeguards to make it unbrickable. It's perfectly fine to cut the power or reset the device in the middle of a firmware upgrade. It safely recovers from that. Heck, the firmware in flash memory could become damaged by cosmic radiation, the bootloader would detect it and reinstall it from a backup copy in secondary storage.
There's a certain rich irony that Lennart is being flogged for following the Unix tradition (root can do anything, including blow up their monitor with bad X configs); that his detractors are suggesting userland tools manage their hardware (normally the job of the kernel in a Unix system); that systemd ought to be expanded to manage hardware (after years of complaining it's too big), presumably by adding whitelisting capabilities and a database of known-good/known-bad UEFI implementations.
I guess at least it demonstrates how utterly unhinged some people become when his name is attached to anything.
Actually, Lennart working for Red Hat is what gave systemd serious traction. When it comes to 21st-century software, there are numerous modern init systems around (dependency-driven, fast) that largely predate systemd. If any of those systems had been widely adopted by now, it would probably be far ahead of systemd in feature set.
OpenRC is what got closest to being the replacement for SysV init, just by its qualities and market share. Other, nicer systems exist too, but they never found wide adoption because the main distributions didn't let go of SysV.
Now Lennart claims all the bragging rights, but people who used modern init systems before it was cool know better (there are a lot of Lennart opponents among those, BTW).
I completely disagree. I looked at the patch above, and I personally don't like it. Mounting efivars rw is akin to mounting /boot rw by default.
A sane Linux distro will mount it ro and switch to rw whenever needed. Defaulting to rw efivars is, excuse the language, stupid.
I've done a fair share of efi debugging even removing some of the variables that the kernel will now protect you from breaking.
If the issue is that users should be able to remount efivars as rw whenever needed then that should be addressed, not prevent you from doing stuff to it because there is a rogue init system doing crazy stuff.
EDIT: BTW, I don't think systemd does anything besides write to the various Boot* variables, but I may be wrong. I don't see why that can't be addressed with a remount. If you replace the boot.efi you still have to remount the EFI partition anyway.
Matthew may be right that there is an issue that needs to be addressed, but in one of his tweets he basically says the kernel should fix it because tooling isn't going to and BIOSes suck. Well, maybe tooling should be forced to fix it.
or from the issue:
Matthew-Jemielity commented 24 days ago
What needs efivars mounted at all anyway? So far I've seen:
grub
systemctl --firmware-setup reboot
efibootmgr
Since those likely need superuser, couldn't they handle (un)mounting it themselves?
@annejan
annejan commented 23 days ago
As long as distribution that are aimed at consumers remount it ro and on updating kernels wrap grub with remount this is a complete non-issue.
> I personally don't like it. Mounting efivars rw is akin to mounting boot rw by default.
No, it's not the same. Mounting `/boot` rw by default does not put your system in danger of getting damaged beyond repair. If you hose the boot partition you can always start a recovery system (a live Linux or similar) to repair the damage.
But if deleting efivars renders a system inoperable on a firmware level you're essentially SOL, save for rewriting the contents of the system firmware flash with an external programmer and a clean image. That is an absolutely unacceptable situation. The year is 2016, and hosing a firmware by writing malformed values into the firmware API is, simply put, a software vulnerability that allows an attacker to permanently DoS a system. As such this is a security issue that must be fixed where the security issue happens. And in the case of efivars the issue is that certain input is not properly validated and/or sanitized. If a system firmware can not properly start with certain variables being unset, removed, or set to invalid values, it should be an implementation requirement to validate input on such variables before executing the change.
> Defaulting to rw efivars is, excuse the language, stupid.
It probably is. But it's not the responsibility of the OS to sanitize values that are not intended to be used by the OS. efivars are intended to be used by (U)EFI, and hence it's the (U)EFI implementation's task to properly sanitize access to them.
Essentially we're talking Bobby Tables here, just with a different API.
I remember the days when a single virus destroyed practically every motherboard in my local government. They had to throw away every machine and replace it, costing the Swedish government several million in the early 2000s (https://en.wikipedia.org/wiki/CIH_%28computer_virus%29), and according to Wikipedia it caused around $1 billion in commercial damage globally.
A virus should not be able to destroy the system BIOS. The problem with efivars is illuminating a vulnerability, not a feature for "doing stuff". I would expect to see this used in the wild if left unfixed, especially the next time we hear about a remote vulnerability that permits arbitrary code execution.
I can see the point that you shouldn't make otherwise-undesirable OS changes to accommodate incompetent hardware designers. The thing is, I'm not convinced that this is undesirable: mounting efivars read-only by default seems like a completely reasonable thing to do, even absent this particular hardware issue. BIOS settings are important and rarely need to be changed, and anything that doesn't have the privileges to unlock efivars probably shouldn't be messing with them anyway.
> mounting efivars read-only by default seems like a completely reasonable thing to do, even absent this particular hardware issue.
It's perfectly reasonable to ro-mount efivars. But doing so must not be the workaround to fix a security issue (and yes, this is a security issue) in (certain implementations of) (U)EFI. Just to make clear why this is a security issue: security rests on three pillars:
- availability
- confidentiality
- authenticity
Rendering a system unusable (DoS-ing it) is an attack on availability. And speaking of security: if the goal is sabotage and causing large financial damage, then being able to permanently brick a system in case of a privilege escalation (there's nothing stopping UID=0 from remounting efivars rw) is pretty bad. And no, the implemented fixes in the efivars kernel code don't help, because an attacker can still load a custom kernel module which talks to the respective efivars code directly, circumventing sanity checks (or talks directly to (U)EFI without using the efivars code at all).
Aside: I misread "swept laser" as "sweet laser" and was really bummed you didn't include a link to the Kick Starter for whatever project that would have been.
... so running "rm -rf /" as root should brick your motherboard because it's the responsibility of the motherboard manufacturer to protect against this. That's all fine and dandy in an idealized world, but in the "real world" there are going to be motherboard manufacturers that play fast and loose with these things.
No, you are missing the context of the earlier paragraph, which says "Well, there are tools that actually want to write it." so it needs to be accessible in some way. Making it truly read-only is not really feasible, and goes against the Unix philosophy of "root can do anything." That's not to say that we shouldn't make it harder for root to do some things, such as brick your system. In the end, the fix still doesn't prevent root from bricking a system when the firmware is badly coded, it just makes it a bit harder. Mounting read-only by default, or defaulting all files to immutable until changed by chattr, are not hard hurdles for root to overcome, but they do make it harder than it would be without them, which in this case is a good thing.
> ... so running "rm -rf /" as root should brick your motherboard because it's the responsibility of the motherboard manufacturer to protect against this.
You are assuming "only writable by root" means "root can write without any restrictions" which is a fairly uncharitable reading, and requires assumptions about his intent which are not evident.
I could make a statement such as "cars in the united states can only be legally driven by people of an appropriate age" and you could assume I meant that's all that needs to apply and start calling out my statement as wrong, or you could assume I was aware of the additional requirement of a driver's license, or you could ask me to clarify my point of view. I just don't believe the first option is conducive to useful discussion, nor do I think it's appropriate to use assumed information in a negative way towards a third party.
Edit: s/disparage/use assumed information in a negative way/ for lack of better phrasing coming to mind. The statement wasn't really disparaging, just an uncharitable interpretation, so I don't want to overstate that.
> goes against the Unix philosophy of "root can do anything."
I don't think that's a useful perspective here. This is not a feature - nothing should ever want to permanently brick a motherboard, there's no use case for that. There's no benefit to allowing root to do this. The OS is supposed to abstract the hardware in a way that it can be safely operated.
I think you are mistaken about some of the details here. We are talking about root being able to write to efivarfs, which is needed for some valid reasons. Unfortunately, there are firmware implementations out there that don't handle some variables being overridden/wiped, and it bricks the system. So, the problem is bad firmware, exacerbated by the choice to represent EFI variables as a filesystem and by what happens when you do a "rm -rf /" (in the simplest example). So, in my eyes, root should be able to write to efivarfs when needed, but we should mitigate this problem by making it a little harder (but not impossible) for root to write to this filesystem, so it's hard to accidentally brick your system. It's not possible to allow root to write to EFI while still ensuring that it can't brick the machine, because the responsibility for that capability lies with the firmware developers.
My understanding is that the amount of software that wants/needs to write to efivarfs is fairly small[1] compared to the amount of software that is normally run with root privileges.
The fact is that 'rm -rf /' is a common mistake, both at the command line (i.e. manually typing) and in scripts that don't adequately protect against things like missing variables. E.g.:
    MY_DIR=
    rm -rf "$MY_DIR/"
The fact that this could brick a system is a big deal. Pushing blame around doesn't do anybody any good.
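For scripts, the classic guard against exactly this class of mistake is the shell's `${VAR:?}` expansion, which aborts instead of letting an empty variable turn the command into `rm -rf /`. A minimal sketch (the variable names are illustrative):

```shell
# Guard a cleanup step against the unset-variable trap: ${VAR:?} makes
# the shell abort with an error when VAR is unset or empty, so the rm
# below can never expand to "rm -rf /".
SCRATCH=$(mktemp -d)                      # stands in for the script's work dir
rm -rf -- "${SCRATCH:?SCRATCH not set}/"  # safe: SCRATCH is non-empty here
[ -d "$SCRATCH" ] || echo "scratch dir cleaned up"

# With the variable empty, the guarded rm never runs at all -- the
# subshell aborts before rm is even invoked:
EMPTY=
( rm -rf -- "${EMPTY:?refusing: target is empty}/" ) 2>/dev/null \
  || echo "guarded rm refused"
```

(`set -u`, which treats any reference to an unset variable as an error, is a good belt-and-braces addition on top of this.)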
[1] I realize that boot loaders like grub are everywhere and probably need to write to efivarfs, but that's still a data point of 1. Would it be that difficult for grub and its related scripts to remount the filesystem read-write when they need to perform an operation? I'm sure it's only been a couple of years since efivarfs functionality was even added to grub.
I understand and agree with all of this, I'm just not sure how it follows from what I wrote. The problem has a fix scheduled (what this submission is about), and in a fairly short period of time (~30 days for a fairly esoteric[1] bug).
To my mind there are two places to fix this: in the kernel, for a real mitigation technique that helps "solve" the problem, and in the distros, for quick fixes and hacks, or backports of the kernel fix, as necessary. Systemd pushing a fix of their own 1) only affects distros using systemd, while this problem affects all recent distros that use efivarfs, 2) probably won't get picked up by distros immediately except as a backported fix anyway, as I doubt most of them push new versions of something as integral as systemd every time a new version is out, at least not without a lot of testing, and 3) would not have been a good fix, and would have required the utilities that still needed access to actually remount a filesystem.
1: Triggering this problem is not as easy as what you (and I) wrote in most instances, as / has special consideration in rm, and generally requires the "--no-preserve-root" flag.
The kernel isn't in charge of mount options, that's the job of userspace (generally an init system). In any case, you would end up needing a way to mount it RW to accommodate legit uses of the non-broken EFI vars, at which point you're back where you started. This fix addresses it by having the kernel identify and protect only broken (or at least, non-standard) EFI vars.
> In any case, you would end up needing a way to mount it RW to accommodate legit uses of the non-broken EFI vars, at which point you're back where you started.
Not necessarily. Leaving it "wide open" for anything to accidentally write to it all of the time vs. just mounting it readwrite when you actually need to write to it are two different risk profiles.
It's firmware. It's supposed to brick your system if you ruin it. Firmware is basically hardware. It's not a multi-user protected-mode operating system.
It's the responsibility of the OS to make sure it doesn't ruin firmware.
This wasn't removing firmware as in the actual boot code for the motherboard. It was just clearing the list of what drives to attempt to load an OS from. That list is explicitly intended to be accessible to and modifiable by operating systems, to provide a better user experience than the old method of the user having to manually change BIOS settings.
The standard practice for PCs has always been that firmware configuration settings can be cleared (through a jumper or by pulling the battery) to reset the system to its factory state, forcing it to fall back to its conservative and safe defaults. Some systems have apparently forgotten to have defaults. Their firmware is already broken and afflicted with a major bug even if you avoid triggering it in this particular manner.
The EFI variables are just that: variables, not an entire firmware. I think this is where your confusion is. What is happening here is that these variables are presented to the user as a filesystem under /sys. If you run 'rm -rf /' (or even 'rm -rf /sys'), you will start "deleting" these variables (which I assume either sets them to a null-equivalent value, or somehow removes them entirely from wherever they are stored). It sounds like:
    rm /sys/efivar/MyVariable

is roughly equivalent to:

    var obj = {MyVariable: 1};
    obj.MyVariable = null;

or:

    var obj = {MyVariable: 1};
    delete obj.MyVariable;
This bricks the firmware, because some of these variables end up being required. It's on the firmware maker / motherboard manufacturer to make sure that there is a way to recover from this rather than having the firmware fail to startup because some variables are missing.
My understanding is that firmware made to spec would not brick over the EFI variables getting wiped. The motherboards that are encountering these issues are running firmware that cuts corners. Unfortunately, cutting corners and ignoring specs is nothing new for hardware manufacturers, who are more concerned with the manufacture of the physical devices and usually put only the minimum amount of effort into getting any sort of software component (firmware, OS device drivers, etc.) running.
Why is it the OS's fault if the firmware can't reset itself to factory defaults? You're not modifying the firmware image, you're modifying a scratch area exposed by the firmware for the express purpose of the OS (ab)using it in any way it deems necessary.
Agreed, but that's almost a non-sequitur. The systemd developers are not the firmware developers. Additionally, while preferably we would have better firmware that couldn't be bricked in this way, we live in a world where this is often not the case, so the problem does need to be mitigated in some manner.
Bingo! While this prevents a user from accidentally hosing their system due to a bad firmware implementation, it still does nothing to address potential malware. It's good to have a patch that helps, but I think any blame directed at systemd or the kernel devs really needs to be redirected at poor firmware engineering efforts. Mounting efivars as ro is a bandaid, this patch is a bandaid - the real damage is yet to be done!
The amount of software that requires write access to EFI variables (as root) is pretty small, so leaving it open all of the time in an environment where the firmware writer could have bugs that cause this seems like it's not necessarily the best course of action.
Root is certainly capable, traditionally, of completely hosing all levels of the software stack. Usually it's the OS's job to protect even root from being able to hose hardware/firmware, especially by easy-to-make mistake.
Yes, which is what we have here. My point is that I don't think there was ever any indication that the systemd developers thought there wasn't a problem to be fixed. That the fix was pushed farther up-stack so it was more comprehensive is not a bad thing in my eyes.
Put another way, systemd mounting efivarfs read-only doesn't necessarily prevent you from bricking your machine by deleting files within that file system, it protects only those using systemd. The problem stems from the nature of efivarfs, and that it was implemented as a file system and not some other interface. Making it default to something more sane is a better (but by no means the best) solution, and that ideally would happen within the source, which is the kernel.
I don't disagree with this, but I still think it's irresponsible of systemd to claim it's a kernel problem but not take steps to help people not brick their systems before it's fixed upstream.
What, like his second comment (which was just a few minutes after his first, IIRC), which in its entirety is '(note that you can remount it readonly at boot, simply by adding an entry for it into /etc/fstab, that is marked "ro")'? Keep in mind we've had a single month since this bug was submitted. Systemd had a release 11 days ago according to their repo[1], but if that wasn't expected to be picked up and used by distros (which it might not be, depending on the distro and what changes have been implemented since the distro shipped/updated systemd), then the appropriate places to fix this are in the kernel and/or as a patch to the distro package for systemd.
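For reference, the fstab entry Lennart is pointing at is a one-liner; something like this (the options beyond `ro` are just the usual hardening flags, not part of his comment):

```
# /etc/fstab
efivarfs  /sys/firmware/efi/efivars  efivarfs  ro,nosuid,nodev,noexec  0 0
```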
Really, I think the two appropriate places for this fix are the kernel, and if that's not expected to be rolled out soon, as a patch to systemd to mount it read-only by the distro shipping systemd. Systemd shipping the fix would only really help the small group of people that install systemd from source during the short window from 11 days ago until the next kernel is available (or that choose to run an older kernel), and makes the whole efivarfs situation a bit more confusing by leaving it read-only and immutable for the future.
Not everything is protected. For example, if the user is using chmod -R to change permissions to 777 and makes a typo, passing the main directory / instead of ./ or ~/, then the whole filesystem is 777 and things stop running due to wrong permissions.
I was going from memory, and didn't reread Lennart's responses in the thread. But looking at it again now, I still think it's an accurate summary. Lennart's line after you quoted is "But beyond that: root can do anything really."
I read this as Lennart saying that when root issues an 'rm' in efivarfs, the variable should be removed even if this renders the motherboard unusable without physical repairs. What's your interpretation?
I've edited to fix my terrible punctuation, and to make it clear that 'it' refers to 'systemd', and to add a link to MJG's response on Twitter. I can edit further if you have a way to make it clearer.
My interpretation, stemming from his statement that it is a problem to brick machines and his statement that some programs need write access, is that something needs to be put in place, but we can't unilaterally restrict root (which would be very un-Unixy to do). I think he was fairly ambiguous on the details of how to fix it (possibly because he wasn't sure of the best path to take), and included some fairly blanket statements about policy/belief that allowed people to interpret his statements however they were inclined. Unfortunately, due to the polarizing effect of systemd (and Lennart's prior projects, and possibly Lennart himself), there are plenty of people inclined to believe he doesn't care.
It shouldn't be mounted rw, and it should only be exposed to root. It shouldn't be mounted by default. It shouldn't even be a bloody filesystem. EFI variables have so much metadata and other information attached to them that trying to wedge them into the filesystem is just asking for problems like this. Were the maintainers smart, they'd dump the filesystem altogether, because it's a nonsensical way of presenting this data.
It should be mounted rw because existing userspace expects it to be rw.
> It shouldn't be mounted by default.
It should be mounted by default because it's information that's relevant to various pieces of userspace.
> It shouldn't even be a bloody filesystem.
With hindsight, it should absolutely not have been a filesystem. There's very little metadata associated with EFI variables and the convenience of exposing this in a way that can be handled using read and write is huge, but real-world firmware turns out to be fragile enough that this was a mistake. But, in the absence of a time machine, there's very little I can do to fix that now.
Start pushing for its deprecation. Devfs existed before it was replaced by udev. Keeping a broken system around just makes it harder to fix it. Interfaces on other systems, like the BSDs' sysctl show that you don't need to make everything a pseudofilesystem to make variables possible to change. If someone really really wants their efi variables as a filesystem, they could always write a fuse driver or the like.
> It should be mounted rw because existing userspace expects it to be rw.
Can you explain this a bit? I'm not familiar with the particulars of firmware, but I'm having a hard time imagining why any userspace program would expect a firmware partition to be writable. Even if there are any, certainly I'd have a hard time believing that they would need it to be mounted and writable all the time.
It's not a firmware partition, it's an interface to the nvram variables provided by the firmware. There's a bunch of reasons why these should be writable - you need to be able to modify some of them to do things like reboot into the firmware settings UI, they're used for secure boot key management, you want to be able to choose which OS to boot into on the next reboot or which network interface to PXE off, that kind of thing.
As for why userspace couldn't remount it itself - it could. It doesn't. Changing the behaviour of systemd without changing the behaviour of the rest of userspace would result in userspace being broken, and making that kind of incompatible change is annoying - especially when fixing it in the kernel allows us to avoid that breakage.
> Can you explain this a bit? I'm not familiar with the particulars of firmware, but I'm having a hard time imagining why any userspace program would expect a firmware partition to be writable.
Best example of why you need to have it RW: if uefi is in fast boot, the only way to actually enter uefi is to boot an OS and have the OS set the uefi variables so that it goes into the firmware screen on next restart. This is (on Linux) done by changing something in that virtual filesystem.
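Concretely, the variable involved is OsIndications. An efivarfs file is four bytes of attribute flags followed by the variable's payload; for "boot to firmware UI" the payload is a little-endian UINT64 with bit 0 set. A sketch of what that 12-byte blob looks like (the actual write needs root and real UEFI hardware, so it's left as a comment):

```shell
# Build the 12-byte efivarfs blob that requests the firmware setup UI
# on next boot: attributes 0x07 (NON_VOLATILE | BOOTSERVICE_ACCESS |
# RUNTIME_ACCESS) followed by OsIndications = 1 (BOOT_TO_FW_UI).
blob=$(printf '\007\000\000\000\001\000\000\000\000\000\000\000' \
       | od -An -tx1 | tr -d ' \n')
echo "$blob"   # prints 070000000100000000000000
# As root on a real system, those raw bytes would be written to:
#   /sys/firmware/efi/efivars/OsIndications-8be4df61-93ca-11d2-aa0d-00e098032b8c
```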
> certainly I'd have a hard time believing that they would need it to be mounted and writable all the time.
You could probably remount it, but that also means that you all of a sudden have concurrency problems with that operation. So, not really ideal.
For this particular example, wouldn't this be best solved with a separate runlevel for "reboot into UEFI"? Stop all daemons, remount all normal filesystems read-only, remount UEFI RW, write variable and reboot.
I'm curious, what kind of userspace application needs r/w access to EFI vars?
I would think a) these are mostly system tools like boot managers, and b) these tools need root (or setuid root) anyway, so why can't they just mount it themselves temporarily?
Edit: It seems it is mostly grub-install, efibootmgr, and `systemctl reboot --firmware` that need this mounted rw. The first two aren't something that a casual user uses very often, and if someone does, a "Filesystem is mounted read-only" message will point them in the right direction. The latter is part of systemd and could easily be changed to mount efivarfs itself, no third party involved.
They could remount it read-write, but they don't. If systemd had made a unilateral change that broke existing userspace people would have been unhappy for a different reason. Doing this in-kernel means that the issue could be fixed without breaking existing userspace, so it's clearly the better solution.
The EFI spec is a horrible, disastrous mess, but in theory you could be reading and writing efivars at runtime to dynamically configure boots - i.e., "reboot to safe mode" would set the efivar next-boot parameter to your initramfs fallback. It would not be complicated to implement PAM/polkit support for unprivileged users to set such things without authenticating as root.
It does not change the fact the fault lies with shitty proprietary UEFI implementations, and nobody writing free software is at fault here.
Specifically, the way a mental model of a hierarchy is broken by mounting a higher-order resource (UEFI variables) as a subordinate of a file system that is itself a subordinate of the OS.
UEFI vars are just hardware resources. Mapping them as a file system object is just unnatural and, yes, stupid.
Trying to use a permission model ("only root can do it") overlooks the real problem: users do not expect higher-order objects to be mapped as subordinates of the file system.
When you delete from the file system, you expect objects to be deleted from the disk - not UEFI variables to be altered or deleted! And because the user does not expect such behavior, there's a good chance she/he will override warnings and go ahead with the operation expecting only file system objects to be affected.
This is "everything is a file" taken a bridge too far.
Good lord, presenting distinct data points with content and a name as a filesystem mount is a "problem".
The EFIVars table is stored in mainboard flash as... file(s). Probably only one, since the firmware isn't going to be using a filesystem.
But it is still a map of key-value pairs. That maps perfectly fine to a filesystem.
Nothing about the "brick your laptop with rm -rf /" is the fault of any free software component or philosophy. By the specification of UEFI itself the efivar table is for transient data storage between the OS and the firmware. It is supposed to be mutable, removable; you can do whatever you want to it as an OS and the firmware can do whatever it wants as well, and neither's behavior should stop the world.
All this is, is a demonstration of why proprietary firmware is bad, and of how, once again, the free software community needs to work around broken proprietary crap that cannot obey its own design documents.
If you want to soapbox about how everything would be easier if we had to link a library and access all OS level data through some 50k command API instead of through files... I'm not sure you are going to actually find an instance where people are improperly treating data as files, because almost anything can be treated as a file. You can implement it poorly, but if it is data and it has organization you can put it in a filesystem.
Where does Linux promise that files are bits on a disk? As a user I certainly don't expect that. Perhaps you have a problem with the name "file" but the abstraction itself still seems useful. (And yet I do find it quite odd when I have to do something like `echo "TPAD" > /proc/acpi/wakeup` to disable wake-on-trackpad.) That said I don't disagree with you that UEFI variables should not be delete-able, but there are many files on Linux that you can't delete.
It is a file system, consisting of file system objects, like files and directories.
Already exposing processes as files is an abstraction. It somewhat works because you can imagine the file representation being maintained by the process. But it is an abstraction, because a process is not a file.
But what is more important: a file system is a hierarchy. At the root is the most fundamental object. Each level has subordinate objects. That's the model you expect.
Having UEFI variables mounted as files is a surprising loop back to something even more fundamental than the OS itself: the firmware of the physical computer. It's a breach of the mental model.
It breaks one of the most fundamental principles that should be followed in man-machine interaction: The principle of least surprise.
I have a machine. I have installed an operating system on it. The OS manages several disks. On the disks the OS manages file systems. I expect the files of that system to be managed by the OS.
I do not expect that regular file system actions have effect outside the hierarchy of the directory on which I perform the actions. Specifically I do not expect files on that system to manage the physical computer.
> Already exposing processes as files is an abstraction.
No. The file system is the abstraction. Adding /proc onto it is a use of that abstraction.
There are two basic extreme positions; you're adopting one, and your parent is adopting something closer to the other.
a. The filesystem only exposes filesystems actually on disk, mapped to some hierarchy. As you say, "On the disks the OS manages file systems."
b. The filesystem is (roughly) a hierarchical container of named binary blobs (called "files") with some defined associated metadata, such as permissions.
While you can adopt (a), and that's fine, some of us (myself included) see a lot of value in (b). The biggest problem with only exposing "real" file-storing FSes in the file hierarchy is that it leaves you with a ton of questions about how to expose all the other things. Taking the stance that we're only going to expose "real" files in the file hierarchy leaves us with several classes of objects that aren't files-on-disk, and you need to name them s.t. the user can interact with them. It is certainly possible to expose each different type of thing in a completely separate namespace. You'll probably also need to be able to associate permissions with those objects¹, and so now you've got a named, ACL'd list or hierarchy of objects, and it's starting to look a lot like a filesystem. You now also need another set of tooling to work with each of these classes of objects. You need another set of syscalls for each of these objects.
The great thing about having a unified file hierarchy in the (b) abstraction is that tooling works on all of these different classes of objects alike. It's really just the "CRUD" idiom, and normally it allows things to interoperate quite smoothly. I can write a bash script that draws a progress bar of my battery, and it requires no knowledge other than where in the file hierarchy the battery is.
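That battery script really is a few lines of shell. A minimal sketch, assuming a Linux machine where the battery shows up at `/sys/class/power_supply/BAT0` (the device name varies by vendor):

```shell
#!/bin/sh
# Read battery charge from sysfs and draw a 20-character progress bar.
# BAT0 is an assumption; the actual name differs across machines.
BAT=/sys/class/power_supply/BAT0/capacity

draw_bar() {
    pct=$1
    filled=$((pct * 20 / 100))
    i=0; bar=""
    while [ "$i" -lt 20 ]; do
        if [ "$i" -lt "$filled" ]; then bar="${bar}#"; else bar="${bar}-"; fi
        i=$((i + 1))
    done
    printf '[%s] %s%%\n' "$bar" "$pct"
}

# Only attempt the read if the sysfs file actually exists on this machine.
if [ -r "$BAT" ]; then
    draw_bar "$(cat "$BAT")"
fi
```

Note that the script needs no battery-specific API or library; the sysfs file is just a number, readable with `cat` like any other file.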
This is, of course, a case where the power is somewhat biting us. That doesn't make the abstraction wrong, nor does it mean the abstraction isn't leaky. (In fact, in this case, the abstraction works really well, I'd say. Any other implementation of UEFI variables is going to have a "delete" call, AFAICT. What bit us here is that all the objects are in one bucket together, and thus rm -rf / removes more than just files.)
> It breaks one of the most fundamental principles that should be followed in man-machine interaction: The principle of least surprise.
While I agree, that doesn't mean we need to throw out all the power of having a unified file system, but it might beget some way of ensuring the user understands what `rm -rf /` actually does. There's certainly more than one way to solve this, some of which don't involve limiting what can be done with the FS. (As some examples: perhaps rm shouldn't recurse to a different FS, and objects of similar types are on different FSs, which prevent the very error that got us here; perhaps some files force "user acknowledgement" of their removal; perhaps it really does get mounted read-only.)
¹While you might be able to get away with "only root accesses UEFI vars" in the scenario that they're not in the file hierarchy, if you remove all non-real-files then you've got a lot of other things to deal with: unix sockets, block devices, terminals, all the various I/O ports, temp sensors, battery data… the list is extensive.
> No. The file system, is the abstraction. Adding /proc onto it is a use of that abstraction.
Agree that the file system is an abstraction. Makes us think in terms of directories (containers) and files (items). Everything in the file system is designed around the idea of files and directories. Permissions (rwx), operations (create, move, copy, append, delete).
However, already adding /proc challenges that. What does it mean to have the "execute" right on a process? It is already running. What does it mean to append to a process? To move it? If processes are "files", why can I not kill a process by deleting its file? Processes are not naturally files. Yes, it makes some sense if you think of /proc as status information being maintained for each process, i.e. extracts, owned by the OS.
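The "extracts maintained by the OS" view is easy to see directly on a Linux box; a small sketch (Linux-specific paths):

```shell
# /proc entries are synthesized by the kernel on every read; nothing is
# stored on disk. They report a size of 0 yet still yield content, and
# the filesystem refuses unlink():
stat -c 'size: %s bytes' /proc/self/status    # -> size: 0 bytes
head -n 1 /proc/self/status                   # prints real content regardless
rm /proc/self/status 2>/dev/null \
    || echo "cannot delete: not a real file"  # -> cannot delete: not a real file
```

So /proc at least fails safe: the synthetic files refuse the operations that make no sense for them, which is exactly what efivarfs did not do.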
But UEFI vars make absolutely no sense. It is a true leaky abstraction. If one needs to be able to write to UEFI vars, then create an API for it, possibly some utilities. That way I need not risk altering fundamental firmware settings by performing seemingly ordinary file system operations whose effect I expect to be limited to the hierarchy!
> The great thing about having a unified file hierarchy in the (b) abstraction is that tooling works on all of these different classes of objects different. It's really just the "CRUD" idiom, and normally it allows things to interoperate quite smoothly. I can write a bash script that draws a progress bar of my battery, and it requires no knowledge other than where in the file hierarchy the battery is.
But that's actually just sweeping complexity under the rug. I need documentation for what the file contains on each "line", what it means to write to it, etc. It is not discoverable at all. If you expose system resources as actual resources and do not try to map them onto files, you can actually make a discoverable system. An example of such a regime is CIM. On Windows, PowerShell (or Python or VBScript or ...) can be used to interact with such fundamental system resources. To use your example of a progress bar for the battery, here is how the entire process looks on Windows, from discovering the correct resource (the battery) to displaying a progress bar, without consulting documentation:
PS C:\> #there's probably some class for batteries. let's look for it by name
PS C:\> get-cimclass *battery*
NameSpace: ROOT/cimv2
CimClassName CimClassMethods CimClassProperties
------------ --------------- ------------------
CIM_Battery {SetPowerState, R... {Caption, Description, InstallDate, Name...}
Win32_Battery {SetPowerState, R... {Caption, Description, InstallDate, Name...}
Win32_PortableBattery {SetPowerState, R... {Caption, Description, InstallDate, Name...}
CIM_AssociatedBattery {} {Antecedent, Dependent}
PS C:\> # the Win32_Battery probably offers the most specific information
PS C:\> Get-CimInstance Win32_Battery
Caption : Internal Battery
Description : Internal Battery
Name : DELL 1C75X31
Status : OK
Availability : 2
CreationClassName : Win32_Battery
DeviceID : 647Samsung SDIDELL 1C75X31
PowerManagementCapabilities : {1}
PowerManagementSupported : False
SystemCreationClassName : Win32_ComputerSystem
...
BatteryStatus : 2
Chemistry : 6
DesignCapacity :
DesignVoltage : 12992
EstimatedChargeRemaining : 94
EstimatedRunTime : 71582788
ExpectedLife :
MaxRechargeTime :
...
ExpectedBatteryLife :
PS C:\> # yep - that's it. let's save this instance in a variable
PS C:\> $bat = Get-CimInstance Win32_Battery
PS C:\> # display a progress bar, re-querying every 10 secs so the reading actually updates
PS C:\> for(){ $bat = Get-CimInstance Win32_Battery; Write-Progress Battery -PercentComplete $bat.EstimatedChargeRemaining -Status "Charge remaining"; sleep 10 }
> This is, of course, a case where the power is somewhat biting us.
No, what's biting us is a leaky abstraction that surprises us: we can accidentally delete firmware variables because file system operations are not constrained to the directories/files they operate on.
> That doesn't make the abstraction wrong
It is an abuse of the abstraction.
> Any other implementation of UEFI variables is going to have a "delete" call, AFAICT.
Indeed. In PowerShell you can discover the commands for manipulating UEFI variables with
gcm *uefi*
> What bit us here is that all the objects are in one bucket together, and thus rm -rf / removes more than just files.
No, what bit us is the broken expectation (a surprise) that a higher-level resource was mapped below some file system directory.
Off-topic, but imo HN needs more PS-promoting posts like this. I have the impression there are still tons of people out there who're stuck in the 'windows has no proper command line so administration means ugly batch files and registry hacks' mindset. Examples like this should open their eyes. Sure, you can't pipe text around (and go through hoops to parse it correctly) like in bash, so it takes some getting used to depending on your background, but once you get the hang of it you realize it really is powershell.
I still am not convinced it is a leaky abstraction because of this. Perhaps it is a surprising one; it certainly took me a while to learn about it, and I don't think I really appreciated universal file I/O until I worked through a bit of The Linux Programming Interface (recommended!). I see the filesystem now as a standard interface to many parts of the system (including hardware, processes, kernel state, etc.) through the kernel. I don't think that /proc/1 is a process; it is an interface to information about a process. It makes sense to me that UEFI variables are exposed this way. In fact, if anything I'd say the abstraction is not leaky enough here: deleting the files deletes information on the firmware; isn't that the promise of a "regular" file? :) Again, I am not defending the current behavior! There's nothing about the abstraction that says such a file must obey `rm` in this way.
I recently ran into a funny bug, however, that makes me more sympathetic to your point. In Emacs a version of TRAMP mode (which is used to connect to remote servers or to connect locally as a different user) would try to clean up after itself by deleting some sort of tramp history file after a session. And if the history file didn't exist to begin with (or if a setting disabled it, I'm not sure exactly) someone thought it appropriate to simply open up "/dev/null", throwing away all writes to the file -- OK, makes sense so far.
But in TRAMP I often connect as root to my own machine -- it makes it easy to edit files as root, or run a shell in root. And as root you have the power to delete /dev/null! So, TRAMP would delete it without my knowledge... what's odd is that it quickly gets created again (perhaps by Emacs) so that it appears to exist, except that suddenly a) it's a regular file and b) it's owned as root without world write/read permission, so that suddenly all sorts of things start to fail because they can't open /dev/null. Fun.
I agree that there's no promise it's a file. After all, fuse will let you mount anything as a filesystem. But I'm trying to think of another file I can "delete" and permanently hose my computer, but I'm coming up blank. Maybe some fuse filesystem somewhere can do it, but none of the ones I've used.
I mean, yeah, I've destroyed many a partition table in my day, and I've permanently lost myself some data, I've even dd'ed in the wrong direction with no recourse but to suck it up and deal with it, but I've never fried a computer with a rm command. (Contrary to what some commenters seems to be viciously defending, this does seem to be a legitimately different level of destructive possibility than has conventionally been available. This is the sort of thing that would put me off having ever installed Linux in the first place.)
A lot of people are busy arguing who to blame, I just think it's interesting that sometimes you need to support non-standard software. Actually, I think it's interesting that highly successful programs are not the ones who go "not my fault, you should just implement the spec better".
Raymond Chen talked[1] about the importance of supporting programs that ran on Win95 but broke on WinXP, even if they weren't complying with Microsoft specs.
I also remember reading that web browsers had to go to great lengths to render completely non-compliant web pages.
In your experience, when should you decide to support "non-complying" behavior?
No, the firmware is the right place to fix the problem. A BIOS that bricks itself because of within-specification deletion of variables via a standard API is just plain broken.
But in the real world no one ever fixes firmware bugs, so this is the best we can do.
Not really. Systemd could have fixed it, but it would have been a lot of work and a hack. The Linux kernel team could have fixed it, but it would have been a lot of work and a hack. The hardware manufacturers should have fixed it, but they don't care.
The Kernel team showed they are professionals by stepping up and doing the work.
What you mean is: systemd could have fixed it, but it would have been a lot of work and a hack that wouldn't help you when efivarfs gets mounted on a sysv/upstart/openrc system. The kernel was the best place for the fix.
disagree. It's not just systemd that could mount efivars as read/write, anyone could. So anyone that does will fall prey to this. It's firmware, and failing that, the kernel protecting you is a good second.
systemd couldn't have fixed it, because removing the offending code from systemd wouldn't have prevented any other program from, innocently or maliciously, triggering the same bug.
The kernel is where the buck stops when it comes to protecting hardware and, therefore, protecting software from misdesigned and/or buggy hardware. That's been true for longer than most of us have been alive.
Bricking is a rather extreme firmware bug, but even if it didn't brick itself - if it just lost a bunch of information (Boot list, Vendor information, Time, BIOS settings, Windows activation data / preinstalled keys... I don't know all the kinds of stuff you can put in there.)... wouldn't that also be bad? I would never expect to be able to cause that by deleting a directory. So by the principle of least surprise, this shouldn't be mounted by default.
So on a non-broken BIOS, there is "technically" no bug - but the pieces (BIOS, Kernel, Systemd) come together to make a bad design.
> But in the real world no one ever fixes firmware bugs, so this is the best we can do.
PREFACE: This is an anecdote, but I do believe it reflects on general state of hardware vendors, because when I Google'd, it showed that people had similar, if not worse problems than I did.
And this is so incredibly sad. Especially when you buy a $2.5k laptop which only works with Windows (with quirks).
I bought a laptop^[model] on which you couldn't even install another OS because of a crippling firmware bug: the SATA controller was stuck in RAID mode and couldn't be changed to AHCI, which prevented any OS from being installed (even Windows, which was already installed; bizarre) because no OS could recognise the PCIe NVMe M.2 SSDs. It wasn't until a shit storm on their forums that they released a firmware update which fixed the issue.
After the update was released, I did happily install Linux on it, but the ACPI DSDT was so broken, I didn't know where to begin with fixing it (apart from this whole hardware stuff being outside of my domain). Other than that jack detection is jack shit (pun intended). I literally can't use my headphones without special OEM or Realtek software (forgot which) on Windows, and I can't use them at all on Linux because there's no equivalent. I tried playing with various modes^[modes] and output configurations, but to no avail.
Also, on Windows I hear a subtle scratchy sound from somewhere in my laptop, but I don't hear it on Linux. I noticed it most while moving my USB mouse or when there's a lot of CPU-intensive work. No, none of the solutions recommended online worked, and this has apparently been an issue with Windows on Asus/Realtek for years, if not decades.
Furthermore, there's a bizarre flicker which subtly intensifies and then subtly goes away on Windows (and it interestingly happens only in some applications which appear to use GPU acceleration) which doesn't happen on Linux (even during an intensive OpenGL benchmark followed by a WebGL benchmark).
The things I thought I'd have the most issues with (the GPU and the Skylake processor) turned out to be the least of my problems. Actually, zero problems with them. So, kudos to NVIDIA for their proprietary Linux drivers (the nouveau ones worked great, too, but I decided to go for the proprietary ones due to the slight performance benefit).
So, no, this isn't a Linux issue to anyone who wants to scream "boohoo linux is bad for consumer PCs". This is all an issue of shitty hardware vendors. There are probably over a hundred models documented on the Archlinux Wiki[archwiki] with all their various quirks and what not. Most of those are actually hardware problems, and there's no way for Linux to fix all these problems without there being some giant database with each laptop model and its quirks and applying configuration fixes, and this would also have to be distro-agnostic or cover various distros to work properly. The only reason why most of it kinda (not flawlessly) works on Windows is because the various vendors actually cooperate with the Windows developers (I imagine), and it's rare that I see them even trying to cooperate with Linux developers; maybe I just missed it, but each time someone does cooperate, it's met with this grand praise that's quite hard to miss, so I doubt I missed it (this excludes certain vendors who have always cooperated with Linux devs, or who specifically write drivers for Linux in the first place).
It's so, so sad that people blame most of this, if not all of it, on Linux. Especially considering Linux does its best to try and patch this endless stream of oncoming shitty hardware, and almost nobody (not literally nobody, but a very small percentage) sees or recognises that effort.
systemd can't take all the blame either - I bricked (yes really bricked) one of these by grub installing a "stub" that only booted into grub-rescue on my EFI partition. I can't get into the firmware settings and the rescue loader can't read the partition tables -> bricked unless I can corrupt the EEPROM somehow and force a menu (no CMOS battery in these low-end devices to pull)
FWIW: that DSDT hack is replacing a constant return value from the first method in the "HAD" device. It was specified to return 0, but now returns 0xf instead. This is almost certainly the _STA (status) method, which informs the OS about the operational status of the described device. I forget the exact meaning of the bottom four bits offhand, but 0xf is the standard value for "device present and operating normally -- use it!".
That it was returning zero would cause the linux ACPI framework to ignore it and not probe its driver. My vague understanding is that windows works differently, and calling _STA is done by the driver, so it's possible to just not do it and still have a working system.
I don't know what the device itself is, but given that the script says "audio" in there it's probably the audio codec.
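For reference, the bottom four bits of a _STA return value are defined by the ACPI specification, which explains why 0xf means "use it". A quick shell decode of the value the patched DSDT returns:

```shell
# Decode an ACPI _STA return value (bottom four bits, per the ACPI spec).
# 0xf, the value the patched DSDT returns, sets all four of them:
sta=0xf
[ "$((sta & 0x1))" -ne 0 ] && echo "bit 0: device present"
[ "$((sta & 0x2))" -ne 0 ] && echo "bit 1: device enabled and decoding its resources"
[ "$((sta & 0x4))" -ne 0 ] && echo "bit 2: device should be shown in the UI"
[ "$((sta & 0x8))" -ne 0 ] && echo "bit 3: device functioning properly"
# A _STA of 0 clears every bit, which is why Linux ignored the device.
```

That matches the behavior described above: returning 0 told Linux the device was absent, and patching it to 0xf made the ACPI framework probe its driver.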
PendoPad 11" 'laptop' running Windows 10 Home 32 bit (despite having a 64 bit capable processor).
I replaced the bricked device and I'm going to be a lot more careful this time.
Booting Ubuntu Wily works, but there's no battery (status/charging?), wifi, audio or touchscreen. So if you use the XDA scale it's working perfectly!
I have another Z3735 device (MeegoPad T01 - Intel Compute Stick knockoff), but it's unusable because the clock runs fast, then slow, then fast - enough that an NTP sync makes the clock go backwards and then everything breaks.
These chipsets are turning up everywhere and most of the time the implementation is garbage. I hope Intel did better with the reference implementation/s but I can't afford them at the moment.
The idea of multiple vendors making the same "beige boxes" and only competing on price is bad. One of the things on my wishlist for an Intel/AMD branded laptop is a high quality UEFI implementation.
Keep wishing. Even a $1400 Lenovo X1 had terrible UEFI when I used it - what's the point of re-implementing your old keyboard-only BIOS UI exactly if you have mouse support? Dell does better, at least in the business grade products.
> I literally can't use my headphones without special OEM or Realtek software (forgot which) on Windows, and I can't use them at all on Linux because there's no equivalent.
If the problem is no output at all, it may be just a matter of toggling some HDA codec GPIO or EAPD pin to power up external amplifier chip, which can be done with hda-analyzer. But if it's some combo headphone/mic jack and detection doesn't work then I have no idea.
Thanks for the tip! I did install hda-analyzer, but never ran it since other stuff came up. The sound thing wasn't such a huge issue, because I just play the music without headphones.
Another problem is that the laptop has a 2.1 sound system (or 4.1 maybe, I am not actually sure?) and the outputs are a bit wonky (which can, apparently, also be fixed/configured with hda-analyzer).
In short, the whole laptop is a mess. I imagine it will be fixed eventually by Linux sound drivers. I am still collecting data to open a bug report on kernel.org, hoping it helps future people not have to go through all this… bullshit, for lack of a more apt expression.
This commit message shows why I like the Linux Kernel team:
>These fixes are somewhat involved to maintain
>compatibility with existing install methods
>and other usage modes, while trying to turn
>off the 'rm -rf' bricking vector.
They go out of their way to make sure changes are backwards compatible.
As usual when someone mentions "UEFI" and "sane" in the same sentence, I post this quote from Matthew Garrett [1] (of Linux EFI maintainer fame):
"""
UEFI stands for "Unified Extensible Firmware Interface", where "Firmware"
is an ancient African word meaning "Why do something right when you can
do it so wrong that children will weep and brave adults will cower before
you", and "UEI" is Celtic for "We missed DOS so we burned it into your
ROMs".
"""
Yes, it's bad firmware, but the OS shouldn't allow a user to brick a machine even if the hardware vendor shipped broken firmware. Discovering the problem is unfortunate, but the OS folks not roping that area off is just wrong.
So if I understand this correctly, now instead of bricking the system it will just fuck up the bootloader, even if the bootloader is completely unrelated to the linux install you are `rm -rf /sys`ing. Since the useful efivars that set up bootloaders must be on the whitelist.
It's an improvement, but it seems like we should do this in addition to default mounting read only.
It still seems to me that Linux should follow FreeBSD and not mount it as a filesystem and just use a library to manipulate the values. It clearly has some huge problems with being a filesystem. This isn't Plan 9 and everything does not have to be a file.
FreeBSD actually doesn't have any support for EFI variables at all! It just installs the loader into the default location (bootx64.efi) and the loader does everything.
And so, when we say "permanently destroy" do we really mean that something is "destroyed" and so done with "permanence"?
This motherboard... It refuses any sort of reflashing of the firmware? Taking the button cell out of the battery slot, and removing all power from the board does nothing? The motherboard won't enter BIOS, upon pressing F10 at power on?
Yes, this is one of the rare cases where "brick" is being used in the correct technical sense of rendering the motherboard permanently unusable without repairs involving a soldering iron. There is no reset option.
Depending on the package used, it may be possible to reprogram the UEFI chip in situ with test clips[1]. But of course one will still need to acquire a reprogrammer and a known-good copy of the firmware.
Poettering gets shit on a lot. While his software does often have problems, he is really well-intentioned and working on really hard things that really do need fixing. New software simply has bugs and problems, and while the transition period can be rough, I think things will be really improved when we break through to the other side.
While systemd bugs are often cause for rejoicing and schadenfreude among its critics, I think you're underestimating the argument. "The other side", that is, the vision that Poettering et al. have for the Linux environment, is what the critics hold against systemd, not the bugs and other occasional issues.
I say this as someone still ambivalent, neither for nor against it.
People don't hate systemd because it has bugs. People hate it because it's clear that it thinks it owns your system and that its manifest destiny is to contain all software that exists.
However, "it's clear that it thinks it owns your system and that its manifest destiny is to contain all software that exists" is not one of those well-thought-through reasons :P
What is the reason that systemd has to include a bootloader (bootctl -- which was gummiboot)? Why does it include a container engine (nspawn)? Why does it think that it should control all logging on your system (the whole journalctl monstrosity)?
Yes, it might've been hyperbole, but the fact that systemd contains so much code which has nothing to do with "being an init system" raises the obvious question: why?
Well, it's already taken over your logging system (storing things in a binary format, which means you're fucked if it gets corrupted), your bootloader, top, it provides an alternative container runtime, etc. It's just a matter of time. :P
It seems to me that the real issue is that "rm -rf" should by default not recurse into mounted filesystems, but should at most try to unmount them.
In addition to clearing EFI variables, the current behavior will also attempt to clear any mounted removable drives and any mounted network drives, which is usually even more harmful than messing with EFI.
Of course that would be a backwards incompatible change, although I don't think many scripts rely on this behavior.
> It seems to me that the real issue is that "rm -rf" should by default not recurse into mounted filesystems, but should at most try to unmount them.
There is a --one-file-system argument that skips directories not on the same filesystem. You could add this layer of protection by adding it to an alias in your shell.
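A sketch of that setup, assuming GNU coreutils `rm` (busybox `rm` lacks the flag). Note that an alias only protects interactive shells; scripts have to spell the flag out:

```shell
# GNU rm's --one-file-system stops a recursive delete at mount points,
# which would keep a stray "rm -rf" out of a mounted efivarfs.
# Make it the default for interactive use (e.g. in ~/.bashrc):
alias rm='rm --one-file-system'

# The flag changes nothing for trees that live on a single filesystem:
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b" && touch "$tmp/a/b/file"
rm --one-file-system -rf "$tmp"
[ ! -e "$tmp" ] && echo "tree removed"        # -> tree removed
```

The trade-off is that deliberate cross-mount deletes now need an explicit `\rm` (bypassing the alias) or a plain `rm` in a script, which is arguably the right default for a destructive command.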
To be fair "rm -rf /" doesn't just work. You have to confirm that you really do want to delete everything. Destroying / in itself is pretty harmful. If you're planning to do that you should already know not to have anything you want to keep mounted.
For the few use cases where a system admin wants to "rm -rf /", there are hundreds of bad scripts that can screw up a system. I believe Solaris did the right thing and made it not work.
To be clear, the problem described in the video is not something that can happen. "rm -rf $1/$2" where $1 and $2 aren't defined (therefore making it "rm -rf /") will not run. If you really want to destroy your root directory you have to specify the --no-preserve-root flag. No more accidents from scripts that assume things poorly, but it will still do exactly what the user asks.
When $STEAMROOT was empty, "steam apparently deleted everything owned by my user recursively from the root directory. Including my 3tb external drive I back everything up to that was mounted under /media."
Wow that is an annoying example. Aside from not following the Golden Rule of shell variables (always consider the unset case), why on Earth didn't they just write rm -rf "$STEAMROOT" ? The trailing slash and asterisk do nothing. Cargo-cult scripting. Ick.
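The guard is a one-character change in shell. A minimal sketch using the STEAMROOT name from the quoted bug; the `${VAR:?}` form is standard POSIX parameter expansion:

```shell
# ${VAR:?} makes the shell abort the command when VAR is unset or empty,
# instead of silently expanding to "" and turning "$STEAMROOT/"* into "/"*.
STEAMROOT=""                                  # simulate the failure mode
( rm -rf "${STEAMROOT:?}/"* ) 2>/dev/null \
    || echo "refused: STEAMROOT is empty"     # -> refused: STEAMROOT is empty

# An explicit precondition reads even better in an installer script:
[ -n "$STEAMROOT" ] || echo "STEAMROOT unset; refusing to delete anything" >&2
```

The subshell is only there so the demonstration script itself survives the abort; in a real installer you would want the whole script to stop at that point.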
Yet, I can still delete my home directory by accident (e.g. Steam patch). The idea that any rm can kill the directory I'm in is just bad. A flag on rm is the wrong solution. It should just fail.
I disagree with this almost completely. If I tell my computer to do something it should just do it (possibly after some complaining). You cannot delete the directory you are immediately in, so that at the very least is prevented. But as you move away from the root, deleting nearby folders and files actually becomes useful. And putting those kinds of things behind a flag is absolutely a reasonable solution. It keeps you (and scripts) from shooting yourself in the foot but lets you do things as long as you acknowledge what you are actually doing.
I said "The idea that any rm can kill the directory I'm in is just bad. A flag on rm is the wrong solution. It should just fail."
you say "You cannot delete the directory you are immediately in, so that at the very least is prevented."
I have no idea what the rest of your comment is in relation to what I said, other than I'm pretty sure you can accidentally delete a directory you're in, given what Steam did.
Deleting your current directory is against the POSIX standard. It should not be allowed.
What I was saying was that "rm -rf ." just won't work. You cannot delete the directory you are in directly ("." and ".." are not valid options).
If however you delete a directory that is higher up the directory tree (e.g. the parent directory), it will be deleted.
As far as I can tell this does not violate the POSIX standard[1], as that situation is left as undefined (since in theory the directory you are deleting will chain to the directory you are currently in which is open in the tty).
Edit: The rest of my previous comment was trying to say that the utility of being able to self destruct the current directory is arguable. Why should it be prevented (especially when it could just be hidden behind a flag to prevent accidental destruction)?
You wouldn't be able to run any programs from the hard drive, and when you reboot, you'll end up with no operating system.
Unless you're running FreeBSD (or Illumos) with ZFS and Boot Environments, in which case you'd just select a backup boot environment and continue working :-) Probably without your home directory though, as that is usually excluded from boot environments. But you can set them up however you want.
But if you're running Linux (before this update) on a laptop with terrible piece of shit firmware, you'd end up with a brick.
P.S. found a cool post about rm -rf / in my bookmarks: https://lambdaops.com/rm-rf-remains/ – you can recover a running rm'd Linux machine by using a running shell and /dev/tcp :D
I wanna try it out but I'm not brave enough to do it directly in my OS. What would happen if I did it in Ubuntu inside a VM? What if VirtualBox has mounted some directory from my system into the guest? I'm afraid to try this too :/
I once tried 'format C:' on a Windows 10 laptop I didn't care about and I just got a boring error message.
> P.S. found a cool post about rm -rf / in my bookmarks: https://lambdaops.com/rm-rf-remains/ – you can recover a running rm'd Linux machine by using a running shell and /dev/tcp :D
Awesome. This dynamic loading of bash plugins is mad.
It's not a systemd bug, it was just exposed by systemd. This was what most everyone seemed to completely miss in the prior exchanges.
Another thing that was missed was that Lennart wasn't being unreasonable, nor was he saying it wasn't a problem (he specifically stated the opposite, in fact). I had a feeling at the time (based on his responses) that the reason he wasn't specifically stating he was going to fix it or open a bug report for it in systemd was that he was going to push it up-stack to a more appropriate place, and it looks like that's what happened.
> Another thing that was missed was that Lennart wasn't being unreasonable, nor was he saying it wasn't a problem (he specifically stated the opposite, in fact). I had a feeling at the time (based on his responses) that the reason he wasn't specifically stating he was going to fix it or open a bug report for it in systemd was that he was going to push it up-stack to a more appropriate place, and it looks like that's what happened.
This. It's really hard to blame this on systemd (not that people didn't try anyway).
It seems to me like it's a bad thing that an accidental rm in an efivarfs filesystem can brick the system, regardless of whether the filesystem was automounted or not.
But on sysvinit if you mount efivars rw for any reason and your hands slip a bit a stray `rm` could still brick your motherboard, so it's not really fair to say that without automounting the issue goes away.
I'd say it wasn't a bug in systemd's implementation, it was a bug in the design of systemd. Why require that efivarfs be mounted at runtime? You could easily make it so that it is only mounted at boot time if necessary, or only when you run a tool that wants to write to EFI.
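An on-demand approach along those lines could look roughly like this (the mount point and `efibootmgr` tool are real; the wrap-and-unmount pattern is just a hypothetical sketch, and it needs root on a real EFI system):

```shell
# Sketch: mount efivarfs only for the duration of a write, then unmount,
# instead of keeping it writable for the whole uptime.
mount -t efivarfs efivarfs /sys/firmware/efi/efivars
efibootmgr -n 0001     # e.g. set BootNext for a one-off reboot target
umount /sys/firmware/efi/efivars
```

With something like that, the window in which an accidental `rm` can reach the variables shrinks from "always" to a few milliseconds around a deliberate action.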
I'd expect file operations to only permanently affect storage devices per default. Sure, you can mount almost anything as a file in Unix, but automatically mounting more than necessary is bad design. It's like placing mystery files in the filesystem, and when a curious user deletes or modifies them, they lose their monitor's color profile, their printer's firmware, or all of their GMail attachments. You could say it was the user's fault to mess with it, but I'd say it's weird to expose such things as files (unless explicitly asked for by the user or a tool).
EVERY single init system mounts it rw. It is not specific to systemd. There are lots of situations where EFI boot variables need to be set by the OS.
No. Sane operating systems don't present UEFI variables as a filesystem. Their structure, their contents, everything about them are not files. Period. efivarfs doesn't expose metadata regarding when variables are visible, doesn't expose any sort of interface to present a private key for managing secure variables, etc. It's just a bad, broken idea that should be completely removed.
The systemd developers could have said that they wanted no part of that madness, and chosen not to automatically mount it. Someone else's bad decision doesn't absolve one from their own bad decisions.
They disabled a kernel feature to work around a firmware bug (which not all computers have). The firmware really shouldn't allow modification of things that will cause the firmware to permanently face plant.
It's not a systemd bug, it is not a kernel bug. It was a firmware bug. It is usually the Linux kernel that "fixes" these idiotic issues when vendors don't implement something correctly.
Here's a reasonably impartial discussion on a FreeBSD list that gives an overview: https://forums.freebsd.org/threads/54951/
And from that thread, here's a link to Matthew Garrett (the creator of efivarfs) saying that efivarfs is at fault here rather than systemd: https://twitter.com/mjg59/status/693494314941288448