Sadly, I can report that this has brought down 2 of the major Mastodon nodes in the United Kingdom.
Happily, the small ones that I also use are still going without anyone apparently even noticing. At least, the subject has yet to reach their local timelines at the time that I write this.
2 of the other major U.K. nodes are still up, too.
Ever since the Verisign coup in 2003, the world has had the idea of "delegation-only" and suchlike filtering on responses from superdomain servers. More recently, query minimization was invented. Both of these can militate against the root content DNS servers doing that.
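One can check whether one's own resolver actually does query minimization: NLnet Labs and internet.nl publish a test record, qnamemintest.internet.nl, whose TXT answer reports it. A quick and purely illustrative sketch with dnspython:

    # Ask the system resolver for the public QNAME-minimisation test record.
    # The TXT answer reads either "HOORAY - QNAME minimisation is enabled ..."
    # or "NO - QNAME minimisation is NOT enabled ...".  Requires dnspython.
    import dns.resolver

    for rdata in dns.resolver.resolve("qnamemintest.internet.nl", "TXT"):
        print(b" ".join(rdata.strings).decode())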
Better still, one can run one's own private root content DNS server. I've been doing that (in several ways) for a couple of decades. If ICANN decided to blackhole (say) www.microsoft.com. tomorrow, my DNS lookups wouldn't be affected.
To affect them, the aforementioned "court action" would have to target Verisign.
I'm curious: how did you implement your "private root content" DNS server such that it keeps up with (valid -- and how would you know?) updates made by the TLD registries via IANA?
More realistically, DNS blocking is no longer an issue unless "they" get to the registries for the top-level/second-level domains. It's easy to make yourself immune to things injected by the root content DNS servers; at least two mechanisms for combating this (the better one being simply running your own private root content DNS server) have existed for most of this century.
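To make the keeping-up-to-date part concrete: the root zone itself is openly available for transfer (RFC 8806 describes running a local copy of the root on one's own resolver in exactly this way), and it is DNSSEC-signed, so what one receives can be checked against the root trust anchor. A rough sketch of the fetch using dnspython and ICANN's public transfer servers; this is illustrative, not my actual tooling:

    # Pull a current copy of the root zone by AXFR, for feeding to a private
    # root content DNS server (cf. RFC 8806).  The zone is DNSSEC-signed, so
    # its contents can additionally be validated against the root trust
    # anchor.  Requires dnspython; illustrative sketch only.
    import dns.query
    import dns.zone

    XFR_SERVERS = ("lax.xfr.dns.icann.org", "iad.xfr.dns.icann.org")

    def fetch_root_zone(path="root.zone"):
        for server in XFR_SERVERS:
            try:
                zone = dns.zone.from_xfr(dns.query.xfr(server, "."))
            except Exception:
                continue                     # try the next transfer server
            zone.to_file(path)               # hand this to the root content server
            return zone.get_rdataset("@", "SOA")[0].serial
        raise RuntimeError("no transfer server answered")

    if __name__ == "__main__":
        print("root zone serial:", fetch_root_zone())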
The problem with clickbait and entirely uninformative headlines like this is that sometimes they are so egregious that they discourage reading further, as one balks at having been so obviously lured by the bait. It's only through the comments section that one discovers that the story is actually worth reading; without it, this hyperlink would have remained entirely unfollowed.
Sure, but who's going to pick up a random USB-to-SD adapter from the parking lot and plug that into a computer? The point of the USB key experiment is that the "key" form factor advertises "there is potentially interesting data here and your only chance to recover it is to plug this entire thing in wholesale".
You're moving your own goalposts, by now restricting this to a storage device that is fitted into an adapter to make it USB. There is no requirement to limit this to USB, however.
They'll pick up the SD/TF card and put it into a card reader that they already have, and end up running something just by opening things out of curiosity to see what's on the card.
One could pull this same trick back in the days of floppy discs. Indeed, it was a standard caution three decades ago to reformat found or (someone else's) used floppy discs. Hell, at the time the truly cautious even reformatted bought-new pre-formatted floppy discs.
This isn't a USB-specific risk. It didn't come into being because of USB, and it doesn't go away when the storage medium becomes SD/TF cards.
> You're moving your own goalposts... This isn't a USB-specific risk
I'm not, because I am talking about a USB-specific risk that has been described repeatedly throughout the thread. In fact, my initial response was to a comment describing that risk:
> A USB can pretend to be just about any type of device to get the appropriate driver installed and loaded. They can then send malformed packets to that driver to trigger some vulnerability and take over the system.
The discussion is not simply about people running malware voluntarily because they have mystery data available to them. It is about the fact that the hardware itself can behave maliciously, causing malware to run without any interaction from the user beyond being plugged in.
The most commonly described mechanism is that the USB device represents itself to the computer as a keyboard rather than as mass storage; it then sends data as if the user had typed keyboard shortcuts to open a command prompt, terminal commands, and so on. Because of the common controller hardware on USB keys, it's even possible for a compromised computer to infect other keys plugged into it, causing them to behave in the same way. This is called BadUSB (https://en.wikipedia.org/wiki/BadUSB) and the exploit technique has been publicly known for over a decade.
A MicroSD card cannot represent anything other than storage, by design.
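The difference is visible right in the USB descriptors: each interface a device exposes declares a class code (0x03 is HID, i.e. keyboards and mice; 0x08 is mass storage), and the host simply believes what is declared. A quick, purely illustrative way to see what the currently attached devices claim to be, using pyusb:

    # List what every attached USB device claims to be, interface by interface.
    # A "storage stick" that also declares interface class 0x03 (HID) is
    # presenting itself as a keyboard/mouse.  Illustrative; requires pyusb and
    # libusb, and usually root or suitable udev permissions.
    import usb.core

    CLASS_NAMES = {0x03: "HID (keyboard/mouse)", 0x08: "mass storage"}

    for dev in usb.core.find(find_all=True):
        for cfg in dev:
            for intf in cfg:
                cls = intf.bInterfaceClass
                print("%04x:%04x declares interface class 0x%02x %s"
                      % (dev.idVendor, dev.idProduct, cls,
                         CLASS_NAMES.get(cls, "")))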
Mine. Not asking whoever happens to have local physical access interactively, strictly speaking, as that just papers over one of the problems; but controlling what Human Interface Devices (HIDs) are allowed when plugged in, by applying rules (keyable on various device parameters) set up by the administrator.
Working thus far on NetBSD, FreeBSD, and Linux. OpenBSD to come when I can actually get it to successfully install on the hardware that I have.
In principle there's no reason that X11 servers or Wayland systems cannot similarly provide fine-grained control over auto-configuration instead of a just-automatically-merge-all-input-devices approach.
It's not an interactive approval process, remember. It's a ruleset-matching process. There's not really a chicken-and-egg problem here, where one builds up from nothing by interactively approving things at device insertion time using a keyboard. One does not have to begin with nothing, and one does not necessarily need to have any keyboard plugged in to the machine to adjust the ruleset.
The first possible approach is to start off with a non-empty ruleset that simply uses the "old model" (q.v.) and then switch to "opt-in" before commissioning the machine.
The second possible approach is to configure the rules from empty having logged in via the network (or a serial terminal).
The third possible approach is actually the same answer that you are envisaging for the laptop. On the laptop you "know" where the built-in USB keyboard will appear, and you start off with a rule that exactly matches it. If there's a "known" keyboard that comes "in the box" with some other type of machine, you preconfigure for that, whatever it is. You can loosen it to matching everything on one specific bus, or the specific vendor/product of the supplied keyboard wherever it may be plugged in, or some such, according to what is "known" about the system; and then tighten the ruleset before commissioning the machine, as before.
The fourth possible approach is to take the boot DASD out, add it to another machine, and change the rules with that machine.
The fifth possible approach is for the installation process itself to include a step that enumerates what is present at installation time and sets up appropriate rules for it.
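To make the third approach concrete: a rule is just a partial match on the device's parameters, with the "known" keyboard written in as the initial entry and no match meaning refusal. A toy sketch of the matching, not my actual tool, with made-up device parameters:

    # Toy sketch of ruleset-based HID admission: each rule partially matches
    # device parameters; the first matching rule decides; no match means deny.
    # Not the real tool; bus/vendor/product values are made up for illustration.
    RULES = [
        # The keyboard "known" to ship with the machine, matched exactly.
        {"match": {"bus": "usb0", "vendor": 0x046D, "product": 0xC31C},
         "action": "allow"},
        # Anything else claiming to be a HID is refused.
        {"match": {"class": "hid"}, "action": "deny"},
    ]

    def admit(device):
        # device is a mapping of parameter name to value, as reported at plug-in time.
        for rule in RULES:
            if all(device.get(k) == v for k, v in rule["match"].items()):
                return rule["action"] == "allow"
        return False    # opt-in: unknown devices are refused

    # The supplied keyboard is admitted; a found-in-the-car-park "keyboard" is not.
    print(admit({"bus": "usb0", "vendor": 0x046D, "product": 0xC31C, "class": "hid"}))
    print(admit({"bus": "usb1", "vendor": 0xDEAD, "product": 0xBEEF, "class": "hid"}))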