Mildly amusing, but this seems like two-wrongs-make-a-right thinking: let's serve malware instead of using a WAF or some other existing solution to the bot problem.
The web is overrun by malicious actors without any sense of morality. Since playing by the rules is clearly not working, I'm in favor of doing anything in my power to waste their resources. I would go a step further and try to corrupt their devices so that they're unable to continue their abuse, but since that would require considerably more effort on my part, a zip bomb is a good low-effort solution.
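For the curious, the low-effort version really is only a few lines. Here's a rough sketch in Node/TypeScript; `bomb.gz` is assumed to be a pre-generated file (a few MB of compressed zeroes that inflate to gigabytes), and `isSuspectedBot` is a stand-in for whatever heuristic you actually use:

```ts
// Rough sketch: serve a pre-generated gzip bomb to suspected bots.
// Assumes "bomb.gz" already exists on disk, e.g. compressed zeroes
// that inflate to many gigabytes on the client's side.
import { createServer } from "node:http";
import { readFileSync } from "node:fs";

const bomb = readFileSync("bomb.gz"); // small on the wire, huge inflated

// Placeholder heuristic; a real one would look at IPs, paths, rate, etc.
function isSuspectedBot(ua: string): boolean {
  return /curl|python-requests|scrapy/i.test(ua);
}

createServer((req, res) => {
  if (isSuspectedBot(req.headers["user-agent"] ?? "")) {
    res.writeHead(200, {
      "Content-Encoding": "gzip", // the client inflates it automatically
      "Content-Type": "text/html",
      "Content-Length": bomb.byteLength,
    });
    res.end(bomb);
  } else {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("hello\n");
  }
}).listen(8080);
```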
Based on the example in the post, that thinking might need to be extended to "someone who happens to be using a blocklisted IP." I don't serve up zip bombs, but over the years I've blocklisted many abusive bots on VPN IPs, which has then impeded legitimate users of the same VPNs.
At least, not with the default rules. I read that discussion a few days ago and was surprised how few people called out that a WAF is just a piece of infrastructure - it's the rules that people are actually complaining about. I think the problem is that so many apps run on AWS, and AWS's default WAF rules include some silly content filtering. Their "security baseline" says you have to use a WAF with the default rules included, so security teams lock down on those rules without any real thought about whether they make sense for a given scenario.
I admire your deontological zealotry. That said, I think there's an implied virtuous aspect of "internet vigilantism" that feels ignored (i.e. disabling a malicious bot means it doesn't go on to visit other sites). While I don't absolve anyone of full responsibility for their actions, I have a suspicion that terrorists do a bit more than just avert a greater wrong -- otherwise, please sign me up!
That's why I said "if it's easy." On some server stacks it's no big deal to keep a connection open for an extra 30 seconds; on others, you need to be done with requests ASAP, even abusive ones.
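To illustrate the easy case: here's a rough Node/TypeScript sketch of a tarpit that drips one byte per second for 30 seconds. On an event-driven stack like this, the idle connection costs almost nothing:

```ts
// Rough tarpit sketch: hold the connection open for ~30s, dripping a
// byte at a time so the client doesn't give up from pure silence.
import { createServer } from "node:http";

createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/html" });
  let drips = 30;
  const timer = setInterval(() => {
    res.write(" "); // one byte per second keeps the client waiting
    if (--drips <= 0) {
      clearInterval(timer);
      res.end();
    }
  }, 1000);
  res.on("close", () => clearInterval(timer)); // client gave up early
}).listen(8080);
```

On a thread-per-request stack, each of those connections would pin a worker for the full 30 seconds, which is exactly the case where you need to be done ASAP.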
tcpdrop shouldn't self-DoS, though; it uses fewer resources. Even if the other end retries, it will only do so after a timeout; in the meantime, the other end is holding socket state and you aren't. That's a win.
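tcpdrop itself is a kernel-level tool on the BSDs, but you can approximate the idea from userland by resetting the connection instead of closing it gracefully. A rough Node/TypeScript sketch (the blocklisted address is just an example):

```ts
// Rough approximation of "drop, don't serve": reset the TCP connection
// outright. The RST frees local socket state immediately (no FIN
// handshake, no TIME_WAIT held here), while a retrying client still has
// to burn through its own timeouts.
import { createServer } from "node:net";

const blocklist = new Set(["203.0.113.7"]); // example address

createServer((socket) => {
  if (blocklist.has(socket.remoteAddress ?? "")) {
    socket.resetAndDestroy(); // sends RST, discards state (Node >= 16.17)
    return;
  }
  socket.end("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok");
}).listen(8080);
```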
So first, let me preface this by saying I generally don't accept cookies from websites I haven't explicitly allowed, my reasoning being "why am I granting disk read/write access to [mostly] shady actors to allow them to track me?"
(I don't think your blog qualifies as shady … but you're not in my allowlist, either.)
So if I visit https://anubis.techaro.lol/ (from the "Anubis" link), I get an infinite anime cat girl refresh loop — which honestly isn't the worst thing ever?
Neither xeserv.us nor techaro.lol is in my allowlist. Curious that one seems to pass. IDK.
The blog post does have that lovely graph … but I suspect I'm going around the "no cookie" loop in it, so the infinite cat girls are somewhat expected.
I was working on an extension that would store cookies very ephemerally for the more malicious instances of this, but I think its design would work here too. (In-RAM cookie jar, burns them after, say, 30s. Persisted long enough to load the page.)
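As a rough WebExtension sketch of the burn-after-30s part (assuming a persistent background page with the "cookies" permission; an MV3 service worker would need the alarms API instead, and the cookie does touch disk before it's burned, so this only approximates a true in-RAM jar):

```ts
// Sketch: let cookies land so the page loads, then burn them ~30s later.
const TTL_MS = 30_000;

chrome.cookies.onChanged.addListener(({ cookie, removed }) => {
  if (removed) return; // only schedule burns for newly set cookies
  setTimeout(() => {
    // Rebuild a URL the cookies API will accept for removal.
    const url =
      (cookie.secure ? "https://" : "http://") +
      cookie.domain.replace(/^\./, "") +
      cookie.path;
    chrome.cookies.remove({ url, name: cookie.name });
  }, TTL_MS);
});
```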
Just FYI, Temporary Containers (a Firefox extension) seems to be the solution you're looking for. It essentially generates a new container for every tab you open (child tabs can either get new containers or share the same one). Once the tab is closed, it destroys the container and deletes all browsing data (including cookies). You can still assign some domains to specific persistent containers.
I used cookie blockers for a long time, but I always ended up having to whitelist some sites even though I didn't want their cookies, because the sites would misbehave without them. Now I've just stopped worrying.