
Mildly amusing, but this seems like two-wrongs-make-a-right thinking: let's serve malware instead of using a WAF or some other existing solution to the bot problem.


The web is overrun by malicious actors without any sense of morality. Since playing by the rules is clearly not working, I'm in favor of doing anything in my power to waste their resources. I would go a step further and try to corrupt their devices so that they're unable to continue their abuse, but since that would require considerably more effort on my part, a zip bomb is a good low-effort solution.
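
For reference, a minimal sketch of the usual web variant (a gzip stream of zeros served with Content-Encoding: gzip), using only the Python standard library. The file name, sizes, and port are placeholders of mine, not anything from the article:

    import gzip
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    BOMB_PATH = "bomb.gz"            # placeholder file name
    DECOMPRESSED_SIZE = 1 * 1024**3  # 1 GiB of zeros; pick your own size

    def build_bomb() -> None:
        # Pre-compress once; zeros compress roughly 1000:1, so the file
        # stays around a megabyte on disk.
        if os.path.exists(BOMB_PATH):
            return
        chunk = b"\0" * (1024 * 1024)
        with gzip.open(BOMB_PATH, "wb", compresslevel=9) as f:
            for _ in range(DECOMPRESSED_SIZE // len(chunk)):
                f.write(chunk)

    class BombHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Send the small compressed body; a client that honours
            # Content-Encoding: gzip inflates it to the full size.
            self.send_response(200)
            self.send_header("Content-Encoding", "gzip")
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(os.path.getsize(BOMB_PATH)))
            self.end_headers()
            with open(BOMB_PATH, "rb") as f:
                while chunk := f.read(64 * 1024):
                    self.wfile.write(chunk)

    if __name__ == "__main__":
        build_bomb()
        HTTPServer(("", 8080), BombHandler).serve_forever()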


There's no ethical ambiguity about serving garbage to malicious traffic.

They made the request. Respond accordingly.


Based on the example in the post, that thinking might need to be extended to "someone happening to be using a blocklisted IP." I don't serve up zip bombs, but I've blocklisted many abusive bots using VPN IPs over the years which have then impeded legitimate users of the same VPNs.


This is William Gibson's "black ICE" becoming real, and I love it.

https://williamgibson.fandom.com/wiki/ICE


This book was so far ahead of its time


WAF isn't the right choice for a lot of people: https://news.ycombinator.com/item?id=43793526


At least, not with the default rules. I read that discussion a few days ago and was surprised by how few comments pointed out that a WAF is just part of the infrastructure; it's the rules that people are actually complaining about. I think the problem is that so many apps run on AWS, and their default WAF rules have some silly content filtering. And their "security baseline" says that you have to use a WAF and include their default rules, so security teams lock down on those rules without any real thought about whether they make sense for any given scenario.


Truly one of my favorite thought-terminating proverbs.

"Hurting people is wrong, so you should not defend yourself when attacked."

"Imprisoning people is wrong, so we should not imprison thieves."

Also the modern telling of Robin Hood seems to be pretty generally celebrated.

Two wrongs may not make a right, but often enough a smaller wrong is the best recourse we have to avert a greater wrong.

The spirit of the proverb refers to wrongs that are unrelated to one another, especially when one is used to excuse the other.


> "Hurting people is wrong, so you should not defend yourself when attacked."

This is exactly what Californian educators told kids who were being bullied in the '90s.


> a smaller wrong is the best recourse we have to avert a greater wrong

The logic of terrorists and war criminals everywhere.


I admire your deontological zealotry. That said, I think there is an implied virtuous aspect of "internet vigilantism" that feels ignored (i.e., disabling a malicious bot means it does not visit other sites). While I do not absolve anyone from taking full responsibility for their actions, I have a suspicion that terrorists do a bit more than just avert a greater wrong--otherwise, please sign me up!


Defense and Offense are not the same.

Crime and Justice are not the same.

If you cannot figure that out, you ARE a major part of the problem.

Keep thinking until you figure it out for good.


And also how functioning governments work: https://en.m.wikipedia.org/wiki/Monopoly_on_violence

Do you really want to live in a society where everyone uses punishment to discourage bad behaviour in others? That is a game-theoretical disaster...


And sometimes one man's terrorist is another's freedom fighter.... (Not to defend terrorism, but it's just not that simple)



I did actually try zip bombs at first. They didn't work due to the architecture of how Amazon's scraper works. It just made the requests get retried.


Amazon's scraper has been sending multiple requests per second to my servers for 6+ weeks, and every request has been answered with a 429.

Amazon's scraper doesn't back off. Meta, Google, and most of the others with identifiable user agents back off; Amazon doesn't.


If it's easy, sleep 30 before returning 429. Or tcpdrop the connections and don't even send a response or a tcp reset.


That's a good way to self-DOS


That's why I said "if it's easy". On some server stacks it's no big deal to have a connection open for an extra 30 seconds; on others, you need to be done with requests ASAP, even abusive ones.

tcpdrop shouldn't self-DOS, though; it uses fewer resources. Even if the other end does a retry, it will do it after a timeout; in the meantime, the other end has socket state and you don't. That's a win.
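
For the sleep-then-429 variant, here's a minimal sketch assuming an asyncio stack (aiohttp, my choice, not something from the thread), so the 30-second delay is a cooperative sleep rather than a blocked worker thread; the user-agent list is a placeholder:

    import asyncio
    from aiohttp import web

    BAD_UA_SUBSTRINGS = ("Amazonbot",)  # placeholder list, not from the thread

    async def handle(request: web.Request) -> web.Response:
        ua = request.headers.get("User-Agent", "")
        if any(bad in ua for bad in BAD_UA_SUBSTRINGS):
            # Cooperative sleep: the event loop keeps serving other
            # connections while abusive requests are held open for 30 s.
            await asyncio.sleep(30)
            return web.Response(status=429, text="Too Many Requests\n")
        return web.Response(text="ok\n")

    app = web.Application()
    app.router.add_route("*", "/{tail:.*}", handle)

    if __name__ == "__main__":
        web.run_app(app, port=8080)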


So first, let me prefix this by saying I generally don't accept cookies from websites I don't explicitly first allow, my reasoning being "why am I granting disk read/write access to [mostly] shady actors to allow them to track me?"

(I don't think your blog qualifies as shady … but you're not in my allowlist, either.)

So if I visit https://anubis.techaro.lol/ (from the "Anubis" link), I get an infinite anime cat girl refresh loop — which honestly isn't the worst thing ever?

But if I go to https://xeiaso.net/blog/2025/anubis/ and click "To test Anubis, click here." … that one loads just fine.

Neither xeserv.us nor techaro.lol are in my allowlist. Curious that one seems to pass. IDK.

The blog post does have that lovely graph … but I suspect I'll loop around the "no cookie" loop in it, so the infinite cat girls are somewhat expected.

I was working on an extension that would store cookies very ephemerally for the more malicious instances of this, but I think its design would work here too. (In-RAM cookie jar, burns them after, say, 30s. Persisted long enough to load the page.)


You're seeing an experiment in progress. It seems to be working, but I have yet to get enough data to know if it's ultimately successful or not.


Just FYI, Temporary Containers (a Firefox extension) seems to be the solution you're looking for. It essentially generates a new container for every tab you open (subtabs can either get new containers or stay in the same container). Once the tab is closed, it destroys the container and deletes all browsing data (including cookies). You can still whitelist some domains to specific persistent containers.

I used cookie blockers for a long time, but always ended up having to whitelist some sites even though I didn't want their cookies because the site would misbehave without them. Now I just stopped worrying.


> Neither xeserv.us nor techaro.lol are in my allowlist. Curious that one seems to pass. IDK.

Is your browser passing a referrer?


Did you also try Transfer-Encoding: chunked and things like HTTP smuggling to serve different content to web browser instances than to scrapers?



