No, why would they? If I voluntarily request your website, you can't just reply with a virus that wipes my hard drive, even though I had the option not to send the request. I didn't know you were going to sabotage me before I made the request.
Because you requested it? There is no agreement on what to serve or how to serve it, other than standards (your browser expects a valid document on the other side, etc.).
I just assumed a court might say there is a difference between you requesting every guessable endpoint and finding the one endpoint that harms your computer (when there was _zero_ reason for you to access that page), and someone putting a zip bomb into index.html to intentionally harm everyone.
So serving a document that exploits a browser zero-day for RCE, under a URL that's discoverable by crawling (because another page links to it), with the intent to harm the client (by deleting local files, for example), would be legitimate because the client made a request? That's ridiculous.
That is not the case in this context. robots.txt is the only thing that specifies the document URL, and it does so in a "disallow" rule. The argument that they did not know the request would be met with hostility could be moot in that context (possibly because a "reasonable person" would have chosen not to request the disallowed document, but I'm not really familiar with when that language applies).
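To make that setup concrete, a minimal sketch of such a robots.txt (the /trap/ path is hypothetical, not from the original discussion): the only place the URL appears is the disallow rule itself, so the only clients that ever request it are crawlers that read robots.txt and deliberately ignore it.

    User-agent: *
    Disallow: /trap/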
> by deleting local files for example
This is a qualitatively different example than a zip bomb, as it is clearly destructive in a way that a zip bomb is not. It's true that a zip bomb could cause damage to a system, but that's not guaranteed, while deleting files is necessarily damaging. The worst outcomes of a zip bomb might result in damages worthy of a lawsuit, but the presumed intent (and ostensible result) of a zip bomb is to effectively cause the recipient machine to involuntarily shut down, which a court may or may not see as legitimate given the surrounding context.
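For context on the mechanism: a zip bomb harms by exhausting resources on decompression, not by touching the client's data. A minimal sketch in Python (sizes are illustrative, not from the original discussion) shows the asymmetry a zip bomb exploits: a payload of roughly a megabyte on the wire inflates to a gigabyte on the client.

    import gzip
    import io

    # Build a gzip stream of highly redundant data. The compressed
    # output stays tiny while the decompressed size grows huge -- a
    # naive client that inflates it exhausts memory or disk.
    buf = io.BytesIO()
    chunk = b"\0" * (1024 * 1024)      # 1 MiB of zeros per write
    with gzip.GzipFile(fileobj=buf, mode="wb") as f:
        for _ in range(1024):          # 1 GiB of raw data in total
            f.write(chunk)

    print(f"compressed:   {buf.tell():>13,} bytes")        # ~1 MB
    print(f"decompressed: {1024 * 1024 * 1024:>13,} bytes")

Note that nothing in the payload itself deletes or modifies anything; whatever harm occurs comes from the client allocating resources it can't afford, which is exactly the distinction drawn above.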