That's gamed, though. It's supposed to mean criminal intent, but prosecutors routinely bend it into intent to perform any action during which a crime may have happened. (I did not intend to trespass on your land, though of course I did intend to go for the walk in the first place...)
I can intentionally call your documents department and "guess a URL" without breaking the law, but if I do the same thing on your webserver, that same legal intent is turned around and used as proof of criminality. That leaves me guessing: "well, there's a document there in the public folder, but maybe they don't want me to see it."
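To be concrete about the mechanics, "guessing a URL" is nothing more than issuing an ordinary request and seeing how the server answers. A minimal sketch in Python (the host and path here are made up for illustration):

    import requests

    # Hypothetical URL; the path is a guess, the request is ordinary.
    resp = requests.get("https://example.com/public/q3-report.pdf")
    if resp.status_code == 200:
        print("the server chose to serve the document")
    else:
        print("the server refused:", resp.status_code)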
Of course, this is all about malicious and inconsistent prosecution. For you and me it's a theoretical game, but Weev could go back to jail for relying on convention to guess that index.html is public.
We need an actual technological bar; otherwise our security industry becomes an ass-covering exercise.
Yes, I do see some need for a technical bar. That's why I've said before that for 'unauthorized access' to a computer system you (should) need to knowingly access a protected system in a way not permitted by the rights the system grants you, or by deliberate deception of either the computer system or its people.
So there needs to be an actual lock on the door that you've done something to bypass, whether that means manipulating the computers or lying to someone (or something) to gain access.
No similar provision exists for unauthorized access to property. Opening an unlocked window (or door, or curtain) and climbing through it is an offense with a name: breaking and entering.
Yes, but a computer has programming to enforce the rules, and it can (and should) reasonably be expected to enforce those intentions accurately.
Another way of putting this is that they are using the computer to express their intention to authorize (or not authorize) access by means of the programming. It should, as a matter of good public policy, be on them to get this right. That's why my standard would require material deception--that is, but for the intentional deception, access would not have been granted.
Anything less simply sets too low a bar and excuses incompetence. This is bad public policy because it allows people to stumble into felonies while excusing all kinds of negligence on the part of those people who were supposed to be protecting things. Otherwise we have an "I know it when I see it" standard for which parts of a site are okay to interact with and reasonable minds can (and frequently do) disagree over the particulars. My standard would move this rule to determining statements of fact--did they know or should they have known they were deceiving this computer system/person in order to gain access? It also deliberately prevents people from shifting the blame for negligently configuring access to their computers.
I think we both know the widespread public harm caused by networks of hacked computers, and we both know that, unlike the real world, essentially every computer's locks and windows are tested many times a day. Leaving things open is clearly negligent in my view, and I've seen far too many of my clients leave vulnerabilities open longer than is justifiable, contrary to my advice. I mean, I still see PCI audits reporting POODLE, which is just sad.
Now, inasmuch as you're telling me that the law doesn't and isn't likely to see things my way, sadly, I have to agree with you there.
People can be civilly liable for negligence in their own security without changing the fact that other people are criminally liable for exploiting that negligence. I see this argument in virtually every thread about computer security and the law, and it never makes any sense to me.
But that's the thing: there's no clear definition of 'exploiting' in the law right now, just a fuzzy mess that judges are supposed to sort out on some ad hoc basis. If you want to go back to physical property law, it'd be more like 'trespass to chattels' anyhow, which is not a good basis for deciding these things.
The fact that users are supposed to just guess about what access sites have or have not authorized, with felony charges for anyone who gets it wrong, does not make sense to me when they have the means to express their rules for authorization in code.
That's why I say you should have to intentionally deceive those rules (or their people) to get in.
I don't understand the logic here at all. I can't make sense of it. I can be civilly, or even criminally, liable for negligently protecting property that other people rely on. But the person who abuses my negligence is also fully liable. Liability simply isn't zero sum.
We interact with computers in a very different way from how we interact with real world property. There are no clear property lines and no clear boundaries. Even when liability isn't zero sum, I think we've both seen companies blame the hackers fully and use that as a fig leaf for their own negligence. I really haven't seen companies punished beyond a few cost of doing business fines.
The idea that we should be deliberately vague about where the boundaries are and let people stumble into felonies doesn't make any sense to me. The idea that we should let someone write that into a thousand page ToS also doesn't sit well with me. I'd rather it be a question of fact.
Take the case about modifying the URL. It's normal to be able to type any URL I like. Why should it be a felony if I try other IDs? I'm simply making a request; it's up to them to decide what access I should or should not have. Otherwise even Googlebot may end up a felon. The fact is that much of the web is and always has been open by default. Anonymous FTP is normal... if you want authentication, you should configure that. The idea that someone could be a felon because they were somehow supposed to know that your misconfigured FTP server wasn't supposed to let them in is simply unreasonable, and it only works out because prosecutions are rare.
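To spell out how ordinary both of those actions are, here's a minimal sketch (the hosts and record IDs are hypothetical): trying another ID is just issuing a different request, and anonymous FTP is an explicit, standardized grant of access.

    import requests
    from ftplib import FTP

    # "Trying other IDs" is just making different requests; the
    # server decides what each one returns.
    for record_id in (100, 101, 102):
        resp = requests.get(f"https://example.com/statements?id={record_id}")
        print(record_id, resp.status_code)

    # Anonymous FTP: logging in with no credentials is the documented
    # convention, not a trick.
    ftp = FTP("ftp.example.com")
    ftp.login()            # defaults to the anonymous user
    ftp.retrlines("LIST")  # the server chooses what to list
    ftp.quit()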
That's why I want a proper boundary. If you're not deliberately hacking or social-engineering someone, and the only thing you do is report the bug you found to the responsible parties (the site owner/operator or government), I'm not willing to charge someone with a felony for modifying a URL, logging into anonymous FTP, or anything else like that.
You're saying the same thing you said earlier. This doesn't clarify anything for me.
If you want to advocate for liability for software security negligence, I won't argue--at least, not on a moral basis (I think "be careful what you ask for" but whatever).
But I do not see what any of this has to do with liability for intruders.
It's more a social than a legal problem on that front. If they can say "X was prosecuted for hacking us" it doesn't make them look as incompetent to the public at large as if they have to admit that a random person on the internet could see obvious flaws in their setup. You can see it as another way of encouraging responsible reporting, not unlike one of the goals of bug bounties.
I think people would start seeing more social costs for running businesses negligently if they couldn't point to iffy hacking prosecutions to justify themselves.
There's no liability because they aren't intruders, they're requesters.
If all it takes to get something is to request it, without any further interaction and thus without any fraud, then it's public.
What's the difference between me calling a phone number and asking for a company's financials before they're publicly released, and checking the probable URL for them?
I mean that you keep presuming we're discussing intruders, which implies guilt; viewed in the more realistic context, as requesters, your comments about liability aren't relevant.
Nobody is at fault for simply asking for a document in the real world, and nobody should be at fault for it online.
It's a little discourteous to jump into the middle of someone else's discussion and attempt to alter the premise. Reply somewhere else if you'd like to have a different discussion. Thanks!
Hah, HN police. It's my thread, look up. But I magnanimously grant you the right to post in it. You're welcome!
You're the one trying to move the goalposts and alter premises. You reframe everything in a violent physical metaphor even though you've been around long enough to know that physical property is the least useful thing to compare information to. I can't copy a house by looking at it, so it's a crappy analogy for a website.
There's a world of difference between asking for a document and opening a door and taking it.
Breaking and entering implies there is no authorization, but here we have the property manager saying yes.
Also, breaking and entering is generally a misdemeanor rather than a felony.
And on top of that, there is no entering. People love to paint bad abstractions on top of computers, but fundamentally all there ever can be is you asking the remote computer to do something and it deciding whether to do it or not. That doesn't look anything like B&E or trespass. The best meatspace analogy is social engineering. But there is no generic law against manipulating people and the laws against specific types of manipulation are highly context-dependent because so is the harm.
Where are you getting this "misdemeanor" thing from?
Burglary is a felony, and it's defined in most places simply by entering a building or vehicle without permission and with illicit intent (that intent need not be "to steal stuff").
Breaking and entering is generally listed as a misdemeanor. Misdemeanors result in jail sentences of less than one year. However, breaking and entering is often associated with the felony crime of burglary. Burglary is usually defined as "breaking and entering with the intent to commit a felony while on the premises".
The "intent to commit a felony" that separates burglary from B&E is the sort of thing that justifies the felony charges. I don't see that requirement in the CFAA.
The CFAA doesn't have "intent to commit a felony"; it has "committed in furtherance of any criminal or tortious act in violation of the Constitution or laws of the United States or of any State," which is much broader. It allows felony penalties without felonious intent.
And as you point out, even that limitation is basically swallowed by the fact that the Wire Fraud statute covers very similar conduct to the CFAA, so they can charge you with computer fraud in furtherance of wire fraud.
The physical building metaphors are yours, and you keep falling back to them. The whole point of a standards-compliant webserver is to speak to standards-compliant web clients. When it runs without a password, on a public IP, at a standard port, its very existence implies you're allowed to speak to it.
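At the protocol level, "speaking to it" is literally just that. Here's a sketch of the entire interaction (host and path are hypothetical): the client sends a few lines of text, and the server answers with whatever status it decides on (200, 403, 404, ...).

    import socket

    # One complete HTTP/1.1 exchange, nothing more: ask, then read
    # whatever the server decided to answer.
    s = socket.create_connection(("example.com", 80))
    s.sendall(b"GET /some-guessed-path HTTP/1.1\r\n"
              b"Host: example.com\r\n"
              b"Connection: close\r\n\r\n")
    print(s.recv(4096).decode(errors="replace"))
    s.close()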
Asking a human for a document isn't a crime, so it shouldn't be when you ask a computer.
Fraudulently claiming to be someone else to get a document is a crime, and that's how it should be on a computer.
This is another super popular argument about computer security, and it falls apart almost instantly under scrutiny. By your logic, I can dump a SQL database full of credit card numbers due to an SQLI in a GET handler, because, after all, the software components are just doing what they're meant to do!
No, at least under my standard, that would be material deception. You said that was your name/address/whatever, but it's actually an SQL command.
I suppose you'll ask what if someone's name really is Bobby Tables, but then I submit that either it can't be tailored to your database, or their intent when changing their name to exploit you counts as social engineering, and I don't consider it worth optimizing for.
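For clarity, a sketch of where I'd draw the line (the table and input are made up). The injected string is submitted "as a name" but is crafted to execute as SQL--that's the material deception--whereas a parameterized query treats whatever is submitted strictly as data, leaving nothing to deceive.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")

    # Claims to be a name; is actually a command.
    name = "Robert'); DROP TABLE users;--"

    # Deceivable: splicing the input into the statement lets the
    # "name" be interpreted as SQL. (Left commented out on purpose.)
    # conn.executescript("INSERT INTO users (name) VALUES ('%s')" % name)

    # Not deceivable: the parameter is data, full stop.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))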
How is that any different from "I can reasonably infer that my customer ID is 101, but I told you it was 102 in a URL to see the account information for a different customer"?
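To make the comparison concrete, consider two hypothetical handlers (the IDs and data are made up). In the first, the server's own rules grant whatever ID is requested; in the second, the server expresses a rule and enforces it, and getting past it would take deception:

    ACCOUNTS = {101: "alice's statements", 102: "bob's statements"}

    def handler_no_check(requested_id):
        # Serves any ID asked for: the server authorizes the request.
        return ACCOUNTS.get(requested_id, "404 Not Found")

    def handler_with_check(session_user_id, requested_id):
        # Enforces ownership: asking for 102 as user 101 is refused.
        if requested_id != session_user_id:
            return "403 Forbidden"
        return ACCOUNTS.get(requested_id, "404 Not Found")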
These issues are less fraught than people seem to think they are. We walk through commercial spaces all the time that are full of unlocked, unmarked doors, and it rarely takes us more than a moment to realize when we've gone somewhere we weren't meant to be. It's not a new problem.
The difference is the ultimate usefulness and workability of the system.
Let's say I spray-paint people's medical records on the sidewalks in the park. It's not your responsibility to avoid the park despite anything you may see. A public area by definition carries no expectation that you have to wonder whether you were meant to be there.
Similarly, if I leave sensitive information all over some publicly accessible URLs, it shouldn't be your obligation to avoid those URLs, even gratuitously. Just as it's not your obligation to plug your ears when your neighbors fight, etc. You are still obliged not to break the law even if supplied with the means. But that means the act of blackmailing is illegal, not the listening.
The law as currently applied requires reading the mind of the viewer/listener/requester. That would punish you for picking up my keys from the sidewalk just because you don't play well to the jury.
tl;dr: The proper law would penalize only actual crimes, not theoretical ones.
Yeah, but we don't normally hammer people who open the wrong unmarked door by mistake in a long hall full of unmarked doors. It's not hard to typo a URL. It's not unreasonable to connect to anonymous FTP--they choose to authorize you, and you have no good reason to second-guess them.
Now, yes, when someone does that 100 times with a script, you can argue they're doing something bad... which is why I also advocate a safe harbor for people who report the bug only to the site owners or the government. If someone is making a reasonable effort to help you fix your bad security, we shouldn't be treating them like a bad guy.
No, we hammer them for what they do with those URLs; for instance: long IRC conversations about how they're going to sell the identities of everyone who bought an iPad to spammers, or suchlike.
Assuming that was just a joke, that they didn't actually do it, and that they did report it to the site owners or to a government agency that could reasonably be assumed to oversee it, I wouldn't hammer them.
If they actually do that or take steps that would make a reasonable person conclude that they actually were on their way to do that before they were caught? Then yes, hammer away.
I'm more concerned with there being a brighter line for what is and isn't unauthorized access, and a safe harbor so people can't hammer anyone who simply makes them look bad by pointing out that their fly is open.
If you can fit the facts of some old case to show that they were guilty in a principled way, I won't argue. I may or may not agree with the specific reading of the facts, but at least I would find that to be a principled disagreement.
My point in this exercise is to shift things from a debate about how people feel about some particular intrusion or a person's motives--things not likely to reach broad consensus--to a debate about the specific facts of the case, which reasonable people should be able to agree upon given enough information, no matter how they feel about the parties in any particular case.
I specifically require material deception as an overt act that establishes some level of mens rea for that very reason: my construction is meant to frame discussion of motive around overt acts, rather than attempts at mind reading, which are far more susceptible to personal biases of every kind. Having elements of the crime that demonstrate criminal intent is not, in fact, a construction unknown to law in general. A crime like shoplifting might require elements like both concealing the merchandise and attempting to exit the store. The act of concealment gives information about motive in a way that's subject to fact-based inquiry, rather than requiring anyone to read the defendant's mind. And people can come to a fact-based conclusion about this: the jurors have to decide whether this person actually concealed and removed the merchandise, rather than deciding that this kid looks like a good guy, or that kid looks like a thief, and inferring a motive from that.
So I think you'll find that the better-constructed laws require overt acts that inform us about motive in many (but not all) cases, rather than mere inferences about motive. Though I certainly agree that there are reasonable facts which one can use to infer mens rea from. Good intent can also be inferred--that's why I specified a safe harbor for people who were trying to simply do the right thing and report a bug they'd found. Someone who reports a problem to the site owners in good faith is simply not someone I think we should be hitting with criminal penalties in general, though this might be weighed against other things like attempts to ransom the bug or sending DROP DATABASE injections and whatnot which could wipe out any inference of good intent.
That said, I've always said this was my view of how the law ought to work. Implicit in that is that the law does not, in fact, work that way, so I honestly can't disagree with you here--the law certainly doesn't work my way, and this is all my own thinking on how it ought to work in my own view. So inasmuch as you're saying the law doesn't work this way, I completely agree.
Scraping sites for email addresses isn't illegal, or we'd arrest Google. It's the spam that's actionable. So intent to scrape email addresses to give to spammers doesn't constitute mens rea.
There has to be 1) believable intent to 2) commit an actual crime.
Your reasoning is circular without that: "he had bad intent, so whatever he was doing was illegal, which is how we can say his intent was bad rather than simply unsavory," etc.
There's no valid precedent for simply requesting a document being illegal, nor ultimately any security benefit from it being so.
I think you have that backwards. Trespassing is your actus reus. What's at issue is intent, or mens rea. You're looking for firmer indications of mens rea than, apparently, you've seen in CFAA cases to date.
I'm saying that "trespassing" is too broad an actus reus to establish mens rea because we interact with websites in a very different way from real property. I've proposed an alternate set of rules (material deception + a safe harbor) that I feel better capture actual criminal intent.