"We're writing research on the security systems involved around the Linux kernel, would it be acceptable to submit a set of patches to be reviewed for security concerns just as if it was a regular patch to the Linux kernel?"
This is what you do as a grown-up, and the other side is expected to honor your request and do the same thing they do for any other commit. The problem is that people think of pen testing as an adversarial relationship where one side needs to win over the other.
That's not really testing the process, because now you have introduced bias. Once you know there's a bug in there, you can't just act as if you didn't know.
I guess you could receive "authorization" from a confidante who then delegates the work to unwitting reviewers, but then you could make the same "ethical" argument.
Again, from a hacker ethos perspective, none of this was unethical. From a "research ethics committee" perspective, maybe it was unethical, but that's not the standard I want applied to the Linux kernel.
> from a hacker ethos perspective, none of this was unethical.
It totally is if your goal as a hacker is generating a better outcome for security. Read the paper and see what they actually did: they just jerked themselves off over how they were better than the open source community, and generated a sum total of zero helpful recommendations.
So they subverted a process, introduced a use-after-free vulnerability, and didn't do jack shit to improve it.
> It totally is if your goal as a hacker is generating a better outcome for security. Read the paper and see what they actually did: they just jerked themselves off over how they were better than the open source community, and generated a sum total of zero helpful recommendations.
The beauty of it is that by "jerking themselves off", they are generating a better outcome for security. In spirit, this reaction of the kernel team is not that different from Microsoft trying to put asshole hacker kids behind bars for exposing them. When Microsoft realized that this didn't magically make Windows more secure, they fixed the actual problems. Windows security was a joke in the early 2000s; now it's arguably better than Linux. Why? Because those asshole hacker kids actually changed the process.
> So they subverted a process, introduced a use-after-free vulnerability, and didn't do jack shit to improve it.
The value added here is to show that the process could be subverted, the lessons are to be learned by someone else.
It can also be subverted by abducting the entire development team and replacing them with impostors. What's your point? That process security is hopeless and we should all just go home?
> What's your point? That process security is hopeless and we should all just go home?
That there's an ethical way of testing processes, which includes asking for permission and using proven methods: send N items, where X are compromised and Y are not, then count the rejections. If K_X of the compromised items and K_Y of the clean items get rejected, you compare the rejection rate K_X/X against K_Y/Y.
By breaking the ethical component, the entire scientific method of this paper is broken... now I have to go check the kernel pull requests list to see if they sent 300 pull requests and got one accepted or if it was a 1:1 ratio.
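To put rough numbers on what that bookkeeping would look like, here's a quick sketch; the counts and the function are entirely made up, not anything from the paper:

    # Hypothetical authorized study: compare how often planted-flaw patches
    # get rejected versus how often clean patches get rejected.
    def rejection_rates(flawed_sent, flawed_rejected, clean_sent, clean_rejected):
        detection_rate = flawed_rejected / flawed_sent       # K_X / X
        false_positive_rate = clean_rejected / clean_sent    # K_Y / Y
        return detection_rate, false_positive_rate

    # Made-up numbers: 20 flawed patches, 14 caught; 30 clean patches, 3 rejected.
    tpr, fpr = rejection_rates(20, 14, 30, 3)
    print(f"detection {tpr:.0%}, false positives {fpr:.0%}")  # detection 70%, false positives 10%

Without both numbers the result is meaningless: a maintainer who rejects every patch on sight would score 100% on detection.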
> That there's an ethical way of testing processes, which includes asking for permission and using proven methods: send N items, where X are compromised and Y are not, then count the rejections. If K_X of the compromised items and K_Y of the clean items get rejected, you compare the rejection rate K_X/X against K_Y/Y.
Again, that's not the same test. You are introducing bias. You are not observing the same thing. Maybe you think that observation is of equal value, but I don't.
> By breaking the ethical component, the entire scientific method of this paper is broken...
Not at all. The scientific method is amoral. The absolute highest quality of data could only be obtained by performing experiments that would make Josef Mengele faint.
There's always an ethical balance to be struck. For example, it's not ethical to perform experiments on rats to develop insights that are of no benefit to those rats or the broader rat population. If we applied our human ethical standards to animals, we could barely figure anything out. So what do we do? We accept the trade-off. Ethical concerns are not the be-all and end-all.
In this case, I'm more than happy to have the kernel developers be the lab rats. I think the trade-off is worth it. Feel free to disagree, but I consider the ethical argument to be nothing but hot air.
This is the sort of situation where the best you could do is likely to be slightly misleading about the purpose of the experiment. So you'd lead off with "we're interested in conducting a study on the effectiveness of the Linux code review processes", and then use patches that have a mix of no issues, issues only with the Linux coding style (things go in the wrong place, etc.), only security issues, and both.
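If the maintainers ever signed off on something like that, the assignment step itself is easy to script; a rough sketch, with category names and counts that are purely my own invention:

    import random

    # Hypothetical balanced assignment of study patches to the four conditions
    # described above; nothing here comes from an actual study protocol.
    CATEGORIES = ["clean", "style_only", "security_only", "style_and_security"]

    def assign_balanced(patch_ids, seed=0):
        ids = list(patch_ids)
        random.Random(seed).shuffle(ids)
        return {pid: CATEGORIES[i % len(CATEGORIES)] for i, pid in enumerate(ids)}

    assignments = assign_balanced(f"patch-{i}" for i in range(12))

The balanced split matters because you need enough clean patches to estimate the false-positive rate, not just the detection rate.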
But at the end of the day, sometimes there's just no way to ethically do the experiment you want to do, and the right solution is to live with being unable to do certain experiments.
To play Devil's Advocate, I suspect that this would yield different results, because people behave differently when they know that there is something going on.
That's the thing, you just told the person to review the request for security... in a true double-blind, you submit 10 PRs and see how many get rejected or approved.
If all 10 are rejected but only one had a security concern, then the process is faulty in another way.
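Purely hypothetical numbers to illustrate that failure mode:

    # All 10 submissions rejected, but only 1 actually had a planted flaw.
    flawed_rejected, flawed_sent = 1, 1
    clean_rejected, clean_sent = 9, 9
    print(flawed_rejected / flawed_sent)  # 1.0: the bad patch was "caught"
    print(clean_rejected / clean_sent)    # 1.0: but so was everything else

A reviewer who rejects everything looks perfect on the planted patch while telling you nothing about whether the process can actually tell good patches from bad ones.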
Edit: There is this theory that penetration testing is adversarial, but in the real world people want the best outcome for all. The kernel maintainers are professionals, so I would expect the same level of care for a "special PR" as for a "normal PR".
In a corporate setting, the solution would presumably be to get permission from further up the chain of command than the individuals being experimented upon. I think that would resolve the ethical problem, as no individual or organisation/project is then being harmed, although there is still an element of deception.
I don't know enough about the kernel's process to comment on whether the same approach could be taken there.
Alternatively, if the time window is broad enough, perhaps you could be almost totally open with everyone, withholding only the identity of the submitter. For a sufficiently wide time window, "be on your toes for malicious or buggy commits" doesn't change the behaviour of the reviewers, as that's part of their role anyway.
There are ways to reach the Kernel Security team that don't notify all the reviewers. It is up to the Kernel team to decide if they want to authorize such a test, and what kind of testing is permissible.
What's the harm exactly? Greg becomes upset? Is there evidence that any intentional exploits made it into the kernel? The process worked, as far as I can see.
What's the benefit? You raise trust in the process behind one of the most critical pieces of software.
Let's take a peek at how the people whose time is being wasted feel about it:
> This is not ok, it is wasting our time, and we will have to report this, AGAIN, to your university...
> if you have a list of these that are already in the stable trees, that would be great to have revert patches, it would save me the extra effort this mess is causing us to have to do...
> Academic research should NOT waste the time of a community.
> The huge advantage of being "community" is that we don't need to do all the above and waste our time to fill some bureaucratic forms with unclear timelines and results.
Seems they don't think it is a good use of their time, no. But I'm sure you know a lot more about kernel development and open source maintenance than they do, right?
I didn't intend to convey that the answer to my question is "no". That's the whole problem with tests: most of the time they're drudge work, and they do feel like a waste of time when they never signal anything. That doesn't mean they are a waste of time.
Similarly, if a research paper shows that its hypothesis is false, the author might feel that it was a waste of time having worked on it, which can lead to publication bias.