The point is to make it very obviously not worth it to conduct this kind of unethical research. I don't think UMN is going to be eager to have this kind of attention again. People could always submit bogus patches from random email addresses - this removes the ability to do it under the auspices of a university.
> this removes the ability to do it under the auspices of a university
It really doesn't, though. You can submit from a random address and simply claim ownership of it in the published manuscript. For that matter, you could even publish the academic article under a pen name if you wanted to. But after seeing how the maintainers responded here, you'd better make sure that any "real" contributions you make aren't associated with the activity in any way.
I think you're getting heavily downvoted with your comments on this submission because you seem to be missing a critical sociological dimension: assumed trust. If you submit a patch from a real-name email address, you get an extra measure of human trust, and likewise an extra measure of human repercussions if your actions are deemed malicious.
You're criticizing the process, but the truth is that without a real-name email address and an actual human being's "social credit" to be burned, there's no proof these researchers would have achieved the same findings. The more interesting question to me is whether they would have achieved the same results had they used anonymous emails. If so, there might be some substance to your contrarian view that the process itself is flawed. But as it stands, I'm not sure that's the case.
Why? Well, look at what happened. The maintainers found out and issued a blanket ban. It's going to be a little hard to reproduce that research now, isn't it? Arbitraging societal trust for research doesn't just bring ethical challenges but /practical/ ones involving US law and the standards for academic research.
Additionally, some universities use a subdomain for student addresses, only making top-level email addresses available to staff and a small selection of PhD students who need them for their research.
Again, we're entering the territory of fraud and cybercrime, white-collar or not. There's nothing wrong with early detection and prevention against that. But as it pertains to malicious actors inside the country, the high risk of getting caught, prosecuted, and earning a semi-permanent black mark on your record that would come up in any subsequent reference check (and likely blackball you from further employment) is a deterrent. Which is exactly what these researchers are finding out the hard way.
Anonymity is de facto, not de jure. In many collaboration networks it's also a privilege, not a right. If abused, it will simply be removed.
Given what the Linux kernel runs these days, that would probably be advisable. (I'm a strong proponent of anonymity, but I also have a preference that my devices not be actively sabotaged.)
> we're entering the territory of fraud and cybercrime
So what? The fact that it's illegal doesn't nullify the threat. For that matter, it's not even a crime if a state agency is the perpetrator. These researchers drew attention to a huge (IMO) security issue. They should be thanked and the attack vector carefully examined.
I think you're focusing too much on the literal specifics of the "attack vector" and not enough on the surrounding context, or the real-world utility/threat. You're not accurately putting yourself in the shoes of someone who would be using it and asking whether it has a sufficient cost-benefit ratio to merit being used. Isn't that what you mean by "carefully examined"?
If you want to talk about a state-level actor, I hate to break it to you, but they have significantly more powerful and stealthier 0-days that are a lot easier to use than a tactic like this. And guess what the last thing you want to do when committing cybercrime is? Do it in public, where there's an immutable record that can be traced back to you, and cause a giant public hubbub. So I can't imagine how someone could think there's anything noteworthy about this unless they were unaware of that.
That's part of the unintentional humor and irony of this situation: all the researchers accomplished was proving that they were not just unethical but incompetent.
Upon further reflection, I think what I wrote regarding anonymity specifically was in error. I don't think removing it would serve much (if any) practical purpose from a security standpoint.
However, I don't agree that what happened was abuse or that it should be deterred. Responding in a hostile manner to an isolated demonstration of a vulnerability isn't constructive. People rightfully get angry when large companies try to bully security researchers.
You question whether this vulnerability is worth worrying about given the logistics of exploiting it in practice. Regardless of whether it's worth the effort to exploit, I'd still rather it wasn't there (that goes for all vulnerabilities).
I think it would be much easier to exploit than you're letting on, though. Modern attacks frequently chain many subtle bugs together. Being able to land a single, seemingly inconsequential bug in a key location could enable an otherwise impossible approach.
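To make that concrete, here's a minimal hypothetical sketch in plain C. The names (session_send/session_close) and the scenario are invented for illustration and are not taken from the actual UMN patches; it just shows the kind of "added error handling" change that looks harmless in review but leaves a dangling pointer behind:

    #include <stdlib.h>
    #include <string.h>

    struct session {
        char *buf;              /* scratch buffer, freed in session_close() */
    };

    int session_send(struct session *s, const char *msg)
    {
        if (!s->buf) {
            s->buf = malloc(64);
            if (!s->buf)
                return -1;
        }

        if (strlen(msg) >= 64) {
            free(s->buf);       /* the "fix": free on the error path...      */
            return -1;          /* ...but s->buf is not set to NULL, so a    */
        }                       /* retried session_send() writes through a   */
                                /* dangling pointer, and session_close()     */
                                /* frees the buffer a second time.           */
        strcpy(s->buf, msg);
        return 0;
    }

    void session_close(struct session *s)
    {
        free(s->buf);           /* double free if the error path above ran */
        s->buf = NULL;
    }

On its own this is just a latent memory-safety bug; chained with an unrelated primitive elsewhere, it's exactly the kind of foothold described above.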
It seems unlikely to me that the immutable record you mention would ever lead back to a competent entity that didn't want to be found. There's no need for anything more than an ephemeral identity that successfully lands one or two patches and then disappears. The patches also seem unlikely to draw suspicion in the first place, even after the exploit itself becomes known.
In fact it occurs to me that a skilled and amoral developer could likely land multiple patches with strategic (and very subtle) bugs from different identities. These could then be "discovered" and sold on the black market. I see no convincing reason to discount the possibility of this already being a common occurrence.
The only sensible response I can think of is a focus on static analysis coupled with CTF challenges to beat those analysis methods.
Indeed, the situation is bad and not much can be done. At the very least, as long as unintentional vulnerabilities can get in, the maintainers are defenseless against intentional ones, and fixing only the former is already a very big deal.
I saw at least one developer lamenting that they might have to bring up mechanisms for treating every committer as malicious by default rather than trusted by default at the next kernel summit, so it's quite possible that's going to take place.
> lamenting that they might have to bring up mechanisms for treating every committer as malicious by default
I think "lamenting" is very much the wrong attitude here. Given all the things that make use of Linux today that seems like the only sane approach to me.
Well, it seems unlikely that any other universities will fund or support copycat studies. And I don't mean that in the top-down institutional sense; I mean it in the self-selecting sense: students will not see messing with the Linux kernel as a viable research opportunity and will not do it. That doesn't seem to be 'feel-good without any actual benefit to the kernel's security'. It sounds like it could function as an effective deterrent.