the project is 30-odd years old now and is no longer a hobby; it is critical infrastructure that powers virtually everything.
it's unfortunate that something happened in the project that cost the maintainers a lot of hours, but that comes with the territory of working on important software, i'd argue.
i don't want an espionagetastic, fundamentally untrustable and inescapable computing hellscape, with airplanes falling out of the sky, cars crashing, or appliances catching fire, just because it "already was the common view on the issue."
if the paper raises awareness of the issue, it's a good thing for society, it seems. if money materializes to do background checks on kernel contributors, that seems a good thing, no? if resources materialize for additional scrutiny, that seems a good thing, no?
if anything, the "common view" / status quo seems terribly broken, as demonstrated by their research. while what they've done is unpopular, it seems to me that ultimately the project, and the society large chunks of which it now powers, may be better off for it in the long run...
So you are saying that because a non-controversial method to show the same issue wouldn't cause the publicity connected purely to their way of operating, it was right to ignore the concerns? Lots of slippery slopes there, and science has spent a long time getting away from such a "the end justifies the means" attitude.
> Lots of slippery slopes there, and science has spent a long time getting away from such a "the end justifies the means" attitude.
if the developers were working for a private company, there could be a similar loss of time for a similar exercise approved by company leadership, no? if tim cook ordered an audit and didn't tell anyone, wouldn't there be developers at apple feeling the same way?
look, i get it, it's unfortunate and people feel like it's a black eye... but it's also a real issue that needs real attention. moreover, linux is no longer a hobby; it is critical infrastructure that powers large chunks of society.
> So you are saying that because a non-controversial method to show the same issue wouldn't cause the publicity connected purely to their way of operating, it was right to ignore the concerns?
what's the non-controversial alternative? alerting the organization before they do it? that doesn't work. that's why scientists do blinding and double blinding.
if you mean something else, then i'm missing parts of this (really complicated) debate.
> If you wanted to know if the kernel review process is able to reliably catch malicious attempts you literally could have just asked the kernel maintainers and they'd told you that no, review can't go that deep. Or looked at non-malicious bugs and observed that no, review does not catch all bugs with security implications.
> You very likely would have been able to get code past them even if you told them that there are attempts coming and got their approval.
If you want to know that the process isn't catching all attacks, that should be all you need. For the second case, getting patches past a warned maintainer is harder and would be even better evidence of the problems with the process, without any of the concerns. There is a wide range of options for learning about code review, and what they did sits at one of the extreme ends - just to find "yes, what everyone has been saying is true". And then they didn't put in the work to make amends after it became clear it wasn't appreciated, so now this other student got caught up in his advisor's mess (assuming the representation of him actually testing a new analyzer, rather than being part of a repeated attempt to introduce bugs, is true - the way he went about it also wasn't good, but it was way less bad).
But you don't get splashy outrage that way, and thus have less success at "raising awareness" with people that didn't care before, which is what your comment seemed to argue for.
> But you don't get splashy outrage that way, and thus have less success at "raising awareness" with people that didn't care before, which is what your comment seemed to argue for.
the reason for doing it is basic quality science: it's proper blinding.
the result is raising awareness, which, if it leads to more scrutiny and a better, more secure linux kernel, seems to be a good thing... in the long run.
i mean, i get it. a lot of this security stuff feels a lot like gotcha qa, with people looking to make names for themselves at the expense of others. and yeah, in science, making a name for yourself is the primary currency...
but honestly, they ran their experiment and it worked: it uncovered an actual, not theoretical, vulnerability in an irrefutable way, in a piece of computing infrastructure that powers massive chunks of society.
papers like this one have a lot of potential for raising funds. this is the sort of thing that can be taken to governments, private foundations, and public corporations to ask for quite a lot of money to help with the situation.