
> Not a big loss: these professors likely hate open source.

> They are conducting research to demonstrate that it is easy to introduce bugs in open source...

That's a very dangerous thought pattern. "They try to find flaws in a thing I find precious, therefore they must hate that thing." No, they may just as well be trying to identify flaws to make them visible and therefore easier to fix. Sunlight being the best disinfectant, and all that.

(Conversely, people trying to destroy open source would not publicly identify themselves as researchers and reveal what they're doing.)

> whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards

How do we know that? We know things by regularly testing them. That's literally what this research is - checking how likely it is that intentional vulnerabilities are caught during the review process.



Ascribing a salutary motive to sabotage is just as dangerous as assuming a pernicious motive. Suggesting that people "would" likely follow one course of action or another is also dangerous: it is the oldest form of sophistry, the eikos argument of Corax and Tisias. After all, if publishing research rules out pernicious motives, academia suddenly becomes the best possible cover for espionage and state-sanctioned sabotage designed to undermine security.

The important thing is not to hunt for motives but to identify and quarantine the saboteurs to prevent further sabotage. Complaining to the University's research ethics board might help, because, regardless of intent, sabotage is still sabotage, and that is unethical.


The difference between:

"Dear GK-H: I would like to have my students test the security of the kernel development process. Here is my first stab at a protocol, can we work on this?"

and

"We're going to see if we can introduce bugs into the Linux kernel, and probably tell them afterwards"

is the difference between white-hat and black-hat.


It should probably be a private email to Linus Torvalds (or someone in his near chain of patch acceptance); that way, some easy-to-scan-for key could be introduced in all patches. Then the top levels could see what actually made it through review and, in turn, figure out who isn't reviewing as well as they should.
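As a rough sketch of that "scannable key" idea (everything here is hypothetical, not an actual kernel workflow): assume the researchers and the top-level maintainers privately agree on a marker string embedded in each experimental patch's commit message, which can later be grepped out of the tree. In practice the marker would have to be subtle enough not to tip off reviewers, but for illustration:

    # Minimal sketch, assuming a hypothetical pre-shared marker trailer
    # ("X-Study-Key") in each experimental patch's commit message.
    import subprocess

    MARKER = "X-Study-Key:"  # hypothetical; agreed on privately beforehand

    def find_marked_commits(repo: str, marker: str = MARKER) -> list[str]:
        """Return hashes of commits whose messages contain the marker."""
        # %H = commit hash, %B = raw message body, %x00 = NUL separator
        log = subprocess.run(
            ["git", "-C", repo, "log", "--format=%H%n%B%x00"],
            capture_output=True, text=True, check=True,
        ).stdout
        hits = []
        for record in log.split("\x00"):
            record = record.strip()
            if record and marker in record:
                hits.append(record.splitlines()[0])  # first line is the hash
        return hits

    if __name__ == "__main__":
        # Experimental commits that slipped past review would show up here.
        for sha in find_marked_commits("."):
            print(sha)

The tooling is trivial; the point is the consent. With the key agreed upon upfront, the top level can measure review effectiveness without the project being blindsided.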


Yes, someone like Greg K-H. I'm not up to date on the details, but he should be one of the 5 most important people caring for the kernel tree; this would've been the exact person to seek approval from.


Auditability is at the core of its advantage over closed development.

Submitting bugs is not really testing auditability, which happens over a longer timeframe and involves an order of magnitude more eyeballs.

To address your first criticism: benevolence, and assuming everyone wants the best for the project, is very important in these models, because the resources are limited and dependent on enthusiasm. Blacklisting bad actors (even if they have "good reasons" to be bad) is very well justified.


> Auditability is at the core of its advantage over closed development.

That's an assertion. A hypothesis is verified through observing the real world. You can do that in many ways, giving you different confidence levels in the validity of the hypothesis. Research such as the work we're discussing here is one way to produce evidence for or against this hypothesis.

> Submitting bugs is not really testing auditability, which happens over a longer timeframe and involves an order of magnitude more eyeballs.

It is if there's a review process. Auditability itself is really most interesting before a patch is accepted. Sure, it's nice if vulnerabilities are found eventually, but the longer that takes, the more likely it is they were already exploited. In the case of an intentionally bad patch in particular, the window for reverting it before it does most of its damage is very small.

In other words, the experiment wasn't testing the entire auditability hypothesis. Just the important part.

> benevolence, and assuming everyone wants the best for the project, is very important in these models, because the resources are limited and dependent on enthusiasm

Sure. But the project scope matters. The Linux kernel isn't some random OSS library on GitHub. It's core infrastructure for the planet. The assumption of benevolence works as long as the interested community is small and has little interest in being evil. With infrastructure-level OSS projects, the interested community is very large and contains a lot of malicious actors.

> Blacklisting bad actors (even if they have "good reasons" to be bad) is very well justified.

I agree, and in my book, if a legitimate researcher gets banned for such "undercover" research, it's just the flip side of doing such an experiment.


I will not address everything, only this point:

Before a patch is accepted, "auditability" is the same in OSS as in proprietary development, because both pools of engineers in the review groups have similar qualifications and approximately the same number of people are involved.

So, the real advantage of OSS is in the auditability after the patch is integrated.


> So, the real advantage of OSS is in the auditability after the patch is integrated.

If that's the claim, then the research work discussed here is indeed not relevant to it.

But also, if that's the claim, then it's easy to point out that the "advantage" here is hypothetical, and not too important in practice. Most people and companies using OSS rely on release versions to be stable and tested, and don't bother doing their own audit. On the other hand, intentional vulnerability submission is a unique threat vector that OSS has and proprietary software doesn't.

It is therefore the window between patch submission and its inclusion in a stable release (which may involve accepting the patch to a development/pre-release tree) that's of critical importance for OSS. If vulnerabilities already known to some parties (whether the malicious authors or evil onlookers) are not caught in that window, the threat vector becomes real, and from a risk-analysis perspective it negates some of the other benefits of using OSS components.

Nowhere here am I implying OSS is worse/better than proprietary. As a community/industry, we want to have an accurate, multi-dimensional understanding of the risks and benefits of various development models (especially when applied to core infrastructure projects that the whole modern economy runs on). That kind of research definitely helps here.


> On the other hand, intentional vulnerability submission is a unique threat vector that OSS has and proprietary software doesn't.

On this specific point, it only holds if you restrict the assertion to 'intentional submission of vulnerabilities by outsiders'. I don't work in fintech, but I've read allegations that insider-created vulnerabilities and backdoors are a very real risk.


> On the other hand, intentional vulnerability submission is a unique threat vector that OSS has and proprietary software doesn't.

Very fair point. Insider threats also exist in corporations, but they're probably harder to pull off.


If the model assumes benevolence, how can it possibly be viable long-term?


Like this: malevolent actors are banned as soon as they're detected.


What do you suppose the ratio of undetected to detected bad actors is? If it is anything other than zero, I think the original point holds.


Most everything boils down to trust at some point. That human society exists is proof that people are, or act, mostly, "good", over the long term.


> That human society exists is proof that people are, or act, mostly, "good", over the long term.

That's very true. It's worth noting that the various legal and security tools deployed by society help us understand the real limits of "mostly".

So for example, the cryptocurrency crowd is very misguided in its pursuit of replacing trust with math - trust is the trick, the big performance hack, that allowed us to form functioning societies without burning ridiculous amounts of energy to achieve consensus. On the other hand, projects like the Linux kernel, which play a core role in the modern economy, cannot rely on an assumption of benevolence alone - the incentives for malicious parties to try and mess with them are too great to ignore.



