This is a great statement: they confirm they're aware of the issue, acknowledge the concerns, and set out their intention to gather the full facts while suspending the research in the meantime. They also acknowledge the need to deal with this systematically.
I hope their follow-up is as thorough, but I want to applaud this; it's a good approach.
This statement rings true because it is the wordsmithed version of exactly what the department head probably said when he first heard, which probably went something like "what the f*k did you do, who the f*k thought this was a good idea, and who the f*k told you you could do this?"
Yeah, I thought the same thing.
This is basically someone yelling in an office put through a diplomacy filter, which is what you would expect happened.
I'm pretty sure the biggest program at UMN is Social Sciences. Conducting research experiments on unwitting subjects (the kernel maintainers) is a huge ethics violation.
I can only imagine the other department heads screaming at the head of the, relatively speaking, small CS department.
When I was in grad school for a branch of social sciences, we had a whole class on how to conduct research experiments on unwitting subjects. It's something that social scientists do all the time.
(Tangentially, it's something that computer people do even more often, and probably with fewer ethics controls. A/B testing, for one prominent example.)
But it's typically only allowed when it's clear that the risk of harm to the participants is pretty much negligible, which is clearly not the case here. As others have said, this probably got past the IRB not because the research was clandestine, but because the team lied about how it might affect people. And probably also, I'm guessing, the IRB had their guard down because they weren't expecting that particular department to be conducting social science field experiments.
It is a good statement, and I believe they'll follow through, but it's missing something important that is often absent from otherwise professional communication.
The last line is "We will report our findings back to the community as soon as practical." It should be followed by "and we will provide an update in no more than 30 days".
Without an explicit time frame, holding them publicly accountable becomes trickier. At any point they can just say "we're still investigating" until enough time has passed that people aren't paying attention anymore.
Note that the companies that handle ongoing incident updates on status sites best all do this -- "We will provide an update by 10:30pm PST".
> The last line is "We will report our findings back to the community as soon as practical." It should be followed by "and we will provide an update in no more than 30 days".
I don't really think this is a worthwhile distinction. To me, it comes off as trying to nanny a process that involves the Linux kernel maintainers and UMN admins. There's a ban on UMN, probably until they show they can head off ethical issues. The public doesn't need to be sitting around demanding an update in 30 days or less. They'll act when they have confidence in their facts and the outcome, and the Linux community will undoubtedly act in kind.
I disagree. The associate department head named in the statement linked it on Twitter, immediately following up his comments with
> I very much welcome feedback from the participants who brought this to our attention: that's why I tagged @gregkh. Obviously, we would appreciate any guidance as to how we can get the Univ. of Minnesota contribution ban lifted.
I don't understand the question. By "fast" I mean that they had a statement up on the website within 24 hours, and the associate head of the department is already pinging members of the kernel team on Twitter to ask how they can get unbanned. That's moving very quickly by academic standards.
> They need to act to get the ban rescinded, they can't just ignore this issue away.
I doubt that - in the other thread, someone noted that the last non-hostile kernel contribution from @umn.edu was in 2014. I don't think they are champing at the bit to get legit kernel contributions merged - or at least haven't in the past 7 years. At this point, it is purely a reputation/PR problem with little urgency - the ban is not blocking anyone's work at the university (except for the suspended research project)
The real urgency is that they need to protect their reputation. So long as they do a proper investigation and take proper action, they can spin this as a rogue professor and be forgiven. We as a community should give them time to investigate, figure out how to handle and prevent this, and then, if it's done correctly, forgive them. If they make this statement and then there is no action within a year, we should assume it was a token attempt to sweep things under the rug and continue with more bans.
For now I'll accept their apology, but if there is no action in the future I'll retract that.
"Rouge" for "rogue" is a common misspelling, and I find it particularly grating (that, and "tounge" *shivers*). However, I've been getting better at reflexively requesting corrections, as I also have some words I constantly misspell.
Well, ya know, you have to find out the details to make a decision, no? Snap decisions are almost always wrong until you know the facts: not just one side, but all sides. This is the same shit that happens in the 24/7 news cycle: "now now now".
As the pressure is already on them to rectify the situation, there's no need for an artificial deadline. It adds nothing, but could delay their announcement or force them to publish an incomplete response.
"and we admit that a full accounting for these actions is necessary before our ban is rescinded" is a valid (albeit even less common) form of making sure that they are held accountable. Nonetheless, it is important that the accused admits it.
The common case today is that admitting you are due to account for your actions is treated the same as admitting that you are at fault. I have been denied jobs because I admitted to being accountable for my decisions, whereas I got the distinct impression that had I simply not accounted for them, they wouldn't have cared.
It is a good PR statement, but it doesn't touch on any of the Linux kernel community's concerns. It makes no commitment to working with the community or, like you said, to providing any kind of explicit time frame.
The statement is there to prevent journalists from putting "UMN puts the entire internet at risk" on their front page. Instead, it frames the incident as a boring "students offended the Linux kernel community while trying to help out by doing security research; we will investigate" story.
Well put. It’s taken me the last hour to realise that it really is refreshingly transparent.
To me, it’s like the distinction between acknowledging that offence was caused and actually showing some genuine reflection as to why offence might have been caused. This statement, to me, is firmly in the former camp.
This statement is in the former camp, stating that they do not have enough data yet to be able to be in the latter camp. They're stating that they'll be working toward it. It's a statement that they needed to put out asap before being able to state affirmatively what actually happened, since that can take time.
They need not promise to have results in a particular time frame, but they should commit to giving an update by a particular date, even if that update is just, "We've made progress investigating this incident, but we are not yet ready to report our findings. We will give our next update no later than $DATE."
I have also seen many times a university happy to throw a student or even a faculty member under the bus, and in this case it seems extremely clear that they were running an unethical sociological experiment that was probably not reviewed by the IRB there. That's grounds for the university very rapidly turning on you.
Violating IRB ethics is a very serious issue, and has timelines in place once a complaint is filed. I expect that this statement was intended to be fast, short, and direct because of the time sensitivity.
Which means either they misled the IRB, or the IRB also needs reform. Either way, the investigation should take a while to figure out how that was possible.
My experience with anything that involves other faculty (at worst, tenured faculty) is that it will get done some time between later and never, with prioritization slipping the further you venture from your home department.
And this is by definition a matter that will involve a lot of parties, at minimum because everyone in the chain will try to ensure their ass is covered and show it's not really their fault, because it wasn't really their job to say no to this.
While I disagree with you in this specific instance (because I think it'd be hard for them to be so specific this early on), I think you've made a very good point about how handling things builds trust.
It's an example of an organization promising to take action in response to feedback from the community, not providing a time frame, and then just never doing it.
For the sake of discussion, I'll grant the example, but the fact that org A acts in bad faith has no bearing on whether org B acts in bad faith.
If Mozilla had ties to UMN CS&E you might have had a (tenuous) point, but...
Plus, this came out in the last 48 hours or so, and you're somehow saying that's comparable to 3 years? Sure, if they haven't issued a further statement in 1 month, 3 months, etc., then you might be justified.
"Ability to commit to the Linux kernel with my school email" isn't likely to be a major issue for many. It's a non-issue for undergrad work, and even most grad students are unlikely to be affected. Other than this research, only one other person associated with UMN has committed code to the kernel.
This impacts any direct school-sponsored research work, but if some random student wants to write a patch, they'll just do it from a personal address - no kernel committer is going to go do social media stalking of every contributor.
Maybe practically this doesn’t prevent most students or faculty from doing anything, but it is a huge reputation problem. How many universities (or organizations in general) are banned from contributing to the Linux kernel? When people search for why, they’ll find a research group basically screwing over their collaborators and anyone else who uses Linux. That such a group exists at UMN could be viewed as a sign of a serious cultural problem at the university and dissuade prospective students and collaborators from contact with UMN. That, in real terms, costs the university prestige and money.
I feel you're overvaluing the ability to contribute to the Linux kernel. This is definitely a bad thing, and the university should work to correct the situation. But when I was looking at colleges and universities (for undergrad; I didn't pursue a grad degree), I never asked whether the university was blacklisted by any open source organizations.
I don't think anyone would notice this ban; it'd just be an odd curiosity and an impediment to any student who tried to submit a patch... that is, assuming it doesn't hit the main news circuit. But if I hear about this on Colbert tonight, I'll be amazed.
The fact that the FBI raided Steve Jackson Games[1] over GURPS: Cyberpunk is, I think, completely absent from general public knowledge at this point, even though that incident[2] led to the creation of the EFF, which most folks on HN will certainly be familiar with. Notoriety is a fickle thing, and no matter how negative the incident is, it'll usually either fade into nothingness or give a positive boost to the organization; this is where the concept of "there's no such thing as bad press" comes from. I, at least, am far more aware of UMN now than I was this morning.
> I feel you're overvaluing the ability to contribute to the Linux kernel. This is definitely a bad thing, and the university should work to correct the situation. But when I was looking at colleges and universities (for undergrad; I didn't pursue a grad degree), I never asked whether the university was blacklisted by any open source organizations.
You are not looking at it the right way. This is an issue for the President and the Provost because of alumni donations.
When the choice is between firing an adjunct/assistant and not getting $100k from alumni, the adjunct/assistant has no chance.
The question isn't whether they need to be able to commit to the Linux kernel. Probably they don't. But the question is, what reputation does a CS department (and consequently a university) have, that has been banned from submitting patches to one of the most prolific open source projects around?
I think you underestimate the shade this puts on the UMN name. I had never even heard of UMN before, but I doubt I'll ever forget hearing about this university fraudulently trying to sabotage the Linux project, and I will probably treat anything and anyone with a UMN background with great suspicion in the future.
Very appropriate. Until yesterday I was happy to have a CS degree from UMN. Now that is tainted, and I want to hide who gave me the degree. I have to wonder if they taught me some things that were unethical that I'm now doing without knowing better. I wouldn't hire a UMN grad because of this reputation.
For now I'm reassuring myself that my degree was earned more than 20 years ago, and things change in that time (most of the professors I remember best are dead...). However, there is doubt in my mind.
If this were just one patch, caught early, it could be excused as a rogue solo stunt. But papers have been published. The IRB granted exemptions. A whole team worked on it. Too many people conspired to piss in the pool, wasting kernel maintainers' time and casting doubt on 190+ commits; that indicates a complete institutional failure. No colleagues, co-students, or supervisors stopped to ask if this behavior was appropriate? It taints the entire UMN.
What if a car or medical device running linux turns out to have buggy mutex locking either due to a malicious commit or a now-hastily reverted commit? As a Linux user of both computers, appliances and vehicles, I am not impressed.
I don't think that's the case (due to how fame works) and I don't even think it's particularly productive to bring up that point.
Their actions should be rectified since they did wrong - not out of fear of a punishment. When we bring only a specific punishment in as a consequence then the question of how to respond can be shifted over to a "which is worse" proposition which means that the punishment needs to be properly proportioned.
At any rate - I doubt admissions would be appreciably impacted even if they handled this incident extremely poorly - some potential grad students might look elsewhere while most would likely be ignorant of the whole incident.
I certainly wouldn't do so; this looks like it was a research topic of one professor and one grad student, so nearly no one with a degree from UMN was involved with this. Even the specific grad student was college-aged at the time, and we all did stupid stuff when we were young. I think this only really rubs off on the professor, since they clearly should've known better. Honestly, I think the biggest blow to the university will come when hiring CS professors; those are the only folks likely to do due diligence on this topic or be passively aware of it.
I’ve read and re-read that statement, and it seems like the ban is the focus – not what led to the ban.
I get that they may not know anything yet, but there are other ways to word that without admitting liability, making it seem like the focus is less on the ban and more on the allegedly shady stuff.
Not once do they talk about getting the ban removed; instead they talk about figuring out why it happened and how to do better at ensuring research is done ethically.
Was the ban the trigger for them (the heads) looking into it? Of course. Since they already have safeguards and review processes in place, and this happened despite those, they're saying they will investigate them to figure out how this project was approved and make sure to strengthen those processes as needed.
The end goal they give themselves in that message is not a ban removal but "safeguard against future [such] issues".
> Not once do they talk about getting the ban removed; instead they talk about figuring out why it happened and how to do better at ensuring research is done ethically.
I feel as if we’re discussing two different statements.
> The research method used raised serious concerns in the Linux Kernel community and, as of today, this has resulted in the University being banned from contributing to the Linux Kernel.
Here the cause is that "the research method used raised serious concerns in the Linux Kernel community"
Not that it was unethical, or how it might have been. It's not that something clearly went wrong; the cause can be read as the response, rather than the action.
Yes, that's called the trigger. You have a trigger, which leads you to focus on and review what caused said trigger, and reach conclusions.
The ban is the trigger. The review is about to happen, so they really can't talk about its result yet. For all you and I know, said review will conclude their processes are just fine (which I would personally disagree with, but it could happen). Then, if there was an issue, they will update their processes, which is the stated end goal.
So your quote:
> the ban is the focus – not what led to the ban
The ban is the trigger that starts it, but the focus, the thing on which they will work, is their process. "Something important happened, so we will spend lots of time figuring out how it could have happened despite our processes made to protect against it" makes it pretty clear that the focus, the thing they will spend their time on, is the review of their processes.
I think we’re mostly in agreement. The ban is clearly the trigger, and it’s pretty transparent.
> For all you and I know, said review will conclude their processes are just fine (which I would personally disagree with, but it could happen).
Agreed. For what it’s worth, I don’t actually think there’s much they can really do besides acknowledge it and make sure their ethics board is competent and consulted.
> the ban is the focus – not what led to the ban
I was talking about the ban being the focus of the statement, as it’s the point at which there’s a clear shift from the situation to the fix. This is unfortunate, because to me it is placing the emphasis on the trigger, rather than the cause.
I believe it could have been written in a way that mentioned the ban, left room to investigate, but made it crystal clear that the community's concerns and the ban were not the problem. It makes it feel to me as though their primary motivation to investigate is to get unbanned – which, to be fair, it probably is – rather than a commitment to rooting out alleged unethical practices. Even if the short-term consequences are the same, it's a subtle but important distinction.
I suppose it’s a form of honesty, and I could instead embrace its transparency.
I'm not sure how you get that. The ban is mentioned as part of a single sentence that acknowledges the current state of the situation, which seems obligatory, so of course it's there. Then the whole second paragraph is talking about how they're shutting down the activity that led to that situation while they work on getting to the bottom of it.
This seems like an entirely appropriate balance of text and emphasis for a statement that is short and to the point. Which is also appropriate and laudable. Typically when an organization says any more, it's to try and do some spin doctoring.
> The research method used raised serious concerns in the Linux Kernel community and, as of today, this has resulted in the University being banned from contributing to the Linux Kernel.
> We take this situation extremely seriously.
I think it’s because the last bit of the first paragraph – the ban – flows onto the second paragraph – the situation.
Once you’ve had the two linked, it’s like one of those ambiguous optical illusions, where you just can’t see the other.
If I were writing that statement, I'd be concerned it looked as though, had there been no ban, there would have been no situation. This statement doesn't avoid that, for me.
> I think it’s because the last bit of the first paragraph – the ban – flows onto the second paragraph – the situation.
So, as long as you ignore the formatting they presented it with and decide to read it without it, you can come to a different conclusion?
I don't think contortions like that to link sentences are fair, nor the fault of the organization that put forth a statement specifically separating them.
> So, as long as you ignore the formatting they presented it with and decide to read it without it, you can come to a different conclusion?
No. It reads that way with the formatting they provided. You can’t take that paragraph break out without putting one back exactly there. It’s refreshingly transparent, and perfect if you expect them not to care about the underlying cause as much as they care about the ban.
> I don't think contortions like that to link sentences are fair, nor the fault of the organization that put forth a statement specifically separating them.
It’s not a contortion, it’s just how it reads to me. I’m not taking some deliberately contrarian stance – I was really quite shocked at the multiple comments saying how great the statement was when it inadvertently or otherwise conveyed the very message I believe they should have avoided – the one where they simply do the least they need to do to get unbanned, which may well be closer to the real objective. It’s the difference between being shamed into action and recognising why action is necessary.
I would not want to be the person to have to write such a statement
> You can’t take that paragraph break out without putting one back exactly there.
Exactly. And paragraphs are used to separate concepts and statements into conceptual units. That you're letting a concept and interpretation from one apply to and influence the reading of another as if there is no break is the problem.
> It’s not a contortion, it’s just how it reads to me.
I think you have some interesting ideas of how to read. I don't think that follows necessarily for the majority of other people, and I don't think that's what was intended by the writer.
At the same time, I'm not entirely surprised. This is why writing is hard, and sometimes thankless. Regardless of intention and how clear you think you're being, someone will always read it otherwise. It's just the nature of the medium, to some degree. It can happen through something like this, where you're inferring intent across boundaries that I think are intended to clearly separate it, and it can happen when writers are absolutely, literally clear and denounce other stances, because people will read those denouncements as indicators of the opposite, as crazy as that sounds ("The lady doth protest too much, methinks").
I think you're better off taking a separate paragraph for what it is usually meant to be: a way to separate statements so they are clearly distinct.
> Exactly. And paragraphs are used to separate concepts and statements into conceptual units. That you're letting a concept and interpretation from one apply to and influence the reading of another as if there is no break is the problem.
Their second paragraph says they take this situation "extremely seriously".
The focus is rescinding the ban, but they acknowledge that the way to do so is review their actions and set up safeguards to prevent similar things from happening. There's too much bureaucracy involved for them to already publicly review their actions.
Why else would they take the ban extremely seriously and take the actions mentioned? I guess it's possible they're worried about the ban spreading, but rescinding the ban seems more likely.
Or, maybe they don't want to be in a position where they are getting banned just in general? Like, maybe you don't mind getting banned from a specific bar, but you do mind being the kind of person that is getting banned from bars.
Of course, no PR person worth anything would allow such a thing into their statements. UMN is far too big not to have someone with some competency in PR.
My take on that is that it's up to the kernel maintainers to unban them. If they end the investigation with "Yeah, that was bad, but we won't do anything about it", it's unlikely to get the banning side to move an inch.
I agree, and given that they have only just started to look into it, I think it shows an appropriate amount of concern and urgency. They'll at least want to talk to the researchers and get their point of view, before committing any further. This is about the best you could expect at this point, they'll want to proceed methodically.
I think it’s more accurate to say “we know that they approved something.” Whether or not that something turns out to be exactly what this professor and his student did here is, I gather, a different question.
The "hypocrite commit" preprint/abstract was a controversy which broke late last year. Prof. Lu at that point published an FAQ stating that he didn't think it was Human Subject Research (HSR) and got a post hoc review from the UMN IRB giving him a free pass, agreeing that attempting to con humans is apparently not HSR by their lights. This is three month old news at this point, and is quite well established.
What triggered the ban now is another set of suspicious commits was sent by a graduate student in the same research group.
Thanks, I didn't know that background (shame on me for not reading through all the history I guess). Odd decisions appear to have been made here by all parties. Though I was born in Iowa, I grew up in Minnesota and have a lot of friends who went there (Twin Cities campus, mostly).
Have you ever drafted a public response for an issue receiving a lot of attention on social media when you don’t know the whole story but, from what little you do know, things aren’t looking great for the entity you represent?
You vastly underestimate how much effort and attention was put into writing that “single paragraph”, because I promise you it takes a lot.
Too early. The shitstorm is less than a day old. You wouldn't accept an insincere apology, right? So we shouldn't demand an apology before a sincere one could possibly be issued.
They have to investigate and also apologize accordingly in the name of the university.
Saying the researcher is the only one responsible for work that is managed by the university (with things such as, probably, funds and certainly resources) is just wrong.
> but I think an apology would be in order - perhaps left out because it can be considered an admission of guilt.
I think an apology before they've had a chance to review everything that occurred would be rather empty, don't you? "I'm sorry you're upset"?
I far prefer what they've stated they're going to do, which is to stop the activity they know about immediately and to review how we got to this point.
If, on the off chance, the professor was being honest about ensuring they weren't wasting anyone's time, and ensuring no bad code made it into the kernel, then the situation becomes a bit more murky. They'd need to go through all of the associated emails both public and internal to validate that.
If, on the other hand, this is in fact a duck, I would expect them both to issue a meaningful apology at that point and to lay out the steps they've taken to ensure it doesn't happen again (which they've indicated is their plan), and hopefully to restore the trust of the OSS community.
They could investigate and determine that they did nothing wrong, but they aren't the final arbiter of justice. This is not like the police investigating themselves. The OSS community can come to a different conclusion and still punish them.
The job of UMN is to get the facts first, and then determine actions. If the OSS community (and others) feel like the actions don't match the facts (or if they feel the facts don't match what actually happened) they can apply their own set of actions.
Apologies are overrated. Are these leaders actually sorry for something they likely weren’t aware of before today? Doubtful. Who are they apologizing to? The public?
When the issue has gotten actual attention, an apology to the kernel mailing list might be appropriate, but as someone who has been apologized to many times, it all just becomes meaningless. Who cares if you say you're sorry? I care what you'll do about it.
Immediate knee-jerk apologies are usually worthless because the person apologizing usually has no idea what's going on and can't make a reasoned statement. They're just trying to do damage control.
The best apologies are the ones that are done after some reflection, not under duress, and not purely for reputation.
There was one thing that I found to be lacking from their statement. They never said that what they had done was wrong. The university already knows what the researchers did and are aware of the paper that was written about the subject by those same researchers. [1]
The department heads just learned of what is happening. They cannot say "we didn't do anything wrong!!!" without an investigation, because it would cast the university in a very negative light (there are serious ramifications). They need to get all the facts, learn how it happened, and find who is responsible. That way they can take decisive action and make a proper statement. They are taking "investigate first, comment after" cautiously and seriously.
It's a bit unclear who complained to whom about this - and I'd expect looking into this to be part of the review. i.e. I could easily see "some people reached out to IRB concerned about lack of its involvement, IRB talked to researcher and went through process, came to result for whatever reasons" happening and not being something that is reported up the chain, because it was "resolved".
I'm up for "wait and see". From an administrative perspective, removed from what is happening, I can see investigating first before admitting fault. I'd much rather people understand why they were wrong, even if it takes some time.
They shouldn't have. This appears to be a pretty clear cut case, but made up outrage-bait has made it to the top of HN before. Investigating claims is the correct thing to do.
> I would argue that first requires investigation.
Why do you think that enough of an investigation hasn't been performed in order to understand culpability?
They already know what happened and want to learn why it was approved.
That was what their comment said.
Take a look at the actual PDF from the researchers, "On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits" [1]
The prof overseeing the paper clarified that they initially did not seek IRB approval, and then received an IRB exemption [0]. I'd want to ask the IRB why they approved that, for starters. Maybe because they'd already done the research and hoped it would blow over, vs. the controversy of rejecting it when they'd already done the work?
Honestly I’m guessing this kind of situation didn’t match any existing policy for the review board and a few people made a bad call.
We need to make sure to be supportive of people making mistakes and learning from them instead of raising pitchforks for every misstep. Failure is never completely avoidable and responding properly to failure is way more important than never failing.
From my reading of the threads in the kernel mailing lists, it seems the IRB thought "is it bioscience with experimentation on live animals? No? Then it's all fine".
Yeah, especially considering that the IRB said the research was out of scope (specifically that it was not "human subject research") rather than indicating that it was ethical. Kind of like the distinction between a court not having jurisdiction and a court declaring you didn't break any laws.
I think they misrepresented the project so that it would be classified as “not human research”. It’s unclear whether the misrepresentation was intentional (to obtain the exemption) or unintentional (they were genuinely unaware of the human impact).
> We will investigate the research method and the process by which this research method was approved, determine appropriate remedial action, and safeguard against future issues, if needed.
The "if needed" tells me they aren't sure what if anything is wrong yet. It would surprise me if they have done that much of an investigation in the few hours since they might have learned about this beyond scheduling meetings with involved parties and compiling relevant documents in a folder.
I think they are at the point of having a bunch of angry emails and a few news articles from certain publications. I don't blame them wanting a bit more than that before saying anything.
This is what innocent until proven guilty looks like. I, for one, agree with the approach. I don't want to live in a world where people are fired and projects shut down on the basis of allegations alone.
Don't jump to conclusions and say I am guilty of something that I didn't write. I never said to fire people and shut down projects based upon allegations alone. You may not have read the article for this comment page, but they admit that the action took place. The researchers themselves, elsewhere, admitted it took place.
Pardon me. I did not mean to imply you were calling for anything. And I agree, the facts look pretty damning. Nevertheless, I fully support a slow, deliberate, and comprehensive evaluation by any authority when evaluating serious accusations. Further, I strongly believe in presumptive innocence, regardless of initial impressions.
Btw, I'm not suggesting you don't believe in any of the above.
It’s too early for that. I expect their followup will have details on failures but first they need to figure out exactly what happened. That will take some time and effort.
At least now they’re burning UMN time and not kernel maintainer time.
They have just learned about the details of the research conducted:
> Leadership in the University of Minnesota Department of Computer Science & Engineering learned today about the details of research being conducted by one of its faculty members and graduate students into the security of the Linux Kernel.
I'm going to say the odds are that the faculty member in question will not be a faculty member much longer, especially if the ban remains in place, as said faculty member probably, at best, fibbed to the leadership about his research.
> I do work in Social Computing, and this situation is directly analogous to a number of incidents on Wikipedia quite awhile ago that led to that community and researchers reaching an understanding on research methods that are and are not acceptable.
> Yes! This was an IRB 'failure'. Again, in my area of Social Computing, it's been clear for awhile that IRBs often do not understand the issues and the potential risks/harms of doing research in online communities / social media.
An interesting question, IMHO, is who people actually tried to contact during the first round of publicity (and I guess how many did so at all), and whether leadership should thus have been aware earlier that there was controversy worth looking into.
Absolutely. I think this is more of a failure of the IEEE Security and Privacy program committee (PC) than that of the university. The chairs should have desk rejected it on ethical grounds. Failing that, at least one of the reviewers should have noticed this was fishy and raised the alarm. And failing that, someone in the full PC meeting should have been like "whoa, hol up." (though this last failure could be partially explained with PC meetings now happening online instead of in person, so less engagement)
Yes, Twitter is "weird" about which sub-thread it picks as the "main" one (well, it chooses one, and didn't pick the interesting one here), so I had to explicitly link the latter point.
Basically, now they can do a meta-study on how the IRB review process has flaws.
As shown, if you intentionally try to bypass the IRB, apparently you can. It's even reproducible.
Oh, so interesting! In my French uni, psych teachers were all doing Wikipedia research to prove it was unreliable by timing the correction delay of purposely introduced mistakes.
They were so proud of their discovery. They didn't think to time how long that would take if they went to a printer and changed the text there before a book was printed.
Terveen was always honest about what works and what doesn't work. It's good to see them acknowledge that this is something they need to dive into and fix.
>it's been clear for awhile that IRBs often do not understand the issues and the potential risks/harms of doing research in online communities / social media.
Too bad he's not in a position of power to implement that additional review for CS department research.
The first time, they skipped IRB review for sending malicious patches to a mailing list whose patches people do install (so an IRB exemption should not apply). A top security conference allowed a paper with a broken IRB process, and the UMN IRB, when a later exemption request was filed, explicitly allowed this. Bad, bad, and bad.
The latest Linux incident seems like a repeat by the same CS dept, advisor, student, & presumably, UMN IRB. No naivete excuse anymore, this is now business as usual.
The bigger fail to me is the UMN dept head + IRB, and the security conference review process around IRB norms. Especially damning that it's a repeat of the same IRB mess. IRB exemption matters a lot for practicing scientists, and leadership tolerating this stuff is what will get it taken away for everyone else.
> looks like the 2nd time they're doing the same thing
It's not clear that they're doing the same thing--we don't know that these recent patches are deliberately bogus. See this comment with clarifications from the other post: https://news.ycombinator.com/item?id=26890583
Agreed! The broader R&D group seems to work on symbolic execution, which the patches seem to stem from.
Most charitably, the tool makes changes maintainers largely don't care about -- see the revert reviews that characterize past acceptances as being rubberstamping of irrelevant patches to irrelevant code -- so this just violates basics like informed consent by not disclosing what's happening. Maintainers had to figure out the nonsense, and instead of declining a second round of undesirable non-opt-in participation, banned. What would normally happen is the patches would be tagged w/ the R&D tool, so people know they're part of an experiment, and can ignore / ask them to stop / consent through action.
Less charitably, it is a DoS attack on reviewers, and who knows what else is in here.
Yeah I agree. To me, the biggest problem is with Oakland, who accepted the paper. A single faculty member doing something really stupid is one thing. But the field's top conference accepting the work? Christ.
I don't know why, but somehow I am not too bothered by the research itself. Sure, in retrospect, it does not sound like it was the right thing to do (or the right way to do it). But, you know, stuff happens.
Instead, what bothered me immensely is the way the PhD student handled that interaction: immediately claiming "bias" and "slander", playing "victim", etc. I don't know if he learned to communicate that way from his professors, other students in the CS&E department, or the UMN environment in general, but it's very bothersome to me. Communicating that way precludes constructive (if perhaps heated) exchange of ideas. I think people like that should be nowhere near important CS/EE projects.
It's entirely possible that Aditya is actually just working on a static analysis tool, it is buggy, and he wasn't aware of the other research his advisor does. If that is the case, I can kind of understand his response. I would be pretty upset if I knew I was just trying to submit some honest(if buggy) patches, but was accused of being a scoundrel because of something that didn't even involve me.
Of course, it's also possible that he is just gaslighting Greg and actually was doing the same kind of "research" as other people did before.
I think that UMN will get to the bottom of it - it will be pretty clear to them what kind of research he was doing, and whether he represented himself honestly.
Still, given that he is 4+ years into a PhD (after a Masters) and has published on this topic before, his first deflection being "We are not experts in the linux kernel" seems disingenuous, kinda like "Hey guys, I'm not malicious, just not good at my job."
I think this can be an interesting topic in itself, how those trigger words and victim playing can get you through code reviews faster. It's certainly true in my company...
Here is some background for those who are not familiar with the rules of academic research in the US. Usually, any experiment involving human subjects is subject to approval by the university's Institutional Review Board (IRB) [1]. The IRB is supposed to evaluate both ethical and safety concerns. I recall having to jump through quite a few hoops just to do a simple touch screen gesture recognition test with a dozen willing fellow students. Apparently, the IRB approval step was skipped, or the IRB failed to do its job in this case.
This wasn't a direct test on the maintainers, so it is easy to see how the IRB would miss that. Still not excusable to miss that there are humans involved.
Anyone saying that it was a bad idea to ban the entire University isn't looking at the big picture. I look at it from a very philosophical standpoint:
The entire idea of an academic (research) institution can be summarized as "an entity representing a group of trusted people who act in good faith of that institution". The moment one of your researchers acts in bad faith, or shows that they cannot be trusted, it's clear that the institute cannot be trusted until the institute takes decisive restorative action. By banning the University, you are saying "we no longer trust you in your authority as an institution until it can be made clear that this isn't a systemic problem."... you are not saying "every single person at the University is terrible" as it was framed by some commenters in the other thread.
This is why, in any broad case, it's so upsetting when an institution of any variety doesn't take clear responsibility for the behavior and actions of its members or representatives (take, for example, the police, or the federal government). When that is true, the behavior of any member of that institution should be subject to scrutiny and distrust. It's not a complicated social phenomenon.
If one doctor at a hospital does very bad things, then yes, I would avoid that hospital completely, because in a working environment, bad actors would be detected by colleagues, etc.
And since that did not happen, one can only assume the whole hospital is deeply flawed.
Indeed! If I heard that a doctor was deliberately and repeatedly poisoning his patients, I'd absolutely avoid the place because proper oversight is clearly lacking.
Perhaps the hospital actually has excellent oversight and it is just that the evil doctor is exceptionally clever at avoiding it. But Occam's razor says that is a poor bet.
Banning the university is fine to send a message, but reverting all patches from umn.edu emails seems very shortsighted to me, especially blindly reverting patches from years before this "research" was conducted that almost certainly have had context changes around them, likely introducing more harm than good.
I think that reverting them so that they can be reviewed is entirely appropriate - sure there are lots of people at that particular university, but it's not the linux maintainer's job to know which of them are bad actors - better that the entire university gets locked out, if only to get their attention, and then let them sort out their bad actors before being let back into the fold
I could easily see sending in a bad patch to see what happens and then waiting a few years to do more and write them up; there's no realistic way to guarantee when the bad-faith contributions started.
Except that patch is clearly BS! You can't patch a double-read vulnerability by checking for a capability; that's not a thing that works. So either the description is wrong, or the patch is wrong, or both.
And the point of the reverts is that the kernel maintainers don't have the unlimited time that would be necessary to re-review all of these questionable patches for probable malicious underhanded C, so they are reverted for now for triage (not permanently).
For the linked patch, I would judge it possibly malicious as it leaves the identified vulnerability in the kernel for later exploit by the attackers, namely, the UMN research team.
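To illustrate the bug class for anyone unfamiliar with it: below is a minimal schematic sketch of a double-fetch (double-read) bug. It's ordinary userspace C standing in for kernel code, and all the names (handler, shared_len, etc.) are made up for illustration, but the shape is the same. The flaw is the race between the read that validates and the read that gets used, which is why a capability check on the caller can't fix it: it only changes who may trigger the race, not whether it exists.

    #include <stddef.h>
    #include <string.h>

    /* shared_len and shared_data stand in for user-controlled memory that
     * a second, attacker-controlled thread can rewrite at any moment (in
     * the kernel, memory fetched twice via copy_from_user()). */
    volatile size_t shared_len;
    char shared_data[256];

    static char local_buf[64];

    void handler(void)
    {
        size_t len = shared_len;      /* fetch #1: used only to validate  */
        if (len > sizeof(local_buf))
            return;                   /* bounds check passes for small len */
        /* ...race window: the other thread bumps shared_len to 256... */
        memcpy(local_buf, shared_data, shared_len);  /* fetch #2 re-reads
                                                        the length -> overflow */
    }

The correct fix is to fetch once into a private copy, then validate and use only that copy (in kernel terms: a single copy_from_user(), then operate solely on the kernel-side copy). Restricting who can call the handler leaves the race fully intact for any caller who passes the check.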
You don't think reverting a patch from someone whose only relation is working (or having worked) at the same university as the advisor initially responsible for the security "research" is overkill? If the goal is to prevent security bugs in mainline, then maybe haphazardly reverting everything that doesn't conflict and fixing it later isn't the best approach.
I'm disappointed at seeing hackernews jump on this mindless mob justice like other sites would.
Seems like the problem was from more than one person. This doesn't seem like mob justice, it seems like a pretty measured response to a source of repeat bad faith actors.
I don't think the removal of those patches means the changes will be immediately pushed to the public. If they are breaking anything, it is the development branch, and surely they will review all those patches and merge the good ones back before releasing anything publicly.
So far it has only been 'insiders' (FOSS-aware/friendly press, and trade press a bit closer to the dev side than the management side), but the real question will be whether it hits any large mainstream news sites by tomorrow. If it doesn't land until Friday, then the story is over in terms of real consequences; the PR team at UMN is probably hoping that fast damage control now can push anything major out a day or two, where it disappears into the weekend trash.
Your tech department being blackballed from submitting Linux patches is a pretty embarrassing consequence. What else would happen? It's not like anyone will fine them.
> Your tech department being blackballed from submitting Linux patches is a pretty embarrassing consequence. What else would happen? It's not like anyone will fine them.
The ban could affect donations from alumni or corporations.
They’d be concerned about (known-ish) bad actors getting commits into Linux without anyone catching it, and probably want to tell the foundation to start red-teaming the kernel commit process.
What's the source of your information that some US government agency has control and/or influence over Linux developers? Also in this case, what makes "US" special? Your comment sounds like American exceptionalism all over to me.
> What's the source of your information that some US government agency has control and/or influence over Linux developers?
(I'm not the user you're replying to.)
Nothing, frankly. But why can't any agency tell them that red-teaming is a good idea? DHS Cybersecurity is interested in keeping Linux secure, so it will tell them.
There's no "American exceptionalism" in the parent comment, just that the US is a pretty influential country and it may want to give its opinion.
I do some maintenance work for the linux kernel dvb and infrared subsystems. I reviewed and accepted some patches from umn.edu addresses. They looked fine to me; however, they're all around error handling, which can get pretty tricky with long error paths.
gregkh sent a 190-patch series to revert all of the "easy" UMN reverts, pending review. People are now looking at the patches and saying things like, "that one's OK, don't revert".
There are another 68 commits which did not revert cleanly, in some cases because they were later fixed up, already reverted, or some other patch has touched those lines of code. This will require further manual work.
We are basically, at this point, assuming bad faith for all UMN patches and reviewing them all before allowing them to stay in. (Or, if they get reverted by default, someone else can manually apply them after they go through strict review.)
Temporarily banning UMN until they can get their IRB act together makes sense, but wholesale reverting every commit ever made by a UMN e-mail address -- whether affiliated with this research or not -- seems kind of extreme?
I'm not sure how many people here understand this, but the University of Minnesota is quite large: over 50,000 people. That's comparable to the entire population of Palo Alto, and larger than MIT, CMU, and Stanford combined. Jeff Dean is a UMN alumnus. I am too. The fraction of this set that is actually associated with the shady research is tiny.
It seems to me like the kernel maintainers are at best wasting a whole ton of their own time on this, and at worst re-introducing a wide range of bugs that UMN contributors had fixed over the years. A real "cutting off your nose to spite your face" situation IMO.
Well, it got the heads of the CS department to finally notice. The preprint of the "hypocrite commit" paper was sent out late last year, and while there was controversy about it back then, with Prof. Lu admitting that he didn't think it was Human Subject Research (HSR) and so didn't bother to get IRB approval, the CS department heads didn't do squat. And the UMN IRB said "Okey-dokey!" after being asked to do a post hoc review (which, as others have said, should have raised red flags with the IRB right there). It's the lack of institutional response and the lack of any kind of institutional controls that is the most concerning.
And we didn't take any action until another series of suspicious patches started getting sent for review by a graduate student from the same group. At that point, we had an unrepentant professor who has been rewarded with a paper at IEEE S&P and an invitation to serve on the PC of IEEE S&P next year, and an apparently apathetic, toothless IRB at UMN. I can see people criticizing us if we hadn't taken action.
My point was more about collective punishment than about the value of the contributions. Unrelated people at UMN that have contributed code to Linux shouldn't have their work thrown away.
That said, there were hundreds of reverted commits, and many of them were fixing real security bugs. In particular, the same security researchers that did the questionable experiment have also contributed many real bug fixes.
> We are basically, at this point, assuming bad faith for all UMN patches
This seems like a gross overreaction to three commits that didn't even make it into mainline, especially when applied to commits from years before the "research" was done. But I suppose nobody can miss a chance to let loose a little outrage.
They admit the newer commits are part of a research effort and were sent with the intent to get feedback, but they didn't actually disclose any of that in the patch descriptions [1]. Furthermore, lots of these patches are at best useless and at worst actively introduce bugs.
It's famously hard to distinguish malice from incompetence, but I don't think assuming bad faith is out of line here.
How do you know? These people already attempted to manipulate the kernel maintainers repeatedly, even after being caught. Nothing they claim about their own work can be trusted.
I think everybody is missing the point. If one grad student was able to do this, imagine what a team of dozens of well-paid, well-equipped, and highly experienced security experts could do.
In other news, we just learned that any half-decent security agency has already injected their own vulnerabilities and back-doors in OSS.
There are so many security flaws in critical software that you really don't need to inject vulnerabilities. You just need your engineers to find, catalog, and script exploits for them - ready to use whenever needed.
If you do inject vulnerabilities, you need to assume your adversaries will find, catalog, and script an exploit for them. And you risk reputation loss if you do get caught. So I'm sure it has happened, but I bet not that often.
"In other news, we just learned that any half-decent security agency has already injected their own vulnerabilities and back-doors in OSS."
We did not learn that today, but we still assume it.
Btw, the professional agencies probably have their vulnerabilities injected way down at the hardware level: Intel ME, etc., or even closer to the bare metal.
Sure, but are there others who did not get caught? Others who might be better at obscuring their changes? And who might first develop a history of good, secure work, and then slip something sketchy into one -- and only one -- patch?
Seems like the UMN "researcher" was doing this over and over; the more times you do it, the more likely you are to get caught.
The other reason this is a hot-button issue is that it is related to the general problem of abusing universities as a protected platform for general subversion for its own sake.
It's not innovation, research, or even progress; it's actively destabilizing and subverting a targeted community whose integrity a huge part (even a majority) of the economy depends on. This is a hawkish view, but in a challenging cultural moment, their activities are very difficult to be charitable about.
The research approach is distasteful and dangerous. No one should introduce bugs or malicious code intentionally, even for research purposes. The results are also a bit trivial: it is easy to imagine that this type of code injection would be possible. So let this be an example of the consequences of intentional malicious code injection, even in the name of research.
I wish I could be on the researchers' side, as I am both Chinese and an alum of UMN. But no: wrong is wrong.
I guess the question I have is "Did any *previous* research done by UMN successfully introduce bugs into the Linux Kernel git commit log?"
There are weasel words in this statement that make it unclear, and the researchers have been really dishonest already. But! If it's true that their research never made it out of email chains, then the reaction does seem a bit disproportionate to the damages here.
e: Not getting pulled into a maintainer tree isn't enough to be safe about what was posted to a kernel mailing list. People can (and testing scripts blindly do) grab and apply patches on the mailing list.
Much focus is on the 3 patches from the paper last year, but others have been submitted before and since by the same group, and some that have been found to be malicious did make it into the Stable branch: https://lore.kernel.org/lkml/78ac6ee8-8e7c-bd4c-a3a7-5a90c7c...
That was quick. I'm glad they will look into this and I hope we don't see this kind of research in the future.
This story reminded me of Facebook experimenting on its users to control their emotional state (but they said users were volunteers because "terms of service"...).
That seems optimistic. Scientific fraud is caught all the time and basically never results in anything happening. The university here repeatedly signed off on this so it's institutionally culpable. Normally when bad stuff happens in research academics just all blatantly deny anything is wrong and dare you to do anything about it.
This is a statement announcing an investigation. Expecting a mea culpa right at this time is a bit premature considering they still need to figure out where they went wrong.
It's not just the Linux kernel. You can imagine this having some fallout in other open source projects, where they all come out and say they won't accept contributions from the university. Huge PR nightmare.
You should notice that the professor involved is already listed as being on the Program Committee for IEEE S&P 2022, meaning he must be well connected with various higher-ups at the conference. This is probably why they are reluctant to remove the paper. There were similar issues with the ISCA '19 peer review fraud case. IEEE/ACM are having some major issues these days.
It seems concerning that the investigation is being conducted by the CS&E department itself, rather than an independent third party. There's a risk that the results of the investigation won't be seen as impartial, since the CS&E faculty obviously have an interest in protecting the department's reputation.
basically the response I expected & hoped for. hopefully an eventual follow-up statement will detail what their investigation finds & what actions they took as a result
> Does this project waste certain efforts of maintainers?
> Unfortunately, yes. We would like to sincerely apologize to the maintainers involved in the corresponding patch review process; this work indeed wasted their precious time. We had carefully considered this issue, but could not figure out a better solution in this study.
Here’s a better solution: don’t do it. You do not have a divine right to carry out this “research” no matter what.
> Does this project waste certain efforts of maintainers?
> Unfortunately, yes. We would like to sincerely apologize to the maintainers involved in the corresponding patch review process; this work indeed wasted their precious time. We had carefully considered this issue, but could not figure out a better solution in this study.
Have they ever heard of carbon offsets? How about finding/performing work that offsets the maintainers' time they abused? (Hint: it's not too late to do this to show you really are sorry. You won't be trusted at first, but it's the right thing to do.)
Beyond any other damage they did, they are wasting the time and energy of the kernel developers currently sorting out this mess. To me, that is on the same level as locking them up in a room against their will. This could be considered a felony.
In any case, they should make up for the economic part of the damage they caused, that is, be billed for the hours the kernel developers spend resolving the matter at hand, at a high market rate (and probably above it, as punitive damages).
It would handle the issue of consent, but it would not answer the question they were researching: "Would this bug get into the kernel?" If the maintainers knew this was research rather than an ordinary patch, they'd be answering a hypothetical: "Would I accept this if it were an actual ordinary patch?" I don't think people are wired to answer that exactly the same way when they know it's hypothetical as when they're answering it for real: "Should we accept this patch into the kernel?" So the only way to find out what would happen in reality is to actually do it... and that's what they did.
What's weird, of course, is that they didn't tell anyone. In the fairly analogous situation of "white-hat" penetration testing, the penetrators have authorization from someone fairly high up in the security department of the organisation being tested, or, if that whole department is being evaluated, from someone even higher; without that authorization it's not white-hat testing but just black-hat cracking. They should have privately contacted Mr. Kroah-Hartman, and/or Linus himself, beforehand and asked whether they were amenable to this. Then those two would have had the option to accept and be in on it, to silently monitor the patches in question without commenting, and at the end of the experiment to reveal the results to their fellow maintainers and revert any specific patches -- about which they would have been continuously kept informed out-of-band -- that had made it through review.
Provided, of course, that they -- Greg K-H or Linus or whoever -- would have been comfortable with temporarily deceiving, or letting be deceived, their fellow maintainers. Or, even if they weren't exactly "comfortable" with it, it seems to me there's a chance they might have gone for it because of the valuable knowledge it would have gained the community. The "reveal" afterwards, coming first from one of these trusted people, would probably have gone down a lot better in the maintainer community than this secret external attack did.
The university's ethics committee approved the research, and it was guided by a number of their professors. I see no acknowledgement of this in the statement; instead, I see the groundwork laid for hanging the students out to dry.
> The research method used raised serious concerns in the Linux Kernel community and, as of today, this has resulted in the University being banned from contributing to the Linux Kernel.
The research method _approved by the University's internal authorities_.
> We will investigate [...] the process by which this research method was approved, determine appropriate remedial action, and safeguard against future issues,
I mean, the "publicity" here was the paper announcement and some arguing on Twitter involving the professor; that's not automatically "department leadership needs to look into this" material. If the IRB didn't find anything, it also had no reason to involve leadership. As said elsewhere, something should probably have been noticed at that point, but they intend to look into that. Seems fair, as long as they actually do it properly.
I'm not able to give them the benefit of the doubt; or at least, if they are so inept they should be rejected by the scientific community.
Imagine this: "After thorough research, we discovered that we were able to cause the deaths of numerous individuals by knowingly constructing an unsafe bridge and having it pass municipal inspection."
That's the level of unethical research that the U of MN approved.
The department leadership never approved the research. Why does no one seem to understand that universities don't work like corporations? You never have to get your research ideas approved up the chain of command. You just start working on them (after running them by an IRB if you think that's necessary).
The IRB is not "leadership", and the IRB process is usually not very adversarial, i.e. it's based primarily on how the researchers represent their own work. (Which is a flaw, but one that's common to how this works everywhere, not necessarily some special failure of UMN -- which is why more investigation is needed.)
To me, reverting those hundreds of patches sounds like an overreaction, one which might cause actual damage. It's not clear (at least from what I have read in the thread) that this code, which may be wrong or at least useless, is part of any "let's check their patch process" experiment (or did I just miss that?).
That they did that in the past is clearly unethical and was generally a shitty thing to do, but this group also seems to do ordinary technical security research. That the student (allegedly) trusted his static code analyzer so much that he did not care to verify its findings doesn't reflect well on him, but Hanlon's razor may also apply here. Just banning that student (if he keeps submitting technically inferior patches) should be enough, imho.
Translation: a whole bunch of people below us in the foodchain are about to get their asses kicked.
I am very curious to hear who approved wasting the time of Linux kernel developers, many of whom are volunteers, with this excuse for academic research.
Hopefully the reviewers who failed to do proper reviews of those patches get relieved of their privileges too, and the process gets thoroughly reviewed to keep reviews from being a glorified rubber-stamp process.
I don't think you understand how Linux works. It's run by criminally underfunded volunteers. No one's getting "relieved of their privileges" for doing intensely difficult, under-appreciated work.
CS department security research is almost universally held to be outside the scope of IRBs. This isn't entirely bad: the IRB process that projects are subjected to is so broken that it would be a sin to bring that mess onto anything else.
But it means that 'security' research regularly does ethically questionable stuff.
IRBs exist because of legal risk. If parties harmed by unethical computer science research do not litigate (or bring criminal complaints, as applicable) the university practices will not substantially change.
> Leadership in the University of Minnesota Department of Computer Science & Engineering learned today about the details of research being conducted by one of its faculty members
Uhh... they didn't know what their own employees were doing?
Don't most universities have review/approval procedures before people can do experiments or studies?
I think GregKH complained to the university previously and then, when presented with continuing bad/bad-faith patches, decided to blow up their world very publicly.
I'm still expecting the: “We apologize again for the fault in the commits. Those responsible for sacking the people who have just been sacked have been sacked.” - UMN CS&E
Is there no oversight as to what research is being done? In social sciences and medicine there's the IRB, is there not some similar cursory oversight board?
IMO, the CompSci department would be the one most severely and immediately affected. Is it not better that the CompSci dept head addresses the issues of their department before the dean has to get involved?
There should be at least a minute about this at the regular deans' meeting, which means the heads of the other departments know. When I was there, CS was both an engineering degree and a liberal arts degree, so the heads of departments like Mechanical Engineering and Music have probably heard of it.
For now it can be a "something CS is taking care of, but you need to be aware of it in case they don't".
The title here should probably be "UMN CSE Department Statement..." rather than merely "UMN Statement ..." since it's coming from the department head and associate department head, not from the university as a whole.
Title on the webpage is now showing as "Statement from CS&E on Linux Kernel research - April 21, 2021" vs HN's "UMN Statement on Linux Kernel Research."
The recommended mode of reaching HN's moderation team is by email at hn@ycombinator.com
Though I suspect periodic searches for recent mentions of dang's username in comments are also employed. It's a matter of what's most efficient and convenient for the team, and what community practices have evolved.
I think dang might have something in place which alerts him when he is mentioned. Certainly he tends to show up quickly when people "page" him like this.
"We discovered some folks have been urinating into the campus coffee urns, in order research whether people will object to the flavor. We have told them to stop."
Your insinuation that I think there may have been no wrongdoing is unjustified and insulting. The behavior has been halted from both sides; no more active harm is being done; your urgency is misplaced. Somehow you think it's unreasonable to investigate before casting judgement -- as if justice itself is invalid if not meted out at the pace of a twitter mob's attention span.
I beg your pardon; I do not insinuate anything on your part: I agree with you. Investigate this and see if there's been any wrongdoing or just honest mistakes, and correct those that were made... at the professional, academic level; that certainly seems proper.
I am snarking about the apparent disconnect: the previous suspect activities apparently did not result in that kind of oversight, which is what made this kind of sanction from the kernel team necessary. I am snarking because the "poor us" these folks already pulled on LKML shows any debate is pointless, as this is already firmly a matter of identity politics.
In other words, yes, some people do prefer that flavor.
Thank you for clarifying. I agree that the researchers' initial response was comically out-of-touch, and can now see the perspective where your snark satirises their statements (deservedly so).
I apologize for my aggressive tone, perhaps I should take my own advice and be more inquisitive at first when replying to someone. :) I think the best version of my previous reply would have been something like: "It seems like you think it's unreasonable to investigate before casting judgement." which still expresses my (confused) perspective fairly well but leaves you with a more comfortable space to clarify.
I have investigated, as far as I feel I need to cast judgment. I'm just some hick on a hill in the woods. My censure amounts to a ghosted bit of snark.
If the people who employ these folks make snarky comments about them, that's no more meaningful. If they choose to examine the actual evidence of the behavior at issue here, they might find it as upsetting as I do... or, hell, I dunno, there are many reasons they might not. I can't imagine what explanation there might be for this behavior, but I'll allow there might be one that even I could agree with...
More likely: "We have concluded that since this is research on the coffee urn handling process it is not subject to review for compliance with our policies on research on human subjects."
Being unaware of whatever this is until this HN post just now, I'm still in the dark as to exactly what was done that was apparently unethical, since the statement doesn't mention any details. Does anyone have any details on what the issue is?
It appears that the U of MN researchers experimented on Linux kernel developers without those developers' consent. The researchers didn't even go through their Institutional Review Board (IRB) until the experiment was done, which is not allowed. When they finished the experiment, they sought approval from their IRB after the fact, and the IRB approved it even though the people being experimented on had not consented.
There are all sorts of rules when doing an experiment on humans in the US, and they generally require consent.
To be clear: I do not know all the facts in this case, and I'm not a lawyer. But I'm glad that GregKH took the "immediate ban" step, that seemed like the safest course of action with the info he had. I think experiments to see "what can slip through review" could be really valuable & important, but when humans are being experimented on, you normally need their consent first.
that helps a bit with regards to understanding why people are so upset about this.
but honestly, it seems like valuable research to me. it's unfortunate that it took some time away from busy kernel developers, and it's unfortunate that it ultimately makes the project look worse...
...but isn't that supposed to be part of the promise behind open source? it wouldn't surprise me to learn that management at private orgs hires security firms to do this sort of thing. where does that come from in foss land, other than the ether of others tinkering, researching and experimenting?
i think this whole calling for the researchers' heads business is overblown. i read their paper, and it looks to me like they approached the situation quite ethically.
The deception / con job was done last year. This time around, there was another series of patches from the same research group, which claimed to be security fixes but scored really high on the you-have-got-to-be-kidding-me scale of incompetence. When the grad student was called out on it, he claimed it was due to a static code analyzer he was testing.
This was not disclosed up front, and at this point it is impossible to tell whether he built an incompetent, nowhere-near-state-of-the-art code analyzer which gave bogus results, was too incompetent to realize it was bogus, and submitted the patches to kernel developers asking us to be his QA without disclosing that this was what he was doing -- or whether this was another human experiment to see how gullible the patch review process is at accepting bogus patches.
We have no idea, because his research group has previously submitted patches in bad faith, and UMN's IRB gave it the A-OK ethics review. At this point, the safe thing to do is to assume that this, too, was submitted in bad faith.
If you wanted to know whether the kernel review process can reliably catch malicious attempts, you literally could have just asked the kernel maintainers, and they'd have told you that no, review can't go that deep. Or looked at non-malicious bugs and observed that no, review does not catch all bugs with security implications.
You'd very likely have been able to get code past them even if you had told them that attempts were coming and gotten their approval.
Given that, you need a good justification for why that wasn't enough for you, and for what value you truly added over what was already the common view on the issue. But at least we got valuable recommendations from their paper, like "there should be a code of conduct forbidding malicious submissions" and "you could try vetting people's real identities". (I guess they did add to the last bit, giving additional data points that "works at a respected institution in a related field" is not sufficient to establish trust.)
the project is thirty-odd years old now and is no longer a hobby; it is critical infrastructure that powers virtually everything.
it's unfortunate that something happened in the project that cost the maintainers a lot of hours, but that comes with the territory of working on important software, i'd argue.
i don't want an espionage-tastic, fundamentally untrustable and inescapable computing hellscape, airplanes falling out of the sky, cars crashing or appliances catching fire just because it "already was the common view on the issue."
if the paper raises awareness of the issue, it's a good thing for society, it seems. if money materializes to do background checks on kernel contributors, that seems a good thing, no? if resources materialize for additional scrutiny, that seems a good thing, no?
if anything, the "common view" / status quo seems terribly broken, as demonstrated by their research. while what they've done is unpopular, it seems to me that ultimately the project, and the society large chunks of which it now powers, may be better off for it in the long run...
So you are saying that because a non-controversial method of showing the same issue wouldn't generate the publicity attached to their way of operating, it was right to ignore the concerns? Lots of slippery slopes there, and science has spent a long time getting away from such an "the ends justify the means" attitude.
> Lots of slippery slopes there, and science has spent a long time getting away from such an "the ends justify the means" attitude.
if the developers were working for a private company, there could be a similar loss of time from a similar exercise approved by company leadership, no? if tim cook ordered an audit and didn't tell anyone, wouldn't developers at apple feel the same way?
look, i get it, it's unfortunate and people feel like it's a black eye... but it's also a real issue that needs real attention. moreover, linux is no longer a hobby, it is critical infrastructure that powers large chunks of society.
> So you are saying that because a non-controversial method of showing the same issue wouldn't generate the publicity attached to their way of operating, it was right to ignore the concerns?
what's the non-controversial alternative? alerting the organization before they do it? that doesn't work. that's why scientists do blinding and double blinding.
if you mean something else, then i'm missing parts of this (really complicated) debate.
> If you wanted to know whether the kernel review process can reliably catch malicious attempts, you literally could have just asked the kernel maintainers, and they'd have told you that no, review can't go that deep. Or looked at non-malicious bugs and observed that no, review does not catch all bugs with security implications.
> You'd very likely have been able to get code past them even if you had told them that attempts were coming and gotten their approval.
If you want to know that the process isn't catching all attacks, that should be all you need. As for the second case, getting patches past a forewarned maintainer is harder, and so should be even better evidence of the problems with the process, without any of the ethical concerns. There is a wide range of options for learning about code review, and what they did was at one of the extreme ends -- just to find that "yes, what everyone has been saying is true". And then they didn't put in the work to make amends after it became clear the work wasn't appreciated, so now this other student got caught in his advisor's mess (assuming the representation of him actually testing a new analyzer, rather than taking part in a repeated attempt to introduce bugs, is true -- the way he went about it also wasn't good, but it was far less bad).
But you don't get splashy outrage that way, and thus less success at "raising awareness" among people who didn't care before, which is what your comment seemed to argue for.
> But you don't get splashy outrage that way, and thus less success at "raising awareness" among people who didn't care before, which is what your comment seemed to argue for.
the reason for doing it is basic quality science. it's proper blinding.
the result is raising awareness, which if it leads to more scrutiny and a better and more secure linux kernel, seems to be a good thing... in the long run.
i mean, i get it. a lot of this security stuff feels a lot like gotcha qa, with people looking to make names for themselves at the expense of others. and yeah, in science, making a name for yourself is the primary currency...
but honestly, they ran their experiment and it worked: it uncovered an actual, not theoretical, vulnerability in an irrefutable way, in a massive chunk of the computing infrastructure that powers massive chunks of society.
papers like this one can have a lot of potential in terms of raising funds. this is the sort of thing that can be taken to governments, private foundations and public corporations to ask for quite a lot of money to help with the situation.
i think the discussion should not be around banning them as known bad actors, but instead should be around how to detect bad actors or better introduce safety and security into the project.
i'll tell you one thing, it has shaken my trust in the oss kernel development model as it operates today, and honestly that seems like maybe a good thing?
how many companies are literally printing money with the linux kernel? can't they throw a few bones at helping beef up code review and security analysis?
No development model is protected from malicious actors, and this is not unique to OSS. Could the Ministry of State Security sponsor a student to study in the US, who then, after graduating, gets a job at Microsoft and introduces vulnerabilities into Windows? In theory all patches should get code reviews, but could someone get a bug past code review? Sure!
You can try to detect it before it happens, but very often you won't catch it until after it's landed in the source code repository, and in some cases it'll actually make it out to customers before you notice.
It's true for proprietary code; it's true for open source code; it's true for x.509 CA certificates[1]. We should still do the best job that we can, if for no other reason than that there are plenty of zero-days introduced by human error, never mind by malicious actors.
so if satya nadella hires security firms to try this on the nt kernel (do they still call it that?) and they succeed, then they learn from it, tighten security and process, and move forward...
but if a set of academic researchers tries it on the linux kernel, nothing changes, and there's a bunch of internet drama with people calling for them to be fired, because why?
honestly, i've believed in oss since i encountered it in the early 90s. but this is making me start to reconsider proprietary software again.
The more accurate analogy would be academic researchers sending graduate students to get hired by Microsoft under false pretenses, demonstrating that they can introduce security vulnerabilities that don't get caught by Microsoft's code review practices -- and then submitting a paper to the IEEE saying that obviously Microsoft's hiring and software engineering practices could be improved.
At least with OSS, everyone can audit the code and run their own security scanners on it. If you think proprietary software is somehow magically protected against the insider threat, you're kidding yourself. Even the NSA couldn't protect against an insider like Snowden.
> The more accurate analogy would be academic researchers sending graduate students to get hired by Microsoft under false pretenses, demonstrating that they can introduce security vulnerabilities that don't get caught by Microsoft's code review practices -- and then submitting a paper to the IEEE saying that obviously Microsoft's hiring and software engineering practices could be improved.
sounds good to me! (j/k, sorta)
except here's the key point, and here's where i think the issue is: "...obviously Microsoft's hiring and software engineering practices could be improved"
...this isn't about the people involved being bad at what they do, or being bad people, or the project being silly in some way. it's about the people, the process they use, and the project itself meshing together in an unfortunate way to create a real vulnerability for society. linux is no longer a hobby project. every effort can and should be made to ensure that it is as secure as possible, as linux is now so pervasive that defects can literally have life-and-death consequences.
this isn't about some maintainer failing to catch security bugs, this is about the growing influence and criticality of the project and the vulnerability of the project to security bugs, both technically and culturally.
the only real human failure is seeing egos get in the way of improvement.
who am i to be making these arguments? i'm just a nobody. a nobody who has to live in a society that is increasingly being built on this stuff...
They submitted patches that did nothing but introduce vulnerabilities, in a test to see if they would get through. They've done this previously for a research paper.
From my understanding, it wasn't this. It was that they didn't come clean or provide a quick emergency stop when they saw that the patches would make it through. Although, I'm having trouble sifting through the rage/drama.
This kind of research is interesting and important but should not be done by a computer science department, any more than the social science department should try to develop a process control system.
No, computer science departments should be adequately equipped and trained to handle this. Lots of areas of computing involve humans and need to be studied by people who are also knowledgeable about the technical side. (And "computer science" departments usually are not strictly pure-computer-science departments; it's just the usual label under which most of computing ends up.)
Sociology or anthropology. Possibly, though less so, political science, economics, or even a B school. These are all disciplines that study the social behavior of groups.
As a study of a group activity, it seems inapplicable to a psychology department, though that could be a bias on my part.
Can someone explain why engineering bugs into products is bad, but hacking to find software defects is good? I think trying to do this sort of research is good, because it obviously exposes a vector for malicious actors that we should all be aware of. It's good that the Linux team found these issues, but I don't see the intention as being any different from any other kind of hacking, just like some organisations will send you phishing emails once or twice a year to see if you're susceptible.
What's valuable is fixing security issues. So if you "hack" some system and then provide either patches or useful guidance on how to eliminate a vulnerability that is helpful. By contrast, if a security issue is already known/is being addressed, exploiting that vulnerability just causes problems and wastes people's time.
Society works because of trust. Activities like these erode trust. It's like "let's try and pour a bottle of acid into the freshwater storage facility".
How do we stop more malicious actors from doing things like this? That's really the question I'm asking. Not security researchers, but, say, state actors who want Linux backdoors?
I am definitely against this line of research and hope someone acknowledges their fault. I also want to point out that the issue is taking a political turn, with a lot of unwarranted hate in forums and media, mostly from people who are not part of the kernel community. We are talking about young researchers, some of whom are possibly misguided or innocent. Let's not make a political feast out of this just because we can.
Linux is used for very sensitive stuff. Stuff that involves a lot of money.
Messing with Linux is messing with that sensitive stuff, which means messing with important people and organizations. That is something that is very hard to get away with.
That guy just got himself into a legal supernightmare.
If someone is running a stock exchange on Linux and a guy introduces vulnerabilities on purpose, I am sure that person will be pretty upset and pursue litigation.
The researchers, the researchers' bosses, and pretty much every person in command at UMN had to sign off on this being an acceptable method of conducting research. This shows such an extreme lack of good faith or judgement on their part that I do not believe UMN could, or should, ever be forgiven.
Their actions show nothing but criminal bad faith, all the way up to the top.
The only rational response from the Kernel team is to revert all code submitted by any member of UMN and to permanently blacklist every person who has ever attended UMN or worked at UMN for any reason or any length of time.
The only way I could see this getting forgiven is if every party that could have cancelled the project is forcibly removed from holding any position of employment or enrollment at UMN, going all the way up to the head owners and managers of the entire university.
> The researchers, the researchers' bosses, and pretty much every person in command at UMN had to sign off on this being an acceptable method of conducting research.
That is nothing like how research operates at a university like UMN. At a research university, typically the only people who actually must sign off on a computer science experiment are the grad student and professor doing the work. Graduate students frequently clear their work with their advisors first, but even that isn't strictly required. A small amount of work technically requires IRB approval, but unless what you're doing is very obviously human-subject research, it is unlikely anyone would even notice if you went rogue and ignored it.
“They introduce kernel bugs on purpose” - https://news.ycombinator.com/item?id=26887670 - April 2021 (1562 comments and counting)