
It seems that OpenBSD already patched their source code, and that wasn't to the liking of the researcher. In the future he will delay notifying OpenBSD of vulnerabilities.

Why did OpenBSD silently release a patch before the embargo?

OpenBSD was notified of the vulnerability on 15 July 2017, before CERT/CC was involved in the coordination. Quite quickly, Theo de Raadt replied and critiqued the tentative disclosure deadline: "In the open source world, if a person writes a diff and has to sit on it for a month, that is very discouraging". Note that I wrote and included a suggested diff for OpenBSD already, and that at the time the tentative disclosure deadline was around the end of August. As a compromise, I allowed them to silently patch the vulnerability. In hindsight this was a bad decision, since others might rediscover the vulnerability by inspecting their silent patch. To avoid this problem in the future, OpenBSD will now receive vulnerability notifications closer to the end of an embargo.




This feels like some kind of prisoner's dilemma game theory problem. By defecting from the embargo, OpenBSD gained potential security for its users at the expense of all other users. Overall, this is a loss, unless you use OpenBSD. I have to agree with the researchers on this one; OpenBSD acted selfishly here.


Read that again. We asked to commit without revealing details, he said yes, that's what happened. I guess he changed his mind about that after the fact, but nobody promised not to commit. We didn't "defect" from an embargo unilaterally.


Perhaps "defect" is the wrong word given the circumstances, but the result is the same. There's a good reason for the embargo: this all takes cooperation, as it's not a Nash equilibrium. I still agree with their decision not to include OpenBSD so early in further disclosures, given Theo's short-sighted statement.


> Perhaps "defect" is the wrong word

It's precisely the correct word. Prisoner's dilemmas are simple, mathematically. This was one. OpenBSD defected. The joke's on the security researcher, though, since this doesn't appear to have been their first time [1][2].

Robert Axelrod outlined, in his 1984 classic The Evolution of Cooperation [3], four requirements for a successful iterated prisoner's dilemma strategy. One of them is retaliation. Security researchers are letting OpenBSD play an iterated game as if it were N=1, i.e. they're not retaliating. Given the community is playing "always cooperate," OpenBSD's best move is actually "always defect".
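
To make the iterated-game point concrete, here's a minimal sketch in Python, assuming the conventional textbook payoff values (T=5, R=3, P=1, S=0; the numbers are illustrative). It shows why "always defect" dominates against an opponent who always cooperates, while a retaliating strategy like tit-for-tat takes that advantage away:

    # Iterated prisoner's dilemma with conventional payoffs.
    # 'C' = cooperate, 'D' = defect; PAYOFF maps (my move, their move) -> my score.
    PAYOFF = {
        ('C', 'C'): 3, ('C', 'D'): 0,
        ('D', 'C'): 5, ('D', 'D'): 1,
    }

    def always_cooperate(opp_history):
        return 'C'

    def always_defect(opp_history):
        return 'D'

    def tit_for_tat(opp_history):
        # Cooperate first, then mirror the opponent's last move (i.e. retaliate).
        return opp_history[-1] if opp_history else 'C'

    def play(strategy_a, strategy_b, rounds=100):
        """Total scores for two strategies over an iterated game."""
        hist_a, hist_b = [], []   # each strategy only sees the other side's history
        score_a = score_b = 0
        for _ in range(rounds):
            move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    # Against "always cooperate", defecting every round is strictly better:
    print(play(always_defect, always_cooperate))   # (500, 0)
    # Against a retaliating opponent, constant defection stops paying off:
    print(play(always_defect, tit_for_tat))        # (104, 99)
    print(play(always_cooperate, tit_for_tat))     # (300, 300)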

[1] https://lwn.net/Articles/726585/ thank you 0x0 [a]

[2] https://lwn.net/Articles/726580/ thank you 0x0 [a]

[a] https://news.ycombinator.com/item?id=15481980

[3] https://en.wikipedia.org/wiki/The_Evolution_of_Cooperation


So does the simple mathematical treatment also include language like "the joke's on ____"? Or was that more of a philosophical interpretation of yours?

Real life is messier than any model.

https://www.quantamagazine.org/in-game-theory-no-clear-path-...


Both your [1] and [2] seem to conclude that violating the embargo had no significant ill effects: "since... the underlying issue was already publicly known, OpenBSD's commits don't change things much." If "defecting" causes no problems for the other participants, does it actually count as defecting? (And if not, how is this a mathematically simple prisoner's dilemma?)


Nice analysis. It definitely seems to be the case.


I mean, I agree too. I sleep better not worrying about bugs that can't be fixed.

I'm mostly here just to correct misstatements of facts. You're welcome to your own interpretation, game theory optimization, etc.


Wellll to be fair, I'm sure if the researcher said no, he wouldn't have committed.


From what I've read, I don't see why everyone is giving you a hard time about this. It sounds like you did exactly what he agreed you could do, and then he changed his mind.


Sounds like a "we technically respected the embargo, just not in principle" sort of thing to me.


We're not mind readers. If he says it's ok, we think it's ok. If other vendors have fucked up months long patch cycles, that's their deal, not ours.


Now that it's more clear what role disclosure deadlines play in cooperating with security researchers, it probably makes more sense to just cooperate than point fingers.


What part of this commit description is "not revealing details"? https://ftp.openbsd.org/pub/OpenBSD/patches/6.1/common/027_n...


All of it? The paper describing the attack is longer than one sentence.



He said it was OK this time, but it won't be in the future (pretty clear from his stated future action). So your decision is still subject to the criticism.


They agreed, but now they regret the decision and wouldn't make it again. To prevent themselves from doing so, they will not speak with OpenBSD until later in the process.


What's the word for pressuring a person until they make a decision they immediately regret?


It's a loss even if you use OpenBSD. If you break the embargo, you won't be notified in advance anymore. Basically, you gain an advantage once but take repeated losses for a very long time. Overall it's bad, even for OpenBSD users.


The researcher's reaction is correct. OpenBSD maintainers' lack of patience may have led to this vulnerability being discovered and exploited by other people.


The researcher's lack of full disclosure may have led to this vulnerability being discovered and exploited by other people.


Also an embargo lasting months seems excessive.

You also cannot guarantee that nobody who gets this information early is working for a bad actor.


I'm just so glad this long embargo meant that everyone had patches ready to go as soon as it expired! Oh wait, they don't. Good job CERT.


I wonder when the NSA and CIA were responsibly informed about this vulnerability.


OpenBSD wifi maintainer here.

I was informed on July 15.

The first embargo period was already quite long, until end of August. Then CERT got involved, and the embargo was extended until today.

You can connect the dots.

I doubt that I knew something the NSA/CIA weren't aware of.


In other words, it's malfeasance by the security community for holding out.

There are only a few courses of action. One is to sit quietly and let everyone eventually land the fix. That doesn't work: there's no fire under people's asses, and the work gets delayed.

The other is to release it promptly. Then at least we can decide to triage by turning off a service (even if that's wifi), requiring another factor like a tunneled login, or what have you.

But truthfully, defecting in the prisoner's dilemma played out here was the best choice. The rest of the community seems to agree.


No one should care about a community that agrees that releasing silent patches is a good idea. This is exactly the same behavior that created the need for full disclosure in the first place. And no, there aren't just two options nor are processes binary. It's rather mind boggling how "the community" has managed to go full circle in such a short time and themselves become the opinionated people they were supposed to be the alternative to.


Really makes me wish you'd told the world. I know all the arguments against that, but this sort of thing is no good either.


Yes, but that would result in them not getting notified for any other vulnerability.


As far as I understood, this attack has no client-side mitigation that could be employed other than treating every wifi as an open network. The attack might already be known to hostile actors, or may have become known during the embargo, but full disclosure without an embargo would guarantee that clients are at risk without mitigation. An embargo at least gives time to prepare patches and protect a portion of the clients.


Either there is a possibility for patches to be prepared during an embargo or there is "no client-side mitigation"; you can't have both. From reading the rest of this thread, it appears that it is quite possible to patch this on clients such that, if you are using a patched client, you are safe. Disclosing earlier would have led to more people having patched clients earlier and hence being safe.


Patching the client is a fix. A mitigation would be a config setting that makes me safer (disabling some unused functionality, ...). So yes, you can have both.


That’s like saying that prior to introducing seatbelts, we should have allowed for a period of time to glue people to their seats because it is preferable to have a mitigation they can apply themselves than a fix the manufacturer has to put in.

If you don’t limit mitigation to "a config setting" (and why would you?!), a patch/new version is the best mitigation you can get.


I limit mitigation to a config setting because that's what affected clients can do in this case. Everyone patching wpa_supplicant on their Android handset is just not going to happen, and it takes time for vendors to roll out patches.


> As far as I understood, this attack has no client-side mitigation that could be employed other than treating every wifi as an open network.

I've been doing that for years and recommend others do so as well.

The rise of HTTPS nearly everywhere helps mitigate things a bit. The same type of exploit five years ago would have wreaked havoc if exploited at the local Starbucks WiFi.


You got everything wrong. If big vendors are unable to patch their proprietary products in an acceptable time, that shouldn't put others at risk. Users shouldn't choose their products...

Think about it in a different way: What if a vulnerability was discovered in TLS and FOSS implementations patched it, but there is an embargo for supposedly protecting some banking software? What if NSA/CIA/other agencies find out about it (they would know immediately) and use it to target users/activists?


This is why embargoes have deadlines. To make the necessary trade-off between "patch as soon as you can, potentially jeopardising the safety of users -- even users of non-proprietary projects" and "wait for everyone to be ready before you patch -- which also jeopardises users". The embargo system deals with this by forcing everyone to agree on a date, and if someone patches after that date then too bad. You may disagree that the deadline was so long, and that could be a fair criticism.

But pretending as though co-ordination of any kind is somehow bad (and then resorting to emotional arguments and so on) is pretty reckless.


I have seen and participated in this disclosure debate for 10 years now. I have come to the conclusion that, in the long run, the least-harm approach is full disclosure. There isn't any wiggle room. There are no shades of grey. The whole coordinated-response movement is misguided. There are some limited circumstances where it can make sense to delay disclosure, such as when disclosure would create an imminent threat to human life, but generally full and near real-time disclosure results in safer software sooner for end users, without putting them at some unknown but high level of risk.


Three months is more of a joke than a reasonable time, but one can argue about that if they want...

> even users of non-proprietary projects

Actually many FOSS projects get only notified on the disclosure date.

Hiding the vulnerability for such a long time does more harm than good. The vulnerability can potentially be exploited by security agencies that necessarily know about it, and it could also be leaked to a bad actor by an employee of one of the vendors.

Hopefully WPA2 isn't that important, but potentially security-sensitive users trusted something that was known by some to be vulnerable for three months! Bad actors could have used it against them.

The embargo meant that bad actors potentially knew about the issue, while vulnerable users did not.


The state actor should be the least of your worries compared to the millions of script kiddies who could use the vulnerability once it is disclosed publicly.


No, because I would know about it by following security news.


Are you so great that you know all the vulnerabilities all the time since the second of disclosure?

Do you seriously expect the other billions of people on the planet to be that great too?


For most of them the day after, when I get a notification from my RSS app...

No. I also don't expect them to choose devices based on security. That is very bad, as vendors won't care about patching their older devices (look at Android devices, home routers, ...) and won't care about patching their flagship devices quickly, since they can request very long embargoes.

Making compromises for those vendors and giving more time for security agencies and other bad actors to silently exploit the vulnerabilities (where FOSS projects would have made patches for users that care) is not the way to go. That philosophy actually makes everybody less safe.


What if [...] is FUD. What if Theo de Raadt works for the FSB? We need to work with the facts.

If you don't agree with an embargo and decide to break it, that's on you. But the consequence is that you shouldn't be surprised if next time you're informed later, or not at all. What OpenBSD proponents and developers are doing right now is damage control. It may work this time, it may work next time, but it won't keep working every time, so pick your fights right. It isn't the first debacle OpenBSD has had with full disclosure either (hint: OpenSSH).

There are also millions upon millions of devices which won't get patched. Given the vulnerability apparently hits hardest on Linux, and hence Android, do you think all the smartphones running Android 4.3, 4.4, 5.0, and 6.0 will be patched [1]?

[1] https://en.wikipedia.org/wiki/Android_(operating_system)#Pla...


No, I don't. And it's stupid, because it doesn't have to be that way. But telecoms have complicated the situation with their greedy, firmware-reinforced planned obsolescence.


But have the other OS makers released patches already?


For reference, the OpenBSD patch in question was released on August 30: https://ftp.openbsd.org/pub/OpenBSD/patches/6.1/common/027_n...



tedunangst: "We asked to commit without revealing details, he said yes" "I guess he changed his mind about that after the fact."

The patch obviously has an explicit description:

"State transition errors could cause reinstallation of old WPA keys."

It's true, however, that anybody who analyzes the diffs would eventually figure that out, as Theo de Raadt argued.

My conclusion is also that the real error was even wanting to give the details to him at that moment, as there's apparently a history of him not respecting embargoes.


Oh, that's the problem? That's too much information? Well, shit.


I still fail to get what you wanted to express with your comment here. I've just quoted two sentences from another comment of yours on the same page; did you understand something else?


It's not the first time OpenBSD has not respected embargoes, for example https://lwn.net/Articles/726585/ and https://lwn.net/Articles/726580/


"As a compromise, I allowed them to silently patch the vulnerability." The way I read that they broke no embargo


They were pressured by OpenBSD to do so, and regret it. That doesn't mean they broke embargo, but it also doesn't reflect well on them. Do you think Theo would've respected the embargo if they had said "no, do not patch until the embargo date?"


Yes. He would have tried to persuade them, perhaps cut out the researchers to persuade CERT.


Who says they were pressured?


A bunch of dudes on a Linux mailing list lack the authority to prevent OpenBSD from fixing things.


True, they don't. However, this researcher has the authority not to notify the OpenBSD team in advance any more, and he has already announced that he'll keep his cards closer next time. What happens if enough researchers come to the same conclusion?


What happens if a vendor or researcher is in bed with the NSA and they use the exploit while embargoed?

The whole thing is a shit show and really I'm rather more behind OpenBSD's approach.

Edit just to expand on this as someone deleted a post ....

----

It's slightly more complicated than the prisoner's dilemma. The prisoner's dilemma doesn't account for a large facet of the problem being discussed here. If all the good parties participate and coordinate, then we're better off. The problem is there are outlying circumstances which mean that not everyone will be included:

1. If someone gets kicked out (OpenBSD) on a political whim, with others playing CYA, they no longer benefit.

2. If a party is not let in, they no longer benefit.

3. If someone is unaware of it, they don't benefit.

This turns it into a security monopoly where the big vendors get exclusive rights to embargo and exclude smaller vendors and control the disclosure process on their own schedule.

The first thing the people outside of the club find is that they wake up on Monday morning and have to clear up a shitstorm of monumental proportions with fewer resources than the monopolised vendors who've had time to deal with it.

Then there's the assumption that the monopolised vendors are trustworthy which is 100% impossible to validate and therefore invalid.


Yeah, the hysterical part is how people think distros are leak-proof. It just doesn't leak in nice public ways that allow "responsible white hats" to wag their fingers. Raise your hand if you can say you confidently know the full back-channel distribution of a notification to distros.


Exactly that!

No bullshit please - you guys do a wonderful job of avoiding it and stamping on it when it does turn up. Keep up the good work :)


Ultimatum games [1] are a subset of prisoner's dilemmas. That covers Nos. 1 and 2. Assuming researchers want something from those they disclose to, it makes sense for them to cast the widest net possible while minimising the risk of defection. Balancing that optimization is a game as old as civilization.

> This turns it into a security monopoly where the big vendors get exclusive rights to embargo and exclude smaller vendors and control the disclosure process on their own schedule.

Not necessarily. It turns into a monopoly of those who can show themselves to be credible partners. This exhibits incumbency bias, which in a social context we call a track record. It's not nearly as exclusionary as you're making it out to be.

> Then there's the assumption that the monopolised vendors are trustworthy which is 100% impossible to validate and therefore invalid

This is common in trust problems. You don't need to be 100% sure everyone you're dealing with is trustworthy to work with them because we don't live in a single-iteration game. Again, iterations of retaliation and forgiveness remove the need to have 100% certainty about a player's intentions.
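
A rough sketch of that last point, assuming a hypothetical "generous tit-for-tat" playing against a partner who slips up occasionally (all payoffs and probabilities are illustrative, using the conventional T=5, R=3, P=1, S=0 values):

    import random

    # Conventional prisoner's dilemma payoffs: (my move, their move) -> my score.
    PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

    def generous_tit_for_tat(opp_history, forgive=0.2):
        # Retaliate against a defection, but forgive it some of the time, so a
        # one-off (or accidental) defection doesn't lock both sides into D/D forever.
        if opp_history and opp_history[-1] == 'D' and random.random() > forgive:
            return 'D'
        return 'C'

    def mostly_cooperative(opp_history, slip=0.05):
        # A basically trustworthy partner who slips up 5% of the time.
        return 'D' if random.random() < slip else 'C'

    def average_score(rounds=10_000):
        hist_a, hist_b, total = [], [], 0
        for _ in range(rounds):
            a = generous_tit_for_tat(hist_b)
            b = mostly_cooperative(hist_a)
            total += PAYOFF[(a, b)]
            hist_a.append(a)
            hist_b.append(b)
        return total / rounds

    print(average_score())  # stays close to the mutual-cooperation payoff of 3

Neither side needs certainty about the other's intentions: occasional retaliation keeps defection unprofitable, and occasional forgiveness keeps a single slip from poisoning the relationship.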

[1] https://en.wikipedia.org/wiki/Ultimatum_game


Credible partners? Yeah right: http://securityaffairs.co/wordpress/56411/hacking/windows-gd...

No one is credible here. The very nature of a closed agreement of secrecy between arbitrary parties is the opposite of credibility.


I am generally ok with that. Embargoes are retarded.


Sounds like the researcher is at fault for putting OpenBSD on their list. If you cut a deal with someone who serially defects, at a certain point the onus shifts from them to your lack of foresight.


The problem isn't a "fool me once, shame on... fool me, you don't get fooled again" situation, because the problem is that one unscrupulous party is unscrupulous to different parties at different times, and those parties are unaware of it.


> the problem is that one unscrupulous party is unscrupulous to different parties at different times, and those parties are unaware of it

Sure, but eventually you get called out on it in a public forum, like this one, and people stop giving you goodies going forward. When considering dealing with OpenBSD (or people who are close to them), I would consider it acceptable practice to (a) withhold vulnerabilities until after the embargo date or (b) refuse to give any information unless they sign a binding non-disclosure agreement committing them to the deadline under pain of penalty. (The latter is an option because it appears that, in this case, they broke the spirit if not the letter of the agreement. The solution to that problem is legalese.)


Hi, I am the person you are accusing of mischief.

I didn't break any agreement. I agreed with Mathy on what to do, and that's what I did.

The fact that Mathy decided to get CERT involved and subsequently had to extend the embargo has nothing to do with me.

(edit: typo)


To be clear, I accuse you of nothing less than playing a rational response to the researcher's apparent "always coöperate" strategy. "Defect" in a prisoner's dilemma context does not mean "breach" in a legal one. (For example, an OPEC member defecting has zero legal consequences. It does, however, affect their standing in the next round of negotiations.)


'Defect' doesn't mean 'breach' in a legal situation; it also doesn't mean 'sociopath and/or economics professor' in a psychological one, but people form connotations, so be careful what you accuse. Anyway, I think you're pushing the PD analogy too far... but I'll play a bit too. Construct a payoff matrix. What does real defection look like? It's patching in mid-July, when the patch was received, instead of waiting until the agreed-upon end-of-August date. I see no defection here. There could only be one if, after CERT got involved and set a new date, Mathy had asked OpenBSD to postpone the prior agreement and, instead of cooperating, they had patched immediately for the biggest gain to their users. There is no mention of such a request, so it probably never came.


I support your decision.

If Mathy was concerned, why did he wait to notify CERT? Should that not have been the first priority?


As a user I am completely fine with that.


Even when the author states that now, as a result of that selfishness, OpenBSD won't get notified about vulnerabilities until well after everyone else?


> OpenBSD won't get notified about vulnerabilities until well after everyone else

Which doesn't make a difference if OpenBSD still gets their patch out at the same time as everyone else. Unlike other vendors, it doesn't take OpenBSD four months to go from vulnerability notification to patch release; if you look at previous disclosure timelines, they typically have a patch out in days.


What about the vulnerabilities that OpenBSD notice? Works both ways. And they have an active interest in such things and have discovered as much as any famous-for-five-minutes security researcher.


> [OpenBSD] have discovered as much as any famous-for-five-minutes security researcher

TL;DR: OpenBSD acted rationally if they'd prefer to go it alone, which seems to be their culture. To their credit, it's worked pretty well so far. But you can't have your cake and eat it too. If they prefer a mad scramble after public disclosure, they'll get it. But they shouldn't get early notice from responsible researchers.


See my comment here. It sort of replies to this anyway: https://news.ycombinator.com/item?id=15482285

I don't believe that the embargo is healthy or responsible! If anything it's a monopolising factor.


It sounds rather like he is trying to blame OpenBSD for his own mistake. As multiple people from OpenBSD have said, he agreed they could apply the fix, so they did. He didn't have to say they could. The fact that CERT persuaded him to extend the embargo later is not their fault.


The author doesn't know that FreeBSD, Debian, and OpenBSD people cooperate and share knowledge, so most probably OpenBSD developers will know about the issues anyway, just not from an "official" email.


Furthermore, by not even attempting to include OpenBSD in some embargo agreement, there's no reason for OpenBSD not to patch as soon as they hear about it. Indeed, that's what seems to have happened in the linked 'evidence' about them not respecting the embargo of a Linux distro group they're not part of.



