
You're demonstrating no understanding of how these things work.

For updates to be deployed, the patches need to be integrated, tested, packages/updates built, and the update mechanisms tested. For complex systems -- like, say, embedded hardware -- this might involve targeting quite a few different devices and test matrices.

Even scrambling, this can take days, and leaves users twisting in the wind in the meantime.

This is why we have coordination with vendors PRIOR to public release, such that when the vulnerability is publicly disclosed, updates are available through standard update pipelines, the process is documented, and the update is known to be correct and not introduce deployment regressions.

A vulnerability of this severity needed no marketing. Grandstanding for non-technical users simply increased the likelihood that they'd be exploited while vendors rushed out fixes.



I'm the guy who had to do it at my company, and in a previous career I'd be the boots-on-the-ground dealing with it at a rather larger company. I understand that vendors want a few weeks. I have lived the reality of ponderous engineering processes which need weeks to approve the smallest imaginable change. The for loop does not care what we want and does not get slower at counting to big numbers just because we are slow at counting to small numbers.

I understand that vendors find it inconvenient to field questions from users like "Are you vulnerable to Heartbleed?", most particularly when they are, in fact, vulnerable to Heartbleed. I respect that Yahoo feels embarrassed that there is a screenshot showing usernames and passwords in the clear. I think that the feelings of Yahoo users who would be discomfited that their email accounts are available to anyone with a command line deserve at least as much deference.

I also think it is a radically borked threat model which suggests that attackers only find out about vulnerabilities when the man-on-the-street does rather than when really-savvy-vendor-folk do.


> I also think it is a radically borked threat model which suggests that attackers only find out about vulnerabilities when the man-on-the-street does rather than when really-savvy-vendor-folk do.

Do you have a study? I remember an article here that suggested most Windows attacks were created by reverse-engineering MS patches rather than by discovering the vulnerabilities or reading about them on mailing lists; if that's the threat model then co-ordinating so that most vendors release patches at the same time is safer even if it means waiting longer for a patch.


Coordinated release of patches in closed-source software is possible because the people dealing with the source code are NDA'd. Rails attempted a coordinated release of the YAML bug and it was a total clusterfuck: they "soft-released" the bug with a vague notice about potential database corruption, and 1000 people simultaneously re-discovered the bug over the next two days by looking at the code. Then, everyone involved in Rails got the scope of the bug slightly wrong, and variant vulnerabilities followed for the next couple weeks.

Once you have a whiff of where the bug is, it's dramatically easier to find it. You don't need to know exactly what the bug is; you just need to reduce the problem from "read all of OpenSSL" to "read a small subset of OpenSSL". Once that narrowing of the target space happens, independent discovery is inevitable. The people most motivated to do that discovery work don't have any of your best interests at heart.


This isn't about inconvenience, it's about having patches in users' hands the moment the vulnerability hits the public.

> I also think it is a radically borked threat model which suggests that attackers only find out about vulnerabilities when the man-on-the-street does rather than when really-savvy-vendor-folk do.

And yet, this is true. A small number of people with a vulnerability provides a small threat exposure, because their attacks are simply more likely to be targeted.

Everyone with a vulnerability provides a large threat exposure, because suddenly every single script kiddie on the planet has a window to target a Python script at Yahoo or GitHub or Amazon and trawl through web servers' memory.
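(For readers unfamiliar with why a "Python script" sufficed: the class of bug here is a length-field over-read. The sketch below is a hypothetical simulation, not real TLS code -- the buffer layout, function names, and the "hunter2" secret are all invented for illustration. It just shows the shape of the bug: the server trusts the attacker-supplied length instead of the actual payload size, and echoes back adjacent memory.)

```python
# Hypothetical simulation of a Heartbleed-style over-read. Not real OpenSSL
# or TLS code; names and buffer contents are invented for illustration.

SERVER_MEMORY = bytearray(b"HEARTBEAT_BUFFER")     # where the payload lands
SERVER_MEMORY += b"user=alice&password=hunter2"    # unrelated secrets nearby

def broken_heartbeat(payload: bytes, claimed_len: int) -> bytes:
    buf = SERVER_MEMORY
    buf[:len(payload)] = payload
    # BUG: echo back claimed_len bytes without checking len(payload),
    # so bytes beyond the payload leak to the requester.
    return bytes(buf[:claimed_len])

def fixed_heartbeat(payload: bytes, claimed_len: int) -> bytes:
    # The fix: silently discard requests whose claimed length
    # exceeds the actual payload length.
    if claimed_len > len(payload):
        return b""
    return payload[:claimed_len]

leak = broken_heartbeat(b"ping", claimed_len=64)
assert b"password=hunter2" in leak          # secrets leak past the payload
assert fixed_heartbeat(b"ping", 64) == b""  # patched server stays silent
```

Once the bug was public, writing the twenty-line client that sends such a request was within reach of anyone.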

You think it was worth exposing GitHub's private company repositories to every script kiddie on earth, just because a small number of people had an incredibly valuable zero-day that they would wish to hold in reserve for high priority targets, lest it get burned and they lose the zero-day?


> This is why we have coordination with vendors PRIOR to public release, such that when the vulnerability is publicly disclosed, updates are available through standard update pipelines

Are you talking about Responsible Disclosure? Because I thought that existed because, if security researchers tell only the vendors in private, the vendors sit on it and do nothing, while if you tell the public first, users are vulnerable before the vendor releases a fix.

Isn't the only reason there's a public release as a threat to keep the vendors honest?



