Zed's new project: Vulnerability Arbitration (vulnarb.com)
238 points by acangiano on April 3, 2011 | 90 comments


With regard to the use of the SSL-is-PGP trick discussed earlier, I think you may run into a problem in the real world of megacorps: the peons in charge of appsec may be several fiefdoms away from the peons who could actually access the private key for the SSL certificate.

Suppose you report a SQL injection in X university's website. I would have been responsible for getting that fixed at my old job. I know what campus the guy with root on our web tier works at, but working from memory of our org chart, I can only narrow it down to a thousand people, literally. Extraneous copies of that private key aren't floating around internally. Indeed, I rather suspect that if there exists a process by which I can learn it, we just failed ISOSomething. How am I supposed to figure out what your ciphertext telling me what's wrong actually says?


The person in charge of appsec is guaranteed to be many fiefdoms away from the sysadmins. In a lot of very large famous companies, you'd be wise to put money on the notion that the appsec guy hates the guys that manage the websites.


And vice versa! As a sysadmin I wouldn't want to give the private key to someone with the knowledge of how to use it for interception of the traffic it's protecting (and possibly the opportunity!)


And the people in charge of the web servers may not be authorized to handle the security-related material they will be decrypting with the key and forwarding off.


Extremely good point.


security@megacorp : Hey, we just had a bug report, could I get a copy of the private key for our root domain so I can decrypt it?

webit@megacorp : lawl


Zed Shaw is a very clever man. However, I've read the site time and time again and I'm not sure what problem this is supposed to solve.

There are already services that pay you to disclose your vulnerabilities to them and project manage the whole thing with the vendor for you, such as TippingPoint's Zero Day Initiative and iDefense.

The whole SSL thing is a clever little hack, but is more of a solution in search of a problem. For a security researcher, secure comms with a vendor is hardly a problem that hasn't already been solved (and if the vendor wants to communicate over unencrypted SMTP and they're aware of the risks that come with it, then it's their own fault if somehow the comms are compromised).

I'm sure it'll be another very smart and funky thing from Zed, but I don't see why people would use it over something that pays them for bugs.


That's a really good point. I'd be competing with companies that pay people to give them defects, which is really hard to go against. I sure don't have the money to do it, so maybe this would just be for open source projects at first.


Interesting.

What if you flip it around? Instead of making it something for researchers to use and competing with tipping point et al, how about making it something for open source projects to use to manage the process of handling security bugs? I'm not aware of anything in that space, but there may be some mileage in exploring it.

I could see this working in a SaaS model a la github - free if you keep everything open. Paid for if you don't.


That's possible, and also for the SMB market that's producing lots of software but might be just a few devs. Think: iphone, ipad apps, small sites. There's not much for them either.


Another possible angle - as Thomas points out, researchers with "reputation" don't need this.

But what about up-and-coming researchers? Where do they "earn their rep" in the current system? I could see this maybe taking off as a way for not-yet-reputable researchers to document their history/track record, in much the same way as developers can use their github repos as documentation of their programming history. Maybe in a few years' time we might see junior security researchers listing their Vulnarb account on their resumes...


'Up and coming' researchers generally improve their reputation by schmoozing journalists, doing conferences and so on. If, for example, you didn't want to go through someone like ZDI, you could just post the bug to full disclosure. If it's a big enough bug, then the media will come to you.

Unfortunately in this industry 'rep' isn't earned, it's about who shouts the loudest, and it leads to some uncomfortable situations.


Using the public SSL cert is an interesting idea, but I foresee problems.

Dev: "Hey some dude on the Internet posted a vulnerability for our product but it's encrypted with our webserver's public SSL key. Can I have our private key to decrypt it?"

IT: "Fuck off."

Perhaps provide a secure way for the security team of large companies to submit another public key they control?


Well that gives a reasonable use case for this: http://dpaste.de/61O8/ (http://news.ycombinator.com/item?id=2402136)

Makes more sense now.


This is a much more complicated answer than the industry standard, which is to post a SHA-1 hash of a summary of your exploit, along with (sometimes) the vendor name. I'm not sure how important the extra problems that this solves are.

Yes, it allows a vendor to "prove" they fixed a vulnerability. Perhaps if you're looking at vulnerability research as an outsider you think this is a significant issue. But it really isn't. If Chris Valasek or Tavis Ormandy say that (say) Adobe hasn't fixed their finding, everybody (including Adobe) will believe them. On the flip side, when Adobe fixes Valasek or Ormandy's bug, they're going to say that right away. Believe it or not, vulnerability researchers like it when vendors fix their bugs; it's part of how you keep score.

And even if you think proving vulnerabilities is a real issue, all this does is allow vendors to decide autonomously to prove something. But vendors don't need tools to do this, and neither do researchers. The real problem is that vendors sit on vulnerabilities for months or years; giving them one more (complicated) way to publish doesn't really help much. Meanwhile, the researcher can disclose any time she wants; she just posts the exploit to Pastie or F-D. Done and done.

And yes, this creates a central location for customers to see outstanding security issues with products. But Zed can do that right now without getting researchers to do anything differently. He can just follow security researchers on Twitter and RSS their blogs and wait for them to post things. Then he can create entries on his site. Charlie Miller even posts numbers, like, "I have 193882 binned crashes in Quicktime and Apple has fixed none of them so far". Some visualization tools might come in handy. It's less fun than crypto schemes, but probably more useful.

Also worth mentioning: there already are "vulnerability arbitration centers" that do exactly this. Also, they take over the project management with the vendor. Also, they publish formal advisories to alert the public. Also, they pay the researchers. Sometimes a lot. One of them is TippingPoint's Zero Day Initiative, which also runs the annual Pwn2Own contest at CanSecWest. Another is iDefense.

Finally, you can consider whether this addresses the problem of "how do I communicate a vulnerability securely to a vendor". That is indeed a real problem and this is indeed a viable answer to that problem. But every vendor that has a vulnerability response process already has a secure channel for receiving reports. From spending the last 5+ years of my life communicating findings to vendors who don't have that process in place, let me assure you that "secure channel" is the least of your problems. Forget getting your email intercepted; worry more that your finding is going to end up in the vendor's public bug database tagged as a "feature" (seriously, go pick a slightly non-mainstream vendor with a public database and do searches for "segmentation fault" or even "nessus", as in "bug: product crashes and reboots when nessus is run against it").

---

If, as a vendor, you want to do something to streamline vulnerability reports, run - do - not - walk to create this web page on your site:

http://news.ycombinator.com/item?id=804257

I highly recommend you just crib from what 37signals did.

If you don't, and rely instead on the notion that a researcher could in theory use the RSA key in your SSL cert to send you a secure message, please bear in mind that many --- perhaps most --- researchers will interpret your lack of guidance on this as a license to simply publish your flaws directly on a mailing list. You didn't, after all, tell them not to, or tell them where they should instead send their findings.


> The real problem is that vendors sit on vulnerabilities for months or years; giving them one more (complicated) way to publish doesn't really help much.

Just making sure you actually read what I wrote, since that's specifically stated as the problem being solved, and at the top of the site it says researchers publish vulnerabilities, not the vendors. I think you misunderstood and think the vendors publish these.

Not replying yet, just making sure that you read what I wrote and that I understand what you're actually saying in your statement above.


First, thanks for the careful response. You and I argue in much the same manner as pure sodium argues with water. You're being nicer than I am this time.

Second, the point of posting a catalog of vulnerabilities encrypted under the RSA keys of vendor SSL certificates is that the vendor can authenticate a posting; the vendor, after all, is the only party that can decrypt them. I think we're clear on who the actors are here.

You appear to be trying to build a system that exerts pressure on vendors to fix and publish security findings by allowing researchers to safely claim publicly that they have findings.

But researchers already have several tools for doing this. One of those tools (my least favorite) is things like the ZDI, where you are paid hundreds or thousands of dollars to let a big company handle the problem for you. Another popular solution that has the virtue of simplicity is, again, simply posting a SHA-1 hash of your finding, like, "ad1ad1ccb6da145406edef884e0595b4b1f5c4ae IE8".
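The mechanics of the hash approach are as simple as they sound; a minimal sketch (the filename and product tag here are made up for illustration):

  # Hash the private write-up; publish only the digest plus a product tag.
  sha1sum finding-ie8.txt
  # Prints "<40-hex-digest>  finding-ie8.txt". Post "<digest> IE8" somewhere
  # public and timestamped (Twitter, F-D), keep the write-up private, and
  # reveal the file later to show the finding predates the fix.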

The latter solution is exactly as amenable to pressuring vendors as "vulnarb.com" is. You can even help. Just collect and aggregate those reports. No public key cleverness is required.

My read on you --- and please take this as a compliment --- is that you are a cleverness junkie. The trick of sending encrypted messages using SSL certs instead of PGP or S/MIME is indeed clever. But not every clever solution serves a real problem.


Yep, you know way more about this than I do. I've wanted to solve this problem for a while so any feedback helps.

The only thing you seem to be missing in the above is the consumers. It's not about getting the vulnerabilities transmitted to them, it's about posting that they exist, who did them, how severe they are, and in a way that can be verified by all three parties.

AFAIK, I can't currently go do that with ZDI right? They're sort of paid to keep this secret.

As for gathering existing SHA1 hashes, I'll look into that. Could be a way to seed the database ahead of time.

And no, I'm not a cleverness junkie. I mean, it's a shell script that's like 8 lines long. I just saw a problem in getting a vendor's "public key" and then realized I could do it this way.
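To give an idea, here's a rough sketch of the kind of thing I mean, assuming openssl is available (illustrative only; the actual encrypt.sh may use different filenames and flags):

  HOST=$1; FILE=$2
  # 1. Grab the site's SSL certificate and extract its RSA public key.
  openssl s_client -connect "$HOST:443" -servername "$HOST" </dev/null 2>/dev/null \
    | openssl x509 -pubkey -noout > pubkey.pem
  # 2. Generate a random password and symmetrically encrypt the report with it.
  openssl rand -hex 32 > pass.txt
  openssl enc -aes-256-cbc -pass file:pass.txt -in "$FILE" -out report.enc
  # 3. Encrypt the password to the site's public key; only whoever holds the
  #    matching private key can recover it.
  openssl rsautl -encrypt -pubin -inkey pubkey.pem -in pass.txt -out pass.enc
  rm pass.txt   # publish report.enc + pass.enc; the plaintext password is gone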


How does this SHA-1 hash come full circle? Everybody on the web will know that I hashed something. The vendor knows that I hashed the findings I sent to them, but so what? Do I have to post the hash on my own domain, so the vendor knows that the findings they received are from the same person who owns the domain where the hash was posted?

I don't see what this hash achieves.


Hash + reputation = documented finding.

Note that the hash isn't my idea. People have been doing it for years.


Subtracting "reputation" as a requirement in that equation has to have at least a small positive value, doesn't it? I don't know how many independent researchers there are who have a chicken-and-egg problem with reputation because they don't know how to disclose both visibly and responsibly. But if there are any, Zed's method seems beneficial.


If I could have understood with that much additional information, I wouldn't have asked in the first place.


I think the point of hashing is to prove that the researcher indeed found the vulnerability first and reported it to the vendor. It is proof that "this summary was written by me 2 months ago".

The hash must be posted to some archived/public place (a tweet, a mailing list, etc.) that can be referenced at a later date, when the actual summary is released.


[deleted]


I should have been clearer. Researchers post SHA-1 hashes publicly (if they care). They send the actual details directly to the vendor. The vendors you care about publish PGP keys. The ones who don't can't really be trusted to handle security advisories anyways. No, really: they really do put them in their public bug databases!

What made the vulnarb.com idea interesting is that by combining the two actions --- a safe public notice and a secure vendor communication --- you could create a public clearinghouse that consumers can consult to see if (say) Google is holding back on disclosures.

The issue here is, you don't need an elaborate crypto scheme to do this. Tavis Ormandy doesn't have to post an encrypted bundle anywhere to notify the public that he has a new Microsoft bug. He can just say "I have a new Microsoft bug" on Twitter. Reputation is so compelling that really, nobody bothers even posting SHA-1 hashes anymore. If you work for a credible vuln research shop and you post a message saying you have a finding in Adobe Reader, you have it. Case closed.

Zed could build the aggregator for these reports if he wanted to. It would be valuable. But that's just data entry. Zed programs. I don't blame him. I program too. I wouldn't want to build that site either.


> Tavis Ormandy ... can just say "I have a new Microsoft bug" on Twitter.

You're right, but what about the nobodies that haven't built up a personal trophy case of exploitable bugs? vulnarb.com may be solving a problem that doesn't exist for people that are already at the top of the vulnerability researcher club, but as with any group of people, there are hundreds if not thousands of people that aren't known and don't care to be the "l33t" ones giving conference talks and swapping private keys with Schneier and Knuth.

This could be a gateway for those people, college students and unknown hackers from non-first world countries (the alleged comodo hacker types for instance), to responsibly and legitimately get into the field. If marketed right, vulnarb.com could be a perfect way to post these notices, without getting trolled to oblivion on F-D.


Sorry I deleted my comment just as you posted. Thanks for the explanation.

Here is the deleted comment:

How does a vendor go from said SHA-1 hash to what the vulnerability is? I saw this as a rough draft for a way to easily publish vulnerabilities without letting the public view them but still letting the vendor have all the information.


Interesting idea. However, if TippingPoint will pay me for my vuln, why would I want to use you as yet another place to "secretly" disclose my vuln?

In addition, as someone who manages vulnerability disclosures for a Fortune 500, I would have a hard time getting the SSL private key for my company.


I think it's an excellent idea, because if the company's secret keys are already compromised then that is a meta-vulnerability that needs immediate attention :-). And only the company should be able to get access to their secret keys.

I would include, in the encrypted message, the length of the encrypted message. While it's unlikely that an attacker would take a previously encrypted message and try to add confounding/confusing text to it, you would want to avoid having the message compromised between sender and receiver.

There is no way (as yet) in your stream to know if the company has picked up a copy of the vulnerability. But that may need to remain true, since there is a question of negligence: if you know about a vuln and don't fix it, are you negligent for the consequential damages it causes?

There is also no way to know if two or more reports are the same vulnerability or not. It would be useful if there was a way to somehow indicate this but that too is a hard problem. The effect might be that when a vulnerability is discovered the manufacturer starts seeing a bunch of reports but they are all the same one, and then after the patch the reports won't go down until the patched version is distributed widely.


Zed will act as a clearing house? There's no danger scale (CVSS or whatever) for customers and no comp for, ahh, researchers. I don't understand how this will work.

It's also clearly demonstrated that it's easy to submit reports. So...

1. Spam vulnarb.com with vulns in my competitor's software.

2. Spam vulnarb.com with vulns in my own company (maybe I can spin this... "1239801211 vuln reports against us... vulnarb clearly not to be trusted... our robust and mature security team always available...". Any press is good press right?)

3. Spam anybody I like in there, bury my product's issues. How exactly does Joe Customer realistically compare software using this? ("I want to view PDFs, should I use Acrobat, Foxit, evince, xpdf?").

3b. Spam vulnarb.com with faux vulnarb.com vulns :-p


So, block spam and identify people to avoid is what you're saying? Done. Now what?


Faux vulnerability reports might be slightly harder to spot than your average Nigerian Princess.

Will you validate my exploit for IBM RACF on OS/390?


It's easy. Vendor decrypts it. It gets ignored. Done.


The process isn't clear to me. So the vendor communicates "fixed" to you? Or do I report to the vendor and to you simultaneously using this mechanism, and then I subsequently follow up with you? Does the vendor communicate with me? You? Both of us? How?

What happens in disputed cases? (eg I have reported unsafe calls in signal handlers to be told "no working exploit, no fix"). Now what? I still have to publicly disclose vulns in my own infrastructure to get traction? What have I gained by going to vulnarb? Even assuming consumers consult such a service what can they learn if the details (severity) of the issues is not available?

I don't think I can sell this to my CSO ("there's a guy on the 'net that says I should send him encrypted text of exploits that I've found in our infrastructure"). And for stuff that doesn't touch my day job: what's your sales pitch over ZDI?

I thought the shell script was neat but sorry, I'm not really compelled by your service proposition.

EDIT: your _current_ service proposition ;-)


The vendor indicates "fixed" by posting the decrypted exploit data against their entry in Zed's database.

If the bug was benign in the first place, they can post the decrypted exploit without worrying about it or doing anything.


Even when a bug has been fixed in trunk, a vendor may not wish to reveal the details of an exploit (either publicly or "just to Zed").

A sensitive researcher may not wish the details of their techniques or findings to be revealed to a wider audience.

For context only (I don't want to do the responsible disclosure debate again...), please consider e.g. the recent Bugtraq SCADA announcement, or Sockstress.


So. I submit a vuln that says "Product X has vulnerability Y."

The vendor decrypts this, and marks it as fixed in your database by submitting "decrypted" text that says "I like ponies."


The encrypted “I like ponies” would not match the encrypted “Product X has vulnerability Y”


Seems like a neat hack to find a certificate for an entity that doesn't necessarily publish a PGP key.

However, unless the company's primary website is on SSL, how is it going to work? Choosing a random SSL-secured site under a domain might lead to something outsourced, for which the original company would find it hard to obtain the private key. (Even the primary website could be on Akamai, and then the company may not have access to the private key.)

If you start requiring a company to provide a specific domain for the key you might as well ask for a PGP key instead and store the PGP encrypted messages.


Can you find instances of this practice? I'd like to know so I can investigate.


For the Akamai thing, compare what certs you get for say www.oracle.com and www.akamai.com (I see the same cert).

The CN doesn't match there, so that case is a bit different from a third party hosting a valid cert. You'd probably want to check for things like this somehow, though.
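If you want to eyeball this for a given host, something along these lines works (hostnames are just the examples above):

  # Dump the subject/issuer of whatever cert the host actually serves.
  openssl s_client -connect www.oracle.com:443 </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -issuer
  openssl s_client -connect www.akamai.com:443 </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -issuer
  # If the subject CN doesn't match the domain you meant to reach, the private
  # key probably lives with a CDN or hosting provider rather than the vendor.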


I think this approach exposes a number of problems that are pretty relevant.

First off, by decrypting and posting the content of the researcher's submission, vendors are basically giving up vulnerabilities to the bad guys who want to target unpatched systems (read: what malware does most of the time).

Second, I see no reason why the vulnerability disclosure process should change and migrate to SSL-based encryption: vendors can already expose "lies" with PGP encryption if they feel like it, and honestly, having a "middle-man" in between that can potentially hold sensitive data looks like a bad idea.

So how about this: researchers post on the website the SHA-1 of the PoC, or just a line saying "product X is vulnerable", and the owner of the website then asks the vendor for confirmation. If the researcher has no "karma points", the submission is held back until the vendor confirms; if the researcher has "karma points" (already multiple confirmed and valuable submissions), the advisory gets published immediately, regardless of whether the vendor acknowledges it or not.

Still, does it actually help anybody? How would you convince consumers to actually pick products based on that website? To me it looks like the people interested in this kind of information already have other means (Twitter, mailing lists, direct contact with vendors); the others don't care.


> First off, by decrypting and posting the content of the researcher's submission, vendors are basically giving up vulnerabilities to the bad guys who want to target unpatched systems

They already do this. After a vulnerability is patched and fixed they release what it did.

> Second, I see no reason why the vulnerability disclosure process should change and migrate to SSL-based encryption

It's a proposed first step, and makes sure that no vendor can avoid the system by simply not publishing a key. I envision in the future their GPG key would also be on the site and could be used.

> vendors can already expose "lies" with PGP encryption if they feel like it, and honestly, having a "middle-man" in between that can potentially hold sensitive data looks like a bad idea.

Uh, not sure what the "middle-man" is, but if you mean vulnarb.com then no, the point is that it's industry standard asymmetric crypto so I wouldn't know anything. In fact, I'd have incentive to not know anything so that I'm not getting sued.

Finally, your proposed karma points system doesn't seem to solve anything. If everything is encrypted so that only the receiver is able to decrypt it (not even the sender can do that), then there's no point in preventing people from publishing.

Are you afraid that someone would slander a company? Couldn't the company simply decrypt and publish their lies then sue them for slander?


> They already do this. After a vulnerability is patched and fixed they release what it did.

Usually researchers' submissions are way more detailed than the advisories you see from vendors, meaning that you cannot just decrypt and publish the content of what the researcher submitted. It's true that oftentimes you can find the bug by reading the advisory and diffing the patch with tools, but that's a long shot compared to just publishing the original researcher's submission, imo.

>Uh, not sure what the "middle-man" is, but if you mean vulnarb.com then no, the point is that it's industry standard asymmetric crypto so I wouldn't know anything. In fact, I'd have incentive to not know anything so that I'm not getting sued.

Yep, I meant vulnarb.com. I'm not saying that you have anything decrypted and ready to use; I'm saying that it's pointless to have another recipient for sensitive data. Because in the extreme scenario where the private key is stolen, it's just more attack surface to get to the submissions. And if we assume no leak of this sort happens, what's the reason for a third party to have this sort of data compared to, say, a SHA-1 hash of the PoC?

Actually my point about the karma system is that you can achieve the same goal you have in mind without needing any sort of data from the researcher other than "product X is vulnerable" and then the karma points will determine whether the researcher is reliable or not.


> Actually my point about the karma system is that you can achieve the same goal you have in mind without needing any sort of data from the researcher other than "product X is vulnerable" and then the karma points will determine whether the researcher is reliable or not.

No, that doesn't actually solve this problem because then it just turns into a he-said-she-said.

Also, you seem to leave out that the company can decide to decrypt it or not. Hell, they might come to an agreement with the researcher to just have the researcher assert it's fixed and never decrypt it. Although, I think that it should be fully exposed so people can see how it's really done and there's incentive to really fix it, not half-ass fix it.


>No, that doesn't actually solve this problem because then it just turns into a he-said-she-said.

Then I fail to understand what you plan to do with the encrypted stuff that you get from the researcher, since the only one able to decrypt it would be the vendor. At that point the scenario is: the researcher says this is vulnerable, the vendor denies it, and somebody needs to decrypt the content of what the researcher sent to the vendor. In contrast, what happens without the encrypted content is: the researcher says this is vulnerable, the vendor denies it, and the researcher can publish the unencrypted original advisory on your website if he feels like it.

Regardless, the SSL trick is a nice one. I just don't see the point of the third party involved, and I somewhat doubt that a website like this can be useful to the end user or put pressure on the vendor.


Let's try a few of your supposed scenarios:

1. Researcher posts it. 2. Vendor pulls it down and looks at it, then says it's harmless or not an issue. 3. Researcher thinks it is, and since the vendor says it's not, the researcher posts a cleartext version for them. 4. If it really is, then the vendor should have paid attention.

Next scenario (not sure why you can't think past solving these yourself):

1. Researcher posts bogus spam. 2. Vendor decrypts, sees that it's bogus spam. 3. Vendor posts the decrypted version and flags it as spam. 4. It gets taken down or ignored if it is, and researcher is blocked eventually.

Next scenario:

1. Researcher posts a vulnerability. 2. Vendor fixes it but half-asses it. 3. Researcher tells them it's not fixed, they say yes it is. 4. Researcher calls their bluff and posts the original cleartext. 5. If it really is fixed, then no harm done. If it's not, then the vendor screwed up.

Pretty much, in most of the situations you've envisioned, you've assumed that the communication is one-way from researcher->vendor. Is there a reason you assumed that vendors wouldn't be able to post their replies back or decrypt and post the decrypted vuln for others to see?


You are missing the point: is there anything in the above that actually requires a third party to have the encrypted content, versus just a simple hash of the email/PoC sent to the vendor?


Doesn't this mean that anyone who administers (or breaks into) a webserver or SSL reverse proxy belonging to the vulnerable party would be able to decrypt the report + PoC?

Isn't that a pretty serious issue given that such reports can be extremely sensitive and at least in most mid to large size vendors like cisco, microsoft, google etc. they would normally only be available to the security teams + the responsible engineering team? Certainly not to ops.


Yes, but it's better than the most common scenario now, which is you generally don't have a public key for the security team at all.

Besides, if you can't trust your SSL administrators, why do you have them administering your SSL?


It seems to me that trust is a more subtle metric than on and off. If nothing else, if you're trying to keep a secret it behooves you to tell as few people as possible, even if you trust them all.

There are lots and lots of security team pgp keys out there. Clearly not every vendor has a secure process, but certainly the people who make up the volume of the reported incidents do. Vendors who do have them might be worse off if people started using this scheme.

cisco: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xA291...

google: https://services.google.com/corporate/publickey.txt

microsoft: http://www.microsoft.com/technet/security/bulletin/pgp.mspx


I am trying to imagine the response you'd get from, say, a message oriented middleware vendor without a dedicated security contact, upon sending them a message encrypted in such a way that only their SSL certificate could be used to decrypt it.

It's funny to think about it as "mass hysteria!"; the people who care about these reports are on the dev team and can safely be expected not to really understand what an SSL cert is, let alone have access to it or the patience to use OpenSSL's command line.

But more realistically what's going to happen is that the message will get chucked entirely.


That's fine, they can ignore it since ultimately it's not about them but about everyone else. People evaluating similar products can then go look and see how they compare on their security track record. Once that happens I imagine they'll figure it out.

Also, I'm probably imagining a few open source tools to make this easy to work with. The shell script is just a little test case to prove you can do one part of it. There'd need to be a few more things before everyone can actually play the game.


If you want to compare security track records, you have any of several huge vulnerability databases to consult; Secunia is the one that seems to have done the best job of SEO.

But there are two huge problems with this approach.

First, nobody cares. Nobody really evaluates products based on "security track records". If I did a "Month of Cisco Bugs", do you think any company in the world with more than 500 employees would switch to Juniper or Astaro? No, they would not.

Second, vulnerability track records don't directly measure product security, or even product security team responsiveness. They measure researcher interest and researcher effectiveness. Microsoft, Adobe, and Apple are deluged with reports, because researchers are incented to target them. It also tends to take (say) Microsoft longer to fix things than J-Random-Vendor; QA is harder, releases are more expensive, and more security issues are on the plate to begin with.


> First, nobody cares.

Can this really be true? If so, why do you have a job?


Companies will select products with terrible security track records, then pay us to come in and beat up the vendor.


Those are three mega-corporations, and not small businesses though. Most places I've worked with outsource the software, web server administration, and SSL administration all to a single client, and don't necessarily pay for a security team to maintain GPG keys, but just for maintenance and ongoing development.

Allowing the web team to securely receive a vulnerability report using infrastructure they've already built opens up a secure reporting process to thousands of smaller businesses.


I think it could be useful to encrypt the same report to multiple recipients - just generate multiple pass.enc files.

This allows the researcher to publicly send the same report to multiple vendors (and it will verifiably be the same report) and adds the option of introducing "neutral" third party arbitrators as well.

For example, the researcher might want to publicly CC all vulnerabilities to CERT (to pick a random example), to put a little more pressure on the vendor and share the burden of following up and keeping the vendor honest.


That shell script does that already. Go ahead and run it for multiple sites with the same file:

  sh encrypt.sh www.google.com mysecret.txt
  sh encrypt.sh www.apple.com mysecret.txt
  sh encrypt.sh www.microsoft.com mysecret.txt
It will then make a directory for each in results/, using a different password and the respective site's key for each.


That is exactly what I don't want. :-)

I want the same random key sent to multiple recipients. That way they can all verify that they are seeing the same thing.

Edit: Just to clarify, without using the same random key for all recipients, the reports are not verifiably the same until everyone shows their random keys. If you use the same random key for all recipients, you don't have this problem.
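Concretely, something like this (a sketch reusing the hostnames from the example above; the filenames are assumptions, not what the existing script does):

  # One payload, one random key, the key wrapped separately for each recipient.
  openssl rand -hex 32 > pass.txt
  openssl enc -aes-256-cbc -pass file:pass.txt -in mysecret.txt -out report.enc
  for host in www.google.com www.apple.com www.microsoft.com; do
    openssl s_client -connect "$host:443" </dev/null 2>/dev/null \
      | openssl x509 -pubkey -noout > "$host.pub.pem"
    openssl rsautl -encrypt -pubin -inkey "$host.pub.pem" -in pass.txt -out "pass.$host.enc"
  done
  # Everyone who decrypts their own pass.<host>.enc recovers the same key, so
  # they can all confirm they are looking at the same report.enc.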


Could be an option, although I'm not sure what's better given that most vendors wouldn't want to have other vendors know they're vulnerable until they fix it.

Also, I'm pretty sure that if Adobe and Microsoft both have a vulnerability, then they'll talk to each other.


I'm mostly thinking about the arbitrator scenario, or key escrow for each report.

Your current scheme suffers from the fact that once the report has been generated, the only person that can verify that a given document is the same as the submitted report is the vendor - who may not be inclined to cooperate.

It's admittedly a little far-fetched, but if there is a dispute as to the contents of the report and the vendor refuses to disclose the secret key, it ends up being the vendor's word against that of the researcher.

This can be solved by keeping the random key around or by sending it to multiple recipients. Or both. Your current strategy of immediately deleting it is pointless while the original plain-text report still exists but leads to the conundrum above.


I wonder how easy it is inside a corporation to get access to the SSL private key? I'm guessing a company like Microsoft doesn't hand that out to their developers easily.


You would essentially proxy the request over to ops for decryption. That's how we handle GPG-encrypted vulnerability reports in the company I work for. We don't hand out the private key to everybody; I think only two people have it, and they forward the decrypted result internally to a security mailing list.

The first couple of times you tried I would expect a general "WTF?" response from ops (indeed they should ask some pointed questions and not just immediately comply), but once you got over that it wouldn't be hard to put a process in place to deal with this routinely.
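The actual decryption ops would run is tiny; assuming the pass.enc/report.enc naming from the sketch upthread and a server.key file ops already hold (names are illustrative):

  # Recover the password with the SSL private key, then decrypt the report.
  openssl rsautl -decrypt -inkey server.key -in pass.enc -out pass.txt
  openssl enc -d -aes-256-cbc -pass file:pass.txt -in report.enc -out report.txt
  # Forward report.txt to the internal security list, then delete the copies.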


Or, instead of building a process for getting your ops team to do unusual things with your site certificate, you could just put up the web page with your GPG key on it.


Yes, ideally GPG would be better, but if you only did GPG then companies could simply not publish a key and avoid the whole system.

I'm envisioning some combination whereby, if they are already registered on vulnarb.com, you just get their GPG key (hopefully signed by their SSL key); otherwise you just use the SSL key if that's all you have.

After that I haven't explored it further. The advantage of an SSL key is that they usually must have one, it's secure, and they can't prevent people from using it.


Vendors who don't provide some secure channel to send findings to are already subject to vigilante-style disincentives: if you don't provide a key, researchers will drop zero day on you publicly. Remember, public drops used to be the primary way things got disclosed.

You appear to think we need to worry about vendors gaming the system. But on the researcher side, the standard definition of "responsible disclosure" is, "I wait 60 days after I send you something and then I go public whether you say OK or not". That isn't just my take on the situation; it's what actually happens.


It's starting to look like doing this for open source projects is a good start. They seem to be less capable at this, try to hide the vulnerabilities more, and reporting to them is much harder and less well organized. Also, based on my experience with how the Ruby project does its bullshit security, the users could really benefit.

Thanks for your input Thomas.


Kudos to Zed, for at least attempting to find a workable middle-ground. It's a change from the usual and useless finger-pointing.


The fingerpointing isn't for want of a better system of disclosure. It's because fixing bugs costs money.

Very few vulnerability researchers have any notion of how much it costs to roll out a dot release; the idea that it could cost tens of thousands of dollars strikes them as nonsensical.

Meanwhile, very few software vendors have any notion of how big a deal (say) memory corruption is; the idea that it could cost the public hundreds of thousands of dollars strikes them as nonsensical.

All the tools in the world aren't going to get researchers to care about why it's taking 6 months for a vendor to publish their findings. To them, 60 days is an extremely generous window. And all the tools in the world aren't going to educate vendors on the true costs of software vulnerabilities.


I mostly disagree with this. Right now the corporations have pretty much stacked the deck against consumers when it comes to security. A simple site where you can go and at least see the number of vulnerabilities in a product, and also any past ones, could do a lot to change how things are handled currently.


How would this differ from a site like the Zero Day Initiative? (Aside from the fact that it looks like there's much less hassle to use this system; or maybe that's actually the point.)

I think this is interesting, but the two "types of vendors" I see this being useful for are:

1.) Vendors who don't have an already established method for communicating vulnerabilities (and therefore the researcher doesn't have a good way to interact with them).

and 2.) Vendors who chose not to work with researchers (of which there used to be a lot, and are still a few).

I see type 1 quickly instating a policy to deal with vulnerabilities after something of theirs gets posted to the site (so they can more quickly deal with future issues), and I see type 2 continuing to ignore vulnerabilities (and possibly being hostile to the researcher and the site for posting things). So maybe the benefit is shaming "type 2" vendors?


What's the major vendor that won't accept vulnerability reports from researchers?

Remember: literally the one and only responsibility a vendor has here is "provide a secure way to send us findings". They don't have to provide a mechanism to post things publicly. If you sit on a finding for 60 days, nobody† will blame a researcher for going public with it.

† At least nobody with any influence over your reputation or hirability


I don't know about "major" but off the top of my head I can think of a couple hardware vendors who make one-way-transfer appliances who not only won't accept vulnerabilities from researchers, but claim that testing their product for vulnerabilities violates their terms of service, and opens you up to legal action.


The only people in the universe who use those one-way-transfer appliances are big companies. The only reason you'd ever audit those appliances is because the big company told you to. Vendors say a lot of dumb things to contracted researchers (I've heard it all!), but they back down real quick when the guy who owns the budget for the F-500 bank tells them "fix this and shut the fuck up."

The only case I can think of in which a vulnerability researcher could run afoul of a vendor in this scenario is if after they did their gig for the big company, they also wanted to post advisories on their website. We tried to thread this needle back in 2005 and gave up pretty quickly. The fact is, if a big bank pays you tens of thousands of dollars to whack a vendor, everything that happens afterwards should benefit the vendor; they, after all, paid for the work. The advisory on your website doesn't help the vendor. Why fight for it? Most of the time, your NDA and MSA strictly prohibit you from publishing anything anyways.


I can think of another; as long as we're delving into "completely hypothetical" territory:

In the one-way-transfer example I listed, don't think "big company"; think "who besides a company might need a one-way-transfer". Maybe they're more "agency" than company, maybe in fact their name is an acronym (a lot of times those acronyms have three letters).

In that hypothetical example, the situation resolves itself the best way that it could:

The "agency", after reading your report about some fundamental insecurities in the product, realizes that it is currently deployed in places where the potential damage from the product being insecure is closer to "people being killed" than "money being stolen". So they pull it "off the market". That's an example where the vendor being actively hostile doesn't actually ruin your day (and you get to sleep soundly that night thinking that maybe you've actually found something that will be more useful than a cross-site-request forgery vector in an online dating site) .

As an alternative (and one which I should have used initially, but didn't think of at first), consider this:

You've found that a product being used by a client to facilitate check and credit card payments is a horrendous piece of shit (think "the custom crypto to encrypt the user database is literally an offset substitution cipher that repeats every 13 places... the effort required to break it is less than that required for a cryptogram in the newspaper").

The client is glad to hear this, but is not in a position to either 1.) force the vendor to fix the issue, or 2.) replace the product with a more sanely developed one (the product is PCI compliant, after all, and the only reason you are there on-site is for them to pass a PCI audit).

So in that second scenario, you know that there are other people using this product, and it will most likely never be fixed. But yes, you've signed an NDA with your client, and all that you can do is make sure they're aware of the issue.

I don't think the system Zed is proposing will actually help in that example though; as you mention, what you discovered belongs to your client.


What if this turned into an open source client combined with a standardized API for reporting vulnerabilities directly to vendors and similar escrow services?

Ultimately, yes, it's meant to be easier and more open than current initiatives.


I think that could be interesting then.

One thing that can make it more useful than ZDI is that you lower the barrier of acceptance for a bug. In ZDI, you send them the bug, they decide if they want to buy it from you, and if you accept, they work with the vendor to get it resolved.

I can see numerous examples of bugs that they wouldn't necessarily be interested in buying from you, which a system like this could still provide for.

I happen to be in the camp who thinks the biggest problem with vulnerability reporting is the lack of response from vendors. A system like this, if it were to become popular, could serve as another way to keep them honest. But in order for it to be ubiquitous, you'd pretty much have to handle dealing with the "biggies", which I think would mean submitting vulnerabilities in the way they advertise.

I agree that it shouldn't ideally be up to the vendor how they deal with vulns, but I think you'd have to have tremendous momentum to shift that burden from you to them.


http://www.secunia.com/ does this already. There are heaps of other sites that do this too.


It sounds like to me there's an excellent opportunity here for some clever security researchers to set up a company like Underwriters Laboratories (http://en.wikipedia.org/wiki/Underwriters_Laboratories).


I wonder if we'll use this in a couple of years with DNSSEC keys instead of certificates used for https.


This is a fantastic idea; however, how will it be sustained if the idea takes off?

Non-profit? Consortium funding (scary)? Other?

Has Zed taken this into account and documented it?


Why not use GnuPG?


Using GPG is a much better idea and is in fact what most vendors, including the Microsoft MSRC, the Adobe and Cisco PSIRTs, and the Apple product security team already do.


I suspect for purposes of key distribution. If <huge company> doesn't feel like playing Zed's game, they could easily forgo issuing a GPG key. By using SSL, he's strong-arming them into participating.


Yep, exactly. People reporting vulnerabilities shouldn't have to beg to do it; they should be able to do it, and it's up to the corporations to provide more convenient means.

That and I thought it was a pretty cool hack to encrypt a payload using just a shell script and openssl. :-)


If a vendor bothers to issue a security patch, the advisory/disclosure is already signed most of the time. Harvesting the keys from, e.g., the Bugtraq archive seems convenient enough.

But yes, I also like the SSL idea; now if only VeriSign and Comodo were more trustworthy than the WoT...


Zed, could you make your e-mail address a bit easier to find? I don't use twitter and figured you wouldn't want me phoning you up.


How is he strong-arming them by sending an encrypted blob around that only the vendor can see? The vendor can still choose to ignore the issue or sue the researcher.


Cool idea - hope he has some good legal cover :)


Huh? Why?




