Yes, this has to be one of the most impressive - and scary - things I've read in a while. Fantastic work by the team, and thank goodness there are security competitions like these to serve as a rewarding outlet for their extreme talents.
I'd rather see this money used to increase software security at all layers. [1][2]
See, all these vulnerabilities started as major or minor bugs, and those bugs originate from somewhere. While 100% bug-free software may be too hard to be worth the effort in most applications, there is a huge difference between the ideal state, where "the first exploit hits you hard, and after two or three more severe bugs in your software, you are out of business", and what we actually have, where "you can lose huge amounts of data every few months and still stay in business." And no regulator, no expert witness in a lawsuit, actually nobody, wants to have a look at your source code anyway. Even if you don't hide it through SaaS or other means, almost nobody asks for the source. [3]
Instead, public money is used to declare "cyberwar" and to buy zero-days - which creates an incentive for people to keep their findings private instead of reporting them early. More importantly, it creates an incentive to plant such bugs in the first place. [4]
[2] Audits; bug bounties (for every bug, not just obviously security-related ones); better static-analysis tooling; improving type systems and programming languages as a whole; donations to projects like OpenBSD and Mozilla / Rust; etc.
[3] ... unless it is about copyright. But I've never seen such a request in a software-security-related incident.
[4] An attacker doesn't even need to establish a full-blown backdoor. They can just contribute some code with a missing or slightly wrong check, and work out how to exploit it later, after enough time has passed.
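To make [4] concrete, here's a hypothetical sketch in C (the function and names are invented for illustration, not taken from any real project). The length check reads as plausible at a glance, but using <= where < is needed leaves a one-byte overflow for the contributor to come back to later:

    #include <string.h>

    #define BUF_LEN 64

    /* Hypothetical example: copy a user-supplied tag into a
     * caller-provided 64-byte buffer. The check looks right, but
     * <= admits len == BUF_LEN, so the NUL terminator is written
     * one byte past the end of buf. Correct check: len < BUF_LEN. */
    int set_tag(char buf[BUF_LEN], const char *tag, size_t len)
    {
        if (len <= BUF_LEN) {       /* subtle off-by-one */
            memcpy(buf, tag, len);
            buf[len] = '\0';        /* writes buf[BUF_LEN] when len == BUF_LEN */
            return 0;
        }
        return -1;
    }

A reviewer who sees "there is a length check" will likely move on; the off-by-one only matters to whoever planted it and knows to come back for it.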
We should also invest more in changing the assumption that "100% bug-free software may be too hard to be worth the effort in most applications". Ultimately, everything else is just a stopgap.
My first reaction was: this is insane. Nobody is perfect. Let's try to reduce the bug rate to 1 bug per 100,000 LOC, and we will have achieved more than we could hope for.
However, thinking more about it: if you have a system with 1 million LOC and have reduced your bug rate to 1 per 100,000 LOC, only 10 bugs remain; fix those and you are 100% bug-free. That doesn't sound infeasible at all. (Although it may be hard.)
Many companies have bug bounty programs where they will pay people who discover bugs.
I've read criticism of Pwn2Own that argues that some people will find an exploit and save it for the competition rather than disclosing it to the company. That delay gives others time to independently discover the exploit and use it.
They are basically already funded by public money; the problem is just that the vulnerabilities found don't make it back to the vendors until they are burned.
Y'know Y Combinator? That Startup Accelerator that runs this site?
They will just let you borrow $100k (well, $120k) if they like you and your idea. See http://www.ycombinator.com/ for more.
Snark aside, assuming the contestant took the whole year to work on the exploit, a chance at a $100k payout is poor compensation once you weigh the risk of never finding a working exploit against the fact that they could be making twice that working at a place like Facebook or Google.
Also, you'll note that the Ars article doesn't talk at all about the contestants whose hacks failed.
If you want to make the most money doing security research, your best bet is going to be finding people who will pay you to do security research. You get paid even if you don't find anything!
And $120k in exchange for equity is a sale, not a loan or a gift.
Perhaps they came across the exploit under different circumstances? Perhaps they've exploited it themselves in the real world, but the lure of the $100k prize made it worth giving up the goods?
It's a fair point - bug bounties were controversial when companies first started offering them, and even now, reduced to monetary terms, economics states they are only as effective as their payout.
For example, in the so-called "Cloudbleed" writeup, it was evident that Cloudflare was leaking authorization headers from Uber. If the security researcher who discovered it had far fewer morals, then instead of reporting the issue to get it fixed, they may well have had the power to change the bank account each Uber driver gets paid into to an account under their own control.
For all their hard work and honesty, the security researcher (aside from the benefits of an awesome job at Google) won a grand prize of... a T-shirt from Cloudflare (hopefully it said "I broke the Internet and all I got was this T-shirt").
Maybe the Pwn2Own exploit did come from different circumstances. In that case, someone was able to reuse previous work and make an easy $100k. (If you're jealous of that, residuals in Hollywood are gonna make you move to LA.) Whatever portion of that $100k came from VMware, it's cheap for VMware to learn about this bug and be able to close it, when their business model rests on the security claim that the guest cannot escape to the host.
>economics states they are only as effective as their payout.
It's a bit more complicated than that: to be effective, the bounty only has to be greater than the lowest value the bug has to any of the people aware of it. It only takes one of them preferring the bounty and reporting it; once the bug is patched, it loses its value to everyone else.
That incentive, even a small one, dramatically reduces the number of individuals who will know about the bug before it's disclosed. Since the set of people who are both willing and able to exploit a bug for gain is small, keeping the set of people who know about an undisclosed bug small reduces the probability of an overlap between the two.
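A rough way to formalize that (my notation, not the parent's): let B be the bounty, A the set of people aware of the bug, and V_i the private value person i in A could extract by exploiting or selling it. The bounty triggers disclosure as soon as it beats the minimum, not the maximum:

    B > \min_{i \in A} V_i

The first person with V_i < B takes the bounty, the bug gets patched, and V_j collapses to roughly zero for everyone else in A. That is why even a modest bounty helps: it shrinks the set of people who know about an undisclosed bug, and with it the chance that this set overlaps the small set of willing and able exploiters.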