That's the rub, isn't it? I feel some of the least secure places are the ones which never mention (or realize) that they have a security problem.
I don't have any particular opinion of Gitlab, but it does seem that acknowledging fault is more valuable than the alternative. If I were to attack a service, I'd probably avoid the one that actively updates its security regularly.
> I feel some of the least secure places are the ones which never mention (or realize) that they have a security problem.
It is an age-old conundrum that people seem to struggle with a lot. As part of my previous profession, I security-reviewed hundreds of open source libraries. They fall into 3 categories:
1. The libraries/applications that have either had a really security-conscious developer behind them or a past rocky enough that they were given an extraordinary amount of attention security-wise. These account for about 1% of software.
2. The "normal" libraries/applications that nobody cares about with regards to security. As long as the code works, everyone keeps using it. They are often insecure by default, but the lack of attention from subject-matter experts means they won't be judged. They account for 80% of software.
3. The horrid ones. Built-in code execution as a feature. Authentication systems with more critical bugs than you can possibly imagine. We never talk about these because... well, we could spend months polishing a turd. Nobody dares to publicly speak up and say "don't ever use X, Y and Z" for fear of the repercussions. They account for the rest of software.
GitLab used to be in category 2, but moved into category 1 about two years ago when security professionals started to give it attention.
As to your feeling: 99% of software has security issues, some worse than others. We rarely talk about them.
Check out hacker101.com and join the community Discord. There's a ton of bug bounty hunting content on the internet. Plenty of room to explore and find your niche.
I can't find it right now, but I remember reading an article on Wired many years ago about the Obama campaign and their use of targeting and "big data". Really interesting stuff about how they were buying TV spots and the Romney campaign couldn't figure out why, and how their use of technology was a massive advantage.
The White House also evaluated personalization technology to use on whitehouse.gov so that they could serve particular "stats" during major events (election season, SOTU, etc.).
The researcher can still disclose it; they just aren't going to get permission to disclose it through the HackerOne program. Most things out of scope don't get publicly disclosed, as far as I know.
Without seeing the communications it's hard to say, but "When the security researcher -- named Vasily Kravets-- wanted to publicly disclose the vulnerability, a HackerOne staff member forbade him from doing so, even if Valve had no intention of fixing the issue" sounds like more than just not being able to disclose on the H1 program.
I submitted an XSS on the Tesla website to HackerOne, and it was marked as a duplicate. A week later, I shared it with an XSS mailing list and got an angry email from HackerOne soon after. Public disclosure violates the terms of their reporting program EVEN if they reject your report.
I'm really curious how much of what is reported to HackerOne ever gets an actual patch. It kind of seems like there are a bunch of known vulnerabilities idling on their platform without quick fixes. Should be interesting once the HackerOne database is inevitably leaked.
HackerOne should start requiring companies pay researchers for duplicates - that the company already knew of a flaw should make them more liable, not less.
> HackerOne should start requiring companies pay researchers for duplicates
That would create a perverse incentive for researchers to tell their friends about the vulnerability so that they can resubmit it and also get a bounty.
The problem could be solved on the side of the researchers by splitting the bounty among all submissions of the same bug, but anyone else with access to the report (employees of either HackerOne or the relevant company) could try to get a share by having someone create a duplicate report.
First come, first served seems like it would be the hardest to game, as the first reporter is guaranteed to have actually done the work (not counting rogue employees who create bugs to "find" and report).
There should probably still be some kind of reward for duplicate reports to avoid discouraging researchers, but something symbolic like publicly acknowledging that they found a bug might be enough to provide validation.
> First come, first served seems like it would be the hardest to game
For external parties, yes. However, it's the easiest to game for those liable, since you can just mark whatever you want as a "duplicate" and refuse to pay the bounty.
Offering bounties for public disclosures helps remove a lot of perverse incentives.
I like your first idea of splitting the bounty. I think it's unlikely that employees of HackerOne or the relevant company would risk their job for a small share in a bug bounty.
Splitting the bounty does nothing to fix the incentive problem, since it's the same outlay from the vendor whether they fix after 1 report, or a year later after 20.
In reality, vendors (or at least, serious vendors) aren't gaming H1 to stiff bounty hunters. If anything, the major complaint vendors have about H1 is that they aren't paying enough --- that is, they deal with too many garbage reports for every report that actually merits a fix.
I wonder if you could scale it so that the goal behaviors were also a market equilibrium. No complicated prohibitions on going public, but each additional report (aided easily by going public) would cut into your own earnings by some percentage. On the flip side, each additional report costs the company money too, so they also have a monetary incentive to push a fix before someone else finds it or you give up waiting and go public anyway. Each would be on an appropriately decreasing scale so there are always reasonable minimum and maximum payouts.
I assume it'd be hard to convince companies it may be in their better interest to set up an incentive structure this way. But perhaps a third party platform could find some such mutually beneficial equilibrium.
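A rough sketch of what such a payout schedule could look like, assuming (purely for illustration) that each reporter's share scales with one over the square root of the number of duplicate reports; the base bounty, floor, and cap below are invented numbers, not anything HackerOne or any vendor actually uses:

```python
import math

# Hypothetical payout schedule for the scheme described above. Every additional
# duplicate report shrinks each reporter's share (never below a floor), while
# the vendor's total outlay keeps growing (never above a cap) until a fix ships.
# All parameters here are made up for illustration.

def per_reporter_payout(base_bounty: float, n_reports: int, floor: float = 100.0) -> float:
    """Each reporter's share shrinks as duplicates accumulate, but never drops below the floor."""
    return max(floor, base_bounty / math.sqrt(n_reports))

def vendor_total_cost(base_bounty: float, n_reports: int, cap: float = 10_000.0) -> float:
    """The vendor pays every reporter, so total cost rises with each duplicate, up to the cap."""
    return min(cap, n_reports * per_reporter_payout(base_bounty, n_reports))

if __name__ == "__main__":
    for n in (1, 4, 25, 100):
        share = per_reporter_payout(1_000, n)
        total = vendor_total_cost(1_000, n)
        print(f"{n:>3} reports: ${share:,.2f} each, ${total:,.2f} total owed by the vendor")
```

The point of a shape like this is just that it satisfies both constraints at once: each duplicate shrinks every individual share while still increasing the vendor's total bill, so both sides want the fix shipped sooner rather than later.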
If they get a duplicate report they should let you know the disclosure timeline and keep you posted on progress fixing it. If they're not doing that they have no right to prevent disclosure.
It seems weird that HackerOne put themselves in such a losing position by trying to be the ones who prevent submitters from revealing security issues. Why not be a neutral party, and let the companies try to enforce rules on the hackers in these cases?
Eh, that one is on you I think. How long did you wait? If we have 5 researchers report the same vulnerability in 30 days, we're going to count it as a duplicate and still expect a full 60-90 days from the first report to deploy a fix.
It was pretty low-hanging fruit. I was going through an XSS tutorial and used their site for practice. `<script>alert(1)` could be saved into several user fields, including Name, and would then be executed on every subsequent page load around the site.
If there was some indication that someone had reported it recently I maybe would have waited longer, but I suspect this bug had been known for months.
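For anyone following along, that's the textbook stored XSS pattern: a user-controlled field gets written back into HTML without escaping. A minimal sketch in plain Python (hypothetical, obviously not Tesla's actual code) of why escaping on output neutralizes the payload:

```python
import html

# Hypothetical user-supplied profile field containing the payload described above.
name = "<script>alert(1)"

# Rendering the raw value into a page re-injects the script on every page load (stored XSS).
unsafe_fragment = "<p>Welcome, {}</p>".format(name)

# Escaping on output turns the markup into inert text, so the browser just displays it.
safe_fragment = "<p>Welcome, {}</p>".format(html.escape(name))

print(unsafe_fragment)  # <p>Welcome, <script>alert(1)</p>
print(safe_fragment)    # <p>Welcome, &lt;script&gt;alert(1)</p>
```

Real applications would do this in the template layer (many template engines escape by default), but the principle is the same.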
> Kravets said he was banned from the platform following the public disclosure of the first zero-day. His bug report was heavily covered in the media, and Valve did eventually ship a fix, more as a reaction to all the bad press the company was getting.
> The patch was almost immediately proved to be insufficient, and another security researcher found an easy way to go around it almost right away.
Even in the scope of the original comment, doesn't it create a pretty perverse incentive to allow companies to mark HackerOne bugs as WONTFIX and then ban researchers who disclose them?
Isn't security through obscurity largely to be avoided? I thought the working model for most security researchers was: if it's not worth fixing, it's not worth hiding.
More to the point, I thought that responsible disclosure always came with an expectation of public disclosure. The advice I've always been given is that you should never disclose with conditions -- i.e., "fix this and I won't tell anyone."
It should always be, "I am going to tell everyone, but I'm telling you first so you can push a fix before I do."
a) Company's bug bounty program lists vulnerability class X as out of scope
b) Researcher reports a vulnerability that falls under X
c) Since it's out of scope, it's closed as N/A
d) Report is locked because the company doesn't want to publicly disclose a vulnerability in their system via the HackerOne platform
What's the problem here? Just go with normal vulnerability disclosure. Bug bounty programs are a two way street, and respecting the scope is part of that.
Edit: I guess the important part is that the researcher was then banned for disclosing the report. Seems reasonable, honestly. I don't agree with it, but I understand it.
Acknowledgement is one thing. Disclosure is another.
If Steam had no problem acknowledging that this functionality exists, they should have had no problem with it being disclosed. Therein lies the problem. In the bathroom with the needle in their arm: "...there's no problem here..." But if you swing the door open, they'll still try to shut it. Because they know they're wrong.
If HackerOne isn't going to help you, they have no right to hinder you. If they want to strongarm everyone into effectively the same agreement as an NDA, then there is literally no point in turning vulnerabilities in to HackerOne.
They seem to only exist as a cow-catcher on the locomotive of software vendors too lazy to actually fix crappy code.
"Who needs to fix code and shell out bounty if you can pinpoint and silence the researcher?"
> If HackerOne isn't going to help you, they have no right to hinder you. If they want to strongarm everyone into effectively the same agreement as an NDA, then there is literally no point in turning vulnerabilities in to HackerOne.
The article gets this part wrong: the hacker isn't banned from H1, which he says in his blog post -- "Eventually things escalated with Valve and I got banned by them on HackerOne — I can no longer participate in their vulnerability rejection program (the rest of H1 is still available though)." HackerOne is in no way punishing the hacker for his reports and/or public disclosures, for what it's worth.
(Disclosure: I am on the community team at H1, though I've had effectively zero involvement with this.)
You can define whatever you want for your project's scope, but when you're distributing self-updating binaries to an audience the size of Steam's and you act this casually about an admin escalation exploit, you deserve whatever damage to your reputation you get.
Thing is, as a result of the ban the next disclosure was immediately public. This left more people vulnerable than the responsible disclosure method would have.
Hence, this practice by steam makes all users of steam less secure (doubly so as they actually don't want to fix these issues). This is something the public deserves to know, so they can act accordingly.
Bug bounty programs exist primarily for the companies’ benefit. If you do not respect the security community, the best you can expect is for researchers to publicly disclose the vulnerabilities. At worst, black hats will find them and sell them since they can be very valuable.
Retaliated, as in he was banned from their bug bounty program. The program with a scope that they went outside of. I think it's reasonable to be banned.
Obviously it would be better if Valve fixed the issue and gave a (possibly reduced due to out of scope) bounty.
That makes sense if the application is, like, a SAAS app, and the scope is, like, "don't employ credential stuffing or test any of our 3rd party dependencies that have not given us permission to be included in this scope".
But this is software people install on their desktops, and Valve has no say in how security researchers approach that stuff. Valve can and maybe even should exclude LPEs from their bounty scope (if that's not what they're focusing on right now), but they can't reasonably ban people for publishing vulnerabilities they've scoped out of the only mechanism they've provided for submitting and tracking vulnerabilities.
My guess is that their traction and market dominance are far more fragile than you think. Because their drivers are contractors and not employees, they can't be forced to only work for Uber, so even though they created the pool of drivers, they have not "captured" them. So a huge cost is the ride subsidies which maintain their market position. Once the ride subsidies end, there is no reason that a locally focused company can't compete for the same drivers and riders.
From what I've read, when Uber started, their app technology was borderline magical (it pushed the boundaries of what smartphones could do), but since that's no longer the case, there is much less of a barrier to entry.
I want to make sure that I understand your analogy.
- The right to secure encryption is like the right to bear arms
- Government-mandated weakened encryption is like gun control
- The victims of weak encryption (stolen data, for example) are similar to innocents harmed by gun control? (I'm not sure on this one; please correct me if I'm wrong.)
- Saying weak encryption is bad is like saying gun control is bad
I'm not trying to straw-man you; if that's not what you mean, please correct me.