A $1.3 billion revenue company being too tight to pay this after all, even on their 2nd chance, is so short-sighted it's absurd. They're putting out a huge sign saying "When you find a vuln, definitely contact all our clients because we won't be giving you a penny!".
Incredible. This must be some kind of "damaged ego" or ass-covering, as it's clearly not a rational decision.
Edit: Another user here has pointed out the reasoning
> It's owned by private equity. Slowly cutting costs and bleeding the brand dry
It all makes sense if you consider bug bounties are largely:
1) created for the purpose of either PR/marketing, or a checklist ("auditing"),
2) seen as a cheaper alternative to someone who knows anything about security - "why hire someone that actually knows anything about security when we can just pay a pittance to strangers and believe every word they say?"
The amusing and ironic thing about the second point is that by doing so, you waste time on a constant spam of people begging for bounties and reporting things that are not even bugs, let alone security issues, and your attention is pulled away from real security improvements that talented staff could be delivering elsewhere.
I don’t agree. Bug bounties are taken seriously by at least some companies. Where I have worked, we received very useful reports, some very severe, via HackerOne.
The company even ran special sessions where engineers and hackers were brought together to try to maximize the number of bugs found in a few-week period.
It resulted in more secure software at the end and a community of excited researchers trying to make some money and fame on our behalf.
The root cause in this case seems to be that they couldn’t get past HackerOne’s triage process because Zendesk had excluded email from scope. This seems more like incompetence than malice on both of their parts. Good that the researcher showed how foolish they were.
This feels like a case in the gray area. On the one hand, companies need to declare certain stuff out of scope - whether because they already know about it and are planning to work on it, or because they consider it acceptable risk - since the point for the company is to improve its security posture within the resources it has to run the bug bounty program. What's weird here is that the blog author found an email problem that wasn't really in that DKIM/SPF etc. area, and Zendesk claimed the exemption covered it. Without a broader PoC to show how it could be weaponized, it's hard to say that Zendesk was egregiously wrong here - the person triaging just lacked the imagination to figure out that it would be a real problem. Hell, later in the write-up we learn Zendesk does do spam filtering on inbound emails, so it's not crazy to think a security engineer reading the report might assume that stuff would cover their butts, even though it failed miserably here. (A good engineer would check that assumption, though.)
That said, putting my security hat on, I have to ask: who thought that sequential ticket IDs in the reply-to email address were a good idea? They really ought to be using long random nonces, at which point the "guess the right ID to become part of the support thread" attack falls apart. Classic enumeration + IDOR. So it sounds like there's still potential for abuse here if you can sneak stuff past their filters.
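To make the nonce point concrete, here is a minimal sketch - not Zendesk's actual address scheme; the domain, address format, and token length are all assumptions - contrasting an identifier an attacker can enumerate with one drawn from a space too large to guess:

    import secrets

    def sequential_reply_to(ticket_id: int) -> str:
        # Guessable: an attacker who knows roughly when a ticket was filed
        # can simply try nearby integers (classic enumeration/IDOR).
        return f"support+id{ticket_id}@example.com"

    def random_reply_to() -> str:
        # ~192 bits of randomness; the token would be stored server-side and
        # looked up on inbound mail, so there is nothing to enumerate.
        token = secrets.token_urlsafe(24)
        return f"support+{token}@example.com"

    print(sequential_reply_to(123457))  # support+id123457@example.com
    print(random_reply_to())            # e.g. support+kX9vR3...@example.com

The random token still has to be bound to the ticket and the sender on the server side, but guessing your way into someone else's thread stops being feasible.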
>Without a broader PoC to show how it could be weaponized, it's hard to say that Zendesk was egregiously wrong here
The implications of being able to read arbitrary email contents sent to arbitrary domains' support (or other) addresses are well known, and any competent member of Zendesk's security team should know this is exactly what can happen.
Something similar has been discussed on HN before: https://news.ycombinator.com/item?id=38720544 but the overall attack vector of "get a registration email sent somewhere an attacker can view it" is not novel at all; it's also how some police database websites have been popped in the past (register with an @fbi.gov address, which automatically grants access, then read that inbox thanks to some public forwarding, for example).
Yes, I expect a security engineer to hold that knowledge. That's why they have a job instead of the security team being replaced with an LLM. If nobody on the team has that experience, it speaks exactly to the issue outlined in the OP: not enough knowledge of security issues beyond the basics.
>Without a broader PoC to show how it could be weaponized, it's hard to say that Zendesk was egregiously wrong here
There was a PoC of how to view someone else's ticket (assuming you know the other person's email and approximately when the ticket was filed).
>it's not crazy to think a security engineer reading the report may assume that stuff would cover their butts
It sounds like they got a report saying "I can spoof an email and view someone else's report". Why would they assume the spam protection would protect them when they have a report saying it's not protecting them?
I suppose my point is that "read someone else's ticket" is far from the worst-case scenario here. It certainly sounds like Zendesk didn't care to protect ticket contents... which, the more I think about it, is pretty egregious, as support tickets can include PII.
In general, I do expect for the folks reading hackerone reports to make some mistakes; there's a lot of people who will just run a vulnerability scanner and report all the results like they've done something useful. Sometimes for real bugs you have to explain the impact with a good "look what I can do with this."
Also, the poster didn't share their submission with us, just the responses. So it's hard to know how clear they were to Zendesk. A good bug with a bad explanation I would not expect to get paid for.
>Sometimes for real bugs you have to explain the impact with a good "look what I can do with this."
I'm not sure. Anybody who keeps up to date with security (e.g. those working on a security team) should know that ticketing systems sometimes contain credentials too. For example, when Okta was breached, the main concern was that Okta support tickets contain.... session tokens, cookies, and credentials!
What's the point of having a security team that can't connect external incidents like that to their own system? Relearning mistakes that are already well known?
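As a concrete aside on that point: the Okta support-system breach involved HAR files attached to tickets that still contained live session tokens. Here is a minimal sketch of the kind of scrubbing you'd want before a capture like that ever lands in a ticket - illustrative only; the field names follow the standard HAR JSON layout and the file paths are placeholders:

    import json

    SENSITIVE_HEADERS = {"cookie", "set-cookie", "authorization"}

    def scrub_har(path_in: str, path_out: str) -> None:
        """Strip cookies and auth headers from a HAR capture before sharing it."""
        with open(path_in) as f:
            har = json.load(f)
        for entry in har.get("log", {}).get("entries", []):
            for side in ("request", "response"):
                msg = entry.get(side, {})
                msg["cookies"] = []  # drop captured cookies entirely
                msg["headers"] = [
                    h for h in msg.get("headers", [])
                    if h.get("name", "").lower() not in SENSITIVE_HEADERS
                ]
        with open(path_out, "w") as f:
            json.dump(har, f, indent=2)

    scrub_har("capture.har", "capture.scrubbed.har")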
I use a competitor to HackerOne. I view all submissions pre-triage and would have taken it seriously, even if I made a mistake in program scope. I have paid researchers for bugs out of scope before because they were right.
HackerOne is an awful company with a terrible product. Not the first time I’ve heard of their triage process or software getting in the way of actual bug bounty.
My only experience with them was when I found a pretty serious security bug and noticed the company in question had a bounty with them. Opened an account on H1, reported the bug, got "not a serious issue", promptly closed the H1 account. If the company is incompetent or relying on an incompetent 3rd party bug bounty service provider, I won't deal with them. I don't need this in my life.
The company did fix the issue a few months later, so there's that.
They all are. Bugcrowd once told me that, "yes, it's not a security issue or even a bug, but we recommend providing small (100€) rewards for non-bugs to keep researchers engaged!"
"Everything is bad" sounds like a defeatist stance.
Fact is they are better than triaging everything yourself and also better than outright ignoring all vuln reports.
It’s an imperfect system I agree - but it’s the best we have
It's incredibly hard and resource-intensive to run a bounty program, so anyone doing it as a shortcut or for virtue signaling will quickly find out whether they're mature enough to run one.
> 2) seen as a cheaper alternative to someone who knows anything about security - "why hire someone that actually knows anything about security when we can just pay a pittance to strangers and believe every word they say?"
It doesn’t make sense; companies with less revenue aren’t the ones doing this. It’s usually the richer tech companies.
It is also larger tech companies that have basically infinite attack surface.
So my argument is that it does not matter how much they spend on security - they will get hacked anyway. The only thing they can do is keep spending in check and limit the scope of the hacks.
> A $1.3 billion revenue company being too tight to pay this after all, even on their 2nd chance, is so short-sighted it's absurd.
I'll give an "another side" perspective. My company was much smaller. Of the 10+ "I found a vulnerability" emails I got last year, all were mass-produced emails generated from the output of an automated vulnerability scanning tool.
Investigating all of those for "is it really an issue" is more work than it seems. For many companies looking to improve security, there are higher ROI things to do than investigating all of those emails.
https://www.sqlite.org/cves.html provides an interesting perspective. While they thankfully already have a pretty low attack surface from their overall design/purpose/etc, you can see a decent number of vulns reported that are either 'not their fault' (i.e. in wrappers/consumers) or are close enough to the other side of the airtight hatchway (oh, you had access to the database file so you could modify it in a malicious way, and you modified it in a malicious way)[0]
We also had this problem at my previous company a few years ago - a 20-person company - but somehow we attracted much more attention.
In one specific instance, we got 20 emails in a single month about a specific WordPress PHP endpoint with a known vulnerability, on a separate marketing site on another domain. The thing is, our WordPress contractor had already replaced that part of the default install with a static page; it just still returned 200.
But being a static page didn't stop the people running the scanners from asking us for money, even after they were informed of the above.
We have a policy to never acknowledge unsolicited emails like that unless they follow the simple instructions set out in our /.well-known/security.txt file (see https://en.wikipedia.org/wiki/Security.txt) - honestly, all they have to do is put “I put a banana in my fridge” as the message subject (or use PGP/GPG/SMIME) and it’ll be instantly prioritised.
The logic being that any actual security-researcher with even minimal levels of competency will know to check the security.txt file and can follow basic instructions; while if any of our actual (paying) users find a security issue then they’ll go through our internal ticket site and not public e-mail anyway - so all that’s left are low-effort vuln-scanner reports - and it’s always the same non-issues like clickjacking (but only when using IE9 for some reason, even though it’s 2024 now…) or people who think their browser’s Web Inspector is a “hacking tool” that allows anyone to edit any data in our system…
And FWIW, I’ve never received a genuine security issue report with an admission of kitchen refrigeration of fruit in the 18 months we’ve had a security.txt file - it’s almost as if qualified, competent professionals don’t operate like an embarrassingly pathetic shakedown.
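For anyone who hasn’t seen one: a security.txt (RFC 9116) is just a small plain-text file served at /.well-known/security.txt, and it’s where you’d surface instructions like the subject-line phrase above. A minimal sketch with placeholder values (Contact and Expires are the only required fields):

    # Comments are allowed; point reporters at the subject-line phrase
    # or signing requirement described in your policy.
    Contact: mailto:security@example.com
    Expires: 2026-01-01T00:00:00Z
    Encryption: https://example.com/pgp-key.txt
    Policy: https://example.com/security-policy
    Preferred-Languages: en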
I saw a great presentation from Finn.no on their bug bounty program. They had had great success, despite the amount of work it took. Much more so than the three different security companies they hired each year to find vulnerabilities.
They also had a security.txt file and had received several emails through that, but all of it was spam. Ironically they had received more real security vulnerabilities through people contacting them on LinkedIn than through their security.txt file.
Your mileage may vary, but it didn’t seem like the security.txt file was read by the people one would hope would read it.
I understand; this is exactly why I noted "even on their 2nd chance". The initial lack of payout/meaningful response was incompetence: they didn't understand the severity of the vuln. Fine, happens.
But after the PoC that showed the severity in a way that anyone could understand, they still didn't pay. That's the issue. The whole investigation was done for them.
I previously worked for a mortgage software startup that attracted interest from big banks.
To ease concerns about our scalability and longevity, we moved from a tiny office to an office with a lot of empty space.
This strategic move supposedly signaled to prospective corporate clients that we were committed to sustaining our solution over the long term, rather than for just a few years. In the end the company went out of business anyway, so much for that.
Yet the same corporates will eat up anything that Google or MSFT does, even though we all know they kill projects just like anyone else, or like any smaller company going under.
I am actually seriously interested in what people there do day to day. I’m wondering this about a lot of very large companies; I would definitely watch a documentary about that.
Hour-long meetings about whether the copy should read "data center," "datacenter," "data-center," or whether it is really even correct to say any of these at all. And then negotiating with the design folks to fit in the extra character. Only to throw it all away because nobody thought about the fact that it has to support 5 different languages.
I wish I was kidding. Used to work at a place that did crap like that, pulling in developers for these time sucks because "only they really know the correct technical usage for our industry."
It's not weird to pick one and keep a consistent style, for example by looking at Google or at Wikipedia or some other source if the dictionary lists both or neither, but to have meetings about it?!
I work at a similar-size company. Basically they are like most companies: building out the next 5 years while also keeping the lights on at four nines. There can be a lot of depth to a product that you don't see. Anyone who asks "why do you need X people" often hasn't tried a side hustle, where you see all 360 degrees of the activities involved.
Building at scale without racking up a big bill, while hitting SLAs, requires a decent amount of effort.
At some point, most of your engineering time is spent on trying to understand what the previous team did. There's probably some engineer at Zendesk banging his head on the table because his boss wouldn't let him fix the sequential ticket IDs when he found them two months ago.
I work at one of the biggest companies in the world (employee- and revenue-wise) and it's basically a runaway reaction of well-articulated desk employees jerking each other off, telling each other that they are so very important.
And the common management approach to anything not working immediately is "throw another 1,000 employees at the project", and the middle managers measure their success by how many employees they manage, so it's a train without brakes. Hope it goes bankrupt soon.
Big companies are places where you get kudos for only taking two weeks to solve a problem you’ve solved elsewhere in two days. To an extent it’s Little’s Law. The latency requires more “CPUs” to handle the traffic.
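A rough back-of-the-envelope version of that Little's Law point, with made-up numbers (an illustrative sketch only, nothing measured):

    # Little's Law: L = λ · W
    #   L = items in the system, λ = completion rate, W = time each item spends in the system.

    def items_in_flight(throughput_per_week: float, weeks_per_item: float) -> float:
        """Average number of problems being worked on at once (L = λ · W)."""
        return throughput_per_week * weeks_per_item

    # Same demand (10 solved problems per week), very different latencies:
    small_company = items_in_flight(10, 2 / 5)  # ~2 working days per problem
    big_company   = items_in_flight(10, 2.0)    # ~2 weeks per problem

    print(small_company, big_company)  # 4.0 vs 20.0

Same throughput with five times the cycle time means five times as much work in flight at any moment, hence the extra "CPUs".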
This is super loud to me RN because some of these "big" companies are case studies in The Mythical Man-Month's "N channels of communication" as well as weird flashbacks to discussions of context-switching costs and schedulers in various CS courses.
Every single click in ServiceNow takes a full 2 seconds to do anything. For a ticketing system. Insane.
What’s more insane is that it is still better than the vast majority of ticketing software. I don’t know what it is about ticketing and helpdesk software that it ALWAYS ends up like that.
> I don’t know what it is about ticketing and helpdesk software that it ALWAYS ends up like that.
The curse of B2B software is that every new big customer wants some custom feature or configuration that is the "deal breaker" for their multi-million dollar contract signing. And everyone except engineering is eager to give it to them because it's not their problem once the ink is dry. Support and renewals are the next guy's problem.
What's interesting is that Frank Slootman touts this transformation as a huge success in his book and talks at length about his conflict with Fred Luddy (who originally authored the simple ticketing incarnation of the ServiceNow monsterblob). The focus on keeping things simple is highlighted as an example of nerds' nearsighted thinking.
I'm sure it's a huge success for the few earning the profits from ServiceNow.
Like any SaaS, the more feature boxes you check, the more potential customers you can "satisfy". And the worse the UX gets for the average user (which then gets driven to purchasing more support).
Great for business (the few), terrible for users (the many). No contradiction there.
Never said that, but a competent engineer should be able to build like 75% of the main functionality of Zendesk over a weekend.
Now, I understand there's probably a lot more to it, which is why I would expect it to be a company of around 50 engineers and 150 business/marketing/etc - and that's being generous.
The hill I'd die on is that, with money not being a scarce resource and a technically feasible challenge at hand, a team of 200 should be able to build and sustain almost anything in the world. And even that's being generous - I think realistically a team of 50 should be able to build almost anything.
That’s a very HN take, but the reality is that the tech is almost never the hard part. Selling, supporting, legal, and all the certifications and enterprise contracts you have to do for a product like that are the hard part.
Software developers being surprised that software companies need to do a lot more than just write code is kind of like sailors being surprised that global logistics involves a lot more than handling a ship.
Bug bounty people do this all the time. It's almost always a sign that your bug is something silly, like DKIM.
Later
I wrote this comment before rereading the original post and realizing that they had literally submitted a DKIM report (albeit a rare instance of a meaningful one). Just to be clear: in my original comment, I did not mean to suggest this bug was silly; only that in the world of security bug bounties, DKIM reports are universally viewed as silly.