I lead Product Security at Cloudflare. Thanks for the write-up, Albert, and for the fantastic security research throughout the past year.
Once this issue was fixed, we investigated all prior email routing configurations to ensure that it had only been exploited as part of Albert's responsible disclosure to us.
Since some comments are asking about the fact that this happened 7 months ago: our disclosure policy is to allow researchers to write about us once an issue is fixed, but to give us a week's heads-up before they publish so we aren't surprised, can coordinate any public comms we want to make, can write any FAQs needed for inbound questions from customers, and can tailor our response to the issue at hand. Happy to answer other questions if you have any.
What's missing from your statement is how you plan to prevent this kind of thing (a programming oversight so simple it shouldn't have landed in production) in the future. Something like mandatory security reviews.
It's obviously not good in general, it's 'obscurity' (as in 'is not security') really, but it seems pretty harmless for a gradual roll-out feature toggle? If someone cares enough and knows enough to find they can get past it, let them play with the beta feature?
After all, it got them this responsible disclosure.
It wasn't part of the vulnerability, it just allowed OP (who happened not to be legitimately in the beta) to find it.
It should never have hit production in the first place.
This isn't some kid writing their first dynamic webpage; it's a public multinational that proxies a large percentage of the internet as a security product.
The fact that this wasn't caught internally is extremely worrying and makes me wonder what other sorts of basic quality issues they have lurking.
100% agree. This reflects EXTREMELY BADLY on Cloudflare. It has changed my perception of Cloudflare almost 180 degrees. This is not a thing you'd expect devs to do at a moderately important company, let alone slip through QA, code review and many other checkpoints. The fact that this happened at a company which routes half the internet's traffic through their servers is SUPER worrying. Glad I'm not a CF customer anymore. Never liked them MITMing everything.
The client being able to opt itself into the beta? That's what I'm talking about, not routing anyone's domain. The latter shouldn't have been allowed in production, beta or not; agreed, and clearly CF agrees too.
But a client managing to opt itself into a beta, which is what my GP comment was about, is no big deal. Worst case, you're getting something free that you're supposed to pay for.
Beta security shouldn't be worse than the rest of prod, and private beta members shouldn't be trusted more than non-members; so if the odd non-member finds a way in, it's fine.
I agree. As an architect at a listed company, I'd expect trusting user-supplied data like this to get caught either in planning or at code review, before hitting QA.
> It's obviously not good in general, it's 'obscurity' (as in 'is not security') really, but it seems pretty harmless for a gradual roll-out feature toggle?
Not at all. How often do people complain of temporary solutions becoming permanent? Doing it wrong out of the gate is a surefire way to ensure it makes it to production if there's no further review.
I don't understand your comment at all. I'm not saying anything is 'ok for now while it's a beta'. I'm saying 'client-side validation of beta feature toggles is pretty much fine'.
The beta feature had a very bad bug that allowed hijacking other people's email. That is entirely independent of controlling access to the beta feature.
It's only mentioned in the write-up because OP wouldn't have been able to explore bugs in the beta feature without finding access to it first, which he didn't otherwise have.
Well, the whole point of a beta program is to limit participation, and thus the exposure of attack vectors, and to reduce the impact during the testing phase.
I suppose I wouldn't personally think of it as limiting exposure like that - prod is prod, beta feature or not - but if one does for a given product then yes sure it's a bad bug.
Do we really think the Cloudflare Email Routing private beta was private to somehow-trusted parties only, though? Presumably 'N mutually trusted parties' too, for regulatory compliance. I assume not; not least because the product security lead is here in the comments saying they vetted logs etc. after the fact to ensure that only OP took advantage of this.
I've built numerous systems that could be called "beta programs", and the goal of none of them was security related; it was product-feedback related.
No, why are you shipping it to production if you need feedback on security? Beta feature security should be best effort, as good as anything else in production, because it is production, beta or not.
Usually rollouts like that are gradual to prevent a flood of people using the feature at once, and to slowly open the flood gates. Not many people will manually change the flag, either. I don't think the fact that the feature flag was just client-side is a "smoking gun".
What was missing was the server-side ownership check. We decide which customer owns the real "example.com" using very battle-tested logic, but that check had been missed in this new service. The client-side validation is expected too, though.
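To make that concrete, here's a minimal sketch of the kind of server-side ownership check being described; the endpoint, names, and storage are hypothetical, not Cloudflare's actual code:

```python
# Hypothetical sketch (NOT Cloudflare's actual code) of the missing
# server-side ownership check: before creating a routing rule for a zone,
# verify that the authenticated account actually owns that zone.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Toy ownership table standing in for the real zone database.
ZONE_OWNERS = {"example.com": "account-123"}

def authenticated_account() -> str:
    # Stand-in for real session/token authentication.
    return request.headers.get("X-Account-Id", "")

@app.post("/zones/<zone>/email/routing/rules")
def create_rule(zone: str):
    account = authenticated_account()
    # The crucial check: never trust the zone name supplied by the client.
    if ZONE_OWNERS.get(zone) != account:
        abort(403, description="account does not own this zone")
    rule = request.get_json(force=True)
    # ... persist the rule for a zone the caller verifiably owns ...
    return jsonify({"zone": zone, "rule": rule, "status": "created"}), 201
```

The point is that the zone name arriving from the client is treated as untrusted until the server's own records confirm the caller owns it.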
I've been in this industry since 1996. Must every programmer make this mistake for themselves before they learn? I see this over, and over, and over again, at company after company after company. I'm really surprised a company as seemingly competent as Cloudflare would make this mistake, to say nothing of the larger error of allowing forwarding to be set up on an unverified domain.
I weep.
Edit: I'd appreciate a response from Cloudflare on how this slipped through the cracks and what changes they are making to prevent such mistakes in the future. Not trusting user input is among the most basic things I'd expect a programmer, especially one working at CF, to know in 2022, so I'm assuming this was some sort of miscommunication between teams.
At this point I'm nearly an old man yelling at a cloud, but any time a junior engineer starts, I try to go over a handful of basic security things. Most people coming out of school seem aware of SQL injection and XSS, but the concept that client-side security is no security is not sinking in, even after it's been explained.
I've spent a lot of time trying to figure out why, and the best I can come up with is that they just think something like the following myths:
* Nobody would ever do that.
* Nobody would ever hit the API directly.
* It would be too hard to do that for somebody who isn't the developer.
* The need for an auth token would make it impossible to hit it with curl or another client.
* Nobody can change the client code because it's obfuscated and unreadable for humans.
Doing it on the server side is harder and a pain in the ass, so there's a lot of incentive to do it on the client and not worry about the server. It particularly doesn't help that, as an industry, we're now obsessed with speed, quality be damned. We can always fix it/do it right later (spoiler alert: you never do, and you never have time, because there will always be important new features that are higher priority).
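To illustrate the "nobody would hit the API directly" myth: anyone can replay the requests the UI makes with a few lines of code. Everything here (endpoint, header, payload) is made up for the example:

```python
# Hypothetical illustration: replaying a UI request directly against the API.
import requests

session = requests.Session()
# Token copied straight out of the browser's dev tools.
session.headers["Authorization"] = "Bearer <token from dev tools>"

# The UI may hide the beta toggle, but the endpoint never sees the UI;
# only a server-side check can actually deny this request.
resp = session.post(
    "https://api.example.com/v1/features/email-beta/enable",
    json={"enabled": True},
)
print(resp.status_code, resp.text)
```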
There's a bigger problem in that schools are doing a terrible job of teaching programmers proper development practices. If you're lucky, as a student, your school might have an add-on course for secure development. The problem is, secure development practices shouldn't be an afterthought but rather a baked-in core philosophy.
It wasn't meant to be client-side only; it was meant to be client-side and server-side. Unfortunately, the server-side check wasn't happening in the way intended, allowing Albert to find this vulnerability. We have a special document that describes bug classes needing special attention, and this particular incident has been added to it.
So nobody here can think of any possible scenario where bypassing a server-side check for 'Account X can access Feature Y' could directly lead to a security issue?
This is absolutely 100% a vulnerability, insomuch as CloudFlare should have an explicit policy that ALL account features are enabled/verified server-side, not client-side.
Think of all the spam that would have happened, had this been discovered on underground black-hat forums.
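That policy is cheap to express, too. A hedged sketch (hypothetical names, nothing to do with Cloudflare's real code) of authoritative server-side feature gating:

```python
# Sketch: the server consults its own entitlement store and ignores
# whatever feature flags the client claims for itself.
FEATURE_ENTITLEMENTS = {"account-123": {"email_routing_beta"}}

def account_has_feature(account_id: str, feature: str) -> bool:
    """Authoritative check against server-side state only."""
    return feature in FEATURE_ENTITLEMENTS.get(account_id, set())

def handle_create_routing_rule(account_id: str, body: dict) -> tuple[int, str]:
    # A client-sent {"beta": true} field is, at most, a UI hint.
    if not account_has_feature(account_id, "email_routing_beta"):
        return 403, "email routing beta is not enabled for this account"
    return 201, "rule created"

# Spoofing the flag client-side changes nothing:
print(handle_create_routing_rule("attacker-999", {"beta": True}))  # (403, ...)
```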
Do you want to offer some? It's not clear this even bypassed payment that should have been due. That would be worse, and still not really a vulnerability.
> Think of all the spam that would have happened, had this been discovered on underground black-hat forums.
What spam would have happened as a result of early access to a new Cloudflare feature, that's independent of any (other) bugs/security flaws in that feature?
(Also, even with the actual vulnerability here, what 'spam' would have happened? This hijacks receiving. Worse, yes, but I don't see how it helps spammers.)
Accessing functionality you should not otherwise have access to is by definition a vulnerability. CF apparently agrees since they paid out a bounty for it.
> CF apparently agrees since they paid out a bounty for it.
Not really, it was mentioned as part of a report of the main, much more critical issue of 'hijacking email with Cloudflare Email Routing' - note that's the title itself, not 'accessing a cloudflare beta feature'...
jgc is just responding to the OP, don't strawman him. OP is the person focusing on the client-side beta check, so jgc is responding to that particular thing. Has zero bearing on the overall issue.
I'm not 'strawmanning' him, I'm saying responding to that line of argument (or at least in the way that he did, accepting the claim it's bad and defending on the back foot) lends it undeserved credence.
I agree it 'has zero bearing on the overall issue'. That is pretty much my entire point.
How was this feature able to ship to production w/o the server-side check being tested?
Does CF have a launch process for new features? Does that process include reviewing the special document and testing the new feature against the various bug classes?
It occurs to me that this speaks to a more fundamental pain point in software testing.
It's so common to write tests that say, in one block of code, "these are the inputs, a function or endpoint is called, and these are the expected outputs." But in theory, every single one of those tests should have variations like "if this exact same input is provided, but by a user with different permissions, or by a non-logged-in user, the endpoint should fail with a permission error."
But then you get into a quandary: if you want to be able to customize your test code granularly, or have your inputs and outputs generated at runtime, you need to be able to write your test setup code as code, not [{ input, endpoint_name, output }] configurations. But if you need code, then you can accidentally run code that calls things directly without the abstractions that would run it with the extra variations.
At a certain scale, this speaks to needing either static analysis of your testing code, or extremely diligent review processes, beyond simple code coverage.
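For what it's worth, the kind of variation I mean looks something like this (pytest-style, with a hypothetical endpoint and roles):

```python
# Sketch: run the same input through every permission level and assert
# the failure modes, not just the happy path.
import pytest

def call_endpoint(user, payload):
    # Stand-in for the real client; returns an HTTP-like status code.
    if user is None:
        return 401
    if "admin" not in user["roles"]:
        return 403
    return 200

CASES = [
    ({"roles": ["admin"]}, 200),   # authorized user succeeds
    ({"roles": ["viewer"]}, 403),  # authenticated but unauthorized
    (None, 401),                   # not logged in at all
]

@pytest.mark.parametrize("user,expected", CASES)
def test_endpoint_permissions(user, expected):
    assert call_endpoint(user, {"zone": "example.com"}) == expected
```

The hard part is enforcing that every endpoint's tests carry these variations, which is where the static analysis or review diligence comes in.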
Does anyone have go-to libraries that deal with this at scale?
Because we are humans and we make mistakes. Not everyone is an expert at every single thing, and whilst this is a pretty bad bug that should have come up in the planning phase when considering security risks, or in a security review before hitting production, shit happens. Get off your high horse.
We put processes in place because not everyone is an expert and because humans make mistakes. Shipping new features at a company of CloudFlare's maturity and scope should have a launch checklist. That checklist should include verifying the security of new API endpoints. This was a big goof up and I think my criticism is warranted, though I could have worded it more charitably.
This type of response makes me think you don’t actually understand how important security is during the product development process and to customers. And that GP is on the mark being critical. Someone messed up and humans are humans, everyone gets that. But that doesn't absolve CF of some due explanation to people who are rightfully very concerned. CF got really lucky that a bad actor didn't find this. I don't like high horses either but you could also use a dose of humility.
I very much do understand the importance of security in product development. I've been part of numerous security audits, amongst other things I shan't get too deep into here, as it's not the point.
The point is that software is developed by humans. Humans make mistakes. Find me a single Fortune 500 company that hasn’t had an embarrassing security issue due to human error. As I said, shit happens.
The response time for fixing this and everything else entailed in a big bounty process looks good to me. I’m sure whatever team worked on this feature will learn from it.
Disclosing a vulnerability immediately after it is discovered has a few problems. One is the risk to customers, as script kiddies will create git repos full of tools to automate exploitation of the vulnerability. Another is that people will jump to conclusions before a proper root-cause analysis has determined how this happened, what is required to prevent it from happening again, and whether there are more aspects to the vulnerability than were originally thought to exist. A further reason not to disclose immediately is that in most cases it violates the agreement the penetration tester or security researcher has with the bug bounty program; disclosing immediately would mean they do not get paid for their discovery. This payment-for-bugs concept provides an incentive for people to help a company fix the bugs that its own developers and QA teams may have overlooked.
The bug was initially reported to Cloudflare's private bug bounty program since there was no public program at the time. Like it or not, the private program does not have the same disclosure policy as the public program does.
In early July I asked if the report could be disclosed, seeing as things had changed since the bug was originally reported. Cloudflare agreed and the report was then moved to the public program. As to why it was disclosed now rather than in February when the public program launched, that was my fault for not asking earlier.
Why would anyone even allow the biggest man-in-the-middle on the Internet to route their e-mail in the first place? It's a situation that will never be private and secure.
A smaller man-in-the-middle would be more secure?
I mean, if email forwarding is really necessary (in my case, yes) and successful delivery of email is really important (so I don't want to use my own VPS and the headache it brings), are there any better options for me than CloudFlare? My domains already use CloudFlare for their CDN etc.
If it's smaller and not connected to the US intelligence agencies, it certainly would be. But if you use CloudFlare to begin with, there is no absolute notion of security and privacy anyway; it is of course a viable option if one isn't doing critical, important stuff.
Hey Albert, awesome first post. You took great care to include the _right_ amount of detail, and your writing style kept me engaged through the entire post. Keep going!
You've obviously got a strong career in security ahead of you. Have you looked at any crypto projects? Seems like there are some massive bounties on https://immunefi.com and similar sites.
Security professionals of this caliber often make $37k in monthly compensation; that works out to only about $230/hour. If you can do work like this, your consulting rate for penetration testing is at least that.
Bug bounty programs are a bad deal for researchers. The payout for this bug is absurdly low.
Yep. On the hiring side, you can absolutely see this when you get someone's resume. A person with in-industry experience will often not list their HackerOne profile (if they even have one), while students mostly do in my experience.
Payouts are a joke and progress is slow. It wasn't that long ago people were overwhelmingly just arrested or threatened for reporting these kinds of things but thankfully that's becoming rarer.
The amounts for these bounties though seem to be a token gesture and not much else, especially considering the damage someone could have caused with this.
In this case the second $3000 bounty was due to a 2x promotion at the time. The guideline for a critical is $3000, but Cloudflare does occasionally award bonuses for severe vulnerabilities (e.g. https://hackerone.com/reports/1478633).
$6k seems to be a really low payout given the potential impact (particularly with respect to personal data), the work needed to discover such vulnerabilities, the revenue of Cloudflare, and the potential money a blackhat could have made. Or am I naive?
Sure, but the client-side check completely limits the exposure Cloudflare have because big enterprise corporates don't get themselves into a private beta by playing with client-side checks. Only a few companies were using this feature at the time.
Yup, 'Burp' refers to the free version of 'Burp Suite'. I don't use Burp Suite anymore though. Some months ago I started using mitmproxy (https://github.com/mitmproxy/mitmproxy) due to its Python scripting API. I have never looked back since then.
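For anyone curious, an addon is just a Python file with hook functions; a toy example of mine (the filter is arbitrary), run with `mitmproxy -s log_json.py`:

```python
# Toy mitmproxy addon: log every JSON response passing through the proxy.
from mitmproxy import http

def response(flow: http.HTTPFlow) -> None:
    content_type = flow.response.headers.get("content-type", "")
    if "application/json" in content_type:
        print(flow.request.method, flow.request.pretty_url, flow.response.status_code)
```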
Nice catch! Frankly, I think this points to a real danger of any central, widely used service, of which Cloudflare is an example. Any vulnerability or exploitation of such a point jeopardizes far too much.
> The bug has since been fixed and Cloudflare has kindly allowed me to publish this write-up.
Seems wild that they can deny you the opportunity to publish your findings for 7 months after the fix for a beta product went live. What is that about? Was that a requirement to get the bug bounty? Seems like a great way to encourage finding another buyer instead.
> Since some comments are asking about the fact that this happened 7 months ago: our disclosure policy is to allow researchers to write about us once an issue is fixed, but to give us a week's heads-up before they publish so we aren't surprised, can coordinate any public comms we want to make, can write any FAQs needed for inbound questions from customers, and can tailor our response to the issue at hand. Happy to answer other questions if you have any.
So now email is unsafe for MFA/password reset messages too? (actually I've long argued it's not really any safer than SMS, at least in countries where there are decent regulations around porting numbers).
Albert, you seem to work a lot with, or for, CloudFlare in regard to security.
Would you trust CloudFlare to handle your important email forwards going forward? I switched to them for my forwards a few weeks ago; I would feel better if your answer is a yes... :)
Wow, this is insane. Who knows how many emails were skimmed on Cloudflare? I definitely will not be trusting this service, because who knows how many other vulnerabilities are not yet discovered?
I lead Product Security at Cloudflare (and I'm one of Albert's biggest fans, he's contributed a lot to our bug bounty, thank you Albert).
Once he reported the issue, we investigated all prior email routing configurations to ensure that it had only been exploited as part of Albert's responsible disclosure to us.
I place greater trust in organisations that are proactive about their security posture, versus an organisation where this type of vulnerability would never have been publicly disclosed.
Proactive but not preventive. You only know after an incident occurs. While I appreciate it, the risk isn't mitigated at all unless you don't use the service in question, e.g. Heroku.
Wow, did Cloudflare outsource all their development work to some cheap agency? How else do they develop features like this with zero security and pretty damn naive, simplistic, childish bugs you wouldn't expect from a company that routes half of the internet's traffic through their servers?
Once this issue was fixed, we investigated all prior email routing configurations to ensure that it had only been exploited as part of Albert's responsible disclosure to us. As for the timing, see my comment above: our disclosure policy is to let researchers write about us once an issue is fixed, with a week's heads-up before they publish so we can coordinate any public comms and customer FAQs.
We disclosed the issue here earlier this week once Albert told us he was writing a blog: https://hackerone.com/reports/1419341