I got hit with a ~40Gbps DDoS last week. These attacks are on the rise. Some responses to folks above: success working with upstreams is quite varied. Some care, some don't, and it can be difficult to get to folks who can help, even if their networks are impacted as well. Some carriers immediately turn this into a sales opportunity: buy more bandwidth, buy more services.
In our case it was based on DNS reflection from a large number of hosts. I've contacted the top sources (ISPs hosting the largest number of attackers) and provided IPs and timestamps. I've received zero responses.
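(For anyone wondering how to build that kind of report: a minimal sketch in Python, assuming you've already extracted (timestamp, source IP) pairs from flow logs or packet captures; the /24 grouping and function name are just illustrative choices.)

    # Hypothetical helper: given (timestamp, source_ip) attack records pulled
    # from flow logs or pcaps, count attacking hosts per /24 so the noisiest
    # networks can be reported to their operators with IPs and timestamps.
    from collections import Counter
    from ipaddress import ip_network

    def top_source_networks(records, n=10):
        counts = Counter()
        for _timestamp, src_ip in records:
            # Collapse each source into its covering /24 to group hosts by network.
            net = ip_network(f"{src_ip}/24", strict=False)
            counts[net] += 1
        return counts.most_common(n)

    # Example: top_source_networks([("2021-09-09T02:13:00Z", "203.0.113.5"), ...])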
Geo-based approaches yielded no helpful reduction in source traffic.
Also, during this event we discovered an upstream of ours had misconfigured our real-time blackhole (RTBH) capability. As a result, I'm going to add recurring testing for this capability and burn a couple of IPs to make sure upstreams are listening to our RTBH announcements.
Very concerned about the recent MikroTik CVE, as that is going to make for some very large botnets.
Personally this is all very disappointing, because it creates an incentive to centralize / de-distribute applications onto a few extremely large infrastructure providers who can survive attacks of this magnitude.
> Very concerned about the recent MikroTik CVE, as that is going to make for some very large botnets.
To be pedantic, there is technically no recent MikroTik CVE WRT Meris. It was patched in 2018(?), shortly after discovery.
From their response to the Meris botnet[1]:
> As far as we have seen, these attacks use the same routers that were compromised in 2018, when MikroTik RouterOS had a vulnerability, that was quickly patched.
> Unfortunately, closing the vulnerability does not immediately protect these routers. If somebody got your password in 2018, just an upgrade will not help. You must also change password, re-check your firewall if it does not allow remote access to unknown parties, and look for scripts that you did not create.
The blog post goes into more detail on how to further check and harden the device. A lot of issues stem from having Winbox or other admin access not properly firewalled off and open to the world. Blessing and a curse of the power you have with these devices, I guess.
I work for a DDoS prevention provider, and business is booming at the moment.
I'm not a salesman nor do I care that much about the company, but seriously, let someone handle this stuff for you. There are at least a handful of us left, and we are good at what we do, generally.
I get that, but it's one of those things a lot of people go through, blowing a ton of effort on solved problems, because it's not one of those things that's obvious and commoditized... yet.
Peace of mind is worth a lot. If this person were with my company for example, this wouldn't even be a comment.
I say my company meaning my employer. I have exactly zero stake in mentioning them by name or advertising them.
TBF, I agree with the sentiment... Cloudflare does seem poised to own the world.
But they don't do everything well. Not everyone wants a stupid landing page or captcha.
And Cloudflare will say: wait, that's just free-tier stuff, our enterprise stuff is X. Which is the whole problem: people associate the free tier with them, while they do offer better things. And those things compete directly with others who offer... those same things.
If Cloudflare is the endgame, why are companies like mine still acquiring customers?
Others have plenty of network capacity... that's not even a thing.
Voxility has a very aggressive cold-sales department. They've approached me many times to sell their product.
I ignored the first few emails. Still they kept coming.
I told them curtly but politely we weren't interested. They tried to start a sales conversation from that. Ignored them. Still they kept coming, with future outreach emails not even acknowledging the earlier conversation.
I told them they needed to learn when to let go and, literally, to "FUCK OFF". Still they kept coming.
I stopped their spamming with a custom SpamAssassin rule. That's what it took to get rid of them.
Don't support those spammy sales techniques. Don't do business with them.
Don't Google, Amazon and Microsoft have their own CDN products with DDoS protection? I know they probably aren't at Cloudflare's level of sophistication, but in a few years' time, don't you think it will be a native offering from the big three as well, with them and Cloudflare duking it out in this space?
They do, but think about it: these companies charge money for traffic, so there's always a bit of a conflict of interest. My company has tons of customers who use cloud solutions but don't want the hit in pricing.
I do think one day they'll squash everyone out, but not anytime immediately.
Any of the players in, say, the typical Forrester Wave report is fine. However, don't necessarily trust the report's findings... just use it for the list of names.
One big thing, to me: call each one as a prospective customer would. Apparently some are easier to get ahold of than others.
Pick one where you can get a competent human on the phone quickly, because when you're getting DDoS'ed and threatened by your bosses at 1AM, that's going to buy a lot of peace of mind.
DNS-based amplification has been very popular for many years now. By this point, if a DNS resolver is still usable for amplification, no email or contact with the ISP will do anything, as they have already received many similar emails.
Modern UDP-based protocols handle this in two ways. First, prefer to make responses no larger than the request, so there is no amplification.
Second, if the response has to be larger than the request, send the requestor an address-specific value in a small initial response, e.g. HMAC of a secret and the sender IP. Then any request that incurs a large response has to contain that value. If the sender is spoofing the IP address and can't receive the small response sent to that address, they can't cause a large response to be sent there.
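A rough sketch of that second mechanism, assuming a generic UDP service rather than DNS itself; the secret, the 16-byte truncation, and the function names are illustrative, not taken from any particular protocol:

    # The server hands out HMAC(secret, client_ip) in a small first reply; any
    # request that would trigger a large response must echo that cookie back.
    # A spoofer who never sees the small reply sent to the spoofed address
    # therefore can't elicit the large one.
    import hmac
    import hashlib

    SECRET = b"rotate-me-periodically"  # placeholder server-side secret

    def make_cookie(client_ip: str) -> bytes:
        return hmac.new(SECRET, client_ip.encode(), hashlib.sha256).digest()[:16]

    def allow_large_response(client_ip: str, presented_cookie: bytes) -> bool:
        # Constant-time compare; only clients that really received the small
        # reply addressed to them can present a valid cookie.
        return hmac.compare_digest(make_cookie(client_ip), presented_cookie)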
This can't be done with DNS because of "security" middleboxes. They ossify the protocol because they reject anything they don't understand, and they don't understand new versions of the protocol even if both of the endpoints do. So the protocol gets frozen in time and no security improvements can be made because of the things that claim to be there to improve security.
That sounds like it's time to push standards forward, announce deprecations in advance, and have as many end services as possible adopt erroring out if what they are receiving isn't standards-compliant.
There is little actual reason for security middleware to not keep up.
Everything is working as intended though: we're talking about "security" middleware, not security middleware.
This stuff is built on the foundation of puffing out EnTeRpRiSe ScAlE egos with "look at all this vast complexity that I made, I am a god". It's not built on a technical foundation of always moving the needle forward just because you can and because it's cool and the right thing to do.
Sooo, all the $$$ get spent on dashboards and analytics screens and front panel designs and logos and stuff. The actual DNS bits? Probably /r/programminghorror material.
The point of deprecations is to eventually force a bad experience for those who are not keeping up. They definitely do work, but the time periods to effect change can be long. In the tech sphere many seem to interpret a long transition period as not working, given the usual pace of change.
That's different. There are ways in which ipv4 is subjectively better than ipv6, and "the catastrophe of needing more addresses" has not really panned out yet.
Resolver software is massively distributed; you don't force anything. The only place that can force anything from the top may be the root servers, but even then, many resolver operators are probably just downloading the root zone in bulk via HTTPS from somewhere to precache it and don't contact the root servers at all.
I don't think Facebook's apps work when there's no access to DNS. At least it didn't seem like it when I was working to keep that capability for WhatsApp as it moved into FB datacenters.
I don't think very many other applications will work without DNS either, although I never did much competitive testing.
Sure, they could, but at least while I was there, there was no interest in doing it, and amazement that anyone else would want to (and pushback on declaring at least a handful of IPs as stably allocated enough to be included in app downloads).
The last 100 times were in the past few hours, roughly. Are you assuming nobody uses FQDNs or URLs anymore? Better yet, are you assuming only humans use those?
I dictated the domain for our home automation system to our early-20s cleaner ("dictionaryword dot dictionaryword"), and after a few minutes she asked me "what do you usually Google to get there?".
> As a result, I'm going to add recurring testing for this capability and burn a couple of IPs to make sure upstreams are listening to our RTBH announcements.
Could you or someone else expand on this? How do you coordinate RTBH with your ISP? And how do you check whether it's working? I'd love to learn more on this topic. Thanks!
Many ISPs (better called transit providers in this context) offer a service whereby you announce a route to them (over BGP) with a specific BGP community, sometimes over a special session, sometimes in-band on your normal transit sessions, and they will blackhole (route to discard, null0, /dev/null) all traffic to that IP. Unlike normal internet announcements, these are generally (exclusively, with the providers I've worked with) accepted down to the smallest IP unit (v4 /32, v6 /64), so you can blackhole an IP which is being attacked without impacting other IPs inside the same subnet.
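As a concrete (and heavily simplified) illustration, triggering such a blackhole might look like the following with an ExaBGP-style process script. The community value (RFC 7999's well-known 65535:666), the prefix, and the next-hop handling are assumptions; each provider documents its own requirements.

    # Sketch of an ExaBGP "process" script that announces a /32 blackhole
    # toward a transit provider, tagged with a blackhole community.
    # Assumption: the provider honours the RFC 7999 community 65535:666;
    # many define their own community instead, so check their docs.
    import sys
    import time

    BLACKHOLE_COMMUNITY = "65535:666"   # provider-specific in practice
    ATTACKED_IP = "192.0.2.10"          # documentation address as a stand-in

    def announce_blackhole(prefix: str) -> None:
        # ExaBGP reads API commands from this process's stdout.
        sys.stdout.write(
            f"announce route {prefix}/32 next-hop self community [{BLACKHOLE_COMMUNITY}]\n"
        )
        sys.stdout.flush()

    if __name__ == "__main__":
        announce_blackhole(ATTACKED_IP)
        while True:            # stay alive so ExaBGP keeps the route announced
            time.sleep(60)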
How do you test it? Very simple: announce an IP (or a few) as blackholed and test to make sure things don't work (from that IP).
You could very simply set something up to ping something on that provider's infrastructure from that IP and... if it starts to work, alert!
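A minimal sketch of that check, assuming a Linux host that owns the blackholed canary address (all addresses below are placeholders from the documentation ranges):

    # Ping an upstream target *from* an IP we deliberately announce as
    # blackholed. Replies to that IP should be discarded upstream, so the
    # ping should fail; if it ever succeeds, the provider has stopped
    # honouring our RTBH announcement and we should alert.
    import subprocess

    BLACKHOLED_SRC = "198.51.100.66"   # canary IP announced as blackholed
    UPSTREAM_TARGET = "203.0.113.1"    # something on the provider's network

    def blackhole_leaking() -> bool:
        # Linux ping: -I picks the source address, -c 3 sends three probes,
        # -W 2 waits up to two seconds for each reply.
        result = subprocess.run(
            ["ping", "-I", BLACKHOLED_SRC, "-c", "3", "-W", "2", UPSTREAM_TARGET],
            capture_output=True,
        )
        return result.returncode == 0  # success means traffic got through

    if __name__ == "__main__":
        if blackhole_leaking():
            print("ALERT: upstream is no longer honouring our RTBH announcement")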
To my knowledge there are also ways to integrate network observability tools (like Kentik) to automate this to a degree, for those that are big enough, or whose DDoS events are common enough, that it is useful to do so.
I imagine getting paged because of a DDoS is slightly easier when the page tells you it has already null-routed a few IPs, so the rest of your network isn't screwed and you just have to work out how problematic it is for those specific IPs to be out of service, and whether you need to take action or wait for the attackers to get bored.
Can you give more detail on what you mean by the "recent MikroTik CVE"? As far as I understood, the recent botnet was utilizing devices still unpatched since the original issue (2018?), as well as credentials gathered back then, even on devices that were patched but whose passwords weren't rotated.
According to MikroTik, the recent botnet of hacked routers is only used for proxying traffic, not generating it. If this is true then they're only useful for hiding source addresses; an attacker actually loses power by using them for a volumetric attack.
If that were true, it would be trivial to trace back the traffic to the origin of the attack since you would see 40Gbps incoming at the target, one set of intermediaries, and 40Gbps to a single source (which is also unlikely given 40Gbps uplinks are quite a big monthly expense). They might be using the Tor network (or creating a makeshift proxy network), but it would seemingly be a waste of bandwidth on the target routers. The regular C&C approach seems more practical since it can make use of available bandwidth and leaves less of a trail.
It's quite unlikely there's a single link sending the traffic; that would be super easy to block. Most likely this is used to either hide the command server or the actually compromised servers. While symmetrical load can be quite obvious, it's less so if we're talking about 2000 links sending at 20 Mb/s each, especially when those links also carry legitimate traffic.
I fail to understand how DNS reflection attacks are possible. Isn't it in the interest of any ISP to block outgoing spoofed IP packets? So as not to be accused of letting those attacks originate from their network?
If you're thinking of residential ISPs which own the IPs that their customers use, it's mostly pretty simple to prevent spoofing (or at least, prevent spoofing outside their address range), but once you get customers that can bring their own IPs, it can be more cumbersome and the default is to not care.
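As a toy illustration of that simple case (real filtering happens in router ACLs or uRPF, not in Python): drop any customer packet whose source address isn't in a prefix the ISP knows it has assigned. The prefixes below are placeholders.

    # Toy illustration of source-address (egress) filtering, BCP 38-style.
    # CUSTOMER_PREFIXES stands in for the address ranges the ISP assigns.
    from ipaddress import ip_address, ip_network

    CUSTOMER_PREFIXES = [
        ip_network("198.51.100.0/24"),
        ip_network("192.0.2.0/24"),
    ]

    def should_forward(src_ip: str) -> bool:
        addr = ip_address(src_ip)
        return any(addr in prefix for prefix in CUSTOMER_PREFIXES)

    # should_forward("198.51.100.7")  -> True  (legitimately ours)
    # should_forward("8.8.8.8")       -> False (spoofed source; drop it)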
There are, of course, proposals and protocols to make things better, but not caring is easier, and strict enforcement is hard, especially as the bandwidth goes up.
There's also enough ISPs that don't do egress filtering that name and shame isn't effective, and anyway, what am I going to do if I'm connected to an ISP that doesn't filter? I have exactly one meaningful offer of connectivity, so that's the one I've accepted, regardless of its merits. Many ISPs have a similar hold on customers.
The routing need not be symmetric. If I've got two /24s and two ISPs, maybe I advertise each /24 on only one ISP, for traffic engineering purposes (I want to steer some of my clients to send through one ISP and some clients to send through another), but that doesn't mean I want or need to send the return traffic back through the same way it came in.
Also, the routing/BGP configuration is often separate from the filtering config, so making sure things are synchronized can cause problems.
It's far less time intensive as an ISP to just not filter once it starts getting complex. After all, you don't get a lot of customer service calls when you let ill-advised packets through, but you do get calls when you break things by dropping packets your clients were authorized to send.
That may be so; however, my own need for clarification was the same as yours. If ISP xyz operates a /16 IP block and only forwards packets sourced from its /16... then, I guess, my question is: how do spoofed packets usually travel past their first hops on the route to the target?
[edit] after-thought: I imagine attackers might easily spoof their source as any address in their ISP's /16 or /8. Feels like that's not the entire story here, though.
I think you're still missing the point. Let's say I run John Doe's chop shop. I have purchased my own /24 (203.0.113.0/24). I want redundant connectivity, so I have service from Comcast and Qwest. I use BGP to decide whether I'm routing my IPs through Comcast or Qwest at any given moment, so they both need to allow traffic sourced from that IP space even when it isn't currently routing through them. Multiply that by 100,000 customers. So it's easier to just not filter at all.
The ISP originating the spoofed packets isn’t apparent to the person receiving the attack. The source is spoofed to the victim’s address so neither the DNS operator nor the victim can see where the spoofed packets originated.
Anecdotally I've noticed bot-like behavior on HN down-voting posts almost immediately, before a human could have read them. This is followed, typically, by a correction over a few minutes and then regular up- (or down-) voting resumes. I can imagine that HN is subjected to quite a lot of novel attacks of this nature, and I don't think it's in anyone's interest that they broadcast the details of it. My advice: be patient and take downvotes with a big grain of salt. They may not mean what you think they mean.
The MAC addresses on the Ethernet layer are rewritten at each hop. You would only see the MAC address of the router your router is connected to, not a chain linking back to the origin.
Who would then have to determine where it's coming into their network from, and go ask that ISP, who would have to do the same, ad nauseam. And all parties would have to be paying enough staff to handle that load in addition to, you know, making sure their services work.
- the DNS operator doesn’t care. These look like normal requests.
- if they did care, asking an ISP to packet trace ingress traffic is not trivial. At any large scale ISP there are hundreds to thousands of direct peers that could have originated that traffic.
When you get to the level of traffic where it matters to check this, you are at the point where you cannot actually check this. Just think of the amount of traffic per second, the number of source addresses, and how many people it would take, for how long, to research that one minute's worth of traffic from last month.
I'm not sure how things are now, but 15 years ago (back when I was doing network admin type stuff), most routers did destination routing in hardware; source routing hit the CPU and was thus very slow. I think that was the primary reason not to do that back then; perhaps it's still the case now?
It's not really recent (2018); it's just that it's still being exploited after all this time. If you're using a default config on the home router you're basically already fine (provided you change the default login).
That's not really a solution. If you're getting hit with a 40Gbps attack and are running a website and you can simply use Cloudflare, that's a perfectly valid solution. Sure it means that some folks can't crawl your sites and a small number of people might get hit with captchas, but it's better than having no site.
I think people who make this argument tend to forget that Cloudflare is an infrastructure provider that the website owner employs. It's not really MITM if the site owner explicitly asked them to terminate TLS so that CF can provide load balancing, tunneling, and a number of other services. It's the exact same as using an AWS ELB. Yeah, it terminates TLS, but you can't really say it's doing MITM since the site owner specifically configured it for that purpose.
> Sure it means that some folks can't crawl your sites and a small number of people might get hit with captchas, but it's better than having no site.
Yeah, I just feel like people overreact to news.
Moms of 2021: this person in the news had a very bad case of X (covid, covid vaccine, hazelnuts, idk), I should avoid X preemptively
Nerds of 2021: this website in the news had a very bad DDoS attack, I should avoid DDoS attacks preemptively
The vast majority of people do not need to break the internet[1] for DDoS protection. It is really not that common. I know exactly nobody whose personal website got DDoS'ed. I do know people whose personal website is behind Cloudflare to preemptively avoid this problem.
I run a website myself where people can host all sorts of content; I can totally imagine not everyone is happy with that. Never been on the receiving end of any kind of abuse though (people even ask me if I'm not afraid of that!). And if I were, I'd talk to my ISP -- they were previously involved in lawsuits for internet freedoms (i.e. on the good side), so perhaps they'd also be happy to help me keep a site hosted with them before I need to consider moving to big brother corp for protection.
> It's not really MITM if the site owner explicitly asked them to terminate TLS
Nobody means MITM in the attacker sense when the service being MITM'd literally asked the proxy to proxy their traffic. Obviously. Saying that Cloudflare MITMs connections is a way to carry both meaning and judgement, similar to how I will talk about middleboxes on corporate networks that block evil haxxor tools that I need for my daily work (y'know, wireshark and such). I call those MITM boxes because that's what they do but also because I think they're more evil than good and the term reflects that (even if there are obviously pros and cons, same with Cloudflare).
> It's the exact same as using an AWS ELB
Hmm, if I understand what AWS does correctly, their load balancing service just routes traffic internally. It's not a transparent proxy where you think you're talking to one company but really you're talking to another. The manager at BigBank also understands intuitively that if they host their data at ExampleCorp, then ExampleCorp needs to not have data breaches. But if Cloudflare is just removing malicious traffic, it's not immediately obvious that they are in just as sensitive a position. The privacy policy rarely if ever mentions such proxying services.
I take your point though that it's not that different. This is also why I'd never host with Amazon or configure my email servers to be Google's, but yeah Cloudflare proxying gets more comments than hosting the whole thing at what some people perceive as an evilcorp. Not sure if that's for the aforementioned reasons or not.