I'd say there is a 98% chance this is a bug in some firmware and a 2% chance AT&T is intentionally trying to block Cloudflare DNS.
I get why people are paranoid about ISPs blocking content and net neutrality, but let's not cry wolf prematurely. The technical details here strongly suggest a bug rather than intentional blocking of 1.1.1.1 DNS traffic.
> For IPv6, we have chosen 2606:4700:4700::1111 and 2606:4700:4700::1001 for our service. It’s not as easy to get cool IPv6 addresses; however, we’ve picked an address that only uses digits.
shows "connect: Network is unreachable". Am I using ping6 wrong?
We also need to confirm IPV6 works outside AT&T's network.
Edit: Just tried Google's DNS. 8.8.8.8 works, but their IPv6 doesn't, so I guess this was a bad test.
Edit2: Learned about nslookup, but it does not seem to work with either Google or CloudFlare's IPv6 addresses:
nslookup reddit.com # Works
nslookup reddit.com 1.1.1.1 # Works
nslookup reddit.com 1.0.0.1 # Works
nslookup reddit.com 2606:4700:4700::1111 # Does not
nslookup reddit.com 8.8.8.8 # Works
nslookup reddit.com 2001:4860:4860::8888 # Does not
nslookup reddit.com 2001:4860:4860:0:0:0:0:8888 # Does not
Edit3: Apparently my ISP doesn't support IPv6 yet.
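(For anyone else trying to narrow this down: here's a rough way to check whether a connection has working IPv6 at all. This assumes a Linux box with iproute2 and iputils installed; adjust for your OS.)
ip -6 addr show scope global       # no output here usually means no routable IPv6 address was assigned
ip -6 route show default           # checks whether an IPv6 default route exists at all
ping6 -c 3 2001:4860:4860::8888    # Google's IPv6 resolver; "Network is unreachable" means no IPv6 path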
You're using the IPv6 address correctly. Does https://test-ipv6.com report everything's dandy for you? If it does, maybe they're blocking traffic or there's something else going on.
I'm using Bell in Ontario. It could be that my router doesn't support it, the apartment isn't wired up to support it (if that's even required?), my ISP doesn't support it in my area, or my Bell internet plan doesn't cover IPv6...
I'll ask them about it when they ring me up next time asking for more money.
fwiw, I am an AT&T customer in Atlanta on their fiber service.
nslookup reddit.com 1.1.1.1 does not return for me; if I connect to work via VPN, it does. 1.0.0.1 and 8.8.8.8 do work without VPN. While the AT&T modem shows IPv6, I did not test it.
System Information
Manufacturer: Pace Plc
Model: 5268AC
You are definitely wrong. No daemons have to be running, ping operates using standard ICMP echo messages that are a part of any complete IP stack. Any meaningful OS will respond to pings unless prevented from receiving them by a firewall. It wouldn't surprise me to find that some embedded implementations skip that part for size reasons, but even in that category most devices I have available to me still respond. It's a basic network connectivity diagnostic tool.
What is unfortunately common though is people blocking ICMP at their firewall, either at the host level itself or further upstream. Sometimes they just block echo requests, but often they block ICMP entirely which breaks things in very weird ways from time to time.
Blocking ICMP in any way is generally to be considered harmful. It's not 1997 anymore, the "ping of death" is not a thing on any OS you should actually be connecting to the internet.
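If a network really must filter ICMP, a more surgical approach than dropping it all is to explicitly allow the handful of types things actually depend on. A rough iptables sketch (Linux only, assuming iptables is the firewall in use; ICMPv6 needs its own equivalent rules):
# Keep ping working
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
iptables -A INPUT -p icmp --icmp-type echo-reply -j ACCEPT
# Keep the error messages that path MTU discovery and traceroute depend on
iptables -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT
iptables -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT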
I have AT&T internet and the BGW-210 gateway with the latest firmware, and my area was upgraded to native dual-stack IPv6 about a year ago. So I tested it out, and the IPv6 CloudFlare DNS (2606:4700:4700::1111, 2606:4700:4700::1001) works perfectly fine. https://imgur.com/a/grUzeDD It's only the IPv4 1.1.1.1 that does not. And AT&T made a statement about why that is.
""With the recent launch of Cloudflare's 1.1.1.1 DNS service, we have discovered an unintentional gateway IP address conflict with 1 of their 4 usable IPs and are working to resolve the issue,"
A few of you will be disappointed to know it's not an evil attempt to block you from using it, same as how they have literally never blocked the ability to use any other DNS service before. It's simply a bug caused by the way the BGW-210 and Pace 5268AC operate and make use of 1.1.1.1 internally in some way, and it will be fixed with a firmware update.
AT&T isn't blocking 1.1.1.1; I just tested it on my U-verse connection. As much as I hate AT&T, their internet is pretty solid, with the exception of data caps.
A more interesting use case, though it would have its dangers, would be showing a message to AT&T users that their ISP is doing things to damage the internet and that they should call and complain. People got mad at the similar idea of CloudFlare slowing down network requests made by FCC members in protest of their shenanigans.
This is what happened to me as well. It worked for a day or so and then stopped.
I have ATT U-verse internet service and use their Arris BGW210-700 gateway
One interesting thing is that if I go to the gateway management page, and use their diagnostic tools, I'm able to ping / traceroute the address - but I can't from any devices connected to the gateway
From gateway diag page:
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: seq=0 ttl=64 time=0.568 ms
64 bytes from 1.1.1.1: seq=1 ttl=64 time=0.156 ms
64 bytes from 1.1.1.1: seq=2 ttl=64 time=0.164 ms
64 bytes from 1.1.1.1: seq=3 ttl=64 time=0.144 ms
--- 1.1.1.1 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.144/0.258/0.568 ms
traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 38 byte packets
1 1dot1dot1dot1.cloudflare-dns.com (1.1.1.1) 0.285 ms 0.177 ms 0.090 ms
The times on the pings make it look like it's hitting a loopback address instead. Pings to 8.8.8.8 from the diagnostics page take about 23 ms. No way 1.1.1.1 is completing in under 1 ms, haha.
A possible explanation is that the traffic from active use of 1.1.1.1 caused some backend service to get overloaded due to a faulty assumption that the address would never be used by customers. Anyone keep traceroutes from before the patch to see if there were errant stops or delays?
They had the choice of "fix the whole backend" or "block 1.x on the user end".
Guess we know which one was easier. If all this wild speculation is true, maybe they're working on a fix to the root cause and will roll back the patch when complete.
This would make the situation a product of both incompetence and intent.
1.1.1.1 is well known (based on the announcement from cloudflare anyway) to have tons of random traffic. That's part of the reason it wasn't implemented by others as a valid address for anything. Could the fact that they're simply allowing traffic at that address cause additional stress on AT&T's network?
I ask because I don't know. I figure any traffic headed that direction would be sent anyway; it just wouldn't get routed very far with no valid destination.
Yeah. And there's also a lot of traffic going in Facebook's direction, for example. Hey, let's blackhole that too - and alleviate the stress on our network that comes from people using it. (In non-sarcastic tone: that doesn't make any sense.)
Based on what I understand, the amount of traffic headed to 1.1.1.1 is much more significant. I agree with you, though, that wouldn't be justification to block it. It looks like they're also blocking 1.0.0.1 and the relevant IPv6 addresses, which shouldn't have the same traffic issue.
I doubt it's all that significant, it's a really small portion of traffic compared to a web page, javascript, css or images... and with caching even less of an impact.
The problem isn't DNS traffic. The problem is that for years people have been using 1.1.1.1 in the configuration of software and devices when they didn't have a real IP address to configure. The result is that when 1.1.1.1 becomes routable, all that additional traffic flows there, and AT&T, along with other providers, carries that traffic. I was wrong that AT&T was blocking it for honorable reasons, but this is still a significant amount of traffic.
I was using 1.1.1.1 with AT&T Fiber and it stopped working. I didn't really question it, I figured maybe something went down at Cloudflare so I just switched my Mac back to using the defaults again. It never even occurred to me that AT&T might be blocking it.
Maybe stupid question, but why would AT&T block it?
A few others have mentioned this already, but 1.1.1.1 has become a colloquial private address, used either as a blackhole or as a destination for internal traffic. Sort of like how 555-5555 technically isn't reserved (only 555-01xx is, according to Wikipedia), but practically, it's not really a workable number and phone companies don't hand it out.
According to the announcement post, part of the reason that Cloudflare was allocated the 1.1.1.1 address is that they were ready and willing to handle the expected inundation of all kinds of bizarre traffic.
It seems that one of those "off-label" uses of 1.1.1.1 is an internal / network control interface on [some?] AT&T networks. I'm just speculating, but it's definitely possible that 1.1.1.1 suddenly becoming publicly routable and pointed to a real thing caused some problems. "Patch it out" may be an acceptable emergency response depending on the breakages, but not really acceptable long-term.
You're absolutely right about this. This is almost certainly just there to block people who mistakenly paste in an example configuration somewhere.
Back in 2010 there were problems that came up when IANA started allocating out of 1.0.0.0/8 (e.g. [1]). Things that were once assumed to be unused started being used, leading to strange issues.
Also, why on earth would AT&T block 1.1.1.1 and not Google DNS and OpenDNS?
When 1.1.1.1 was first announced a few weeks ago, many people pointed out that it was already blocked in places because so many people had effectively polluted it by over-using it for demo examples and testing traffic. CF announced they knew this and intended to do a project analyzing the data. Perhaps this block was done, whether conveniently or not, with the same intention. We'll see if they reverse it.
Having it seem like a bug would be an effective way to block it intentionally. The timing of such an unusual regression is suspicious. The fact that 1.0.0.1 is also blocked is also suspicious.
> Having it seem like a bug would be an effective way to block it intentionally.
Just like how only the true messiah denies his divinity, it doesn't give innocent bugs much of a chance.
In fact, now we can show that all bugs are suspicious, with apologies to the interesting number paradox:
The least intentional-looking bug is the most effectively hidden, and therefore should probably be suspected of being intentional. Since it's now suspect, it's no longer the least intentional-looking bug, so the next least suspicious bug suddenly deserves a bit more scrutiny, and so on.
This is an unrelated yet related question.
I am trying to access Apple support, and I use AT&T. When I go to support.apple.com I get an error message stating: Access Denied.
You do not have permission to access "http://support.apple.com" on this server. It also gives me a long reference hash. Is this AT&T denying me access?
Anything from 1.0.0.0/8 down to 1.0.0.0/15 would encompass both of those IPs, so who knows, but my guess would be some routing or other strange internal usage of some of those subnets.
Anyone work at AT&T who could give us the inside scoop on these firmware changes? Snapping a photo of the blocking code would be a valuable public service.
- If the action was malicious, the people involved in writing this code are likely okay with it and not likely to leak details of it.
- If the issue is a bug, the people involved in writing this code are probably working to fix it, and not likely to leak details of it.
- People not involved with making it would likely leave an internal access trail (independent of EXIF data) when they access that code.
Which is to say, expecting an Ed Snowden every time a company does something unethical is kinda silly, otherwise we'd have Google's search algorithm by now.
What's that saying about not attributing to malice what is more easily explained by stupidity or incompetence? (Hanlon's razor and all that.)
AT&T routers also don't let you use a 10.x address at home (possibly to prepare for carrier-grade NAT, although there is an official 100.64.0.0/10 range reserved for that; so fuck you, AT&T).
I'm so sick of my AT&T router/modem for various other reasons. I hate how you are required to use it for many of their offerings (including Fiber to the home).
There are a number of tools out there for putting their router behind your Linux box. Most of them configure ebtables or use scripts to forward the 802.1x authentication packets to/from the router.
Wouldn't it be possible to use your own router and treat the AT&T router essentially like a modem? I ask because I'm about to move to an address that can get AT&T fiber.
Sort of. It has a DMZPlus mode, but all it does is assign the public IP to a specific internal device and use NAT, plus forwarding of all ports, to make it look like that device is on the public Internet (even though the modem has the same public IP). You can still plug in other devices and they get private IPv4s or parts of your IPv6 prefix, and it NATs (the IPv4) those as well (it's to support their VoIP phones and TV service).
It's a shitty hack and it adds a weird layer of indirection that's kinda buggy and doesn't always flow traffic the way you think it does. The IPv6 stuff gets confusing as well because the modem is still dishing out public IPv6 addresses, so if you want to advertise them as well, you've got to start slicing up your prefix.
I wonder if anyone has considered some sort of legislation whereby internet service providers are not allowed to block or disrupt service to certain parts of the internet in order to promote their own business model.
The argument I've made is that if they're blocking certain parts of the internet, then they shouldn't be allowed to call themselves an Internet Service Provider.
I think ISPs would welcome that change. They'd market as "WWW Providers" or "Social Media Providers" and most people would be happy.
But hey, if you have advanced needs, no problem, let me refer you to our Gaming Provider and Streaming Provider subsidiaries.
Oh you need actual technical access to the internet because you write your own software? Tricky, but I'm sure our Business Technology Services Provider subsidiary will have the service you need. (You do have a business, right?)
"Mom, we need to move downtown, where there are two competing shady ISPs and not just the one we've got here, so we can buy different packages from both to get 95% of the Internet we need."
"Hold on... they have what?! I'll talk with Timmy's mother - and you don't go anywhere. The nerve of her to her own child roam around unsecured just like that. What if you'd hit one of those pedophile sites?"
(Meanwhile this whole exchange is probably already obsolete because who visits their people's houses when you have phones?)
Maybe not-really-ISPs should be made ineligible for certain privileges / rights given to real ISPs. Like not-really-doctors can't do everything that real doctors can (grasping for a better analogy).
Other entities could punish them by revoking peering agreements. Or if CloudFlare wanted to play hardball, they could deny access to their CDN from AT&T IP ranges. That would be punishing AT&T customers further, but it would get their attention quickly and they'd complain to their ISP.
The arguments for anti-net-neutrality have basically come down to "let the free market sort it out." I don't agree with that, but if we can't have net neutrality, at least define to the customers what the "internet" means.
And in that case, the town just lost its internet. What makes you think the residents won't remember this come election day?
The problem with the "let the market decide" is that there is no free market for Internet access in the US!
In most areas there is effectively a government imposed monopoly on who can provide you access. So there is no "market" to normalise things. You simply cannot vote with your feet.
In Europe, where the regulatory framework is different, people would just switch ISPs if one started acting in bad faith.
>In most areas there is effectively a government imposed monopoly on who can provide you access.
And that government is elected by the people, right? Which means they could make this an election issue and vote candidates that don't support monopolies, right?
I don't understand what part of my statement you're arguing with.
Most people don't have the grasp on the technicalities to even be able to make the decision to vote for a specific candidate because their internet access is sub-par.
Not to mention if you vote for someone you also get all the other things that candidate aligns with, not just better internet.
(not super sure how voting on city/state level works in the us, but it should be accurate enough)
Except they haven't, really. They can still turn on their phone, log in to Facebook, and watch stuff on YouTube. Someone telling them they no longer have Internet will just sound silly.
but they'd just call themselves a "networking communications service provider" or something, or call themselves nothing, and people will still just use them.
Great point. Like at some point Hershey was on the verge of losing the ability to call its chocolate 'milk chocolate' because its contents didn't have enough milk and cocoa.
"Last year, a number of industry groups lobbied for a change to the FDA’s definition of chocolate — a change that would have allowed cocoa butter to be replaced with vegetable oil. At the time, Hershey’s spokesman Kirk Saville told the Harrisburg Patriot-News that “there are high-quality oils available which are equal to or better than cocoa butter in taste, nutrition, texture and function, and are preferred by consumers.”"
In many parts of southeast Asia you can find plenty of "web access" providers that literally give you a private IP behind a NAT in their "LAN", and they are much cheaper than "real Internet". Free WiFi is almost always a similar thing. They are sometimes called InterNAT instead of Internet service.
NN seems like probably a good idea, but it's crazy to me how the whole internet went crazy over something with at-most marginal effects, but barely a peep over FOSTA which has already taken out vast swathes of valuable websites, craigslist personals perhaps most notably.
It's very unfortunate that people are simply fatigued of fighting this fight.
Also see the UK as well for an example of how previously unregulated speech has become regulated because the authorities have pushed over and over again, backing off every time there's a loud enough protest, but trying again after a short time.
All the stuff in the UK is voluntary (except the traffic analysis snooping stuff, but that's centralised and the Americans were doing that to their own citizens when it was theoretically illegal, so, meh). All the big famous ISPs you see advertising on TV have decided to volunteer to censor, but it's not a law. Smaller specialist ISPs just say "No". Mine even had a thing saying look at this great endorsement and it was a link to Hansard (the official parliamentary record) where a Peer was moaning that bad people can get uncensored Internet service from that ISP and the law doesn't stop them.
Nope. It's fascinating how many people believe this, but it isn't what that law says, and so sure enough such sites are accessible via my ISP. The ISP is required by law to provide some means by which consumers can choose not to be able to access "adult" content. It does this during sign up, if you pick "Yes, block adult content" it informs you that they choose not to do business with you and suggest you use a different ISP.
>Nope. It's fascinating how many people believe this, but it isn't what that law says
They do because it's true and that's exactly what the law says.
Digital Economy Act 2017 14 (1):
>A person contravenes this subsection if the person makes pornographic material available on the internet to persons in the United Kingdom on a commercial basis other than in a way that secures that, at any given time, the material is not normally accessible by persons under the age of 18.
Section 23: Regulator’s power to require internet service providers to block access to material
(1) Where the age-verification regulator considers that a person (“the non-complying person”) is—
Like its predecessor, the Digital Economy Act 2017 has a huge amount of text that's basically predicated on the relevant Minister pushing the button. And of course this text is a huge mess (which is why it doesn't take effect immediately, the intent is you can come back and fix it before pushing the button) and so in reality nobody pushes the button. Section 23 is one of those parts. The hypothetical regulator doesn't exist, the infrastructure for all this doesn't exist. None of this is actually law.
Go read the "commencement" section - it's actually eye-opening to do this for other laws you've heard are supposed to have drastic effects.
This is almost funny. We have the exact opposite problem in Sweden; it was just in the news today. One ISP has been convicted of allowing access to Facebook even after the user has reached their data limit for the month. This is unfair competition, since the local Swedish newspapers are still blocked when you reach your limit.
That's exactly the problem: Facebook holds a special position on that ISP. Imagine a new social network trying to compete. If users can access Facebook when they can't access the new social network it's yet another reason to avoid switching.
This is so-called zero rating. EU net neutrality regs are usually interpreted as banning it, at least on fixed line connections (mobile is more sketchy). Enforcement by country varies wildly, though, as is often the problem with EU regs.
They were blocking 1.1.1.1 on some firmwares long before cloudflare's dns service started. From what I've read, the routers use it on some internal interface.
It's likely incompetence, not malice. If they didn't want people using other DNS, and were willing to fuck with ip addresses they don't own to accomplish that, they'd be blackholing google's and opendns's public caching nameservers too.
It might even have been a conscious decision. Even though it's horrible and the people involved in developing the firmware need re-education. The decision probably went like this: we need an internal address to do something. We can't use 10, 172.16, or 192.168 ranges because those might conflict with internal LANs. 1.x is safe because we all know nobody uses them. The correct decision obviously would have been to get at&t corporate to commit to never using some tiny corner of their address space, and use that. Or 127.a.b.c if that works on the OS. Those options are only needed if they really need an extra IP address. They might not need one after all if they designed their firmware better.
Whenever I've needed IP ranges for similar purposes (i.e., default IPs for container or VM internal / private networks) I've used ranges from RFC 5737 (192.0.2.0/24, 198.51.100.0/24, and 203.0.113.0/24). These are reserved for documentation purposes, so it is highly unlikely that a customer would have them in use on their own internal network. Not the best solution, but better than tying up a public /24 that we own.
We used to use RFC1918 (172.16/12 IIRC) addresses for the communication between internal nodes in a cluster-in-box system that I worked on, which worked great until we had a subnet collision on a customer's network. Leaves me wondering if link-local (169.254/16, fe80::/10) would have been a better option - while technically the customer could decide to make the external (customer-facing) network have a link-local interface, the chances of that configuration actually happening are pretty slim.
I'm still not entirely sure what the best option is there. Maybe some clever use of network namespaces, with a named pipe to bridge between the "internal" and "external" universes? Just typing up that idea makes me cringe though.
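For what it's worth, a veth pair between the namespaces gets most of the way there without the named pipe. A rough sketch with plain iproute2 (the namespace and interface names, and the RFC 5737 TEST-NET-1 addresses, are just placeholders for illustration):
ip netns add internal                                # the "internal" universe
ip link add veth-ext type veth peer name veth-int    # virtual cable between the two universes
ip link set veth-int netns internal
ip addr add 192.0.2.1/24 dev veth-ext                # TEST-NET-1, unlikely to collide with a customer LAN
ip netns exec internal ip addr add 192.0.2.2/24 dev veth-int
ip link set veth-ext up
ip netns exec internal ip link set veth-int up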
BTW, for those wondering what this particular failure scenario is:
Let's use Docker's default 172.17.0.0/16 subnet as an example. Your docker host has iptables DNAT rules that route a given "external" IP address (10.0.1.15) to a given docker container (172.17.25.92). That works great, unless you have a workstation on a subnet such as 172.17.81.0/24. When that workstation sends a packet to 10.0.1.15, that packet gets routed to the destination container 172.17.25.92. That container goes to reply, but the reply packet never makes it back to the original workstation because the container host thinks it is bound for something else on its version of the 172.17 subnet.
One workaround to this is to have the container host also put in an SNAT rule, so that anything that it forwards to a container would have the source IP address re-written to appear to come from the container host's IP, or the docker0 bridge IP (172.17.0.1/16)
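A rough sketch of the "rewrite to the docker0 address" variant, using MASQUERADE (this assumes the stock docker0 bridge and the example addresses above, and is a sketch rather than the exact rules Docker itself installs):
# Rewrite the source of anything forwarded onto the Docker bridge to the docker0 address (172.17.0.1),
# so replies come back through the host instead of being "answered" locally on the remote 172.17 LAN
iptables -t nat -A POSTROUTING -o docker0 -d 172.17.0.0/16 -j MASQUERADE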
On a similar note, Docker for Mac assigns (or used to?) the IP address 192.168.99.100 to the VM that runs Docker. One day I was working in a coffee shop and got really confused as to why I couldn’t connect to my application, even though the server was running. Then I realised the coffee shop WiFi was using 192.168.99.0/24 for client IPs.
I can't wait till the world comes around to the true advantages of IPv6. It's not just about adding more global addresses...nodes participate in multiple first class networks now (one of those networks is often the global internet). I'd be much more comfortable with smart devices in my home if they're on a universal local network with a public internet federation service for things like software updates. IPv6 makes this possible.
In a cluster-in-a-box scenario, you could modify the OS's network scripts to have the cluster-specific private interface start after the general LAN interface is up. Check both 10/8 and 172.16/12 to see if they're used by the public interface, and use whichever one isn't for the cluster network.
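A rough sketch of what that check could look like, assuming a Linux box where the public interface is already up (the cluster interface name and the exact addresses are placeholders):
# Use whichever RFC1918 block the existing routing table does not already touch
if ip route show | grep -qE '(^| )10\.'; then
    CLUSTER_ADDR=172.16.255.1/24    # LAN already uses 10/8, so take a corner of 172.16/12
else
    CLUSTER_ADDR=10.255.255.1/24    # otherwise take a corner of 10/8
fi
ip addr add "$CLUSTER_ADDR" dev cluster0    # cluster0 is the cluster-internal interface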
Which is the exact problem that we're seeing, here. "Oh, I know, I'll just use a segment allocated to somebody else; it's not like they use it!" Aaand...whoops, they do.
It's allocated to "DLA Systems Automation Center," a branch of the US military. The addresses are probably used on NIPRNet/SIPRNet, but not publicly routed. (Much like 22.0.0.0/8.)
My personal favorites are 44.128.0.0/16, the explicitly unallocated test network for amateur packet radio to internet gateways, and 100.64.0.0/10, the address range for bidirectional carrier grade NAT.
Curious to know if they block Google's DNS servers as well. That 1.x space was an APNIC research segment, so it's possible that some internal AT&T group was using it with the assumption it would not be publicly routable and got bit. I was enjoying the shorthand ping of 1.1 for my router at home until Cloudflare took it over. Needless to say, if that was the case for AT&T, their 'fix' is not at all acceptable.
Because users were able to connect and after the firmware update they are not? And also because they didn't even let you change this setting to begin with.
There is not enough data to attribute this to malice yet, but it does not look good (see CloudFlare's tweet).
And they singled out this one instead of Google’s, which has been around since well before NN existed and is far more well-known, because...? I remember seeing talk about this on dslreports a couple weeks ago, IIRC it’s not a deliberate block, they were using this IP or a range internally.
I think they'll block 8.8.8.8 if the anger for blocking 1.1.1.1 isn't too loud.
I think they're blocking 1.1.1.1 because customers are now using DNS that isn't them, which deprives them of valuable data on which domain names their customers go to, which they can sell to advertisers. Yes, there's other ways to get that information but the DNS server is an easy one.
> I think they'll block 8.8.8.8 if the anger for blocking 1.1.1.1 isn't too loud.
On what basis? Google started Google Public DNS in 2009 and, as far as I know, it was never intentionally blocked by any ISPs. The issue with 1.1.1.1 is a lot of hardware treats it as though it was reserved for private networks. For instance, I can't access 1.1.1.1 right now since I'm connected to a Cisco router. So this could very well be a technical issue.
But even if 1.1.1.1 is taking off more than 8.8.8.8 did, you're assuming the DNS queries people are sending are secure anyway. I'll admit I'm not completely up to date on the whole "DNS over TLS" thing, but I haven't noticed any support for it on my fully-updated Windows machine or Android phone. I'd love for someone to correct me, but I don't believe any major electronics ship with secure DNS by default. If people are sending DNS queries unencrypted, the ISPs can just sniff them.
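For what it's worth, the resolver side of DNS over TLS is easy to poke at even without OS support. A sketch assuming openssl and knot's kdig are installed (exact flags may differ by version):
openssl s_client -connect 1.1.1.1:853 -servername cloudflare-dns.com < /dev/null   # port 853 should complete a TLS handshake
kdig @1.1.1.1 +tls-ca +tls-host=cloudflare-dns.com example.com                     # a full query over TLS, if kdig is available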
> On what basis? Google started Google Public DNS in 2009 and, as far as I know, it was never intentionally blocked by any ISPs.
Net Neutrality wasn't considered much of an issue back then, it was just taken for granted (and the administration at the time was attempting to enforce it as vigorously as possible).
Forcing independent internet technical infrastructure off the internet and through their own proprietary infrastructure would be the opening shot you would expect if they wanted to open that battle. After all, you gotta boil the frog slowly, and nobody but a tiny minority of technical users would really care about not being able to use third-party DNS servers.
> I can't access 1.1.1.1 right now since I'm connected to a Cisco router.
I've never seen or heard of a Cisco router doing anything that would interfere with access to 1.1.1.1.
Their wireless LAN controllers on the other hand, use 1.1.1.1 as the default (but entirely configurable) Virtual IP to use as an anchor for the captive portal.
If you can't access 1.1.1.1 behind a Cisco router it's likely because someone set it up incorrectly.
> I've never seen or heard of a Cisco router doing anything that would interfere with access to 1.1.1.1.
I have news for you...
"After very little research we quickly came across Cisco mis-using 1.1.1.1, a quick search for “cisco 1.1.1.1” brought up numerous articles where Cisco are squatting on 1.1.1.1 for their Wireless LAN Controllers (WLC). It’s unclear if Cisco officially regards 1.0.0.0/8 as bogon space, but there are lots of examples that can be found on their community websites giving example bogon lists that include the /8. It mostly seems to be used for captive portal when authenticating to the wireless access point, often found in hotels, cafés and other public WiFi hotspot locations."
I'm guessing they aren't blocking it, but internally routing that IP somewhere it shouldn't go. A lot of Cisco/Airespace wireless network gear would put the sign-in network on 1.1.1.1.
What's the theory exactly? What would be the benefit for AT&T to block a new 3rd party DNS? Did they do similar things in the past for other 3rd party DNSs such as OpenDNS, Quad9 or Google's? Seems odd to target this one service in particular.
> I would think that being able to see what people are looking up would be quite valuable to an ISP
Definitely. So if this truly was their strategy, why are they blocking 1.1.1.1 instead of pointing it at their own DNS? It would be less immediately obvious what’s happening versus outright blockage. I really think people are prematurely attributing this to nefariousness.
Net neutrality started disappearing long before it was even called "net neutrality" --- a lot of residential ISPs won't even let others send packets to the full 64K port range of TCP/UDP to the IP it gives you, blocking some of them for "security reasons", throttling/cutting off certain protocols like BitTorrent, censoring "malicious" sites, etc. If we want true Internet connections we're going to have to fight a lot harder...
I would guess it has something to do with cisco asking them to help alleviate issues with their 1.1.1.1 squatting on a bunch of devices. I tested it when it came out, and if I set my DNS to 1.1.1.1, then logged into a hotel wireless network (that I knew was running those devices), as soon as a request was made, I was logged out of the captive portal.
I would have expected 1.1.1.1 to already be blocked if anyone filters on bogon-space (or has dealt with i
Is there a database of who blocks what? I searched but didn't find a collection anywhere.
Unless we are looking at port 25 and whatnot. Yes, it is not allowing you to use a (not technically)-arbitrary port, but most would agree that the internet is better off for that.
Using unallocated IPs for "internal" or bogus purposes is sketchy, continuing to use them after they are allocated is something else. Especially so nearly a decade on.
There is, when much of the code was "write once, read never". There were more than a few dozen-MB blobs of dense perl5 code that we had no clue what they actually did, and we were told not to touch them, lest many things break.
I had to end up touching one of them, because of things breaking with that subsystem and the new ticketing system that was being implemented. It had the wonderful line
database_user = root
database_password = [current mysql root password]
Every time I write some crap code at work, someone on HN tells a story about such horrors that I no longer feel bad. Thanks for making my day better :).
This team provides a great side service: you can set up BGP with them using an internal AS. It's one of the few ways you can get practical experience setting up BGP in the home with a third party. I'm running it right now.
> A bogon prefix is a route that should never appear in the Internet routing table. A packet routed over the public Internet (not including over VPNs or other tunnels) should never have a source address in a bogon range. These are commonly found as the source addresses of DDoS attacks.
With CGNAT, you're lucky if you even get a routable IP address anymore. ISPs have actually gotten substantially worse over the past ten years in this regard.
You can't be too mad about the full port range. Residential ISPs blocking port 25 outbound (spam malware) and inbound (people installing mailer services that act as open relays by default) cut out tonnes of unwanted traffic.
I know there was an amount of collateral damage, but if you think about it, it's been many years since malware would get onto user desktops and just send spam, largely due to this.
It's the internet; blocking ports without explicit reason is totally unacceptable. It's also pointless in most cases, since people will just tunnel their traffic over ports used by other applications, such as 80.
The right response is to contact the owners of the servers/services they're running and tell them to configure them correctly - if they continue to abuse them or don't show the technical skills, then that's another matter.
Blocking things like Windows file sharing ports by default is fine, as long as you have the option to turn that off. Other ports, including mail, should be open.
I had one provider interfering with War Thunder traffic somehow: packet loss always in the 20%+ range, which disappeared immediately if tunneled through a VPN. I switched providers, and while War Thunder now works, I can no longer play Dwarf Fortress remotely on my iPad.
Even diagnosing the issue and finding someone on the other side who understands the topic is hard. I'm no network engineer, and the support guys definitely aren't either.
It's just a roulette. You have to keep changing until you find one that works, and it sucks.
Because as it stands right now, AT&T sells you access to their network. What happens on their network is for AT&T to decide. With the FCC striking down net neutrality [1], AT&T is probably testing out the waters.
[1] According to google, it's defined as:
"the principle that Internet service providers should enable access to all content and applications regardless of the source, and without favoring or blocking particular products or websites."
They've done this before, during, and after net neutrality. AT&T regularly blocks entire ranges at the IP level because they are "suspected of cyberattacks." I've frequently had issues with web hosts who are blocked only on AT&T, and this was the case in October-November (before the FCC vote) while I was launching a new site.
If you can construe some horizontal market where Cloudflare and AT&T are competitors, it could of course still be illegal for AT&T to block the other's services simply under antitrust law.
NN hasn't actually been struck off yet though, it's still on the books isn't it? Pai needs to sign something and for some reason he hasn't. It was in the news a few days back.
The issue is that the cable companies have monopolies set in law already. There are numerous regulations designed to stop any new last-mile telecom companies from starting up, which literally guarantees a monopoly for the few companies that already exist in the vast majority of the US. As good as Net Neutrality sounds in theory, all we really need to do is drop the regulations and allow new players to enter the game and the market will fix it for itself.
The guy who himself banned a site from his service? Surely if he has the right to block, others do too. After all, it's a free market and private companies are allowed to do what they want; if you don't like it, go to someone else. Remember, only the government can censor.
Cloudflare are not a monopoly or duopoly. Cloudflare isn't a critical link in the chain between consumers and the wider internet. Being a Cloudflare customer isn't a necessary part of internet access.
This isn't malice. AT&T has an internal IP they assigned to 1.1.1.1 because it was unused, and they used it as an image-caching proxy so browsing the internet would feel faster on early phones. I've seen it when I was reverse engineering on Android a while back.
This is actually the reason that 1.1.1.1 gets so much traffic. People just assume it's not in use and can be abused a bit. Once it's available on the internet then all that excess traffic that was going nowhere gets transferred there.
Still, it looks more like malice since there are other addresses besides 1.1.1.1 that are also blocked.
Let's not act like using a "probably not in-use IPv4 but we can't really be sure" is a crime against humanity. If you're designing any kind of large scale system over the internet you end up hitting the problem sooner or later (like how some VPN solutions started using 5.x.y.z to be sure not to clash with LAN IPs for instance). The real solution of course would be to switch to IPv6 where any vendor can claim some private address anywhere without any realistic risk of collision but we all know that we're not ready for that yet.
By their own admission CF receives a ridiculous amount of garbage traffic at this IP. It was not absurd for AT&T engineers in the past to think "well, we need an IP that we can be reasonably sure nobody is going to use and is never going to conflict with anything on any network; 1.1.1.1 seems reasonable." Seeing everybody in this thread jumping to conspiracy theories instead of the much more likely configuration issue is a bit disappointing for a community that's supposed to understand technology.
> it was not absurd for AT&T engineers in the past to think
That is an utterly unreasonable conclusion.
The same logic resulted in Y2K, which was generally a huge waste of time, money, and resources.
The same logic has resulted in the anemic adoption of IPv6, which is NOT a correct solution, because it doesn't work properly for large swaths of the public.
The correct answer always was, and will continue to be, to use internal routes for internal routing, and external routes for external. Clashes with your LAN? Too fucking bad.
This sort of pushing of externalities onto the customer results in the same exact outcome anyway: the customer gets screwed.
The customer always gets screwed. Don't rationalize the incompetence of engineers who should know better, and corporate execs who don't give a fuck.
It's not reasonable. We have RFCs for a reason, which define which subnets can be used for public use, and which can be used internally. This has been written down for a long time and anyone working at ATT that can make these kind of decisions should know better.
AT&T regularly assigns my phone an IP in 10/8, instead of using 100.64/10 as they should [1]. IIRC, they used to even have the gall to use 172.16/12, which is crazy when you consider the amount of corporate networks using those addresses.
This caused issues where my phone would try to get on wifi, but the DHCPACK would be sent along on the existing interface rather than the one coming up. So the wifi icon was continually bouncing back and forth. My only solution was to go into airplane mode and bring the cellular down before bringing the wifi up. I don't think Android ever addressed this issue, and I had to switch around the entire subnet to avoid the conflicts.
If I knew enough about how Android worked, I'd write a patch to have all android interfaces in their own linux netns, with the dhcp client exec'd in that netns, that way you'd never have to worry about this sort of conflict.
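For reference, the general shape of that on plain Linux (not the Android-specific plumbing; the interface and namespace names below are placeholders):
ip netns add cellular                    # dedicated namespace for the mobile data interface
ip link set wwan0 netns cellular         # move the interface into it (wwan0 is a placeholder name)
ip netns exec cellular dhclient wwan0    # run the DHCP client inside that namespace, so its 10.x lease
                                         # can never collide with addressing on the wifi side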
I think it says that you can use it even if you're not doing CGNAT, provided that you are a service provider.
> In particular, Shared Address Space can only be used in Service Provider networks OR on routing equipment that is able to do address translation across router interfaces when the addresses are identical on two different interfaces.
(edit: no, very clearly, "Devices MUST be capable of performing address translation when identical Shared Address Space ranges are used on two different interfaces." )
Also, is it wrong to assume that cellular networks are able to handle address clashes, given the inherent centralization that comes from having clients maintain the same IP (and the same connections) as the device hops from tower to tower? Maybe I don't understand the topics at play here...
They used an IP that was originally reserved for what reserved IPs are used for. Now that Cloudflare has gotten 1.1.1.1 released, I'm sure AT&T wants service continuity and had to make this decision, which is well within their rights as an ISP. I dislike AT&T, so if this were entirely opinion-based, I would be against them here. But this is a knee-jerk reaction to a well-justified decision.
Except that it wasn't classified as a private IP address. They should've used something like 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16.
The 1.0.0.0/8 range was held by IANA from 1981 up until 2010, when it was transferred to APNIC. (The 2.0.0.0/8 range was also held by IANA until 2010, then transferred to RIPE NCC.)
If you want to get technical, use of the space could be construed as theft.
As for the continuity issue, it was stated that it was an old device, so they have no responsibility to continue supporting it, and considering the age of the device in question, it may not be able to connect to the existing network.
It would be reasonable for AT&T equipment to block any private IP traffic outside a customer's private network by default. That's ignoring that AT&T's network is public, and it wouldn't make sense to use a private IP for a service they provide.
Reversing that would likely require AT&T to make a firewall change to literally every piece of equipment they operate, and that's assuming that they don't use the blocks internally. That, and I can guarantee that some customer, somewhere, would be using whatever IP they chose.
I'd be more inclined to agree with you if AT&T were to come out and say what the problem is along with assurances that they are working on rectifying the situation and expect to have 1.1.1.1 available in X days.
If you own a device that is supported by a custom ROM such as Lineage OS then you can flash that and not worry about this change.
Otherwise, you can purchase a different device, preferably a Nexus/Pixel, or at least one that's unlocked. If that's impossible for you then, yes, you're stuck with AT&T's "best efforts."
Shanghai. One of the largest Chinese data-centers with direct peering to all major national networks. I'm inside, testing a new colocation unit we just put there. Pinging 1.1.1.1 in 4.2ms, wow! Putting it in resolv.conf. Nothing works. WTF? Turns out they route 1.1.1.1 across the whole DC to one of their internal services "for engineers' convenience". Not gonna change. TIC.
Technology websites noted that by using 1.1.1.1 as the IP address for their service, Cloudflare created problems with existing setups. While 1.1.1.1 was not a reserved IP address, it was and is used by many existing routers (mostly those sold by Cisco Systems) and companies for hosting login pages to private networks, exit pages, or other purposes, rendering the use of 1.1.1.1 as a manually configured DNS server impossible on those systems. Additionally, 1.1.1.1 is blocked on many networks and by multiple ISPs because the simplicity of the address means that it was previously often used for testing purposes and not legitimate use. These previous uses have led to a huge influx of "garbage" data to Cloudflare's servers.
That’s intentional, from what I remember. All non-DNS traffic is analyzed for research purposes (not by Cloudflare though).
A wake-up call for all those (ab)users of public address space is also desperately needed. All IPv4 addresses will soon be allocated. Failure to use only private address spaces will cause problems, very soon.
CloudFlare likely did this on purpose, because so many people can't get their heads out of their own asses and follow spec. Now there's a big spotlight on the people purposefully breaking the network. And it will be fixed, eventually, whereas previously, AT&T would have just said "take a hike".
I always thought it was strange to see the example loopback address listed as 1.1.1.1 or 1.xxx.xxx.xxx in many tutorials and official network certification guides, and wondered why they did not use a private address. This is more than likely why many users are having problems: they are being routed to a loopback address on their own router or another router. Hopefully network admins and engineers will choose non-public IP space as their loopback addresses to resolve the problem.
Indeed. I wish most people used TEST-NET-1, TEST-NET-2 and TEST-NET-3 in documentation and training material.
RFC 5735:
> 192.0.2.0/24 - This block is assigned as "TEST-NET-1" for use in documentation and example code. It is often used in conjunction with domain names example.com or example.net in vendor and protocol documentation. As described in RFC5737, addresses within this block do not legitimately appear on the public Internet and can be used without any coordination with IANA or an Internet registry.
> 198.51.100.0/24 - This block is assigned as "TEST-NET-2" for use in documentation and example code. It is often used in conjunction with domain names example.com or example.net in vendor and protocol documentation. As described in RFC5737, addresses within this block do not legitimately appear on the public Internet and can be used without any coordination with IANA or an Internet registry.
> 203.0.113.0/24 - This block is assigned as "TEST-NET-3" for use in documentation and example code. It is often used in conjunction with domain names example.com or example.net in vendor and protocol documentation. As described in RFC5737, addresses within this block do not legitimately appear on the public Internet and can be used without any coordination with IANA or an Internet registry.
Everything there is an example, not actual best practice. But a lot of admins probably just go with 'eh, my textbook used 1.1.1.1, so will I'.
Really the only place I saw 1.1.1.1 regularly though is to set the router ID, and making a loopback address is not the best way to do that to begin with.
I can understand not being able to remember 192.168.X.X. But if that's the issue, why not use 10.X.X.X which is private AND easy to remember? Is it really hard to remember 10?
That's so crazy, I actually experienced this today.
I've been using 1.1.1.1, and today went to the library for a quick work break. I pulled out my laptop and tried to connect to the wifi, and it wasn't working. After a few minutes of troubleshooting, I tried deleting my custom DNS entry in my network settings and that did the trick.
Yes, they can; regardless of your resolver they can collect that if you're not using DNSSEC.
The same goes for HTTPS handshakes leaking your target domain (otherwise SNI wouldn't work), so DNSSEC alone is fairly pointless for regular web traffic obfuscation; and of course the destination IP is in each packet regardless.
It becomes more a matter of whether they are doing it yet (re: DNS monitoring in this manner); with enough people using third-party resolvers (I'd argue Google's public DNS already has enough usage to warrant it), they will be.
Optimally you'd VPN at all times to a provider you trust or one you've setup yourself.
What it all really boils down to, though, is that the populace simply can't be trusted (nor should they need to be) to make themselves acceptably secure from third-party monitoring. We need to have much more discussion around data privacy and retention for ISPs.
It's not a matter of if the data will be misused; it's truly a matter of when, and it's not fair to the general public.
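To make the monitoring point concrete: with plain UDP DNS, anyone on the path only needs to watch port 53 to see every hostname being looked up (interface name here is a placeholder):
tcpdump -i eth0 -n -l udp port 53    # queried hostnames show up in cleartext in each packet summary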
Some folks use a Ubiquiti EdgeRouter and a user-space proxy to forward EAP (authentication) packets to the AT&T router but otherwise use the EdgeRouter to route LAN traffic out to the ONT (fiber to Ethernet translator) and the internet, thus bypassing the shitty AT&T router for most stuff. This would be sufficient to ensure that 1.1.1.1 is reachable.
It's not a good solution for me, however, because I run PFSense, which is FreeBSD-based and lacks the PF_RING socket support to filter out those EAP packets. As far as I know, PFSense's PF packet filter cannot strain them out, either. Traditional libpcap (slow) and netmap (fast) are available on FreeBSD, too. I looked into writing an EAP proxy in Go using a special netmap-enabled libpcap, but it was way too much yak shaving and I eventually gave up. I should take another look, or maybe learn enough C to do it natively with netmap. My goal is native EAP proxy support for PFSense that can filter EAP out of a wire-speed gigabit fiber connection.
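For context, the frames that need to be proxied are the 802.1X/EAPOL ones, which are easy to spot by ethertype. A quick way to confirm you're seeing them on a given interface (the interface name is a placeholder):
tcpdump -e -i igb0 'ether proto 0x888e'    # 0x888E is the EAPOL ethertype used for 802.1X authentication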
Here is the original Cloudflare post on what 1.1.1.1 is [1]. For those who don't know, 1.1.1.1 is Cloudflare's privacy-focused DNS service. That means that when you type in www.google.com, that URL can be sent to 1.1.1.1, and then 1.1.1.1 resolves that URL to an IP address and sends the IP back to the user. All user requests are then sent to the IP address, not the URL. Supposedly this is better than using the DNS server of ATT+Comcast, because ATT+Comcast want your browsing history while Cloudflare does not.
What I don't understand is how this really helps user privacy much. If AT&T, Comcast, etc want to know your browsing habits, can't they still see the IP addresses you're browsing and figure out the URL from the IPs? I can't see that as too big an impediment, but maybe someone with more knowledge can share.
One point: the whole URL doesn't get sent to the DNS server, just the domain name.
Regarding privacy, Cloudflare are at least saying they aren't spying on you. Your ISP may not even be saying this. Also, Cloudflare don't necessarily have access to your name and address, whereas your ISP does. Also, many different sites can be hosted on the same IP address, so merely tracking the IP addresses a client is connected to won't necessarily tell you what sites they're visiting.
That said, I tried 1.1.1.1 and found I had to switch back to Google DNS, since Cloudflare intentionally doesn't support EDNS Client Subnet, which was causing my AppleTVs to have trouble loading content.
I've been meaning to try eap_proxy for a while. I've seen it mentioned several times. My ATT router doesn't get in my way enough to bother with it yet, but it still pisses me off they won't let me use a 10.x range at home.
Also I've heard that their routers report your entire network topology back when they phone home.
Can you not just put the router in bridge mode and use a different sane one? In the UK Virgin forces you to use their moderately shit modem/router, but even that lets you use bridge mode.
The best you get is a "passthrough" mode where a router behind it will get assigned the public-facing IP so it doesn't ultimately behave like double-NAT, but the gateway still maintains an internal NAT table for everything going across.
You can't just use your own VDSL modem or plug your own router into the fiber ONT, as AT&T uses 802.1x auth and the key is burned into the gateway hardware.
It allows you to completely bypass AT&T's router, so you can use your own router talking directly to the ONT. The AT&T router is then necessary only to authenticate to the ONT. So the proxy, running on your own router, sends authentication packets (and their responses) from the ONT to the AT&T router, but otherwise the AT&T router isn't handling any packets.
I just tried setting DNS to CloudFlare (primary and secondary, plus IPv6) on my downstream router behind AT&T fiber and I can't resolve any hosts. They are explicitly blocking CF DNS.
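(For anyone reproducing this, a quick way to separate "Cloudflare unreachable" from "DNS broken in general", assuming dig is installed:)
dig @1.1.1.1 example.com +short    # Cloudflare's primary IPv4 address, the one AT&T gateways collide with
dig @1.0.0.1 example.com +short    # Cloudflare's secondary IPv4 address
dig @8.8.8.8 example.com +short    # Google public DNS, as a control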
I have AT&T internet and also can't get to http://1.1.1.1. but I can on my phone using AT&T's cellular service. Apparently not all of AT&T dislikes CloudFlare.
I have AT&T Gigabit fiber and am able to access 1.1.1.1, but perhaps I just don't have the router update yet? I'm also not using the AT&T router's wifi, but a separate router behind it.
Knowing how bad most telco networks are operated, I blithely wonder if maybe they were using stuff in 1./8 as PNI or some other privileged internal net and are going through some oh shit moments.
Hanlon's razor: lots of DNS services are available on less-vanity IP space, and there is no evidence of those being blocked.
I'm certain there are, but AT&T is a 250,000-person organization with a bureaucracy to match. Things take weeks to sort out, assuming the right person is pushing for it.
I would cancel any broadband contract of any ISP that did this when providing me a service. We need to stand up to these sort of things. (Disclaimer: I live in Europe though.)
My guess is this is just incompetence and not intentionally made to block CloudFlare.
I have one of those routers, and I couldn't use 1.1.1.1 because it was routing to an internal interface on the router. I confirmed this with ping, I was getting microsecond response times from 1.1.1.1.
Under the new firmware, 1.1.1.1 is just dead. So it's probably still connected to the local interface, and nothing is listening.
Probably because they know they would never be able to convince anyone that it was a technical bug and not malice. A surprising number of people seem to be convinced this was unintentional.
FWIW as an ATT Fiber customer, I was not able to (and am still not able to) access 1.1.1.1. I tried just a couple days after Cloudflare announced the service, and requests timed out. I can access with a VPN, however.
If AT&T does not provide any official explanation, what's your opinion on how people should respond? The first thing that came to mind for me is to switch over to Xfinity on my next contract cycle.
Breaking the contract is a reasonable option, maybe? At scale & among people who can afford to (ahem, HN) openly refuse I'd argue it could have more immediate impact.
Frankly, NEVER paying the bill is an option, too. Downloading Netflix is sweet, maybe you can pool with your neighbor? that's another topic
It's expensive to enforce payment.
If you've never been in collections, it's an experience you might enjoy for sport.
If you live in fear of not being able to get a cheap interest rate on a loan for some shit you don't need... well, maybe you'd better not take part in that type of protest.
How is the ISP performing this remote update? Is it TR-069/CWMP or an open SSH port or something? Many routers will allow the user to disable TR-069 even while it's running. Often a hardware reset will also disable it, and then the user can put the manufacturer's firmware on it and prevent the ISP from managing it in the future. If it's an open SSH port, then we all have bigger problems.
AT&T's internet service requires you to use one of THEIR "gateways", which is a combination modem and wireless router. When AT&T wants a new gateway, they go to a company (mainly Arris now) and have them build a gateway that will only be for AT&T to deploy. AT&T completely controls the software/firmware on the device. There is no site you can go to and download a "manufacturer" firmware. Even if you could, the gateway wouldn't accept it because it wouldn't be signed by AT&T. And yes, AT&T uses CWMP to remotely manage the gateway. That's how they can send firmware updates, customer service can retrieve signal stats, remotely reboot the gateway, etc. And no, they certainly do not put in an option on the gateway to disable CWMP or any of the remote management stuff they use.
You can turn off the Wi-Fi on AT&T's gateway and run your own router behind the AT&T hardware. But since your router is behind the gateway everything still goes through it and AT&T still can do all the CWMP stuff to their gateway.
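If you're curious whether your own gateway exposes CWMP at all, TR-069 connection requests conventionally use TCP port 7547, so a quick LAN-side check might look like the sketch below. This says nothing about what the ISP can reach from the WAN side, and 192.168.1.254 is just a stand-in for whatever your gateway's LAN address actually is.
# check whether the gateway answers on the conventional CWMP/TR-069 port
nmap -p 7547 192.168.1.254
# or, without nmap, try a plain TCP connect
nc -vz 192.168.1.254 7547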
While far from perfect, anyone looking for a temporary workaround can run Pi-hole on a remote server and have it use 1.1.1.1 as its upstream DNS. You'll also get the benefit of Pi-hole blocking ads.
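A minimal sketch of that setup, assuming the official pihole/pihole Docker image (the name of the upstream-DNS environment variable has changed across image versions, so check the image's docs for the one you pull):
# run Pi-hole on the remote box with Cloudflare as its upstream resolvers
docker run -d --name pihole \
  -p 53:53/udp -p 53:53/tcp -p 80:80/tcp \
  -e TZ="America/Chicago" \
  -e PIHOLE_DNS_="1.1.1.1;1.0.0.1" \
  pihole/pihole
# then point your clients (or your router's DHCP DNS setting) at the remote server's IP,
# and firewall port 53 to your own addresses so you don't run an open resolver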
I really wish Cloudflare would have used a "normal" IP for their DNS service. That way there would be no confusion whatsoever as to whether this is malicious or a bug.
> The Cloudflare-APNIC experiment uses two IPv4 address ranges, 1.1.1/24 and 1.0.0/24, which have been reserved for research use. Cloudflare's new DNS uses two addresses within those ranges, 1.1.1.1 and 1.0.0.1.
They had acknowledged to themselves going into it that the IPs weren't "normal". They could have easily chosen a safer range if that was a priority.
1.1.1.1 is a normal IP as it was reserved for internet use. There are already IP ranges that are supposed to be used for internal use, and 1.1.1.1 is not one of them
1.1.1.1 is widely known to be a dumping ground of random traffic, as well as a common internal address for captive portals and whatnot. The entire rollout of 1.1.1.1 has been characterized by legacy bugs and misconfigurations of network hardware preventing its proper use. Regardless of what the standards say about 1.1.1.1, it was a poor choice if Cloudflare values widespread adoption.
Jesus Christ. Fortunately I only have AT&T on mobile and it still works there, but I will ditch them in a heartbeat if that changes. At least in the cellular space there's still some consumer choice to be had.
As a consumer, you are free to switch to a different provider. I'm not saying what they're doing is ok, but let's not neglect the opportunity to vote with our $$$.
What is the likelihood of obtaining net neutrality through the courts? I.e., Cloudflare sues -> judicial process -> a decision that establishes a "right to access"?
Likely 0% chance. The court cannot just go off and make up its own laws because it wants to, all it can do is decide how existing laws should be applied.
It's true that courts can't make laws, but they have shown a lot of leeway in the past in creatively interpreting laws (e.g., using the Interstate Commerce Clause to say a farmer can't raise a particular crop to feed his own animals). Could existing laws that prevent monopolies from unfair business practices be applied here?
Unlikely through the courts, although state utility regulators (e.g., the California Public Utilities Commission) might take an interest. The Democratic members of the FCC might also be interested.
Late to the party, but here are some traceroutes run from AT&T Gigapower with their router entirely bypassed via an 802.1x MitM:
# traceroute 1.0.0.1
traceroute to 1.0.0.1 (1.0.0.1), 30 hops max, 60 byte packets
1 45-18-124-1.lightspeed.austtx.sbcglobal.net (45.18.124.1) 59.462 ms 61.348 ms 63.373 ms
2 71.149.77.208 (71.149.77.208) 1.304 ms 1.695 ms 1.957 ms
3 75.8.128.136 (75.8.128.136) 1.329 ms 1.682 ms 1.393 ms
4 12.83.68.145 (12.83.68.145) 2.673 ms 2.661 ms 2.648 ms
5 12.123.18.233 (12.123.18.233) 8.877 ms 12.753 ms 8.800 ms
6 192.205.36.206 (192.205.36.206) 6.663 ms 6.375 ms 6.680 ms
7 66.110.56.158 (66.110.56.158) 6.885 ms 6.725 ms 6.436 ms
8 1dot1dot1dot1.cloudflare-dns.com (1.0.0.1) 6.855 ms 6.557 ms 6.662 ms
# traceroute 1.1.1.1
traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 60 byte packets
1 45-18-124-1.lightspeed.austtx.sbcglobal.net (45.18.124.1) 163.322 ms 163.927 ms 174.243 ms
2 71.149.77.208 (71.149.77.208) 1.346 ms 1.779 ms 2.035 ms
3 75.8.128.136 (75.8.128.136) 1.215 ms 1.214 ms 1.564 ms
4 12.83.68.137 (12.83.68.137) 1.495 ms 12.83.68.145 (12.83.68.145) 2.289 ms 12.83.68.137 (12.83.68.137) 2.283 ms
5 12.123.18.233 (12.123.18.233) 7.783 ms 11.766 ms 11.757 ms
6 192.205.36.206 (192.205.36.206) 6.163 ms 6.160 ms 6.202 ms
7 66.110.56.158 (66.110.56.158) 6.909 ms 6.931 ms 6.423 ms
8 1dot1dot1dot1.cloudflare-dns.com (1.1.1.1) 6.922 ms 6.492 ms 7.075 ms
; <<>> DiG 9.9.5-9+deb8u14-Debian <<>> cloudflare.com @1.1.1.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15100
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1536
;; QUESTION SECTION:
;cloudflare.com. IN A
;; ANSWER SECTION:
cloudflare.com. 53 IN A 198.41.214.162
cloudflare.com. 53 IN A 198.41.215.162
;; Query time: 7 msec
;; SERVER: 1.1.1.1#53(1.1.1.1)
;; WHEN: Thu May 03 13:40:52 UTC 2018
;; MSG SIZE rcvd: 75
; <<>> DiG 9.9.5-9+deb8u14-Debian <<>> cloudflare.com @1.0.0.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61685
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1536
;; QUESTION SECTION:
;cloudflare.com. IN A
;; ANSWER SECTION:
cloudflare.com. 66 IN A 198.41.214.162
cloudflare.com. 66 IN A 198.41.215.162
;; Query time: 7 msec
;; SERVER: 1.0.0.1#53(1.0.0.1)
;; WHEN: Thu May 03 13:40:39 UTC 2018
;; MSG SIZE rcvd: 75
I'm not going to paste the output, but `curl https://1.1.1.1/` works as well.
Doesn't look like it's anything on AT&T's internal network.
I find the sappy, defeatist, whiny attitude toward the FCC useless. File a complaint if you're affected. Complaints are cataloged and can be used as evidence in the future. The current administration is certainly against regulation, but blocking a DNS provider would be an escalation. More than likely, this block is due to incompetence: my guess is AT&T was using the IP internally for some purpose and is now getting DDoS'd.
I'll go on record as saying I am an ardent hater of U-Verse and AT&T due to personal experience with their service, and I would like nothing more than for this to be a purposeful act that results in backlash against the company...
... that said, I'm going to fall into the camp of saying this is likely an unintentional bug. If they truly wanted to block 1.1.1.1 (and its backup), doing so via gateway firmware would seem to be the most difficult and unreliable way to do it, and the benefits would be limited:
(a) If the motivation was to avoid losing the ability to spy on their customers via DNS requests, well ... they can still do that. Yes, Cloudflare supports encrypted DNS, but the half a percent of folks who have that set up wouldn't be worth the effort[0].
(b) If there was some other reason to want customers on AT&T's DNS (e.g., redirecting failed lookups to advertising pages), they could simply rewrite non-encrypted DNS packets and send them to their own infrastructure -- which would be far more likely to go unnoticed[1].
(c) Several other, far more popular and just as well publicized public DNS services have never been messed with -- why pick on a new entrant? Why not break 8.8.8.8 or OpenDNS?
More likely is the explanation that 1.1.1.1 was being used as a de facto 10.x.x.x address for other purposes. It had a few advantages: being outside the traditional non-routable ranges, it was far less likely to already be in use on a customer's internal network, and until recently it was unlikely to host legitimate services. Or ... it's something else entirely. Firmware bugs are everywhere, and having had their service and the particular brand of modem they're using, I'm not the least bit surprised. I had to root my modem to make my service work reliably[2]. Heck, I worked for a telecom for 17 years, and for the first half of that, the guy who set up our network used addresses in 1.x.x.x through 10.x.x.x internally.
[0] It's not terribly difficult to do, but few make the effort. I've got an internal DNS server configured (for AD purposes) that forwards to another internal DNS server, which in turn makes all of its requests out to Cloudflare over encrypted DNS. It was a five-minute change to my internal setup, and a lot of that was the time it took to download the container, reboot the host for testing, and validate everything.
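For anyone wanting to try something similar, one option (not necessarily what the parent used) is to run Cloudflare's cloudflared binary as a local DNS-over-HTTPS proxy and point your internal DNS server's forwarders at it. A rough sketch, assuming cloudflared is already installed:
# listen on localhost:5053 and proxy queries to Cloudflare over DNS-over-HTTPS
cloudflared proxy-dns --address 127.0.0.1 --port 5053 \
  --upstream https://1.1.1.1/dns-query \
  --upstream https://1.0.0.1/dns-query
# quick sanity check against the local proxy
dig @127.0.0.1 -p 5053 cloudflare.com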
[1] It probably would have managed to be hidden an entire minute longer than this debacle.
[2] On their DSL (re-labeled U-Verse despite having nothing to do with their U-Verse TV/Internet -- it's the old DSL, limited to 12Mb down if you're lucky), my modem would randomly display the "Internet is down" page for all requests despite everything being fine. I forget exactly what I had to do to resolve it, but it involved hitting their ping page to trigger a buffer overflow, which gave me console access so I could run some command. I also wanted to be able to ping the modem remotely (something they disable with no customer-facing option to change) so I could correlate the outages with weather and prove to customer service (... and at least a little to myself) that this bizarre happenstance wasn't all in my head. My next-door neighbors also had this problem, so I suspected it was something in the wiring up the street (expansion/contraction-related), but it was hard to track down where, because all but two households on that street (including us) used those homes as summer vacation homes and were rarely there in the winter -- many didn't have service, and those who did were unlikely to be around when the weather hit about 40 degrees, so AT&T wasn't getting outage reports frequently enough to do anything about it. Two years ago they sent a truck, took everyone down, and re-did a pole 8 houses down. Since then, the problem hasn't happened.
My parent company uses 1.1.1.1 as a captive portal address on the guest network. Easy to remember, but Cloudflare probably needs to stand up some more conventional DNS IPs.
What exactly do you mean by “wasn’t assigned”? According to this article [1], 1/8 was reserved in 1981. Only from 2008 to 2010 was 1.1.1.0/24 ever truly unallocated.
If, after 8 years, most providers still haven’t moved to either private networks or officially assigned networks, honestly – they suck.
Good. If Cloudflare is allowed to block sites from its hosting service based on opinions, then AT&T should be allowed to do the same. Also, fuck Cloudflare for choosing 1.1.1.1 when any network engineer worth his salt would have told them it's going to cause problems. There are things like conventions and traditions; you break them at your own peril.
>APNIC's research group held the IP addresses 1.1.1.1 and 1.0.0.1. While the addresses were valid, so many people had entered them into various random systems that they were continuously overwhelmed by a flood of garbage traffic. APNIC wanted to study this garbage traffic but any time they'd tried to announce the IPs, the flood would overwhelm any conventional network.
>We talked to the APNIC team about how we wanted to create a privacy-first, extremely fast DNS system. They thought it was a laudable goal. We offered Cloudflare's network to receive and study the garbage traffic in exchange for being able to offer a DNS resolver on the memorable IPs. And, with that, 1.1.1.1 was born.[0]
It's not a reserved address like 192.168.0.0/16 or 10.0.0.0/8[1][2], nor is it one of the other addresses reserved for documentation or testing. So I think the people who were using it as a test or LAN address are actually in the wrong here. This kind of "tradition" in networking is wrong; that's what things like RFCs are for.
You seem to miss my point, in that Cloudflare specifically chose that IP in order to share research data with APNIC regarding people erroneously using 1.1.1.1 in the wild.
Just because something is a tradition doesn't make it a right course of action.