Unless something has changed since I checked earlier today, VPC ELBs are still IPv4-only. They return AAAA records on the ipv6 and dualstack subdomains like their classic/non-VPC counterparts (which do fully support IPv6), but the ports aren't open on the returned addresses, so it's not very useful. Maybe someone from Amazon can chime in?
+1 on this. Would appreciate someone from Amazon commenting on this.
Unfortunately it looks like VPC ELBs are still IPv4-only. The article links to the 2011 announcement that introduced IPv6 for non-VPC ELBs. Are VPC ELBs still being worked on, or are we going to have to move to Application Load Balancers for IPv6 support?
...I did read the article, what do ALBs have anything to do with what I said? Or are you just assuming that no one needs non-HTTP services running in a VPC load balanced over IPv6?
This is great news. The reason we support IPv6 lookups but only over IPv4 connections on https://ipinfo.io is that we use AWS with VPC, which hasn't historically supported IPv6. Expect many sites/services that were previously limited by this to add IPv6 now.
I just came across https://ipinfo.io, great service. I actually use it when hyperlinking IP addresses for a quick way to view GeoIP information. Thanks!
Just wanted to echo node's comment above - I regularly use ipinfo.io and find it very useful. I'm sure the plethora of other 'IP lookup' sites have very similar data, but your design and layout makes all the difference. Thank you!
Out of curiosity, why? IPv6 is much more important at the edge than inside your VPC.
Is it just for the simplicity of your networking? One strategy is to define your IPv4 space inside one of your IPv6 networks such that you have dual addressing with directly mappable addresses.
Do you have some other use case than just simplicity of design?
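That mapping strategy can be sketched with Python's `ipaddress` module. This is just an illustration: the IPv4 block and the IPv6 /96 below are made-up example prefixes (RFC 5737/3849 documentation ranges), and `map_v4_to_v6`/`map_v6_to_v4` are hypothetical helper names.

```python
import ipaddress

# Hypothetical prefixes: a private IPv4 block and an IPv6 /96 carved out
# for it. The low 32 bits of each IPv6 address carry the IPv4 address,
# so every host has directly mappable dual addresses.
V4_NET = ipaddress.ip_network("10.20.0.0/16")
V6_NET = ipaddress.ip_network("2001:db8:1234::/96")  # documentation prefix

def map_v4_to_v6(v4_addr: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address into the low 32 bits of the IPv6 prefix."""
    v4 = ipaddress.ip_address(v4_addr)
    if v4 not in V4_NET:
        raise ValueError(f"{v4} is outside {V4_NET}")
    return ipaddress.IPv6Address(int(V6_NET.network_address) + int(v4))

def map_v6_to_v4(v6_addr: str) -> ipaddress.IPv4Address:
    """Recover the embedded IPv4 address from the low 32 bits."""
    v6 = ipaddress.ip_address(v6_addr)
    return ipaddress.IPv4Address(int(v6) & 0xFFFFFFFF)

print(map_v4_to_v6("10.20.1.5"))               # 2001:db8:1234::a14:105
print(map_v6_to_v4("2001:db8:1234::a14:105"))  # 10.20.1.5
```

With a scheme like this, firewall rules and mental bookkeeping translate directly between the two address families.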
I'm not the OP, but globally unique addresses that are optionally reachable from anywhere are super-attractive from a not-going-insane perspective if you manage a non-trivially sized network.
GP hit it on the head. Every node is globally addressable. Regarding the 464xlat, that's strictly to enable hosts in the internal network to communicate with external IPv4 only hosts. IPv6 is still used as the only internal transport, and primary external transport.
I am also curious what your use case is for every instance inside of your VPC having a publicly routable IPv6 address. Usually you only need public addresses for your load balancers and SSH jump/bastion host. All other instances inside the VPC are private networking only.
One use case: an instance might want to reach anything on the internet that's using an IPv6 address.
Also, instances can connect to IPv6 parts of the internet while you can still see which subnet of your VPC the traffic originates from, and maybe filter on that fact.
Having routable addresses is a different thing from the addresses being public or private; that's determined by whether you accept traffic to them when it arrives at the last hop before entering your VPC.
Use case: Never having to maintain a NAT instance (or, worse, but necessary: failover NAT instances).
That's how I tend to use AWS at the moment, with IPv4. Almost every resource (except AWS services) has a public IP assigned, few have open inbound security groups, most have open outbound security groups (but the option to restrict is handy).
It's still nice to be able to give each host a globally unique address even if most of them aren't publicly accessible. It means that you never e.g. have routing clashes when VPNing into your datacenter from public wifi somewhere.
Fun fact: the Ubuntu 16.04 image offered on Amazon does not support IPv6 out of the box.
Even when you log in via IPv4 and make the relevant networking changes (basically enabling DHCP for v6), apt-get update will still fail, because eu-central-1.ec2.archive.ubuntu.com is IPv4-only.
Out of the box, an instance will come up with IPv4. If you then perform DHCP for IPv6, you should still be able to route to the per-region archive mirrors (which only serve over IPv4 because when they were deployed IPv6 wasn't available).
If you disable IPv4 entirely, then it's true that you won't be able to access the archive mirrors. However, this isn't exactly "out of the box". :)
If you do want to disable IPv4, then at the same time you'll also need to modify your /etc/apt/sources.list to point at archive.ubuntu.com (which does serve over IPv6).
We're looking at how we can best make the "no IPv4" case work with the in-cloud mirrors, and hope to have that resolved soon.
After a complex set of steps to set up IPv6 in my [VPC, Router, Gateway, Security Groups, and others that I'm forgetting], I was able to launch 2 instances and get both IPv4 and IPv6 addresses. As OddBloke notes, I did have to 'sudo dhclient -6' to grab that IPv6 address in the instance. Once I did, I was able to ssh back and forth using IPv6 (yay!). Then I tried 'apt update'. In fact, the mirrors inside of AWS worked fine for us. But security.ubuntu.com did not: it resolved, but it didn't respond. Rest assured we (Canonical) are working on this right now... Thanks.
I actually hit this problem, and I fixed it by modifying the routing table associated with my VPC to route ::/0 to an Egress Only Internet Gateway. (Just _adding_ the EIGW to a VPC is not sufficient, because routes aren't configured automatically.)
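For anyone else hitting this, here is a sketch of that fix using boto3 (all resource IDs are placeholders, and this assumes AWS credentials are already configured):

```python
import boto3

ec2 = boto3.client("ec2")

# Create an Egress-Only Internet Gateway for the VPC (placeholder ID).
resp = ec2.create_egress_only_internet_gateway(VpcId="vpc-0123456789abcdef0")
eigw_id = resp["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]

# Just creating the EIGW is not sufficient: the route table still needs
# an explicit default IPv6 route pointing at it.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # placeholder route table ID
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw_id,
)
```

An EIGW gives instances outbound IPv6 connectivity while blocking unsolicited inbound traffic, roughly the IPv6 analogue of a NAT gateway's one-way behavior.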
And of course, while the missing v6 DHCP configuration is Canonical's fault, a similar problem affects Amazon Linux instances:
repo.eu-central-1.amazonaws.com is IPv4-only, so yum makecache fails.
Not really, considering any single subscriber could be assigned a whole /56, and that machines with privacy addressing would be periodically changing the last 64 bits of their address.
Hypothetically speaking, does anyone know if enabling IPv6 on a small VPC using 100% of its IPv4 addresses would allow you to spin up additional EC2 instances assigned IPv6 addresses?
NAT sucks, but so does a lack of affordable L3 switches that can handle IPv6 routing (not that I'll use that against IPv6, but it's a personal pain point of mine).
If I wanted to deploy v6 in my home network, I'd literally have to configure my EdgeRouter X on every VLAN (and as such stuff all IPv6 traffic on my network that crosses subnets into a single 1GbE connection), since my TP-Link managed switch has no facility to put out an IPv6 RA (even though it supports IPv6 routing, go figure).
I'm really digging the age of vendor silicon where most switches just support whatever Broadcom/etc. put into their chips, but man I hope they catch up with IPv6 soon because right now only the big boys (Cisco/Juniper/HP/Dell/etc.) seem to have any hardware with full IPv6 support, and then you're paying a hefty price for the hardware + support contract (assuming you want software updates).
Not dealing with carrier-grade NAT messing up your analytics from mobile carriers in the near future. T-Mobile and Verizon are pushing IPv6 hard because of address space depletion.
You can, but this opens up way more flexibility in how you run those services.
Think about something like Docker, where each app may be running web/db/queues/mail/etc. With v6, each Docker app/service could have its own IP so you can run on the default ports. You could even create a subnet per app to segment them off easily.
Being able to live without NAT, and giving all the countries that ran out of IPv4 addresses the same status most western countries enjoy, aka having enough IP addresses for every user (or even your IoT devices, or anything in your household requiring internet access). I am not arguing it is a good thing or a bad thing though.
1.) Allocate and associate an IPv6 block to the VPC.
2.) Configure internet gateway and routes in the VPC.
3.) Update security groups to include IPv6 in the VPC.
4.) Confirm current instance types support IPv6.
5.) Assign IPv6 address to instances.
6.) Perform manual network setup on each instance if not using DHCPv6.
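The API-driven parts of those steps can be sketched with boto3. All resource IDs and the subnet prefix below are placeholders, and this glosses over error handling; steps 4 and 6 happen outside the EC2 API.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder resource IDs for an existing VPC and its pieces.
VPC, SUBNET, RTB, IGW, SG, ENI = (
    "vpc-...", "subnet-...", "rtb-...", "igw-...", "sg-...", "eni-...")

# 1) Associate an Amazon-provided IPv6 block with the VPC, then give
#    the subnet a /64 out of it (example prefix shown).
ec2.associate_vpc_cidr_block(VpcId=VPC, AmazonProvidedIpv6CidrBlock=True)
ec2.associate_subnet_cidr_block(SubnetId=SUBNET,
                                Ipv6CidrBlock="2001:db8:1234:1::/64")

# 2) Route IPv6 traffic out through the internet gateway.
ec2.create_route(RouteTableId=RTB, DestinationIpv6CidrBlock="::/0",
                 GatewayId=IGW)

# 3) Open the security group to IPv6 traffic (here: SSH from anywhere).
ec2.authorize_security_group_ingress(
    GroupId=SG,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
                    "Ipv6Ranges": [{"CidrIpv6": "::/0"}]}])

# 5) Assign an IPv6 address to an instance's network interface.
#    (Step 4, checking instance-type support, and step 6, in-guest
#    network setup, can't be done through these API calls.)
ec2.assign_ipv6_addresses(NetworkInterfaceId=ENI, Ipv6AddressCount=1)
```

Each call maps to one of the numbered steps, which shows how much manual wiring the AWS model currently requires.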
This complexity (lack of abstraction) is why I prefer Google Cloud Platform. While GCP currently does not support IPv6, when they do support it, I am willing to bet they will roll it out as a turn-key button click.
I certainly hope that a click of a button doesn't log into my instance, adjust network configuration files, reload the network stack, and permit all ipv6 traffic in...
Easy there... I was not advocating any OS/instance-level changes by Google. It is just that GCP seems to abstract away concepts like internet gateways and NAT instances, perhaps making the switch to IPv6 on GCP easier than on AWS.
AWS used to as well. However, that doesn't work if you're an enterprise moving legacy systems up to the cloud that expect certain network addressing schemes.
Amazon.com itself didn't move to AWS until VPC supported the bring-your-own-address primitives. Not everyone can do a clean-slate rewrite of their systems to bring them to the cloud; Amazon recognizing this is going to keep them dominating the market.
However, most users of the internet were not IPv6-ready until very recently. It is simply a supply-and-demand question: nobody wanted to invest in IPv6 infrastructure first, and replacing networking gear to support IPv6 is a very pricey operation too. I remember having pretty bad issues with dual stack in software. One example: the Python resolver library defaulted to IPv6 resolution first for a domain name, and the NXDOMAIN response to the AAAA request invalidated the name cache on the servers. When we changed this behaviour to default to IPv4, we saved an insane amount of CPU time. There are lots of issues like this in a cloud environment, though most people are never going to face them.
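If you hit the same kind of dual-stack resolver churn, one blunt workaround is to ask for A records only so no AAAA query is issued at all. A minimal sketch using the stdlib (`resolve_v4_only` is just an illustrative helper name):

```python
import socket

def resolve_v4_only(host: str, port: int) -> list[str]:
    """Resolve a name to IPv4 addresses only, skipping the AAAA lookup
    entirely by pinning the address family to AF_INET."""
    return [info[4][0] for info in
            socket.getaddrinfo(host, port,
                               family=socket.AF_INET,
                               type=socket.SOCK_STREAM)]

print(resolve_v4_only("localhost", 80))
```

The trade-off is obvious: hosts that are reachable only over IPv6 become invisible, so this is a stopgap, not a fix.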
While not a perfect solution, you can use CloudFlare in front of your app/site for full IPv6 support without having to deal with AAAA records and IPv6 support from your origin.
I think Google internally uses IPv6 for desktop computers though. I thought they would be faster with the IPv6 rollout given how much they are investing in SDNs.
I think they would make it public if they believe they are scale ready. From my experience working there years ago, one of the first requirements was to be able to scale to any kind of requirements.
I had IPv6 from TWC, then for a month or so before the merger it went MIA, and then about a month after the merger it was back. For some reason I get a more reliable path to my IPv6 gateway than IPv4, so I missed it quite a bit.
Spectrum/TWC has had IPv6 via DHCPv6 with prefix delegation for a few years in Louisville, but only with a select set of modems. A Juniper DOCSIS 3 wouldn't work for me while a Motorola had no trouble.