> A new blog post shows you how to use Elastic Load Balancers and NAT Gateways for ingress and egress traffic, while avoiding the use of a public IPv4 address for each instance that you launch.
It would be nice if this came with reasonably priced NAT gateways. The current pricing is outrageous.
Not to mention the absurd fact that accessing (IPv4) AWS APIs from a private subnet requires paying for either a NAT gateway or an interface endpoint (we got bitten by sending a ton of Kinesis traffic through a NAT gateway once)
This is one thing Google Cloud does well - traffic to Google services bypasses NAT gateway, even over IPv4.
I was curious how they do this, so I set up a service on Google Cloud Run that just echo'd the user's public IP address. When curl'd over IPv4, it said I was coming from a unique local (i.e. private) IPv6 address. The private IPv4 address of my server was embedded in the address, along with some other random-looking bits that probably identified my VPC somehow. So they must have been doing some sort of stateless IPv4 to IPv6 translation behind the scenes.
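The general idea can be sketched with RFC 6052-style stateless embedding: drop the 32-bit IPv4 address into the low bits of an IPv6 prefix, so translation in either direction needs no per-flow state. Google's actual scheme isn't public; the well-known NAT64 prefix 64:ff9b::/96 is used here purely for illustration.

```python
import ipaddress

# RFC 6052-style stateless embedding: the IPv4 address occupies the low
# 32 bits of a /96 IPv6 prefix. (Illustrative only; Google presumably
# uses its own prefix plus extra VPC-identifying bits, as observed above.)
PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def embed(v4: str) -> ipaddress.IPv6Address:
    return ipaddress.IPv6Address(
        int(PREFIX.network_address) | int(ipaddress.IPv4Address(v4))
    )

def extract(v6: ipaddress.IPv6Address) -> ipaddress.IPv4Address:
    # The reverse mapping is just masking off the low 32 bits: stateless.
    return ipaddress.IPv4Address(int(v6) & 0xFFFF_FFFF)

addr = embed("10.128.0.7")       # a private VM address
print(addr)                      # 64:ff9b::a80:7
print(extract(addr))             # 10.128.0.7
```

Because both directions are pure bit arithmetic, any translator box can handle any packet with no shared connection table.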
It was a clever solution that takes advantage of the fact that all of Google's API endpoints are dual-stack, even though (at the time) they didn't support IPv6 on customer VMs. The problem AWS currently has is not all of their internal endpoints are dual-stack, so even using IPv6 can't save you from cloud NAT costs when accessing AWS services.
Our network is completely software defined, so we just fake it to the VM and make it look like it's talking right to the service, and do all the routing via magic.
Honestly, I really like that the AWS implementation is not magic. AWS is the only one of the big 3 cloud providers where I can reasonably assume I get what it says on the lid, and that it works with the pieces it advertises working with (whereas other cloud providers tend to be more nebulous in their documentation).
GCP especially takes a lot more trial and error building systems that compose a bunch of different primitives. That the API is awful doesn't help either.
I agree with this, having built large dev platforms on both. GCP in my experience takes 2-5x the engineering effort, to deal with "GCPisms" and the terrible documentation. AWS is simpler and does what it says most of the time.
Having quite a bit of experience with AWS and Azure, and only recently learning GCP, it's very clear that Google got some of GCP's core cloud engineering concepts exactly right.
Although unfortunately they will never reach the size of AWS, or maybe Azure (it's hard to tell Azure's market size, as they don't disclose it).
I know Google’s load balancers use BGP. So a load balancer will have a single IP address, but you don’t talk directly to that IP. Google’s servers take over as traffic is being routed.
AWS didn't start with "VPC", and people who still had access to the much-easier-to-conceptualize EC2 Classic only got forced off recently; Amazon VPC wasn't actually launched publicly until after Google Cloud.
Another dead simple solution would be if AWS would provide us a simple subdomain (such as myapp.xxxx.aws-hosting.com), no need to meddle with IPs at all in that case. Google Cloud already does this with xyz.appspot.com subdomains, same with Github Pages as they provide you xyz.github.io subdomain for your app.
I wonder if someone at AWS noticed that the interface endpoint pricing was offensive for accessing S3 and therefore created the free “VPC Endpoint for Amazon S3.”
I would find it rather surprising if the actual cost to Amazon of connecting a VPC to S3 were substantially lower than the cost of connecting a VPC to any other AWS service.
Yeah, the endpoints bother me. I get charging for IPv4 space, but they shouldn't charge you for calling their APIs, especially since it's one ENI per endpoint; I have a few VPCs where half the allocated addresses are used by endpoints (the old trade-off between multi-AZ reliability and the cost of allocating redundancy).
AWS NAT gateway is $0.045 per hour plus $0.045 per GB. The hourly fee seems mostly okay - for largish users, one or two per region is fine.
$0.045 per GB is nuts. That’s $20.25/hour or $14580/mo for 1 Gbps. One can buy a cheap gadget using very little power that can NAT 1 Gbps at line rate for maybe $200 (being generous). One can buy a perfectly nice low power server that can NAT 10Gbps line rate for $1k with some compute to spare. One can operate one of these systems, complete with a rack and far more power than needed, plus the Internet connection, for a lot less money than $14580/mo. (Never mind that your $14580 doesn’t actually cover the egress fee on AWS.)
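Spelling out that arithmetic (decimal GB and a 720-hour month assumed):

```python
# NAT gateway data-processing fee applied to a sustained 1 Gbps flow.
rate = 0.045                          # $/GB processed
gb_per_hour = 1e9 / 8 / 1e9 * 3600    # 1 Gbps = 0.125 GB/s -> 450 GB/hr
per_hour = gb_per_hour * rate
per_month = per_hour * 720            # 30-day month
print(round(per_hour, 2))             # 20.25  ($/hour)
print(round(per_month, 2))            # 14580.0 ($/month)
```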
A company with a couple full time employees could easily operate quite a few of these out of any normal datacenter, charge AWS-like fees, and make a killing, without breaking a sweat. But they wouldn’t get many clients because most datacenter customers already have a NAT-capable router and don’t need this service to begin with.
In other words, the OpEx associated with a service like this, including the sysadmin time, is simply not in the ballpark of what AWS charges.
Is that $0.045/GB for all data transferred through it, or just egress to the public internet? If it's the latter, that's half the price of normal EC2 instance egress to the public internet.
If it's the former... oh sweet jesus, what? Probably way cheaper to just run an a1.large or something with Linux on it, plus a very short shell script to set up NAT. That's assuming well more than half of the traffic going through it is ingress from the internet. If it's 50/50 ingress and egress, then it's basically the same pricing as NAT gateway.
> You also incur standard AWS data transfer charges for all data transferred via the NAT gateway.
Yes, the $0.045/GB “data processing” charge is in addition to the usual $0.09/GB egress charge. You are paying an effective $0.135/GB for all of your egress, in addition to the $0.045/hr just to keep the NAT gateway running.
And yes, your ingress and even internal-to-AWS traffic is also billed at the $0.045/GB rate. (An example given on the aforementioned page is traffic from an EC2 instance to a same-region S3 bucket, which they note doesn’t generate an egress charge but does generate a NAT processing charge.) As far as I can tell, the only traffic which isn’t billed is traffic routed with internal VPC private IP addresses, which don’t hit the NAT gateway and thus aren’t counted.
There are highly paid AWS consultants who shave literal millions of dollars off of many companies' AWS bills by just setting up a cheap EC2 box to handle their NAT instead of using the built-in solution. Doing that instantly wipes out the ingress charges and effectively halves the egress charges, and it's probably a lower hourly cost than they're already paying: an a1.large is $0.051/hr on-demand but that immediately drops to just $0.032/hr with a 1 year no upfront reserved plan. If you're willing to pay upfront and/or sign a longer contract, you can get it as low as $0.019/hr.
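A rough cost model using only the prices quoted in this thread (the on-demand a1.large rate, $0.09/GB standard egress, $0.045/GB processing) shows where the savings come from. This ignores public IP charges and the fact that a single NAT instance lacks the managed gateway's built-in HA, so treat it as a sketch, not a quote:

```python
HOURS = 720  # one month

def nat_gateway(ingress_gb, egress_gb):
    # $0.045/hr + $0.045/GB processed in BOTH directions + $0.09/GB egress
    return 0.045 * HOURS + 0.045 * (ingress_gb + egress_gb) + 0.09 * egress_gb

def nat_instance(ingress_gb, egress_gb, hourly=0.051):
    # Self-managed EC2 NAT box: instance time + egress only, no processing fee
    return hourly * HOURS + 0.09 * egress_gb

# 10 TB in, 10 TB out per month:
print(round(nat_gateway(10_000, 10_000), 2))   # 1832.4
print(round(nat_instance(10_000, 10_000), 2))  # 936.72
```

At this (hypothetical) traffic mix the processing fee dominates, which is why dropping it roughly halves the bill.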
I say sorta because it's built on an old version of Amazon Linux and is headed towards EOL with no replacement except "go build your own" as you suggest.
Another thing: EC2 instances (VMs) have a "Source/Destination IP check" which makes them ignore any packets not intended for them. If you want an instance to do NAT, you need to turn this off.
You also have to do it in AWS if you don't want to use the NAT Gateway service and still desire reliability over and above the MTBF for an EC2 instance or AZ, or ever want to do anything requiring a reboot.
For example, rather than simply routing IP packets and then forgetting them, you need to statefully inspect every TCP segment and every supposedly connectionless UDP conversation, you need to maintain state for every live conversation, and you need to mitigate DOS with all those resources.
At that point, you might as well be running a Layer 7 Firewall or an Intrusion Protection System.
> At that point, you might as well be running a Layer 7 Firewall or an Intrusion Protection System.
If you go down this path consider using Transit Gateway so you can route multiple VPCs' traffic to a central security VPC in a region. I've done this with a Palo Alto VM and it seems to work well.
UDP is connectionless precisely so you can build novel stateful protocols on it. There’s no promise in UDP that you’ll be able to statelessly monitor it.
UDP is actually more expensive to NAT than TCP is. The reason is UDP fragmentation, which is my vote for the worst, and least forgivable, design error of TCP/IP.
Instead of putting fragmentation in L4 (like QUIC now does) and including a UDP header on every fragment of a datagram, UDP only includes the header in the first packet. When fragmentation happens, firewalls, NATs, and end-hosts have to buffer and coalesce IP packets based on IP IDs before the destination port can be identified. It's a real nuisance. A lot of "stateless" CGNAT implementations can't handle this, and you get very hard to debug issues when there are fragmentation and MTU mismatches.
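The headache can be made concrete in a few lines: a NAT looking at a non-first fragment has no port field to consult, so it must remember what it learned from fragment zero, keyed on the IP-layer identity. A toy model, with dicts standing in for packets:

```python
# Only the first IP fragment of a UDP datagram carries the UDP header.
# A NAT therefore has to keep per-datagram state keyed on (src, dst, ip_id)
# just to decide where later fragments belong.
pending = {}

def classify(frag):
    key = (frag["src"], frag["dst"], frag["ip_id"])
    if frag["offset"] == 0:
        pending[key] = frag["dst_port"]   # header present: learn the port
    return pending.get(key)               # later fragments reuse it; None if
                                          # fragment zero hasn't arrived yet

first = {"src": "1.2.3.4", "dst": "5.6.7.8", "ip_id": 42,
         "offset": 0, "dst_port": 53}
later = {"src": "1.2.3.4", "dst": "5.6.7.8", "ip_id": 42, "offset": 1480}
print(classify(first), classify(later))   # 53 53
```

Note the out-of-order case: if the later fragment arrives first, the box must either buffer it or drop it, which is exactly where "stateless" CGNAT implementations fall over.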
This is probably more accurately called IP fragmentation (since that is the layer where the fragmentation happens), and a lot of companies make it optional to support in networking gear. I'm surprised that you are using it or seeing it, because it is essentially obsolete today.
It has a legitimate purpose in old-timey systems which have bespoke MTUs on each link, but now the usual thing is to use 1500 bytes for WAN traffic, which is the generic Ethernet MTU, and reserve larger sizes for intra-datacenter communications.
There's a number of UDP protocols with payloads large enough to fragment. DNSSEC and EDNS0 in particular made it much more common, though the DNS flag day in 2020 partially undid some of the damage by getting folks to ratchet down their EDNS0 buffer sizes.
1500 is absolutely not a pervasively usable WAN MTU; you're going to need PMTUD if you're sending 1500-byte packets broadly. Plenty of WAN links won't tolerate it. If you don't want to deal with fragmentation at all, 576 is the minimum datagram size IPv4 guarantees every host will accept, but in practice it's exceptionally rare to see anything below about 1200 require fragmentation. And you can only control what you send, not what others are sending you.
One thing I've learned since joining Fly.io in 2020 is to laugh when people point to the 1500 MTU. You absolutely can't count on that: IPv6 cuts into it, and so does every additional layer of encapsulation on your path.
Yeah, you have to account for the headers in the 1500 byte MTU, which I suppose can be substantial if you have several VLAN tags, IPSec, IPv6, and a bunch of IP options. Presumably most of that encapsulation happens inside a datacenter, though, where you can use jumbo frames.
Even well-behaved unfragmented UDP should be more expensive to NAT because it doesn't have an end-of-stream "FIN" marker, meaning stateful middleboxes need to retain state for longer because they can only time out.
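A toy connection tracker makes that asymmetry concrete (the timeout value here is arbitrary; real middleboxes tune it per protocol):

```python
# TCP entries can be torn down on FIN/RST, but UDP "flows" have no close
# marker, so they can only expire on an idle timeout, holding state longer.
UDP_IDLE_TIMEOUT = 30.0

table = {}  # (proto, src, dst) -> last_seen timestamp

def seen(proto, src, dst, now, fin=False):
    key = (proto, src, dst)
    if proto == "tcp" and fin:
        table.pop(key, None)      # explicit end of stream: free state now
    else:
        table[key] = now

def expire(now):
    for key, last in list(table.items()):
        if key[0] == "udp" and now - last > UDP_IDLE_TIMEOUT:
            del table[key]        # only way UDP state ever goes away

seen("udp", "10.0.0.1:5000", "8.8.8.8:53", now=0.0)
seen("tcp", "10.0.0.1:6000", "1.1.1.1:443", now=0.0)
seen("tcp", "10.0.0.1:6000", "1.1.1.1:443", now=1.0, fin=True)  # gone at once
expire(now=60.0)                  # UDP entry finally times out
print(sorted(table))              # []
```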
TCP does not use IP fragmentation, and its IP packets are marked "don't fragment". TCP performs its own segmentation, and every packet gets a TCP header in its leading section. A NAT, firewall, or end-host can route the TCP packet at L4 as-is and does not need to correlate it with other packets.
Edited to extend: this is why TCP has a "Maximum Segment Size", and why Path MTU Discovery information has to be passed into the TCP state machine. It is TCP that takes responsibility for carving up the data into the packets, not IP.
One of the goals of UDP was to avoid needing this kind of state, which is why the IP layer handles fragmentation for it instead. This is allowed on a hop-by-hop basis, unless the DF bit is set; so when a "too big" packet gets to a node with a smaller MTU, it can just split it and send on the fragments. No PMTUD needed.
The design could have been for the fragmenting node to also add a UDP header as part of that process, but was not. It would have been a simple change at the time. It's had a lot of consequences since and is responsible for a decent amount of complexity in hardware and software packet pipelines.
Several other protocols solve this in a layering agnostic way by simply having a header length field. The header bytes can then be copied without any understanding of the format. This is even how IP's own ICMP protocol knows how much of an IP packet it should (at least) include in an error message so that the sender can know what triggered the error.
TCP, UDP, ICMP and IP were all designed contemporaneously; UDP fragmentation could easily have been specified differently. It's just an odd, regrettable quirk.
Also, if you get UDP completely right, do you need any other IP protocols? The whole point of UDP is programming directly to the datagram interface. Before IPv6 you could even disable the checksums.
MSS was also super annoying for me doing re-encapsulation of TCP packets! We wanted to do eBPF cut-through routing of TCP connections for WebRTC stuff, where proxy bounces would be problematic because connections need to live a long time. If you're shuttling packets around, you're going to eat into the MTU with your own headers. 99.9% of our TCP connections weren't cut through so we don't want to dial in new settings into VMs for that feature, so we did it in eBPF, and parsing/adjusting TCP headers in BPF C (pre-bounded loops!) wasn't fun.
Which is why game networking libraries put a lot of emphasis on NAT traversal, forcing NATs to recognise the "connection". And why game console manufacturers tell users to just forward all incoming traffic unmanaged by the NAT to the console.
This is missing the point, mostly; my own sites have supported IPv6 for going on a decade because it was fun to get it working. But that's a very different thing than supporting only IPv6.
It's best for an ISP to deploy IPv6 and CGNATv4 in parallel, so the NAT only needs to handle traffic for services that don't support IPv6 (e.g. news.ycombinator.com)
NAT and stateful firewalling are commonly bundled together (especially on home systems), but I would not go so far as to say "NAT has a stateful firewall".
I hear such takes all the time and it's really frustrating, usually in threads regarding IPv6; incidentally, it is usually programmers who think they understand everything about networks because they know how TCP operates.
In almost all NAT implementations, public-side ports are dynamically assigned, which implies that inbound connections aren't possible (unless port forwarding is explicitly configured).
Is that really conceptually so different from a stateful firewall allowing inbound packets only for established connections/flows?
"NATs are good because otherwise people wouldn't have any firewalls" is a tired take, yes, but I don't see the point being needlessly pedantic about the semantics of NAT vs. stateful firewalls when in this case, the effect is the same: No inbound packets without prior outbound packets (or a connection establishment for TCP).
$40/mo is outrageous? We spend thousands a month on AWS and drive most traffic through a single NAT gateway. It's rock solid and it "just works" without any fuss. Totally worth it.
yep, and they should. aws has never really been suited to the hobbyist. does it work for that? of course. is it most cost effective? absolutely not. is it cost effective for people who need the resources? yes.
I run a number of personal projects on AWS entirely on their serverless offerings and pay $0 outside of domain registration as I'm well within their free tiers. That seems pretty cost effective.
Yes, if you can abuse the free tiers, you can essentially run a small SaaS company for free. Once you scale past that point, you are on the hook for a (probably much too large) bill, when you could still be using the same $5 VPS.
You must be talking about renting resources that run 100% of the time. AWS rents us gpu instances by the second. We have to run sporadic jobs throughout the day that take 50 seconds to two hours. Depending on customer activity we might need to run 10 or more at once, or we might lie idle for an hour. The elastic economics are unbeatable.
so I have an HTTP endpoint which gets maybe 10 hits per day, and does some lightweight computations and records small amount of data.
Right now, this is done on AWS, with lambda + S3, and costs under $0.02/month.
Can you point me to something more cost effective than that? Don't forget I also need backup for data, automatic failover in case of machine failure or crash, and no maintenance (like OS upgrades) for 5+ years.
Stateless applications are far cheaper than stateful applications to host on AWS. Computing is cheap, object storage is where AWS make unreasonable profit and lock you in their platform.
That does not answer my question though... I am not spending $600,000 on hardware!
I have no doubt that there are plenty of cases when local hardware is cheaper, but gp said "There is no possible use case in no possible universe where AWS is cost effective."... and I claim there are many use cases where AWS is cheaper.
Which hosting companies are there that are SOC 2 compliant and 20 times more cost effective than AWS?
Enterprise workloads need compliance. AWS and GCP provide that. There are very few hosting companies out there that are better at security and compliance than those two.
Renting the same compute resources might cost you less but you are on the hook for maintenance and administration which can cost you more in the long run.
The issue tends to be that people do not actually stay on top of their spend- they claim to need less headcount but then spend more than a few salaries worth on their cloud spend.
They claim they do not need headcount but then spend the same headcount in infra people anyway, or finops people in the best case.
People have lost touch with how much compute actually costs, because it's billed little by little and claims to scale to zero, or "you only pay for what you use". Yet every installation I've ever seen has had a base cost more than twice the cost of the largest colo installation we would have needed.
It's not cost effective, because it's on average 11x more expensive than a fully managed colocation installation. Your packets don't care that you spent 11x more on half the performance.
And all those cloud compute instances are probably strangled for io if there is any real load.
Colocating your own equipment is going to give way better base performance. Compute is not just processor and memory, it's also dependent on network and disk i/o. Disk is often overlooked because modern disks are so fast, people don't even realize it's crippled in the cloud.
It really is crippled to an absurd degree. A basic RDS install with a gp3 volume will get 3k IOPS and 0.125 GB/s transfer if your volume is under 400 GB, or 12k IOPS and 0.5 GB/s transfer if it's larger. The monthly per-GB storage cost for RDS is the same as the capital costs to buy the disks in a mirrored setup. Meanwhile, if you bought the disks, you'd get over 100x the IOPS and 10x the transfer.
For a provisioned IOPS volume, you can get up to 256k IOPS (so still a fraction of a single drive) at a cost of $25600/month (plus per-GB storage costs). For that price, you could buy 8x of these: https://www.newegg.com/micron-30-72-tb-9400/p/N82E1682036315... giving you 240 TB of raw SSD storage.
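Back-of-envelope for those figures (the per-IOPS fee is inferred from the $25600 number quoted above; check current pricing before relying on it):

```python
# Provisioned-IOPS monthly fee vs. buying the drives outright.
provisioned_iops = 256_000
per_iops_month = 0.10                        # $/provisioned IOPS per month (inferred)
print(round(provisioned_iops * per_iops_month))  # 25600  $/month, excluding storage

drives, tb_per_drive = 8, 30.72              # the linked Micron 9400 drives
print(drives * tb_per_drive)                 # 245.76 TB of raw NVMe
```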
I ran into a SaaS company recently that had a guide for how to set up a white-label domain using Route 53 and Cloudfront for one of their services. The SaaS company charges for service bandwidth usage, and they host their infrastructure on AWS, so if you opt to follow their guide they get a fat margin bump in the form of avoiding an egress charge and you get to be double-charged for bandwidth. You've gotta love it.
If I follow what you're saying, I suppose my understanding could be wrong, but there's no "cloud transfer" required. It's just a matter of both the distribution and the configured origin being on AWS. If the origin doesn't direct traffic outside of the datacenter, AWS doesn't bill that as egress to whoever owns the origin. The Cloudfront distribution, on the other hand, will take the hit once it exceeds the free tier, because it's the AWS service distributing data to end-users. It has to make a request to the internal AWS service and then cache it, so the SaaS lambda or S3 bucket or EC2 instance or whatever they're using is none the wiser. It's just how the AWS billing mechanism works.
$40/month just to run it, but then $0.045 per GB data rates. The data rates are what is outrageous. NAT Gateways comprise a non-trivial portion of many customer's bills for this reason.
Exactly: it is $32 just to have it turned on 24/7, and then you pay additionally. Looked into it, as it is the only(?) way to get a dedicated IP for Lambda, which is a common use case and also explains why it is so costly. They lure you in with the free tier and then charge for all the necessities.
Amazon owns millions of IPv4s, which they purchased for probably less than $5 a pop. So it completely pays for itself in less than a year, and then it's just free cash flow.
I completely agree. It’s odd they would announce charging for dedicated IPv4 while not having a free shared egress solution (unless I’m misunderstanding).
I would expect them to reduce NAT pricing in the long run, but who knows.
I'm shocked this isn't a feature of a VPC out of the box (shared internet bound traffic). You should only need a NAT gateway if you want the traffic to come out of a single set of external IPs that you control.
Almost all of my use cases I could easily ride out to the internet through a shared pipe (apt updates and such) and don't care whatsoever what IP that exits the AWS network from, since I'm not applying firewall rules or anything.
I think that as a business and given the fact they are now charging for a previously free service (public IPs), offering a now paid service as free would nullify the reasons for doing what they are doing. They don't owe anyone anything for free.
Look I’m the last one that will bang the capitalism, consume, spend, drum, but nothing is ever free. You want a free nat box? Your managed database cluster just got a fraction of a fraction of a cent more expensive every hour. You would never get it for free even if they billed it as such. They wouldn’t be the biggest company in the world if they were in the habit of using their profits for your bills instead of the shareholder’s dividends. This is a business, not a family style steak house.
That's not anything new. They've been scarce for a decade and they've footed the bill this long, so why change it now, other than that someone noticed the opportunity.
How much does the NAT gateway cost? Quick search didn't turn up anything (and I don't care about this enough to spend more than a few seconds on it). You can turn a regular EC2 instance running Linux into a NAT box by giving it two network interfaces (hell, you can even do it with a single interface) and a few shell commands; I wonder if that's cheaper, even including the price of the public IPv4.
Edit: I see from another post that NAT gateway costs $0.045/hr + $0.045/GB of transfer. That seems... not terrible? An a1.large on EC2 is $0.051/hr + $0.09/GB transfer to the internet (which I assume this type of box would be doing a lot of).
100% agree; they need to offer steep reserved-instance-style discounts for NAT gateways. Deploying 3 NAT gateways (for HA, one in each availability zone) is $99/mo just for the instances.
You don't have to use AWS' appliance (the NAT GW) to do NAT. You can NAT your traffic yourself from a t2.micro Linux.
AWS used to maintain an AMI to do just that; nowadays you have to do it yourself, but it's honestly not much more than adding two or three iptables rules.
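For the curious, the whole recipe amounts to roughly the following. The interface name and instance ID are placeholders, and it's shown as a command list rather than run directly; the last line disables the EC2 source/destination check mentioned elsewhere in the thread, without which the instance silently drops forwarded packets:

```python
# Rough DIY Linux NAT-instance setup. "eth0" and "i-0123" are placeholder
# values; substitute your own interface and instance ID.
nat_setup = [
    "sysctl -w net.ipv4.ip_forward=1",                        # route between hosts
    "iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE",   # rewrite source addr
    "iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT",
    "aws ec2 modify-instance-attribute --instance-id i-0123 --no-source-dest-check",
]
print("\n".join(nat_setup))
```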
I find this trade-off to be exactly the reason why AWS is so good even for small startups. You can bootstrap something quickly, though it will be a tad expensive.
And if you need to bring your costs down later on, you start chasing quick wins like maintaining your own NAT gateway. The same applies to all managed services.
Maintaining your own OpenVPN VS AWS VPN.
Maintaining your own Postgres VS RDS.
etc
"Cheaper" is not the only dimension to consider. Using managed services is also faster, more reliable, and scales better. You cannot just get that for free.
You can stand up your own on top of a t3.micro or something if you don't care too much about HA (e.g. you just wanna be able to hit the internet when SSHed into your instances).
What kind of workloads require a lot of NAT gateway usage?
I think my team's use is kind of high, with 16 TB going through NAT last month. The bill for that came to ~$1300, which is higher than I'd like, but that's only about 1.5% of our AWS spend. Tbh I never really looked at the spend for NAT before, but this doesn't alarm me.
Last time we used GCP's NAT gateway it was constantly dropping SYN packets.
We had to revert to using External IPs on machines that talked to the wider internet.
AWS over the last decade has spent $ billions buying up ASN blocks.
I've never been one to use the term "rent seeking", but owning IPs is the ultimate rent-seeking cloud business. Domain names can change registries, but if you own the underlying IP being used (and there's a depleting supply of them), it's a great business to charge rents on.
Looking at it a different way, IPv4 addresses are scarce so it makes more economic sense to have fewer, central owners that can maximize usage, rather than millions of individuals owners, many or most of which would not necessarily be using them at any given time.
Putting a price on IP address usage again is a mechanism to prevent squatting/hoarding a scarce resource.
But if you don’t want to “rent” IP addresses from anyone, you can still find blocks for sale. Last time I checked (last year) class C blocks were going for $15k-$20k.
> makes more economic sense to have fewer, central owners that can maximize usage
What you have described is effectively a China-style ICP license[1]. Unless you are willing to give a big name cloud provider $x per month, you shouldn't be able to put a service on the internet?
"We" are not doing anything of the sort. 11 years into IPv6 and you still can't single home a network behind v6. Much like DNSSEC the purists refuse to admit that it has basically been a failure outside of very specific use cases.
You can't buy/sell/trade "ASN blocks". The only people handling "ASN blocks" are the 5 RIRs (APNIC, RIPE NCC, ARIN, AfriNIC and LACNIC) and IANA.
> owning IPs is the ultimate rent seeking cloud business
It also seems that your use of "rent seeking" doesn't match established use. It normally refers to people extracting money for things far beyond their actual value. The IPv4 market is working pretty well on a supply vs. demand price feedback loop, i.e. the prices are in fact just reflecting the scarcity of IPv4 addresses. The term "rent seeking" does not fit that situation.
> It also seems that your use of "rent seeking" doesn't match established use.
No, OP used it exactly correctly. It's the textbook definition.
> It normally refers to people extracting money for things far beyond their actual value.
No, it doesn't. The use was popularized in Wealth of Nations (yes, the original) and it refers to, as the name implies, renting out land.
I buy land. Once I've done that, I extract wealth from the economy while putting nothing new in. There's a finite amount of land.
This contrasts with investing in businesses (which allows them to buy capital, thereby generating further wealth), work, and other forms of income which generate wealth for the economy.
In broad strokes, rent-seeking behavior is unproductive, while work, investment, etc. are productive.
> It normally refers to people extracting money for things far beyond their actual value.
That's not what "rent-seeking" means at all.
Rent-seeking is extracting wealth from a system without creating anything. It's a term meant to differentiate profiting via productivity/adding value (eg. manufacturing a better product and outcompeting others) and profiting via extracting value from others without adding anything (eg. buying out all of the manufacturers of a product and leveraging your monopoly position to jack up prices).
Amazon haven't created any value here - they own enough of a stock of a scarce, in-demand resource that they can charge a great deal for it. It's the definition of rent-seeking.
No you can't, because you can't actually acquire an ASN *block* to begin with.
Which is the point of my comment. Only the RIRs handle blocks of ASNs. As a non-RIR entity you can get individual ASNs, or multiple individual ASNs, but not an ASN block.
Even already, I think you can get away with doing almost everything v6 with a much smaller number of ipv4s for legacy traffic. I say that but still largely use v4 for everything, so maybe I'm not one to talk.
There's a collective action problem around IPv4 vs IPv6. Talking about Azure/Microsoft/GitHub and its lack of IPv6 support is very much an interrelated problem. It's ridiculous to think of noting downsides/trade offs as just kvetching.
Because for better or worse, those services are transforming the output. Arguably in not a very valuable way but they are transforming it nonetheless. Whether or not you find that transformation useful doesn’t change the fact it is happening.
IPs are IPs no matter what way you are cutting it. There is no other universal way to address internet resources yet (as adoption is still slow on ipv6), so this is rent-seeking in the same way a toll road on public roadway that has existed for 20 years is rent-seeking.
Edit: furthermore, in both of your examples you can just go to another provider or not use those services. If you are locked in to AWS, you HAVE to pay this price.
IP addresses are supposed to be free! The RIRs are in the business of handing out addresses to whoever applies for them, if the applicant can show they have a reasonable use for the addresses. But as the IPv4 addresses eventually run out, AWS will then buy addresses directly from whoever has them. This is allowed, but a bit dubious, since if someone isn't using their addresses, it would make more sense to return them to the RIR to be reassigned. But if AWS owns all the addresses, and nobody can get any more, it makes sense for AWS to start charging for them. It would also make sense for AWS to halt and delay any IPv6 adoption, so we should watch for that.
Not true at all. You have to apply for an ASN, and you have to pay for registration and IPs separately, which needs to be renewed every year. Also, you can't sell to just anyone you want; you can transfer ownership, but it goes through ARIN.
In reality, nothing can be free. The cost just wasn't being paid initially, because at first there were quite a lot of addresses, and there wouldn't have been any quarrels.
I maintain that the world is being polluted because things are free. Imagine if every cubic inch of air, water and land were owned. You would not be allowed to pollute! You'd pay for your use of it!
Had AWS not gone around offering to beat any offered price for IPs, things might be a lot more reasonable right now. You can't complain that you had to pay a ton for a scarce resource when you were the ones throwing gasoline on the scarcity problem.
They even did backroom deals to steal large blocks of IP space, most notably from the HAM radio community.
This finally puts real pressure on software and services to work on IPv6 only. I wouldn't be surprised if within 1-2 release cycles lots of distributions suddenly update just fine with just IPv6, package managers can download packages over IPv6, lots of APIs gain solid and well-tested IPv6 support, etc.
Businesses and organizations are holding IPv6 back, not consumers. No one I talk to is prioritizing IPv6 migrations or spending money to upgrade gear that will support it. Maybe some net new stuff might get it, but for most businesses IPv4 is and will be the default, simply because they can't be bothered to do something different.
It’s worse than that: new software and hardware is being developed or rolled out right now that is incapable of working on an IPv6 network. Not just unable to use it, but actively incompatible — failing to run if other devices use IPv6!
This was an issue with Azure’s PostgreSQL service, which would fail if you deployed other unrelated IPv6 services in the same virtual network.
We need a guild of software engineering so that the people responsible for this can be summarily ejected from it.
The threat of professional exclusion is one of the big levers provided by such a guild. Given the way tech companies behave, why do you believe that this lever will be left in the hands of good people, and not taken over (like the rest of the internet)?
I think there are more developers opposed to the recent web attestation shenanigans than those for it (even those who might be coerced into being for it by their employer). A majority guild vote could stop everyone working on it and keep the web open.
Yes, and if you can hold and retain power over the guild then the power of professional exclusion exceeds the coercive power of any one employer. But when your threat model includes adversaries who are willing to subvert the entire open internet to get their way, how do you harden your guild against this attack?
Serious question, is there any enterprise gear made today which does not support IPv6? I have assumed that the natural hardware upgrade cycles made it so 99% of all active equipment could support the technology, even if it was not configured to do so.
It is not about the gear, it's about security people that force you to disable IPv6. "You do not have a valid technical or business reason to use it. And, as electricians say, a VISIBLE circuit break provides the best assurance that this circuit will not kill you. Lack of IPv6, as opposed to just firewalling it, is the equivalent of the visible circuit break. I would also enable a whitelist of permitted ethertypes on all switches, and not include IPv6 there."
And let me quote from CIS SUSE Linux Enterprise 15 Benchmark v1.1.1 page 191: "3.1.1 Disable IPv6 (Automated). Profile Applicability: Level 2 - Server, Level 2 - Workstation."
Having an Allowlist of ethertype (ARP/IPv4/IPv6) is an extremely good idea IMO, as Windows and Linux are extremely permissive in what they accept on L2: https://blog.champtar.fr/VLAN0_LLC_SNAP/
That door alarm thing that has a Windows XP workstation VM the facilities team touches once a month probably doesn't support IPv6.
Repeat that scenario across multiple BUs and multiple locations and no leader wants to commit to doing that kind of due diligence. What's wrong with our current IP?
Man in the middle certificate re-signing deep packet inspection firewalls are notorious for not supporting IPv6. Most everything else has switched, but many network admins fear IPv6 and don't want to have to learn something new.
"Made today"? Probably not. "Still in operation today"? Definitely.
My company makes what is essentially an enterprise IoT device. I'd guesstimate 10% of networks with our hardware in them have no ipv6 support at all. And these are businesses that are on the more tech savvy side (I would assume, since they're ordering our stuff).
Just because it's a 128-bit number doesn't mean it should be difficult to remember; the standard notation goes a long way toward that. 2001:db8::cafe:f00d and fc00:bad:beef::1 aren't what I'd call the epitome of "can't remember"
Mind that real-world global addresses often have four groups of almost-random at the beginning, but it's usually not terrible to commit to memory.
Have you tried? I knew my v6 block (and static addresses within that, which is like two: router and server) back when I used v6 ten years ago, same as I know my v4 address, without really actively learning either
For sure this is a self-hoster thing, where you have pets not cattle, but so is memorizing your v4 address(es)
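If you want a feel for how the compression rules work, Python's stdlib `ipaddress` module (using a documentation-prefix example address) shows both the short and the full form:

```python
import ipaddress

# RFC 5952 compression: leading zeros in each group drop, and the longest
# run of all-zero groups collapses to a single "::".
addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:cafe:f00d")
print(addr.compressed)  # 2001:db8::cafe:f00d
print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:cafe:f00d
```

Once you've seen a few addresses round-trip like this, the "::" stops looking like line noise.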
damn, imagine if the buddies of the company that owns all of these IPs operated that system. The entire internet would be in their control. Wild dream.
Plenty of people own ipv4 addresses (I mean, all of them are owned, but most by companies) and have been using the same addresses for a very long time, you start to remember some of them by heart after that :)
IP addresses should never have had letters and double colons in them.
What's Google's IPv4 DNS? 8.8.8.8.
What SHOULD Google's IPv6 DNS be? 8.8.8.8.8.8.
What SHOULD Google's IPv8 DNS be? 8.8.8.8.8.8.8.8.
What IS Google's IPv6 DNS? 2001::some::shit::I::::can't::remember//::h0ff::affblah
This is why I'm still stuck on IPv4. I'm a walking DNS server for all the instances I own, I can hammer out IPs when DNS fails me and that's a very useful feature, especially when idiot Wi-Fi hotspots try to DNS poison you when you're trying to SSH into something and the poisoned IPs stay cached even after you've accepted the stupid TOS.
If it is the use of colons instead of dots that prevents you from learning the addresses, then I'm not sure you can be helped.
But that discussion aside, if you apply the IPv4 naming scheme to the 128-bit IPv6 addresses, Google's DNS would be 8.8.8.8.8.8.8.8.8.8.8.8.8.8.8.8.
I would never be confident that I put in the right number of 8's in that case. And I have a feeling that you being overwhelmed has more to do with the total increase of possibilities, than with hexadecimal notation.
I guess it shows that IPv6 was designed for computers, not humans, because we need a vast number of IP addresses. And that is fine by me.
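For fun, here's roughly what Google's actual IPv6 DNS (2001:4860:4860::8888) would look like in a hypothetical 16-octet dotted-decimal notation; arguably not more memorable than the hex form:

```python
import ipaddress

# Render all 16 bytes of an IPv6 address in IPv4-style dotted decimal.
addr = ipaddress.IPv6Address("2001:4860:4860::8888")
print(".".join(str(b) for b in addr.packed))
# 32.1.72.96.72.96.0.0.0.0.0.0.0.0.136.136
```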
Colons are inherently more frightening than dots, especially double colons, which seems like some badly written C++ class escaped from gaol. Dots feel friendly and cute, I would pet an IPv4 address.
> then I'm not sure you can be helped
Sure, and the rest of the planet hasn't adopted IPv6 either. It's a horrible UX.
If I'm allowed to argue using unrelated topics and feelings, here you go ;)
> Colons are inherently more frightening than dots
I highly disagree. In traditional text usage, dots end a sentence. They are terminal. A symbol of stasis. Like death. Contrary to that, a colon always refers to something that comes after: it transcends itself, and wakes my curiosity. It is a symbol of growth and learning.
Apple has been demanding apps support IPv6 only for years now. They reject your app if it fails under NAT64. The end user side is mostly a solved problem.
For iOS maybe. Most of those applications are also using Apple's networking libraries and are effectively required to be on Apple's infinite software update treadmill to continue to be listed, keeping them young and hip in perpetuity. This is the upside to that treadmill, things are up to date or just stop working.
But I don't think that's representative. "Or just stop working" isn't a valid alternative to the rest of the world. Outside of mobile ecosystems and maybe web development most things aren't on these 6 to 12 month update cycles. It would be absolutely unreasonable to tell a hospital that every piece of hardware and software and MRI machine in their building has to be upgraded every 2 years or it's positively geriatric and do you even `pacman -Syyu` bro?
There's a whole world of things that haven't been, and may never be, transitioned. Useful things like utility control computers and even people's 10-year-old, still perfectly functional and supported desktops. Heck, my "end user" newly-installed fibre ISP doesn't support IPv6! And their previous DSL installation to the same address did! So much for "solved problem" :(
A hospital's MRI machine doesn't need an internet connection. IPv4 only intranets are fine and we are never going to get rid of them.
But anything that connects to the internet needs to be updated regularly, if only for security and vulnerability reasons. If you have a 10-year-old functional and supported desktop, it most likely supports being IPv6 only just fine. The typical 10-year-old desktop came from the factory with Windows 8 and could be upgraded to Windows 10 (since it's supported). It even gets relatively new features such as IPv6 RDNSS allowing DHCP-less deployments.
As an individual/hobbyist, it's a much bigger disincentive.
For students and the like, it might actually be prohibitive.
The problem is it's really the first group that needs to drive the remaining IPv6 adoption by replacing their middleware boxes etc. and they're the group who are unlikely to care at this price.
Interesting. It's only possible to terminate 2002::/16 using a public IPv4 address, so if you're behind a NAT router, then the router itself must be running 6to4.
Aha! Thanks for the hint: I recently had to reconfigure my router from factory settings. The IPv6 configuration, sure enough, was kicked into 6to4 mode. I set it to "Auto Config" and now I've got end-to-end IPv6 connectivity with, look Ma, no NAT!
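(A quick sketch of why the router needs a public IPv4 address: a 6to4 address is just the 2002::/16 prefix with the public IPv4 address packed into the next 32 bits, which `ipaddress` can pull back out.)

```python
import ipaddress

# 2002:c000:0204::/48 embeds the IPv4 address 0xc0000204 = 192.0.2.4,
# so the tunnel endpoint is recoverable from the IPv6 address alone.
addr = ipaddress.IPv6Address("2002:c000:0204::1")
print(addr.sixtofour)  # 192.0.2.4
```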
Dang has mentioned before that he doesn't use any scripts for his HN posts. He likely refrains because it would go against the intended spirit of HN - interesting discussion and genuine interaction.
There's scripts that are advanced and do moderation, say, but you could have an easy keyboard macro that would take a URL and replace it with URL, title, and number of comments.
So I have a tiny personal website hosted on ec2. Right now the DNS points to the server's public IPv4 address. But I don't really want to pay $40+/year for an IPv4 for my personal project.
Does anyone have experience switching a small personal site to IPv6 only in 2023?
I'm guessing the vast majority of my (North American/European-based) friends and visitors can probably connect just fine to an IPv6 address. I wish I knew what percentage it is.
I guess I could add an AAAA record and check what percentage of traffic actually uses it.
I understand that Movistar, the largest Spanish ISP, is currently deploying IPv6 in beta at the moment. I expect that will trickle down to the various resellers of Movistar's network shortly after. Hopefully that will get that 98% down in the near future. :(
Sigh, so basically it's impossible to switch without shredding an already tiny audience. I'm sure it won't be a nice UX either to have a "can't connect to this IP" error in someone's browser.
IPv6 has been around for so long now, I'm disappointed it doesn't have a little bit higher adoption.
They have had a free tier since they launched over a decade ago. I think they’ve found a way to monetize that traffic or at least the data they collect on the sites they proxy because it’s survived so long.
I think it's the same model as free antivirus. Free customers provide a lot of data to analyze and detect threats, which translates into increased value of the product to the paying customers.
Also gives you a lot of traffic which you can use to test new deployments without disrupting paying customers.
It's a chicken and egg problem: as long as sites are available through ip4, ISPs have no incentive to provide ip6, and since ISPs often don't provide ip6, sites can't go ip6-only. One possible solution would be to provide both and throttle ip4 traffic, then better speed can provide incentive to upgrade to ip6.
I'm not sure, but I believe I was on Windows 10 at the time. Shortly after this, I checked the appropriate boxes on my router (UDM-PRO) and my home network now supports IPv6. Passed all the tests on that website at least.
The client shouldn't even attempt an AAAA lookup unless it has an IPv6 stack available. Without one, it will look up an A record and get back an empty (NODATA) response, which the browser usually shows as "IP address could not be found". If the client does attempt IPv6 without connectivity, the kernel will reject the address family when it tries to connect and you'll get something like "network is unreachable". Some clients will also fall back from AAAA to A on error, landing on the same empty-response error as above.
I wrote this a few years ago, but I feel like putting a dedicated IPv4 like 66.66.66.66 in the A record would inform the web browser or other software that the website is only reachable via IPv6, and a more informative error message could be displayed.
This would require a proper RFC of course, with support from IANA and web browsers.
How about removing the public IP and receiving connection from cloudfront? Or have it hosted in apprunner. Then you cname your domain to the services' domain, and skip the cost.
Throwing the VPC behind cloudfront is probably the best course of action, if your site is static I'd recommend looking into S3 + Cloudfront for hosting it. It's basically free, and great if your site is mostly static. I run a few scheduled jobs on Lambda to pull some data for my site and it comes out at basically $0 every month.
So I used to use DigitalOcean for around the same intro price point, but after a while I realized that I could pay $22/year for a t4g.nano ec2 instance instead of $72 for the cheap DigitalOcean VPS. I guess in the end, the $22/year was too good to be true and the DO/Linode pricing effectively bundles in the price of the IPv4 address.
The only barrier for me to go IPv6-only is those VPS that are provided with a single /128 IPv6, and I do not know of a service that would offer IPv6 tunneling other than HE, that requires an IPv4 endpoint. The day I get a full /48 or /64 with my VPSes, I'm ready to drop IPv4.
Does your VPS assign you multiple ipv4 addresses? Otherwise seems like feature parity.
I use ipv6 everywhere, but I get annoyed when some features are missing.
For example, OVH won't let me transfer an IPv6 prefix like they do for IPv4. I thought I could just migrate my VMs to another box, but one of them had lots of clients with their own DNS/domains, so it was a huge pain to update.
I still don't get why we can't just expand IPv4 into IPv5 by adding some new blocks to the front.
So instead of 192.0.0.1 it becomes 0.0.0.0.192.0.0.1
All existing addresses work, you simply append zeroes to any address which is too short for the new standard. Any old timey software still works as long as you use a router between the two systems with an old timey address.
This would give us as many addresses as we want without any changes or downsides. So why no do?
IP is not a text format (like HTTP); it's a binary format where each field of the IPv4 header has an exactly defined offset and length. The source IP address sits at bit offset 96 with a length of 32 bits, and the destination address follows immediately with the same length. Changing anything results in a new protocol definition, et voilà, that's IPv6.
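To make that concrete, here's a sketch of pulling the addresses out of a raw IPv4 header with `struct`; the offsets are fixed by the spec, so there's nowhere to put extra octets:

```python
import struct

# Minimal 20-byte IPv4 header: the source address always occupies
# bytes 12-15 (bit offset 96) and the destination bytes 16-19.
# Every router and middlebox on the path hard-codes these offsets.
header = bytes(12) + bytes([192, 0, 2, 1]) + bytes([198, 51, 100, 7])
src, dst = struct.unpack_from("!4s4s", header, 12)
print(".".join(map(str, src)), "->", ".".join(map(str, dst)))
# 192.0.2.1 -> 198.51.100.7
```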
This comment is just HN at its best. Chef's kiss. The Internet Engineering Task Force, a group of experts in the field, spent years and countless hours creating a new standard, but do not let that stop us from napkin-sketching up a new solution ourselves, I mean how smart can these experts really be?
I wish I had the ability to downvote, so I could downvote this comment into oblivion. What a shit attitude to have. Asking stupid questions is how you learn. You stop asking questions, you stop learning.
I guess because it's not simply a text address; it's a protocol where a specific number of bytes in the packet (4 in this case) are dedicated to the address, so you can't just modify this.
I never understood why AWS has so much appeal when it comes to cloud infrastructure. Why not cheaper clouds? Is it about scalability, reliability, speed, modernity of equipment, customer support, UI, speed of networks?
Let's say the requirement is to build a platform like Twitter with 100mln daily active users. Wouldn't a cloud like Hetzner, with AWS/GCP/Azure as failover, survive this?
I worked with AWS as a developer for a long time, but in pretty much every case 10 was more than enough.
Would be very grateful if someone could share some insight into it!
Yes [1], I was looking into Hetzner, which has an API to create machines programmatically. I assume bigger customers could rent colocated racks, which would satisfy the requirements. I don't have any experience with it though.
As someone who recently wanted to try out IPv6 to learn more about it, I can say that I welcome anything that might help improve the sorry state of IPv6 adoption. This is a hostile and destructive move, I mean obviously, it's Amazon after all, but one can at least hope that as IPv4 increasingly becomes a cost, it could drive interest to the alternative that has been left out in the cold for like two decades.
Most end-users don't care what they're using as long as they can access the Internet, and since our other option to IPv6 adoption is living in a CGNAT hellscape that destroys the whole peer-to-peer idea of the Internet, then for the love of all that is holy start moving. Personally I think nation states need to take a bigger responsibility here and create incentives to move the market, because it's one of those things where the negative effects aren't obvious until they're overwhelming.
I knew I’d get the counter points here on HN, but I’d argue we’re probably the exception here. AWS can be really cheap, but it is easy for things to go wrong. Bandwidth, commonly unmetered at places like OVH or Hetzner, can cost a fortune at AWS if you get attacked. And while AWS will refund you once or twice, after that you’re either left scared or on the hook eventually.
Absolutely! It just happens to be a good fit for me :)
I use very little bandwidth and processing with the vast majority of my projects. In the event that I do need heavy lifting for a couple of hours, it still tends to be a pretty minimal cost.
Now for sustained heavy loads/bandwidth… I definitely would look elsewhere for hobby projects.
Edit: and I agree with your point about attacks. I have pretty aggressive monitoring set up around billing.
AWS has the easy to use Lightsail[1] VPS offer with cheapest product at $3.5/mo but they'll likely increase these prices as well, since there's an IPv4 address included.
Counterpoint: My hobby projects all use AWS because that's what I am familiar with, and they have the cheapest prices. I also reuse a lot of resources like a database to further save costs.
Some companies have been allocating a bunch of pointless IPv4 addresses, and I think that's why AWS is doing this. A friend of mine reduced the number of IPv4 addresses his employer uses by 80% (100+ IPs) in less than a week. That's a huge saving, but those IPs should never have been allocated to begin with.
Depends how many IPs you're using. If you're using 10, who cares; if you're using 100, I dunno. If it's 1,000 or more, that's real money you probably shouldn't be pissing away. (OTOH, a lot of cloud spend is pissing away money, so what's another $45k/year)
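Back-of-the-envelope, assuming the announced $0.005 per public IPv4 per hour:

```python
rate = 0.005            # USD per public IPv4 per hour (announced rate)
hours = 24 * 365
for n in (10, 100, 1000):
    print(f"{n:>5} IPs: ${n * rate * hours:,.0f}/year")
```

That works out to about $44k/year for 1,000 addresses, in the ballpark of the $45k figure above.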
But if you have 100 backend servers that mostly communicate on the internal network/VPC and need their IPv4 mostly for updates, it seems easy to justify standing up a proxy and reconfiguring your template. At least if your engineers aren't in Silicon Valley and thus don't cost you $400/h.
You don't have to break even on implementation immediately. You get billed every single year, so if two dudes can solve this in 3 months, you break even in 3 years and every year after that you've saved money.
In most companies that would worry me. If there isn't anything more impactful to work on than a project that breaks even in 3 years, the company is likely overstaffed, and I'm likely on the chopping block when things turn south.
Why :-( ? There's no way MIT was using more than a tiny fraction of that /8; now it's actually being put to real use, and MIT probably got some money out of it. Everybody wins.
MIT was using it. Not efficiently, but MIT sold addresses that were in use at the time due to what appeared to be IT ineptitude.
It was also shortsighted. It was a massive resource, MIT presumably sold it for under $200M (I assume far under), and now AWS plans to rent the addresses at a rate that will be around $600M per year if they manage to rent them all.
It's not hidden, they put it right up on their blog https://aws.amazon.com/blogs/aws/new-aws-public-ipv4-address... the opening line of which is "We are introducing a new charge for public IPv4 addresses" and when it starts and what the cost is. I assume like every other AWS charge it's broken out in great detail on their billing statements and even have APIs to query costs. Usually they send an email with these changes too so if they haven't I assume they will. It's a regular old price hike but it's not a hidden one.
Secondly since "the cost to acquire a single public IPv4 address has risen more than 300% over the past 5 years", there's no accompanying decrease in server costs that would be "reasonable" to account for this. Charging for the IP itself makes total sense since that's the cost they're accounting for. If it were packed into the instance costs, then instances without a public IP would be paying for it too. This incentivises you to do exactly what they want you to do: use fewer public IPs where you don't need them. This is way more reasonable than an across-the-board instance cost bump which would be a hidden price hike. This is a bridge toll that covers the cost of the bridge by its users instead of raising taxes on everyone.
I guess you're wanting to pay the same and just distribute the cost between the IP and the instance differently? And hey me too, I love not being charged more. But they want to account for their costs without eating into their margin and this is how they're going about it. You don't have to like it; I sure don't. You can wish AWS would just keep eating the cost for you; me too! But I don't think "hidden" or "unreasonable" is accurate.
My bank charged me a new fee advertised in new fine print on a web page I never saw. I changed banks. You can hide things in plain sight. No one has visited every page on Amazon.com.
There has been a decrease in server costs. Prices of computers continue to fall. AWS hosting has become (relatively) more expensive over time.
That would not catch every public IP address that is actually unused, because it can be attached to an interface and yet not be needed or actually used by any client. But I don't agree with GP that this is an important reason for the price increase. They are increasing prices simply because costs have increased.
Anything that an IP address can be attached to is already accumulating a charge, just by existing and running. EC2, NAT gateway, ELB, etc. What's "actually unused" then? Minimum amount of traffic? I don't think it's in Amazon's purview to make those judgement calls.
What I meant by unused is that there might not be a client that ever connects to that IP address, so the public IP address itself might not be used even if its attached to a resource.
> I don't think it's in Amazon's purview to make those judgement calls.
I already said I don't agree with GP that this is a motive for Amazon.
If they're separating out functionality from that service and charging for it, sure. Customers who don't pay extra are getting less service than they used to for the same money they used to pay.
> Customers who don't pay extra are getting less service than they used to for the same money they used to pay
Sure, yeah. That's how price increases work. Nobody's arguing that it's not a price increase. But if your delivered pizza's costs are fuel+ingredients and the price of fuel goes up, well, the whole price goes up or you have to give on the amount of pizza. The price of the ingredients didn't go down, so yeah you're just going to have to pay more or get less pizza. Sorry.
You can quibble on the pizzeria's margin I guess: AWS could just eat the increased price themselves, and probably have been until now. But apparently they don't want to so they're raising the price to compensate in frankly the most reasonable way possible. AWS has insane pricing for many of its services, especially bandwidth, but this isn't one of them.
Hot take. IPv6 adoption is never going to hit 100% because SNI routing covers most of the cases people actually need. If UDP functionality is necessary QUIC will be used. I wish this wasn't the case. It would be nice if the software was good enough that more people were enabled to self host.
In practice the Internet does not deliver IP packets. Only UDP or TCP is universally supported. Some firewalls, security appliances, filters, and proxies limit end to end connectivity to just TCP 443. Everything over IP has turned into everything over HTTP.
Not a hot take at all. We don't need 100% IPv6 adoption because we can't control what people do in their private networks. If a load balancer supports IPv6 that's good enough, even if the load balancer talks to the backend over IPv4.
Hetzner cloud has been charging for public IPv4 addresses for a while. It makes sense. If you have lots of servers, many of them probably don't need a public IPv4 address.
And not very good: together with auto scaling groups, it pulls off the remarkable feat of being unable to do an instance refresh without downtime. We've put countless hours into that; it seems like a simple problem, but the forums say it's not solved.
Inflation has been aberrant over the past 3 years in some areas, i.e., food, from profit-price spirals but there is not widespread hyperinflation.
No one reputable is predicting the USD will crash imminently.
US T-bills lost a notch of rating due to long-term declining governance tied to the cozy relationship and revolving door between Wall St. and federal regulators. This is a form of corruption that undermines the economy and strategic power.
Well next time you get that big customer that scales your traffic by 500%, enjoy sitting around waiting for Dell to ship you a bunch of servers or whatever while we just change an integer in a repo and hit terraform apply.