> The first pattern is having multiple Load Balancers (per VPC); this is often the result of using several readily available CloudFormation templates / Terraform modules, or somehow using Kubernetes ingress controllers that create a Load Balancer for every service. This is fixed by not doing that! A single Load Balancer can handle many URLs and services.
This is the definition of cloud bloat. The fact there are tons of systems abusing that kind of architecture probably justifies charging for IPv4.
I think it's an impedance mismatch between the feature people want -- "logical load balancers" -- and the feature they're offered: "physical load balancers."
How nice would it be if creating a bunch of load balancers actually just added config profiles to a single physical load balancer while keeping them truly isolated? Right now it's really annoying because load balancer config is global state, and everyone has to either be a kind neighbor when adding themselves to it or manage it top-down.
I do believe the AWS Load Balancer Controller on Kubernetes allows for sharing a single "physical" load balancer.
You have to set a load-balancer-name annotation https://kubernetes-sigs.github.io/aws-load-balancer-controll... to tie everything together to one load balancer. The downside is that a few other annotations have to have the same value across your ingresses, but once you work around that, you're good to go.
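For what it's worth, the controller also has an IngressGroup feature for exactly this: every Ingress that declares the same `alb.ingress.kubernetes.io/group.name` gets merged onto one ALB. A sketch of two services sharing a load balancer (service names, hostnames, and the group name here are made up for illustration):

```yaml
# Two Ingress resources that share one ALB via the AWS Load Balancer
# Controller's IngressGroup feature: both declare the same group.name,
# so the controller merges their rules onto a single load balancer.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-a                  # hypothetical service
  annotations:
    alb.ingress.kubernetes.io/group.name: shared-misc      # same on every ingress in the group
    alb.ingress.kubernetes.io/scheme: internet-facing      # ALB-level: must agree across the group
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: a.example.com          # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-a
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-b                  # hypothetical service
  annotations:
    alb.ingress.kubernetes.io/group.name: shared-misc
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: b.example.com          # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-b
                port:
                  number: 80
```

ALB-level annotations like `scheme` are the "other annotations" that have to agree across every ingress in the group, since they describe the one shared load balancer rather than any individual ingress.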
I couldn't find it easily specified in the docs. This is a common use case, and part of why I avoid EKS for HTTP workloads: I have tiny services I just want to make available, and I don't want another full ELB sitting there for each one. It isn't primarily a cost thing; it's that now I have another significant resource to manage. I want all of these things on a misc LB.