Ask HN: Is Kubernetes the only alternative for being cloud agnostic?
37 points by taylodl on Feb 22, 2022 | 37 comments
My company's management wants us to be "cloud agnostic", without having a firm definition of what that actually means. Lots of people in my company think "cloud agnostic" means you "containerize" your solutions and run them on Kubernetes; then you can pick up and move your application from cloud to cloud as desired. That seems extremely limited to me. To me, that thinking treats the cloud as "someone else's computer" rather than as an application platform in and of itself.

What does HN think about this? Is there a way of looking at "cloud agnostic" that still allows you to look at the cloud as an application platform and not just as a platform for running Kubernetes? Is there another way to think about this?




It's up to you to decide where the "cursor" is on the "cloud agnostic spectrum".

As an example, let's say you deploy your app as containers orchestrated with systemd on an Amazon EC2 instance. Switching providers will require you to rewrite your whole deployment procedure, but not the containerized application code. Your application is cloud agnostic, but not your deployment pipeline.

Now let's say instead you have a Helm chart to deploy to k8s: you can move from Google GKE to Azure AKS to Amazon EKS to bare-metal k8s without modifying your Helm chart. Your package is cloud agnostic, but not your infrastructure.

If your application requires a PostgreSQL database, you can provide it with Amazon RDS, or by self-hosting it with KubeDB in a k8s cluster, or with any other solution. To your application, the switch amounts to changing the `DATABASE_URL` environment variable and running a migration script from the old DB to the new one. There is no true lock-in in that specific case.
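A minimal sketch of that idea (the names, image, and credentials are made up): the only provider-specific part of this Deployment is the value of `DATABASE_URL`.

  # Hypothetical Deployment fragment: swap DATABASE_URL between a managed
  # database (e.g. RDS) and an in-cluster Postgres without touching the app.
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: my-app
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: my-app
    template:
      metadata:
        labels:
          app: my-app
      spec:
        containers:
          - name: my-app
            image: registry.example.com/my-app:1.0.0
            env:
              - name: DATABASE_URL
                # Amazon RDS:
                # value: postgres://app:secret@my-db.abc123.eu-west-1.rds.amazonaws.com:5432/app
                # Self-hosted Postgres in the cluster (e.g. via KubeDB):
                value: postgres://app:secret@postgres.default.svc.cluster.local:5432/app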

In other words:

  - yes, using Kubernetes can facilitate the deployment of your app on your infra in a "cloud agnostic" way
  - no, using Kubernetes does not mean your whole infra is "cloud agnostic"
  - it's ok if parts of your infra are not "cloud agnostic"; that does not mean you're "vendor locked"


If you bootstrap/manage your cluster with cluster-api, the cluster provisioning process itself can be cloud agnostic too

https://cluster-api.sigs.k8s.io/user/quick-start.html#initia...


The short answer is YES.

K8s is the closest you are going to get to putting your entire application on a USB stick and plugging it into any computer to run. It is currently the de facto platform for modern applications. It's missing a lot of stuff, but it's a good way of packaging things, and with CRDs you can bolt on the other things you need and make them portable as well.

Whenever the majority of the industry agrees on an abstraction level, we are able to make lots of short-term improvements. I say short term because obviously platform hegemony is often a bad thing, but it frees us up to focus on one area of the stack. So if you can build a great app with k8s and move it between clouds, you have sufficient protection against lock-in to a provider. When someone improves on k8s they will create a mostly clean interface so you don't have to do a ton of work to port your app.

Look at Docker as an example. Everyone thought that was the greatest thing in the world, but once it gave everyone a clean abstraction layer, a ton of other container runtimes were set up to slide/shim in, and now the runtime layer is being thoroughly commoditized. The value for most people and businesses is at the top of the stack.


I've been in an environment like this, and while I understand being reluctant to lock oneself into a relationship that will be hard to renegotiate later, it is self-defeating to try to abstract away everything your cloud provider, erm, provides.

Here's what makes sense to me: 1. Design your system as components that communicate via interfaces you can implement easily anywhere. In other words, don't design your system to use a proprietary system to communicate. If you can't easily reimplement Dynamo or S3, don't make them an inextricable part of the interfaces between your services.

2. Use the cloud services that make implementation easier inside your services. The idea is to reduce the implementation effort/investment as much as you can. Sure, porting to something else could be costly later, but you are saving real effort now. When and if you do need to reimplement on another provider, attempt to do the same thing.

3. If your business doesn't really need or want cloud, don't do cloud. Doing cloud half-way is worse than a well-thought-out alternative.


I fail to see what anti-lock-in benefits either containers or Kubernetes provide. Kubernetes is either a cloud-provider-provisioned elastic service, or you are running it on an operating system, an operating system that already abstracts enough to provide anti-lock-in qualities for free, unless you are using some proprietary cloud OS your cloud provider built… which I've never heard of.

Kubernetes is a waste of computer resources if we are speaking about production deployments. What is your container distro? Alpine? Just run your app on Alpine! It's not that complicated. Networking will actually function properly and safely, and you don't need Kubernetes to have declarative infrastructure as code. Addicted to YAML? Ansible uses YAML, can reproduce your build steps, and doesn't leave detritus like an agent behind, nor waste any target resources.

Avoid containers unless you have a specific reason for using them. It’s a silly trend, IMO


> Kubernetes is a waste of computer resources

Yes, because orchestrating storage volumes, network policies, redundancy, secrets, and containers on a cluster with potentially heterogeneous nodes (some might have a GPU, some might not) is such a waste of resources.

> you don’t need Kubernetes to have declarative infrastructure as code

True.

> Addicted to yaml? Ansible uses YAML

Yes, because rollbacks in case of failure are so easy to do with Ansible. And who cares about an API that is becoming widespread (not to say standard), so you don't have to relearn everything when jumping from project to project?

> your build steps and doesn’t leave detritus like an agent nor waste any target resources

Only if you wrote your playbooks properly. Which is hard.

> Avoid containers unless you have a specific reason for using them. It’s a silly trend, IMO

Yes, because who cares about isolation, reproducibility, and avoiding "it works on my machine" when integrating your app on different OSes with various configurations?

> I fail to see what anti-lock-in benefits either containers or Kubernetes provide.

This sums up your whole unhelpful comment.


Striving for cloud agnosticism is the wrong goal. If you sell software that runs in the customer's cloud, then you will want to support multiple clouds.

Kubernetes will not get you there either. You still have cloud-specific work to stand it up, and more.

The short of it is that you cannot be cloud agnostic without significantly more code and complexity. Best to have a few configuration points. Even with Kubernetes, you will still have cloud-specific pieces. Without it, you can still get many of the benefits with Packer + Terraform.

A good question is what is the purported benefit of going cloud agnostic. If it's to switch clouds, that is not a good one. You won't do it often enough to get an ROI. What is the cost to develop and maintain this posture? (significant)


This. Cloud agnosticism is a questionable goal. I say go all in on at least one. Or maybe more. If you are "multi-cloud" while embracing the features of each, you don't need agnosticism to get leverage over the vendor (however much leverage that might be).


One thing to be careful of when going multi-cloud is the network egress fees between the two.


Sometimes it's not about switching clouds on a regular basis, but about the possibility of switching clouds. Otherwise you might as well give your cloud provider access to your bank account.


Let's talk about storage & load-balancing in the context of k8s. You are only truly cloud agnostic if you are not using your cloud provider's load balancer or storage replication tools.

Run k8s on bare metal with MetalLB and something like Rancher's Longhorn. Then, when you've got that working, move it into the cloud. Run it all on VMs and not cloud-managed k8s. Even then you probably still aren't quite there, because of IAM and S3 and so on.
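For illustration, the bare-metal MetalLB piece boils down to something like the following (a sketch using the legacy ConfigMap format; the address range is made up, and newer MetalLB releases configure this via CRDs instead):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    namespace: metallb-system
    name: config
  data:
    config: |
      address-pools:
        - name: default
          protocol: layer2
          addresses:
            - 192.168.1.240-192.168.1.250

With a pool like that, a Service of type LoadBalancer gets an address from your own range instead of a cloud provider's load balancer.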

In short being cloud agnostic is dumb. Just know that you will have some vendor lock-in.

Pick how much you can tolerate, and live with the rest.

IMHO, it's a stupid constraint. Just pick a provider and go all in.

If not, get the definition from management and skate as close to it as you can.


I don't disagree with you. My counter-argument has been that the cloud is a platform. There's an interpretation of the phrase "cloud agnostic" that essentially means you're agnostic to the OS, middleware, and application servers. We've never been that agnostic before; what makes us think we can be now?

That's why I'm thinking "cloud agnostic" has to mean something different. I don't see there being a problem with needing to re-work an application a little bit in order to deploy it to another cloud platform. The question is what kind of work would a reasonable person expect "re-work" to entail?


It depends on your company. For a startup or a small company doing basic things, where the IT is not where the innovation is, I would pick a cloud provider and go all in.

Otherwise I would rather be cloud agnostic when it makes sense, because the cloud provider you pick will have many bad / low-quality / expensive products that you will be forced to use, since using another cloud provider is too much effort. Being cloud agnostic allows you to use the best on the market.


There is a difference between being agnostic and multi-cloud. Setting up a WireGuard tunnel or Direct Connect between GCP and AWS is not hard, and that gives you the best of both...


Alright. I assumed they meant the same thing.


We have Ansible playbooks that we wrote leveraging some of the Ansible Galaxy stuff, and they do the same thing either on-prem or in the cloud. I would say we're pretty cloud native: we follow best practices, and deploying something to a cloud provider or internally is the same thing.

Our level of abstraction is containers; we don't use services that cloud providers sell, for example Lambda, as they are very cloud-provider specific. We prefer technologies like RabbitMQ, which can be deployed on any cloud or on-prem easily.
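A minimal sketch of what such a playbook could look like (the host group, module choice, and image tag are assumptions, not the actual playbooks described above):

  - name: Deploy RabbitMQ as a container
    hosts: message_brokers
    become: true
    tasks:
      - name: Start the RabbitMQ container
        community.docker.docker_container:
          name: rabbitmq
          image: rabbitmq:3-management
          restart_policy: always
          published_ports:
            - "5672:5672"    # AMQP
            - "15672:15672"  # management UI

The same play runs against an internal VMware VM or an EC2 instance; only the inventory changes.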



OKD is the open-source (Red Hat/IBM) OpenShift, which wraps Kubernetes; sort of like AWX: https://github.com/openshift/okd

> [OKD] Features: A fully automated distribution of Kubernetes on all major clouds and bare metal, OpenStack, and other virtualization providers;

Single-node OpenShift 4 requires at least 16GB of RAM and 8 vCPU : https://www.redhat.com/en/blog/single-node-openshift-manufac...

> Single node OpenShift offers both control and worker node capabilities in a single server and provides users with a consistent experience across the sites where OpenShift is deployed, regardless of the size of the deployment.

MicroShift requires 2 GB of RAM on e.g. an ARM64 Raspberry Pi 4 or similar SBC: https://microshift.io/docs/getting-started/


So container-first, deployed to a VM or PaaS?

Are you clustering the rabbit?


Deployed to VMs: internally VMware, outside whatever they have (e.g., EC2).


What are the requirements for your app? Which databases, email providers, caches, queue services, etc. does it use? I think you will be cloud agnostic if the solutions are decoupled from any proprietary cloud APIs. If you are coupled to DynamoDB, Google Cloud Datastore, or SNS, you are not cloud agnostic.

This is probably the minimum. What else does your solution include? Does it include hosting or infrastructure as code? Is it a managed service, something that clients run themselves, or a SaaS?

Generally, if you keep deployment simple you don't have to use Docker and Kubernetes: for example, if the app code is delivered as a monolithic .jar file or a native executable and you don't have a large number of 3rd-party services that need to be deployed.

Docker and Kubernetes do help you be cloud agnostic for a particular type of solution architecture: many microservices and potentially many other requirements like queues, caches, and 3rd-party (including open source) applications. If you had to create this distributed system on different providers it would be a lot of duplicated effort. In k8s there is still some: you most likely have to configure Docker registries, storage, load balancers, and ingresses for different providers, but configuration is better than reimplementation. And you still have to manage the cluster. So depending on your architecture and application requirements, you may find it much simpler to use Docker and Kubernetes, or you may find them a needless complication.
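As a hedged illustration of "configuration rather than reimplementation", the same StorageClass manifest can follow you between providers by swapping the provisioner and its parameters (the values below are examples only, not a recommendation):

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: fast
  provisioner: kubernetes.io/aws-ebs       # on EKS (in-tree driver)
  # provisioner: kubernetes.io/gce-pd      # on GKE
  # provisioner: kubernetes.io/azure-disk  # on AKS
  parameters:
    type: gp2   # AWS-specific; GCE would use e.g. type: pd-ssd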

There are also certain best practices that help with portability; the 12-factor methodology is a good guide.


I'm talking about a corporate portfolio of applications, but for my portfolio I can see DynamoDB and Lex as being services that I can't replicate in another cloud. We use DynamoDB for state management. I suppose we could use Redis since that's a service that would be available on more clouds? Or maybe we should abstract ourselves away from state management and create plug-ins for different cloud platforms?

For things like Lex, yeah, that's never going to be cleanly cross-platform. Your application architecture can isolate that technology from the rest of your application so you can cleanly port it if you have to.

Appreciate the nod to 12 factor. Thanks!


Take a look at darp.io if you really want to be cloud agnostic.


I'm guessing you mean dapr.io.


yes :D


I think it's often better to "pick your poison" in terms of cloud providers and commit to it, with a rough migration plan that you can execute if you have to. There'll be common patterns in your systems that can be repeated if a large-scale lift-and-shift has to happen for some reason. But it's never easy, and I've found different clouds to have their own idiosyncrasies that make migration difficult - larger migrations will inevitably take time, effort, and lots of planning.

If you're looking for alternatives, or something lighter weight than Kubernetes, I've used Nomad (plus Terraform and Ansible) and some shell scripts to get repeatable clusters deployed and migrated between cloud providers: https://www.nomadproject.io/


Terraform doesn't require k8s: https://github.com/hashicorp/terraform

OpenStack is somewhat EC2 API compatible, but the reverse is not true.


I suppose another option is to run everything in virtual machines; then you could move them to any cloud provider you like (or even run your own servers).

What's the reasoning for that requirement, though?

I understand the fear of vendor lock-in, but if AWS decides to put its prices up tomorrow, it's non-trivial to just move everything to Azure (for example). Sure, you have some container images which are portable in theory... but it's still a large undertaking.

I'd be interested to know if anybody has built with 'cloud agnostic' in mind, and actually needed to migrate to another provider.


A reasonable requirement which leads to "run everything in VMs" is the need to support on-premise deployments. Often data-security and compliance requirements can be handled most reasonably (or at all) by allowing the enterprise client to pick where to deploy a single-tenant copy of the service. For this, I think the most reasonable approach is to only require VMs and do all the configuration yourself (preferably in some scripted / automated manner).

Of course, in this case, (unless you install a private cloud), you forgo all the convenience and advantages of a cloud infrastructure (so one can argue that this wouldn't really count as 'cloud agnostic'), but since such on-premise deployments should only be required by big enterprise clients, they should have deep enough pockets to pay for it.


Moving your application from one type of infrastructure to another always involves effort, whether it's from a single hosted server to a set of load-balanced hosts, or from one cloud provider to another.

Unless you expect to migrate between cloud providers relatively often, and/or have built your application to be "cloud native" up-front (implying low cost to move to the cloud), it may be cheaper and swifter simply to perform a one-time rewrite and redeployment once you've chosen a desired deployment platform.


There are definitely a lot of ways to write "cloud agnostic" software, but it gets tricky as you start to use managed services from cloud providers.

However, Kubernetes is one of the only technologies that is available on pretty much every major cloud provider in the world as a managed service, with guarantees of compatibility:

https://www.cncf.io/certification/software-conformance/


I keep things cloud agnostic by using exclusively:

- Terraform

- Ansible

- Systemd (sometimes Docker)

This way the only thing that changes between providers (e.g., moving my whole infra from DigitalOcean to Linode) is the Terraform files.

Do I need a load balancer? That's nginx on a droplet (terraform apply + ansible-playbook). A database? That's MySQL on a droplet (same commands to deploy and provision).
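A minimal sketch of what the nginx piece could look like (the host group and package handling are assumptions): the play only talks to the OS, so moving providers only changes the Terraform files and the inventory.

  - name: Provision the load balancer
    hosts: load_balancers
    become: true
    tasks:
      - name: Install nginx
        ansible.builtin.package:
          name: nginx
          state: present
      - name: Enable and start nginx via systemd
        ansible.builtin.systemd:
          name: nginx
          enabled: true
          state: started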


> Do I need a load balancer? That’s nginx on a droplet

Multiple droplets with keepalived/haproxy on each one if you don't want the load balancer to be a Single Point Of Failure in your infra.

> A database? That’s mysql on a droplet

And another one for backups, making sure your first droplet has 2 network interfaces, one used by your application, the other one used by your backup procedure so that it does not create too much latency.

Also, potentially multiple droplets if you want your database to be highly available so that you avoid yet another SPOF in your infra.

What you propose is fine for small needs, but unfortunately it does not scale well. I'd still recommend keeping things simple and avoiding premature optimization, though.

In my case, my production infrastructure runs on a k8s cluster provided by DigitalOcean with one of their load balancers (provisioned by a k8s Service of type LoadBalancer). I have a droplet running a single-node k8s provided by k0s[0] for testing purposes.

I have a Terraform configuration to create the cluster and configure the DNS; I then developed klifter[1] (based on Ansible) and klander[2] to provision my cluster via GitHub Actions.

  [0] - https://k0sproject.io/
  [1] - https://klifter.datapio.co
  [2] - https://klander.datapio.co


Ansible can be used to deploy software on virtual machines. Please consider storing data in open source software such as Postgres, MongoDB, and MariaDB. Where and how you store your data with standard open source tools is important for being cloud vendor agnostic and avoiding cloud vendor lock-in.


Spinnaker supports multiple cloud providers and is self-hosted. VM machine images are built with Packer:

https://spinnaker.io/docs/setup/install/providers/


Why not use something like dapr.io, which already did the legwork to abstract cloud services for you?


I didn't know something like dapr.io existed! I take it you use it and vouch for it? What limitations have you found?



