It's funny to read this article and thread while doing exactly what everyone suggests avoiding: building Kubernetes on bare (virtual) metal for a team with a few programmers and no dedicated DevOps or SRE roles.
The reason I'm doing it is that our business owner thinks we need scalability and high availability, and we have legal obligations to keep our data inside the country. We don't have any managed Kubernetes offerings inside our country; the best cloud option I've found is a hosting provider with an OpenStack API, and that's what I'm building on. I thought really hard about going with just Docker Swarm, but that tech seems to be dying, so we'd rather invest in learning Kubernetes.
Honestly, so far I've spent a few weeks just learning Kubernetes and a few days writing Terraform and Ansible scripts, and my cluster seems to work well enough. I haven't touched the storage part yet, though; it's just kubeadm-installed Kubernetes with an OpenStack load balancer, Calico networking, and the nginx ingress controller. I suspect the hard part will come with storage.
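For anyone wondering what that looks like, here's a rough sketch of a kubeadm config for this kind of setup, assuming the out-of-tree OpenStack cloud-controller-manager handles the load balancers. The endpoint, version, and CIDR are placeholders, not my actual values:

    # Sketch only: an external cloud provider so the OpenStack
    # cloud-controller-manager can manage load balancers, plus
    # Calico's default pod CIDR. All values are placeholders.
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: v1.28.0
    controlPlaneEndpoint: "lb.k8s.example.internal:6443"  # assumed LB/VIP
    networking:
      podSubnet: "192.168.0.0/16"    # Calico's default pod CIDR
    controllerManager:
      extraArgs:
        cloud-provider: external     # cloud controllers run out-of-tree
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        cloud-provider: external     # kubelet defers to the external CCM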
Worst thing is: everyone talks about how hard it is to run Kubernetes on bare metal, yet nobody talks about what exactly the issues are and how to avoid them.
For some projects there's no way around it: you have to build Kubernetes on bare metal or virtual machines. I've faced the same issue. The project called for Kubernetes; we can debate that requirement, but it was specified in the contract. Some projects simply require you to build Kubernetes on-prem, mostly for legal or political reasons.
I do question the logic of picking Kubernetes for scalability in those projects, though. To make that work, you end up with a lot of excess capacity, unless you can scale down one workload while scaling up another, e.g. scaling down a website or API at night and using the capacity for batch jobs.
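A minimal sketch of that day/night pattern, assuming a "web" Deployment and a "deploy-scaler" ServiceAccount with RBAC permission to scale it (both names are hypothetical); a mirrored CronJob would scale back up in the morning:

    # Hypothetical: scale the "web" Deployment down every night so
    # batch jobs can use the freed capacity.
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: scale-web-down
    spec:
      schedule: "0 22 * * *"                    # 22:00 every day
      jobTemplate:
        spec:
          template:
            spec:
              serviceAccountName: deploy-scaler # assumed RBAC setup
              restartPolicy: OnFailure
              containers:
              - name: kubectl
                image: bitnami/kubectl:latest
                command: ["kubectl", "scale", "deployment/web", "--replicas=2"]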
Honestly, building the cluster isn't my main concern; that's pretty easy, and I managed to write Ansible code for deploying a cluster in less than a day. My main concern is debugging and maintenance long term. Reading about companies that spin up a new cluster because it's easier than figuring out why the old one broke is an indication that Kubernetes might not be completely ready for production use.
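To give a sense of the scale involved, the bootstrap step of such a playbook can be as small as this sketch (host group and config path are made up, and this isn't my actual code):

    # Sketch of a kubeadm bootstrap task in Ansible.
    - hosts: control_plane[0]
      become: true
      tasks:
        - name: Initialize the first control-plane node
          ansible.builtin.command:
            cmd: kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
            creates: /etc/kubernetes/admin.conf  # skip if already initialized
        - name: Generate a join command for the other nodes
          ansible.builtin.command: kubeadm token create --print-join-command
          register: join_command
          changed_when: false

The hard part isn't this; it's everything that happens after day one.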
Mixed use is, arguably, where k8s shines the most.
As in, you have a pool of hardware, and you want to optimize the use of its capacity. Mix interactive workloads with batch jobs. Maybe even prod and non-prod. Fit as much as possible on the smallest number of nodes, etc.
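One way to sketch that: give batch work a low PriorityClass so the scheduler can preempt it whenever interactive or prod pods need the capacity. Names and values here are illustrative:

    # Illustrative priority classes for mixing workloads on shared nodes.
    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: batch-low
    value: 1000
    description: "Preemptible batch work; evicted when higher-priority pods arrive."
    ---
    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: interactive-high
    value: 100000
    description: "Interactive/prod services; may preempt batch-low pods."

Pods opt in via priorityClassName in their spec, and accurate resource requests are what actually let the scheduler bin-pack the nodes.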
Sadly what I see is people running separate clusters for prod, non-prod, staging or whatever you call it. I have never seen anyone use on-prem Kubernetes to optimize hardware usage.
K8s allows scaling up and down. Having two different environments for prod and staging shouldn't cost much extra if each can scale down on low usage. And there's a benefit in keeping them apart: if your staging services accidentally consume a huge amount of resources due to one of your own bugs, e.g. memory usage blowing up, you may easily pull production down with it, because both share the same resource and monthly cost boundaries. The little extra money to run two clusters may be worth it!
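For completeness, the usual in-cluster guard against that blow-up is a hard ResourceQuota on the staging namespace (values illustrative); whether you trust a quota enough to skip the second cluster is exactly the trade-off:

    # Illustrative hard cap on a shared-cluster staging namespace.
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: staging-cap
      namespace: staging
    spec:
      hard:
        requests.cpu: "8"
        requests.memory: 16Gi
        limits.cpu: "16"
        limits.memory: 32Gi
        pods: "50"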