For some projects there's no way around it: you have to build Kubernetes on bare metal or virtual machines. I've faced the same issue; the project called for Kubernetes, and while we can debate that requirement, it was specified in the contract. Some projects simply require you to build Kubernetes on-prem, mostly for legal or political reasons.
I do question the logic of picking Kubernetes for scalability in those projects, though. To make that work, you end up with a lot of excess capacity unless you can scale down one workload while scaling up another, e.g. scaling down a website or API at night and using the freed capacity for batch jobs.
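A minimal sketch of that day/night pattern, assuming hypothetical Deployment names (`web-api`, `nightly-batch`), illustrative replica counts, and a cluster reachable with `kubectl`; in practice you'd drive this from cron or a Kubernetes CronJob rather than by hand:

```shell
# Evening: scale the public-facing API down and hand the freed
# capacity to the batch workload (all names are illustrative).
kubectl scale deployment/web-api --replicas=2 -n prod
kubectl scale deployment/nightly-batch --replicas=20 -n batch

# Morning: reverse it before traffic picks up again.
kubectl scale deployment/nightly-batch --replicas=0 -n batch
kubectl scale deployment/web-api --replicas=10 -n prod
```

Tools like KEDA's cron scaler can express the same schedule declaratively, but the imperative version above shows the idea: the hardware is constant, only which workload owns it changes.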
Honestly, building the cluster isn't my main concern; that part is easy. I managed to write Ansible code for deploying a cluster in less than a day. My main concern is debugging and maintenance long term. Reading about companies that spin up a new cluster because it's easier than figuring out why the old one broke is an indication that Kubernetes might not be completely ready for production use.
Mixed use is, arguably, where k8s shines the most.
As in, you have a pool of hardware and you want to optimize use of its capacity: mix interactive jobs with batch jobs, maybe even prod and non-prod, and fit as much as possible onto the smallest number of nodes.
Sadly, what I see in practice is people running separate clusters for prod, non-prod, staging, or whatever you call it. I have never seen anyone use on-prem Kubernetes to optimize hardware usage.
K8s allows scaling up and down. Having two different environments for prod and staging shouldn't cost much extra if each can scale down during low usage. And there's a benefit to keeping them separate: if your staging services accidentally consume a huge amount of resources due to one of your own bugs, e.g. memory usage blowing up, you could easily pull down production with it by hitting shared resource or monthly cost limits. The little extra money to run two clusters may be worth it!
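If you do share one cluster instead, namespace ResourceQuotas are the standard way to contain that blast radius. A sketch, assuming a `staging` namespace; the specific limits are illustrative, not a recommendation:

```shell
# Cap what the staging namespace can consume in total, so a
# runaway bug there cannot starve production on the same cluster.
kubectl create namespace staging
kubectl create quota staging-quota \
  --hard=requests.cpu=8,requests.memory=16Gi,limits.cpu=16,limits.memory=32Gi \
  -n staging
```

This bounds compute consumption but not everything (shared control plane, etcd, and network are still common failure domains), which is part of why separate clusters remain a defensible choice.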