Reminder to everyone that unless you have a truly massive or complex system, you probably don’t need to run K8s, and will save yourself a ton of headaches by avoiding it in favor of a simpler system or a managed option.
Not sure why this disclaimer has to be posted every time there's a discussion on K8s. It is a tool, if you need to use it, do use it. If not, don't.
Although I would argue that if you have the right use-case (multiple containers you need to orchestrate, preferably across multiple machines) and you are not using it or a similar tool, you need to know what trade-offs you are making. There are lots of best practices and features you get out of the box that you would otherwise have to implement yourself.
You get:
* Deployments and updates (rolling if you so wish)
* Secret management
* Configuration management
* Health Checks
* Load balancing
* Resource limits
* Logging
And so on (not even going into stateful workloads here), but you get the picture. Whatever you don't get out of the box, you can easily add. Want Prometheus? That's an easy helm install away.
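To make the list above concrete, here's a minimal sketch of a Deployment manifest that gets you several of those freebies in one object (all names and the image are hypothetical):

```yaml
# Hypothetical Deployment: rolling updates, health checks,
# resource limits, and config/secret injection, all declaratively.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate          # rolling deploys and updates
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.2.3  # hypothetical image
          envFrom:
            - configMapRef:
                name: my-app-config    # configuration management
            - secretRef:
                name: my-app-secrets   # secret management
          livenessProbe:               # health checks
            httpGet:
              path: /healthz
              port: 8080
          resources:                   # resource limits
            limits:
              cpu: "500m"
              memory: 256Mi
```

Load balancing across the replicas then comes from pointing a Service at the `app: my-app` label, and logs from every replica are available via `kubectl logs`.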
Almost every system starts out 'simple'. The question is: is it going to _stay_ simple? If so, sure, you can docker run your container and forget about it.
This is news to me, do you have any links to k8s's support for blue-green deploys?
I've been holding off setting up a system like Spinnaker because I'd read it was coming (in the form of custom deployment strategies), but can't find anything current on the subject.
At the application level, the strategy consists of having two versions of the application deployed and updating the application's ingress after the deployment controller finishes rolling out the new deployment.
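A minimal sketch of that manual blue-green approach (names are hypothetical): run two Deployments side by side, labeled by color, and let the Service selector decide which one receives traffic.

```yaml
# The "blue" and "green" Deployments are defined elsewhere, each with
# a color label; this Service routes all traffic to one color at a time.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    color: blue    # flip to "green" once the green Deployment is healthy
  ports:
    - port: 80
      targetPort: 8080
```

The cutover is then a single selector patch, e.g. `kubectl patch service my-app -p '{"spec":{"selector":{"color":"green"}}}'`, and rolling back is the same patch in reverse.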
I like the idea of kubernetes, and (some time ago) worked through a basic tutorial. My main confusion is how to set up a development environment. Are there any guides you could suggest that cover basic workflows?
You can migrate docker deployments to K8s just by adding the parts you were missing, so when in doubt, it always makes sense to start with docker, docker-compose, and only consider K8s as an alternative to docker swarm.
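As a rough sketch of what that migration looks like (service names hypothetical): each compose service maps onto a small set of K8s objects.

```yaml
# A hypothetical docker-compose.yml service...
services:
  web:
    image: registry.example.com/web:1.0
    ports:
      - "8080:8080"
    environment:
      - DB_URL=postgres://db:5432/app
# ...roughly becomes, on K8s:
#   - a Deployment (the image and container spec)
#   - a Service (the published ports)
#   - a ConfigMap or Secret (the environment values)
```

Tools like Kompose (`kompose convert`) can do this translation mechanically, though the output usually wants some hand-editing.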
In practice, I found docker to be much more brittle than kubernetes, even when kubelet uses docker underneath. K8s minimizes the number of docker "features" it relies on, and unlike docker it is properly designed for deployment use out of the box (and generally has a cleaner architecture, so figuring out server failures tended to be easier for me with k8s than with the docker daemon).
I actually spent the past 3 days attempting to migrate my DIY “docker instances managed by systemd” setup to k8s, and found getting started to be a huge pain in the ass, eventually giving up when none of the CNIs seemed to work (my 3 physical hosts could ping each other and ping all the different service addresses, but containers couldn’t ping each other’s service addresses).
That said, if anyone REALLY wants to go the k8s route, it seems like starting with vanilla docker did allow me to get 75% of the work done before I needed to touch k8s itself :)
This false information really needs to die. k8s is a sane choice in many cases, not just hyper scale. Regular business apps benefit from rolling upgrades and load balancing between replicas. Managed k8s platforms like GKE make cluster management a breeze. Developer tooling such as Skaffold makes developing with k8s seamless. I expect k8s to continue growing and to soon take over much of the existing Cloud Foundry estate in F500 companies.
Running k8s is much harder and takes more time than just having a few VMs with docker on them. Many applications never need to scale. I really like k8s from a user perspective but it's no easy task to set up. And managed solutions don't always work for everyone (and aren't always cost efficient).
If you run the whole thing (app + k8s) yourself, then I do agree with you that it's more complex and you're likely better off without it.
But, k8s offers a very good way to split the responsibility of managing the infrastructure and managing the applications that run on top. Many people work in medium to big corporations that have a bunch of people that are in charge of managing the compute infrastructure.
I certainly prefer k8s as an API between me and the infrastructure, as opposed to filing tickets and/or using some ad-hoc and often in-house automation that lets me deploy my stuff in a way that is acceptable for the "infrastructure guys".
When I think about k8s complexity, I can only understand this argument if I'm the one dealing with the infrastructure required. If I have to install k8s in my servers, then I'll probably need to think hard about security, certificates, hardware failures, monitoring, alerting, etc. It's a lot of work.
However, if I use a managed k8s service, I probably don't have to think about any of that. I can focus on the metrics of my application, and not the cluster itself. At least, that's how I think it should work. I haven't used k8s in a while.
You don't need Istio if your application is simple. I think Istio makes more sense when your application makes heavy use of peer-to-peer pod connections. If you can get away with a simple queue as a bus, it should remain simple. I think!
I've enjoyed learning K8s for my not massive nor complex personal workload.
It's running and is more hands off than without it. I'm using a managed digital ocean cluster. I no longer have to worry about patching the OS as it's all handled for me. I also don't have to worry about having a server with a bunch of specialized packages installed, although I suppose only using containers could have gotten me that far.
I haven't had a ton of headaches. So, I guess people's experiences may differ.
It's interesting to me that K8s always draws out the "you probably don't need it" comments.
People say this every time anything related to k8s gets posted and I always wonder who it’s addressed to. The system doesn’t actually have to be that complex for kubernetes to be useful and kubernetes isn’t that hard to run. We’re in the process of switching from ecs to kubernetes and while it’s not an easy thing to make ready for production, it enables so much that wouldn’t even be possible with ecs.
To me this advice is only useful for tiny startups running a handful of web servers.
There is one more advantage vs ECS: there is no longer any lock-in. You get more capabilities, using standard solutions that work anywhere you want.
I think the definition of "massive or complex system" varies between developers with different backgrounds. In your opinion, what counts as a truly massive and complex system that may require Kubernetes?
Google Cloud Run, AWS Fargate, Google App Engine, Heroku etc. are comparable experiences to Kubernetes if you have the flexibility of (1) running on cloud (2) not having to configure host OS or rely on host GPUs etc.
Since you mentioned Cloud Run, I had one query.
I run docker compose locally for development. For prod, I just use a different docker compose file (with some values changed, for example the postgres database url etc.). I do this from a 5 USD per month droplet/vm. I can launch multiple services like this for my microservices platform. I can use a hosted database solution for another 15 USD per month, to get backups etc. Also I get a generous 1 TB bandwidth and predictable performance for my system as a whole.
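The dev/prod split described above is commonly done with compose's multi-file merge, where later files override values from earlier ones. A sketch, with hypothetical filenames and values:

```yaml
# docker-compose.prod.yml -- only the values that differ from the base file
services:
  app:
    environment:
      - DATABASE_URL=postgres://managed-db.example.com:5432/app  # hosted DB
    restart: always
```

Then `docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d` starts the stack with the production overrides applied, while plain `docker compose up` keeps using the local dev values.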
In the past I have used App Engine and been bitten by its update times (it took more than 20 minutes for a single code update; things may have improved since). Also, I needed to write deployment artifacts for each service.
Now, is there any benefit that Cloud Run (or any PaaS) could offer compared to this? Wouldn't it be easier to just stay with docker-compose and defer upgrading to Kubernetes until you turn profitable or the system becomes unmanageable on a single VM?
Doesn't that require Istio and a few other things as well? I wouldn't wish Istio configuration and maintenance on anyone but the larger shops/teams. You might not be technically locked in, but not everyone can afford to dedicate the attention to tuning and maintaining Istio.
Source: at mid-sized company who has Istio in the stack.
Knative just needs a gateway (LB), not a full mesh. Istio is the default option, but alternatively you can use Gloo, Ambassador, or something built specifically for Knative such as Kourier.
Vendor lock-in does not come from using GKE/EKS/AKS or derivatives.
What ends up happening is that your application consumes services (storage, analytics, etc.). You start using those services from the cloud provider, which makes sense as long as it is the right thing for your business (aligns w/ your cloud-native blueprint).
Kubernetes, by itself, is cloud-provider agnostic.
There is AWS Fargate for Kubernetes, AWS Elastic Kubernetes Service, DigitalOcean Kubernetes, Google Kubernetes Engine etc.
All of which are on the cloud and all of which don't require you to configure host OS etc. Some offer full control over node configuration e.g. EKS whilst others manage that for you e.g. Fargate.
Thanks for repeating what I already said. I responded to the question asked. OP asked if there are simpler alternatives. These are simpler alternatives to Kubernetes.
I don't know which part you're not getting, but it appears that this person's intention is not to learn Kubernetes or deal with nodes in the first place.
Perhaps installation and config are not as streamlined as with one of the many available k8s setup tools, but the tools themselves are much easier to understand and less broad than the k8s system.
Used nomad/consul/fabio at a previous job for running containers; it was very easy to adopt. Way fewer new concepts as well.
It's worth mentioning that I later chose GCP managed Kubernetes for a small cluster at my startup. I had to learn a few new things, but I'm not aware of any "nomad as a service" offerings, so I went with k8s on Google Cloud.
When was the last time you used it? I found it straightforward, without too many knobs to twist. ACLs and cert management can be a bit of a PITA the first time (especially the former, and it's something you really want to do before you're relying on Consul for mission-critical stuff), but that's about it. Still, those two things are mainly confusing due to poor documentation and aren't that complicated once you get up to speed, so it's mainly an issue the first time you do a new setup.