Hacker News

If you use a good managed k8s offering, the complexities are not that great. GKE with Autopilot is a good option. In that case you don't need to know much more than how to write the yaml for a deployment. I've shown developers at all levels how to do that, it's not a barrier.
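For context, "the yaml for a deployment" really can be this small; here's a minimal sketch (the name, labels, image, and port are all placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # placeholder name
spec:
  replicas: 2                 # run two copies of the pod
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

Apply it with `kubectl apply -f deployment.yaml` and Autopilot handles node provisioning for you.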


+1. GKE Autopilot plus sticking to the core ~4 Kubernetes object kinds (Deployment, Service, Ingress) was a really easy way for us to get started with K8s (and in many cases can carry you a really long way).
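For reference, the Service and Ingress halves of that core set are similarly small. A sketch (the names and hostname are placeholders, and this assumes the Deployment's pods carry the `app: my-app` label):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app          # routes to pods with this label
  ports:
    - port: 80           # port the Service exposes
      targetPort: 8080   # port the container listens on
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: app.example.com    # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```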


Though easy to run, if you have multiple workloads with varying resource requirements, there will be a lot of wasted CPU and RAM, simply because Autopilot enforces minimum CPU requests and minimum CPU-to-RAM ratios.
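To illustrate: at the time of writing, Autopilot's general-purpose class enforces a minimum CPU request of roughly 250m per pod and constrains the CPU:memory ratio (check the current GKE docs, as these limits change). So a tiny sidecar gets rounded up no matter what you ask for:

```yaml
# You might only need ~50m CPU / 64Mi RAM for a small sidecar,
# but Autopilot rounds the requests up to its enforced floors:
resources:
  requests:
    cpu: 250m      # assumed Autopilot minimum at time of writing
    memory: 512Mi  # rounded up to satisfy the CPU:memory ratio
```

Multiply that overhead across many small workloads and the waste adds up.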


Agree. I think they are working towards providing Istio (a Gateway to replace Ingress, TLS for internal communication, canary deployments, traffic shadowing, etc.).

If they can spin up GPU or high memory nodes on demand with Autopilot that would be amazing.


Why would you run it in k8s as opposed to, say, ECS? Honest question here, why not run it on something simpler that requires less new concepts and achieves the same results?


Because ECS sucks. It's slow and unergonomic, like a badly designed k8s with fewer features.

k8s is nicer to use and simpler if you ignore the complex bits. Plus it's the standard.


My experience of ECS is the opposite of yours. It integrates nicely with the AWS ecosystem and was substantially easier to use and to educate others on. I would not hesitate to use it again, on either Fargate or BYO EC2. I'll acknowledge scheduling is not quite as fast as Nomad's, but I never found it 'slow'.


Personally, I'd rather create a k8s deployment than an ECS task, but I can see your point. If all you want is an AWS-integrated experience, then it makes some sense that ECS is just simpler overall out of the box.

I don't think the delta to make k8s integrated is that much work with EKS, but the ability to mutate the entire infrastructure if and when you do scale wins out for me. I think the complexity, most of which you can ignore, is worth the flexibility.

Either way, since k8s landed, AWS itself has started improving too.


I've used both, and I would still prefer ECS/Fargate for building a fairly independent application, and k8s for building a long-term platform.


For a typical deployment, ECS isn't simpler than a fully managed k8s system, and doesn't have fewer new concepts. The wealth of concepts in k8s only comes into play when you're doing more advanced things that ECS doesn't have abstractions for anyway.

In ECS you have abstractions like task definitions, tasks, and services, all of which are specific to ECS, and so are new concepts for someone learning it originally. In Kubernetes a typical web app or service deployment uses a Deployment, a Service, and an Ingress. It isn't any harder to learn to use than ECS, and I find the k8s abstractions better designed anyway.
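To illustrate the parallel concepts, a minimal ECS task definition sketch looks like this (the family name and image are placeholders); it plays roughly the role of a k8s Deployment's pod template:

```json
{
  "family": "my-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "registry.example.com/my-app:1.0",
      "portMappings": [{ "containerPort": 8080 }]
    }
  ]
}
```

You then create an ECS service to keep N copies of that task running, much as a Deployment keeps N replicas of a pod template running. The concept count is comparable; the vocabulary just isn't transferable outside AWS.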

If you're already using ECS, and are happy with it, then there may be no strong reason to switch to k8s. But for anyone deciding which to use for the first time, I'd strongly advise against ECS, for several reasons.

One is that k8s has become an industry standard, and you can deploy k8s applications on many different clouds as well as non-cloud environments. As such, learning k8s is a much more transferable skill. K8s is an open source project run by the Cloud Native Computing Foundation, with many major and minor contributors. You can easily install a k8s distribution on your own machine using one of the small "edge" distributions like k3s or microk8s.

While in theory, some of the above is true for ECS, in practice it just doesn't have anything like the momentum of k8s, and afaict there aren't many people deploying ECS on other clouds or onprem.

Because of these kinds of differences, all in all I don't think there's much of a contest here. It's not so much that ECS is bad, but rather that k8s is technically excellent, and an industry standard backed by many companies, with significant momentum and an enormous ecosystem.


A compelling reason is the large ecosystem of tooling that runs on k8s. Practically anything you want to do has a well maintained open source project ready to go.


For example? What could you do on k8s that you couldn't do on native aws?


Take a look at Kubeflow.org for an example. There are several reasons a tool like that targets Kubernetes and not native AWS. One of the benefits of k8s is how portable and non-vendor-specific it is. Basically, it's become a standard platform that you can target complex applications to, without becoming tied to particular vendors, and with the ability to easily deploy in many different environments.


To be clear, I'm not claiming you can't do these things on native AWS, but rather that there's a wide choice of high-quality projects ready to go that target k8s:

  - Countless Helm charts
  - Development tools like Telepresence
  - Many GUIs / terminal UIs
  - CI/CD tools like Argo
  - Logging and monitoring tools
  - Chaos engineering tools
  - Security and compliance tools
  - Service meshes / request tracing
  - Operators for self-healing services
  - Resource provisioning
  - etc...


There is also a new generation of platforms emerging that run on top of Kubernetes, like Qovery.



