
Can small companies afford the complexities of k8s? I'm not yet convinced. Its pros are compelling, yet the learning curve is steep, which suggests a smaller talent pool.


I think so, as long as you're using a managed cluster on a cloud that has automatic handling of Ingress and LoadBalancer. My first production experience with it was at a start-up with 3 initial engineering staff. We already knew how to build Docker containers. A basic Helm chart was easy to create and push to the container registry. All we needed to do was set the kubeconfig and helm install v0.1. Even if you're not sure how it all works under the covers, your tiny team is pushing to the dev cluster several times a day, and your dev site is online (you can hide it behind Cloudflare Access for free). Extras like Cluster Autoscaling, Horizontal Pod Autoscaler, ExternalDNS and cert-manager are easy to add when you need them. Anyone can fumble-install Grafana/Prometheus/Loki and have metrics and logs. At that point you have elastic infrastructure, dashboards, alerts, TLS, LoadBalancers, DNS... How long would a small developer team take to do this without k8s? Imagine trying to figure out ECS, Fargate, Lambda, ACM, CloudWatch logs, alarms and metrics? And how much AWS-specific code is in every single one of your services to get it to work on native AWS?
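
To give a sense of what "easy to add" means in practice: the Horizontal Pod Autoscaler, for example, is only a few lines of YAML. A minimal sketch (the deployment name "web" and the CPU target are made up):

  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: web
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: web          # scale this existing Deployment
    minReplicas: 2
    maxReplicas: 10
    metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70   # add pods when average CPU goes above 70%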


> Imagine trying to figure out ECS, Fargate, Lambda, ACM, CloudWatch logs, alarms and metrics?

In case anyone is curious, we went that route.

- ECS + Fargate + CDK took one of us two full-time (8h/day) weeks for the initial setup. We've sprinkled a few more days here and there since then.

- Cloudwatch logs are "setup-free" (your containers' logs get sent there by default when using CDK constructs).

- ACM... we don't use directly. CDK will easily set up a TLS-enabled ALB (Application Load Balancer) for you, with AWS-issued certificates.

- Lambdas we don't use much.

- Metrics & Alarms are easy to set up but they generally suck. The custom language for computed metrics is clumsy and quite limited. The anomaly detection sucks. And it is expensive even by cloud standards (don't create metrics and alarms willy-nilly or you'll feel it in the next invoice).

- Our application code doesn't know much about AWS (we do use libraries for S3 and SES, but these are just easy to swap adapters).

- We ended up with ~3k lines of CDK definitions in typescript. These are very easy to read, and only moderately hard to write (you do need to look up the docs). However, I can say without a doubt that it's been the easiest infrastructure definition/description language that I've ever used.

I don't have enough experience with K8s to know whether that route would have been better or worse, but I can say this route hasn't been a pain point for us.


Good to hear. CDK is a huge plus.


A few years ago, I would have said no. Now, I'm cautiously optimistic about it.

Personally, I think that you can use something like Rancher (https://rancher.com/) or Portainer (https://www.portainer.io/) for easier management and/or dashboard functionality, to make the learning curve a bit more approachable. For example, you can create a deployment through the UI by following a wizard that also offers you configuration that you might want to use (e.g. resource limits) and then later retrieve the YAML manifest, should you wish to do that. They also make interacting with Helm charts (pre-made packages) easier.

Furthermore, there are certified distributions which are not too resource hungry, especially if you need to self-host clusters. For example, K3s (https://k3s.io/) and k0s (https://k0sproject.io/) are both production-ready up to a certain scale, don't consume a lot of memory, and are easy to set up and work with, whilst being mostly OS-agnostic (DEB distros will always work best; RPM ones have challenges as soon as you look elsewhere than OpenShift, which is probably only good for enterprises).

If you can automate cluster setup with Ansible and treat the clusters as something that you can easily re-deploy when you inevitably screw up (you might not do that, but better to plan for failure), you should be good! Even Helm charts have gotten pretty easy to write and deploy, and K8s works nicely with most CI/CD tools out there, given that kubectl lends itself pretty well to scripting.
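
For illustration, a cluster-setup playbook for something like K3s can start out as small as this sketch (the host group name and install path are assumptions; the install script URL is the standard one from k3s.io):

  - name: Install k3s on cluster nodes
    hosts: k3s_nodes          # hypothetical inventory group
    become: true
    tasks:
      - name: Run the k3s installer if k3s is not already present
        ansible.builtin.shell: curl -sfL https://get.k3s.io | sh -
        args:
          creates: /usr/local/bin/k3s   # makes the task idempotent

Re-running the playbook is a no-op, and rebuilding a screwed-up node is just "run it again".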


The devil is in the details. They all look easy on the surface, but there are so many traps you can casually step into that they really aren't solutions to the "k8s is too complex" problem.


If you use a good managed k8s offering, the complexities are not that great. GKE with Autopilot is a good option. In that case you don't need to know much more than how to write the yaml for a deployment. I've shown developers at all levels how to do that, it's not a barrier.
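
For anyone wondering, "the yaml for a deployment" really is just something like this minimal sketch (the app name, image, and port are placeholders):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: my-app
  spec:
    replicas: 3                 # run three identical pods
    selector:
      matchLabels:
        app: my-app
    template:
      metadata:
        labels:
          app: my-app
      spec:
        containers:
          - name: my-app
            image: registry.example.com/my-app:v0.1
            ports:
              - containerPort: 8080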


+1. GKE Autopilot and sticking to the core ~4 Kubernetes object kinds (Deployment, Service, Ingress) was a really easy way for us to get started with K8s (and in many cases can carry you a really long way).


Though it's easy to run, if there are multiple workloads with varying resource requirements, there will be a lot of wasted CPU and RAM, just because there are minimum CPU and CPU-to-RAM ratio requirements.


Agree. I think they are working towards providing Istio (its gateway replaces Ingress, plus TLS for internal communication, canary deployments, shadowing, etc.).

If they can spin up GPU or high memory nodes on demand with Autopilot that would be amazing.


Why would you run it in k8s as opposed to, say, ECS? Honest question here: why not run it on something simpler that requires fewer new concepts and achieves the same results?


Because ECS sucks. It's slow and unergonomic, like a badly designed k8s with fewer features.

k8s is nicer to use and simpler if you ignore the complex bits. Plus it's the standard.


My experience of ECS is the opposite of yours. It integrates nicely with the AWS ecosystem and was substantially easier to use and educate others on. I would not hesitate to use it again on either Fargate or BYO EC2. I will acknowledge scheduling is not quite as fast as Nomad, but I never found it 'slow'.


Personally, I'd rather create a k8s deployment than an ECS task, but I can see your point. If all you want is an experience integrated with AWS, then it makes some sense that ECS is just simpler overall out of the box.

I don't think the delta to make k8s integrated is that much work with EKS, but the ability to mutate the entire infrastructure if and when you do scale wins out for me. I think the complexity, most of which you can ignore, is worth the flexibility.

Either way, since k8s landed, AWS itself has started improving too.


I've used both and I would still prefer ecs/fargate to build a rather independent application and k8s to build a long-term platform.


For a typical deployment, ECS isn't simpler than a fully managed k8s system, and doesn't have fewer new concepts. The wealth of concepts in k8s only comes into play when you're doing more advanced things that ECS doesn't have abstractions for anyway.

In ECS you have abstractions like task definitions, tasks, and services, all of which are specific to ECS, and so are new concepts for someone learning it originally. In Kubernetes a typical web app or service deployment uses a Deployment, a Service, and an Ingress. It isn't any harder to learn to use than ECS, and I find the k8s abstractions better designed anyway.
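
To illustrate, the Service and Ingress for such an app are each only a handful of lines; a minimal sketch (names, ports, and the hostname are placeholders, and the selector is assumed to match the Deployment's pod labels):

  apiVersion: v1
  kind: Service
  metadata:
    name: my-app
  spec:
    selector:
      app: my-app          # route to pods with this label
    ports:
      - port: 80
        targetPort: 8080
  ---
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: my-app
  spec:
    rules:
      - host: app.example.com
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: my-app
                  port:
                    number: 80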

If you're already using ECS, and are happy with it, then there may be no strong reason to switch to k8s. But for anyone deciding which to use for the first time, I'd strongly advise against ECS, for several reasons.

One is that k8s has become an industry standard, and you can deploy k8s applications on many different clouds as well as non-cloud environments. As such, learning k8s is a much more transferable skill. K8s is an open source project run by the Cloud Native Computing Foundation, with many major and minor contributors. You can easily install a k8s distribution on your own machine using one of the small "edge" distributions like k3s or microk8s.

While in theory, some of the above is true for ECS, in practice it just doesn't have anything like the momentum of k8s, and afaict there aren't many people deploying ECS on other clouds or onprem.

Because of these kinds of differences, all in all I don't think there's much of a contest here. It's not so much that ECS is bad, but rather that k8s is technically excellent, and an industry standard backed by many companies, with significant momentum and an enormous ecosystem.


A compelling reason is the large ecosystem of tooling that runs on k8s. Practically anything you want to do has a well maintained open source project ready to go.


For example? What could you do on k8s that you couldn't do on native aws?


Take a look at Kubeflow.org for an example. There are several reasons that a tool like that targets Kubernetes and not native AWS. One of the benefits of k8s is how portable and non-vendor-specific it is. Basically, it's become a standard platform that you can target complex applications to, without becoming tied to particular vendors, and with the ability to easily deploy in many different environments.


To be clear, I'm not claiming you can't do these things on native AWS, but rather that there is a wide choice of high-quality projects ready to go that target k8s.

  - Countless Helm charts
  - Development tools like Telepresence
  - Many GUIs / terminal UIs
  - CI/CD tools like Argo
  - Logging and monitoring tools
  - Chaos engineering tools
  - Security and compliance tools
  - Service meshes / request tracing
  - Operators for self-healing services
  - Resource provisioning
  - etc...


There is also a new generation of platforms emerging that run on top of Kubernetes, like Qovery.


> Can small companies afford the complexities of k8s?

Where exactly do you see this complexity in Kubernetes?

I have a couple of Hetzner nodes running microk8s and I have a couple of web apps running on them. All it takes to deploy each app is putting together the kustomize script for the app and afterwards a simple call to kubectl apply -k ${kustomize_dir}. I'm talking about specifying an ingress, deployments, services... the basics. I even threw in a couple of secrets to pull Docker images from private container registries.
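
For anyone unfamiliar with kustomize: the entry point is just a kustomization.yaml listing your manifests. A minimal sketch (the file names are whatever you chose to call them):

  apiVersion: kustomize.config.k8s.io/v1beta1
  kind: Kustomization
  resources:
    - deployment.yaml
    - service.yaml
    - ingress.yaml

kubectl apply -k on that directory then applies the whole set in one go.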

And everything just runs. With blue-green deployments, deployment history, monitoring, etc.

It's far more complicated to set up a CI/CD pipeline, and don't get me started on the god-awful mess that is CloudFormation or even CDK.

Where exactly did you see all that complexity you're talking about?


A lot of people in this job just really hate learning anything, and would rather spend way more time spread out over months and years than just investing some time and learning how to use something new.

It seems like somehow some people get into this job by only learning tools that they can pick up without trouble over a weekend?

The concepts you need to deploy stuff on kubernetes really aren't that complicated. It's just a bunch of yaml documents in extremely-well-documented schemas. If you want to run a service with N instances, you just write a deployment with `replicas: N`.

There are a lot of details I'd choose slightly differently if I were designing my perfect ideal cluster orchestration system, but the whole point of open source is that everyone who wants to build a comprehensive cluster orchestration system can just get together and collectively build it once, so I don't have to design and own it all internally. It's got all the pieces to build exactly what I need to make good use of a ton of computers, in a simple, reliable, repeatable, consistent, standard way. It gives you trivial primitives to build HA fault-tolerant deployments.

There are very few good excuses left to ever have any reason to page anyone over "One server had a hardware failure".

It just baffles me that people can see this powerful, industrial-grade, comprehensive tool, and decide "Nah, that'll never be worth starting to learn".


I half agree with you here. It is actually quite simple, but everything in Kubernetes is very explicit, which is good, but also intimidating. If you've never worked with Kubernetes before then it's a lot of added complexity without clear benefits.


> If you've never worked with Kubernetes before then it's a lot of added complexity without clear benefits.

Unless you're someone who only had to work on a monolith deployed to a single box somewhere, Kubernetes adds zero complexity to the problem you're already dealing with.

In fact, Kubernetes simplifies the whole problem of running stuff on a cluster. Network, security, deployment, observability... That's all provided out of the box. And you can roll back whole deployments with a single command.

Heck, even ssh-ing into a container, regardless of where it's running, became trivial.

How is that harder than deploying stuff to boxes somewhere?


>Unless you're someone who only had to work on a monolith deployed to a single box somewhere,

I think the point is that 90%+ of websites are fine with a few monoliths behind a load balancer. That setup can handle low thousands of requests per second in Rails/Django/etc. Maybe the low tens of thousands with a more performant language.

And it's not just k8s. It's the whole microservice/SOA that comes with it. It ramps up the complexity of everything and is a constant time sink in my experience.


You apparently learned all about microk8s and spent time configuring it. I assume you did not get it right the first time.

It is like Usain Bolt coming over and asking what's so hard about running 100m in ~10 seconds when you've never left the couch.


> You apparently learned all about microk8s and spent time configuring it. I assume you did not get it right the first time.

What? With Ubuntu, microk8s works pretty much right out of the box.

The only thing you need to learn is how to install it with Snap.

What are you talking about?

> I assume you did not get it right the first time.

I did, not because I'm a rocket surgeon but because it is really really that simple.

https://ubuntu.com/tutorials/install-a-local-kubernetes-with...

What exactly leads people like you to complain harshly about how hard something is that you have never even tried?

You're literally wasting far more time complaining in a random online forum about how hard a technology is than it would take to not only learn the basics but also get it up and running.


You also had a pentest of your setup so you are perfectly sure you don't expose something to the internet that you are not supposed to?

You also considered updates for Ubuntu and microk8s, so you have a strategy for updating your nodes with newer versions and security patches?

I can follow a tutorial to set something up - but then there is always a whole world of things that is never included in tutorials.

Just like kubelet accepting unauthenticated requests by default: https://medium.com/handy-tech/analysis-of-a-kubernetes-hack-...


> You also had a pentest of your setup so you are perfectly sure you don't expose something to the internet that you are not supposed to?

What are you talking about?

With Kubernetes you need to explicitly expose something. In code. And apply that change. And see it explicitly listed in the descriptions.

Outside of Kubernetes, you're already talking about a requirement that applies to all web services, regardless of the deployment solution. Why didn't you mention that?

> You also considered updates for Ubuntu and microk8s, so you have a strategy for updating your nodes with newer versions and security patches?

What point were you trying to make?

Do you believe Kubernetes is the only software that requires updating?

Even so, with Kubernetes you can launch a freshly created instance in your cloud provider of choice, add it to the cluster, and go on with your day.

If you want, you can drain a node, shut it down, and rebuild the node from scratch.

Where exactly do you see a challenge?


If you're concerned that you've somehow accidentally exposed something to the internet that you didn't explicitly intend to expose to the internet, you can just do a trivial port scan. You just run nmap, look at the output, and you're done in like 30 seconds.

What does this have to do with kubernetes? "Don't expose stuff to the internet that you don't intend for everyone across the planet to be able to access" applies exactly the same to literally everything you could run on your servers.

This isn't remotely "Kubernetes is uniquely scary and complicated"; this is basic fundamental network security, and if you're not already handling this, then you need to go brush up on your basic networking fundamentals, not blame it somehow on kubernetes.

Almost every network service I can think of defaults to accepting unauthenticated connections, or connections authenticated with some default credentials. This is the normal, expected, default situation with network services. If you make the decision to expose something to the entire world, in a professional context, it is your responsibility to know the specific reasons it is safe to do so.

Are you really trying to argue that "Some rando decided to bareback the entire global internet with no firewall, on a personal home server, and didn't bother to type 'kubernetes secure configuration' into google, therefore Kubernetes is super hard and complicated and dangerous"?

It's not like this is some obscure cryptic detail; it's explicitly called out in the documentation that any half-decent professional would read before deploying a production service: https://kubernetes.io/docs/tasks/administer-cluster/securing...

  Controlling access to the Kubelet
  Kubelets expose HTTPS endpoints which grant powerful control over
  the node and containers. By default Kubelets allow unauthenticated
  access to this API.
  Production clusters should enable Kubelet authentication and authorization.
  Consult the Kubelet authentication/authorization reference for more information.
Yes, untrained amateurs sometimes do dumb stuff. Sometimes companies leave their S3 buckets open to the world. Sometimes people expose mysql to the internet with credentials they ship to users. Sometimes people expose unauthenticated Redis to the internet. This does not mean that these technologies are somehow fundamentally too complicated for mere mortals, it just means that it's dangerous to ask amateurs to do something in a professional context.


Try setting it up on your own without Ubuntu doing the legwork. Set up a 3-node control plane, the deployment servers, and storage.

You come off as very arrogant, as someone who believes he knows everything; some humility would suit you well, but I think all pseudo-smart Germans are like that.


> Try setting it up on your own without Ubuntu doing the legwork.

Why? Do you also see any purpose in hopping on one foot to work instead of driving there?

I don't understand what leads people like you to try to move goalposts to pretend something is harder than it is or needs to be.


I've set up quite a few kubernetes clusters on my own, and relied on the clusters I've built for production services at both startups and big tech companies. I've done quite a bit with both local storage and network storage via Ceph.

I am not German, and I have never been to Germany. If we're trading wild speculation about personal details, I think you could use some ambition and self-confidence.


It is not just untrained amateurs; it is also people who do stuff from a tutorial and think they know everything.

So my post is not about Kubernetes per se, but about the narrative that "it is super easy, a 6-year-old could do it". Well, no: not everyone can do it, and one has to spend time with any new technology.

Besides, nmap in that scenario does not help either, because I have to expose port 443 to serve my customers, and kubelets expose HTTPS endpoints. If someone runs a simple nmap scan, sees 443 open, and concludes all is correct because he will be serving HTTPS websites, then your "you are done in like 30 seconds" seems like shooting oneself in the foot.


Hmm, interesting, I may have been misreading you.

I agree that 6-year-olds and other people without any production sysadmin or SRE experience are going to have a pretty bad time learning to build and deploy a Kubernetes cluster.

My point is that any professional sysadmin or SRE can learn Kubernetes just fine. Yeah, there's a lot of stuff, but there's just about as many moving parts as I expect for a system that handles what Kubernetes does. You also mostly don't have to pay complexity cost for many optional features you don't care about; you can get a minimal cluster up, and then grow it as you need more features.

I don't follow what you're saying about port 443. The kubelet API is not listening on port 443 by default. I'm as confident as I can be without checking that no kubernetes components listen on port 443 by default.

Speaking more broadly, I agree that someone with no SRE experience and no network security experience won't get much value from 30 seconds of nmap. What I was trying to say is that "accidentally exposed the kubelet API to the global internet" is something that I expect a competent sysadmin to be able to detect and notice with 30 seconds of nmap.

When I'm saying "deploying kubernetes is fine", I'm saying that anyone who has any business running nontrivial production services in a professional setting will not have any trouble learning to use and deploy Kubernetes. Deploying a cluster does require competence with sysadmin or SRE fundamentals, but not particularly more so than other systems that handle similarly-complex topics.

Also, any junior sysadmin or programmer should be able to learn to use an already-running kubernetes cluster to deploy basic services with no trouble and just a bit of time. I have trained quite a few people on this, and it really does go just fine.


It's just like any other tech; you read the docs, try it out, do some troubleshooting, and then you know how to use the tool.

I bet you could get microk8s running correctly on your first try. Give it a shot! Here's a doc: https://microk8s.io/docs/getting-started

You could probably get some additional nodes in your cluster on your first try too: https://microk8s.io/docs/clustering

This isn't Usain Bolt. This is normal people doing normal work with normal technical tools, and then somehow people keep claiming they must be some kind of world-class genius to have done it. Try it for yourself before you claim that you'd have to be a world-class peak performer to have installed and configured some simple daemons on a few linux servers.


Of course Usain Bolt is an exaggeration.

But you try it out, set stuff up from a tutorial, do some troubleshooting, put production data there, and you get articles like these:

https://medium.com/handy-tech/analysis-of-a-kubernetes-hack-...

https://www.zdnet.com/article/a-hacker-has-wiped-defaced-mor...

https://thenewstack.io/armo-misconfiguration-is-number-1-kub...


I'll try to take a different approach here than I did in my other recent reply to you.

I hear that you're saying that there are possible configurations that are insecure, and that at least some care and attention needs to be invested to avoid problems. I agree. This isn't specific to Kubernetes, as you show in your second link. This can be a problem, and people have in fact suffered harm due to leaving their doors unlocked.

On the other hand, most important security doors in most professional environments are not left unlocked and unmonitored.

If you have already learned basic sysadmin fundamentals, then you have the skills needed to learn to deploy and use Kubernetes just as securely as any other network service. The way that you learn to apply your general sysadmin skills to Kubernetes is by practicing with it. You can supplement your practice with books and training if you really want to, but it's not necessary. If you happen to have other people who have already gone through this process as support, that can help quite a bit. If there are any other better ways that people learn things like this, I have yet to hear of them.

If you don't already have those skills, then the way to build and develop them is exactly the same process. You try stuff out, read some docs, poke at things to see if you can break them. You can supplement this with classes if you like, but they're not necessary. Peers and mentors are great if you have them, but they're not necessary.

What alternative do you have in mind? What about kubernetes specifically is so monstrously complex? People keep asserting this, but I learned it just fine like any other nontrivial software I've ever worked with professionally. My peers at work learned it just fine like any other software we've worked with. My friends and colleagues I keep in touch with from previous jobs have learned it fine.

I don't really understand what kind of complexity bar you're trying to imply is just objectively too high to be reasonable? Yeah, it's got more moving parts than like Redis, because it does way more than Redis does. Sed is simpler than python, but that doesn't mean that you need to be Usain Bolt to learn python.


The complexity of Kubernetes is overstated, so I think they can. However, that doesn't mean they should. Personally I would just start with a docker-compose.yml and run it on a VPS. In the starting phase you probably don't have much traffic anyway, and docker-compose can nicely progress to Kubernetes when you need it. Another upside is that any developer can run the entire stack locally on their machine. This means you can fully focus on producing working code and don't have to bother about infrastructure too much.
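
As a rough idea, that starting point can be as small as this sketch of a docker-compose.yml (the image names, ports, and the Postgres service are placeholders):

  services:
    web:
      image: registry.example.com/my-app:latest
      ports:
        - "80:8080"          # expose the app on the VPS's port 80
      depends_on:
        - db
    db:
      image: postgres:16
      environment:
        POSTGRES_PASSWORD: example
      volumes:
        - db-data:/var/lib/postgresql/data
  volumes:
    db-data: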

Then again, I've never worked in a startup so the above approach is purely theoretical. Curious about what other people think.


I agree that "the complexity of Kubernetes is overstated". Kubernetes itself is actually pretty simple, very reliable, and more mature than it seems. The complexity and challenges of Kubernetes come from all of the add-ons that may not be necessary in most situations.

Vanilla K8s is pretty good. But when you think about admission controllers, policy engines, service meshes, progressive rollout, etc, you are increasing the scope.

Start with k8s, and hold back the temptation to solve 10 other problems with 10 other projects from the CNCF sandbox. Once you have a good system running, really weigh the complexity of each new solution against the value it provides, and make a decision. Say no to most.


Can small delivery companies afford the complexities of flat-bed trucks? Well: are they driving them? Repairing them? Assembling them? Loading/unloading?

You can bet that using a passenger car instead can be more "affordable", because more people know how to drive them, repair them, assemble, load/unload, they're cheaper, etc. However, if what they're delivering can't be hauled by a passenger car, or the logistics wouldn't make financial sense, that again changes the equation. There's no one answer.


So is everything not K8s a passenger car in this analogy?

Solution space is vast. We're all trying and weighing alternatives as we have the time and capacity.


Yes, while I worked at a bigger company, our team of 15 developers and 5 SREs managed to build a few self-hosted Kubernetes clusters for our microservices, a pipeline, and also set up cloud deployment for failover. We used Datadog, Consul, Elastic Stack and a few SQL and NoSQL dbs.

I mostly self-learned and was helped a bit by the guys who started before me. The rest self-learned, mostly by doing, solving issues, and reading articles and tutorials.


I think the upsides to k8s are relatively minor (if any) for most businesses, and the downsides are significant (tech debt and complexity). It's sometimes sold as revolutionary tech, but it's really just an incremental improvement, and the next incremental improvement will come along soon enough.

DevOps is a never ending yak shave.



