
One may think Kubernetes is complex (I agree), but I haven't seen an alternative that simultaneously lets you:

* Host hundreds or thousands of interacting containers across multiple teams in a sane manner

* Manage and understand, in full, how all of it is done

Of course there are tons of organizations that can (and should) happily give up one of these, but if you need both, there isn't a better choice right now.



But how many orgs need that scale?


Something I've discovered is that if you're a small team doing something new, off-the-shelf products/platforms are almost certainly not optimized for your use case.

What looks like absurd scale to one team is a regular Tuesday for another, because "scale" is completely meaningless without context. We don't balk at a single machine running dozens of processes for a single web browser, so we shouldn't balk at something running dozens of containers to do something that creates value. Scale that up by the number of devs/customers and you can see how thousands, or hundreds of thousands, can happen easily.

Also the cloud vendors make it easy to have these problems because it's super profitable.


You can run single-node k3s on a VM with 512MB of RAM and deploy your app with a hundred lines of JSON, and it inherits a ton of useful features that are managed in one place and can grow with your app if/as needed (a minimal manifest sketch follows the exchange below). These discussions always go in circles between Haters and Advocates:

* H: "kubernetes [at planetary scale] is too complex"

* A: "you can run it on a toaster and it's simpler to reason about than systemd + pile of bash scripts"

* H: "what's the point of single node kubernetes? I'll just SSH in and paste my bash script and call it a day"

* A: "but how do you scale/maintain that?"

* H: "who needs that scale?"


The sad thing is there probably is a toaster out there somewhere with 512MB of RAM.


It's not sad until it becomes self-aware.


A very small percentage of orgs, a not-as-small percentage of developers, and at the higher end of the value scale, the percentage is not small at all.


I think the developers who care about knowing how their code works tend to not want hyperscale setups anyway.

If they understood their system, odds are they’d realize that horizontal scaling with a few larger services is plenty scalable.

At those large orgs, the individual developer doesn’t matter at all and the EMs will opt for faster release cycles and rely on internal platform teams to manage k8s and things like it.


Exact opposite - k8s allows developers to actually tailor containers/pods/deployments themselves, instead of opening tickets to have it all configured on a VM by the platform team.

Of course there are simpler container runtimes, but they have issues with scale, cost, features, or transparency of operation. They can of course be a good fit if you're willing to give up one or more of these.
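
To make "tailor it themselves" concrete: the knobs a dev would otherwise file a ticket for sit right in a spec they own. A sketch, with all names and values invented for illustration:

  # Hedged sketch of a Deployment a dev owns end to end; names and
  # values are illustrative, not from the thread.
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: api
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: api
    template:
      metadata:
        labels:
          app: api
      spec:
        containers:
        - name: api
          image: registry.example.com/api:2.3   # placeholder image
          resources:
            requests:            # what the scheduler reserves
              cpu: 250m
              memory: 256Mi
            limits:              # hard ceiling before throttle/OOM-kill
              cpu: "1"
              memory: 512Mi
          env:
          - name: LOG_LEVEL
            value: debug
          readinessProbe:        # traffic only once the app reports ready
            httpGet:
              path: /healthz
              port: 8080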


> k8s allows developers to actually tailor containers/pods/deployments themselves

Yes, complex tools tend to be powerful.

But when I say “devs who care about knowing how their code works” I’m also referring to their tools.

K8s isn’t incomprehensible, but it is very complex, especially if you haven’t worked in devops before.

“Devs who care…”, I would assume, would opt for simpler tools.

I know I would.


We're almost 100 devs across a few teams - it works well. There's a bunch of companies of our size even in the same city.

What's a bit different is that we're building our own products, not renting people out to others, so having a uniform hosting platform is an actual benefit.


Most of the ones that are profitable for cloud providers.


> Host hundreds or thousands of interacting containers across multiple teams in sane manner

I mean, if that's your starting point, then complexity is absolutely a given. When folks complain about the complexity of Kubernetes, they are usually complaining about the complexity relative to a project that runs a frontend, a backend, and a postgres instance...


In my last job we ran centralized clusters for all teams. They got X namespaces for their applications, and we made sure they could connect to the databases (handled by another team, though there were discussions of moving those onto dedicated clusters). We had basic configuration set up for them and offered "internal consultants" to help them onboard. We handled maintenance, upgrades, and, if needed, migrations between clusters.
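
The comment doesn't give details, but per-team setups like that usually boil down to a namespace plus a quota per team. A minimal sketch, with invented team names and limits:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: team-payments          # hypothetical team
  ---
  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: team-payments-quota
    namespace: team-payments
  spec:
    hard:
      requests.cpu: "20"         # illustrative per-team caps
      requests.memory: 40Gi
      pods: "100"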

We did not have a cluster just for a single application, with some exceptions for applications that were incredibly massive in pod count and/or had patterns that required custom handling and pre-emptive autoscaling (which we wrote code for!).
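
One common way to implement pre-emptive (schedule-based, rather than reactive) scaling is a CronJob that bumps replicas ahead of a known traffic peak. This is not necessarily what that team built, just an illustration; the schedule, deployment name, and ServiceAccount are invented:

  apiVersion: batch/v1
  kind: CronJob
  metadata:
    name: scale-up-before-peak
  spec:
    schedule: "30 7 * * 1-5"     # weekdays, ahead of the morning rush
    jobTemplate:
      spec:
        template:
          spec:
            # assumes a ServiceAccount with RBAC to scale deployments
            serviceAccountName: scaler
            restartPolicy: OnFailure
            containers:
            - name: kubectl
              image: bitnami/kubectl:1.29
              args: ["scale", "deployment/frontend", "--replicas=20"]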

Why are so many companies running a cluster for each application? That's madness.


I mean, a bunch of companies that have deployed Kubernetes only have 1 application :)

I migrated one such firm off Kubernetes last year, because for their use case it just wasn't worth it - keeping the cluster upgraded and patched, and their CI/CD pipelines working, was taking as much IT effort as the rest of their development process.



