
We'll see. Containers are definitely a thing and have their use case. K8s as a whole? Imho the jury is still out.


Let me tell you what I really think. I'm a k8s newbie and I'm delighted by it and terrified by it at the same time.

I really like the k8s architecture. It's a miracle of architecture that I'd never have come up with myself. It's very extensible. I'm very inspired by it and I hope to apply it to my own projects some time.

But the UI is terrible.

  scopeSelector:
    matchExpressions:
      - scopeName: PriorityClass
        operator: In
        values:
          - middle
Like really? I thought the parsing problem was solved like 50 years ago. Why not `scopeSelector: "priorityClass in [middle]"`? It seems that we're forced to write a YAML AST instead of a proper programming language. It really seems like some intermediate form that's supposed to be the output of a proper frontend language, but we're stuck with it.

Also, I'm not sure that I like the whole modular nature of k8s. Sometimes it makes sense, for example for CSI. But why should I choose between CNI implementations? I just want the best one and that's about it. I want it to be from Google. I don't need to choose between kube-apiserver implementations or kube-scheduler implementations, thank god.

Also, the whole kubeadm story seems half-baked. It works, but it's far from ideal. I guess it'll get better with time.

But those are small complaints. I think it has good potential to improve, and there's nothing to replace it. Even if someone builds some lambda swarm that's better than k8s, nobody's going to migrate to it, because k8s is offered by so many cloud providers which have made investments and are not going to move to other solutions in the foreseeable future.


> It really seems like some intermediate form that's supposed to be output from a proper frontend language

Honestly, it is. You're supposed to write your own operator that would in turn generate all this boilerplate properly — you can look at the tech blog of any large-ish corp that tried to use k8s, and they all say "yeah, we tried to write those manifests manually, it was horrible, we now have a home-grown piece of software that generates it". (I mean, don't you like it that your deployment manifest must have metadata.labels, spec.selector.matchLabels, and spec.template.metadata.labels all with exactly the same contents, or else strange and wonderful things happen?)
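To make the repetition concrete, here's a minimal sketch of such a Deployment (the `app: my-app` label and image name are made up for illustration; strictly, the API only requires the selector and template labels to match, but keeping all three in sync is the common convention the parent describes):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app          # 1) labels on the Deployment object itself
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app        # 2) must match the pod template labels below
  template:
    metadata:
      labels:
        app: my-app      # 3) must match spec.selector.matchLabels above
    spec:
      containers:
        - name: my-app
          image: example.registry/my-app:1.0  # hypothetical image
```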

Then again, some people make their operators' input language way too flexible so in the end you again have a YAML-serialized AST of an accidentally Turing-complete language... but it's the common ailment that plagues us all: given a language, any language, instead of using it we start building an interpreter for some other ad-hoc language on top of it, and then suffer from the consequences.

As for the standard k8s operators... they're okay, but the moment you need to do anything slightly different from the intended way, you're in a sea of hacks and workarounds. For example, you want your pods spread perfectly evenly across a set of nodes? Haha, why would you want that, the k8s cluster is supposed to be RAID-like — but okay, here's a beta feature called topologySpreadConstraints. Watch out though: it disregards any inter-pod affinity constraints, and you really want to use it with a nodeSelector/nodeAffinity.
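For the curious, a sketch of that combination in a pod spec — spreading evenly across nodes while pinning to a labelled node set (the `app: my-app` selector and `node-pool: workers` label are hypothetical):

```yaml
spec:
  topologySpreadConstraints:
    - maxSkew: 1                           # allow at most 1 pod difference between nodes
      topologyKey: kubernetes.io/hostname  # spread across individual nodes
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: my-app
  affinity:
    nodeAffinity:                          # restrict spreading to a known node set
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node-pool             # hypothetical node label
                operator: In
                values:
                  - workers
```

Without the nodeAffinity part, the spread is computed over whatever nodes happen to be eligible, which is part of why the parent recommends pairing the two.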


What k8s offers is a platform on which people can build their own solutions. It makes solving a lot of very hard problems much easier and more portable. Before k8s, tools like these were out of reach for small/mid-sized companies with only a handful of dev teams — but enough of them for organisational scaling to become a problem. K8s allows self-service while still enforcing standardisation.

And I'll give you another use case I'd like to see you replicate with any other existing platform: migrating an entire on-prem application stack of 200+ services to the cloud. It took us 3 weeks total to do for a client of mine, where the hardest part was migrating state with minimal downtime, and a lot of work went into the IaC part of the GKE setup.

Also, institutions like banks have now jumped on the k8s train (well, mostly OpenShift, but same thing), so I'm pretty sure it's here to stay.


> K8s as a whole? Imho the jury is still out.

Not the biggest k8s fan out there, but I think the verdict is that k8s will stay for a looong time, especially in enterprises that are just starting adoption now.


There is a long time and there is a looooong time. Java is an example of something that will stay for a looooong time.

Unless the hype cycle continues and there are major efforts to improve k8s, I personally don't see it lasting.





