
We've been running in production on GKE for a little over two years and it's been a solid platform since day one. It's nice to read articles like this and see others coming to the same conclusions we've come to. If your practices and workflow are oriented around containers and you outgrow your PaaS, then k8s is the logical place to land.

With respect to the choice of helm: we started out rolling our own pipeline with sed and awk and the usual suspects. When that became too complex, helm was just taking off and we moved to it. We still use it to install various charts from stable that implement infrastructure things. For our own applications, we found there was just too much cognitive dissonance between the helm charts and the resulting resources.

Essentially the charts and the values we plugged into them became a second API to kubernetes, obfuscating the actual API below. The conventions around the "package manager" role the tool has taken on also lessen readability, due to scads of boilerplate and name mangling. We recently started deploying things in a new pipeline based on kustomize. We keep base yaml resources in the application repo and apply patches from a config repo to finalize them for a given environment. So far it's working out quite well and the application engineers like it much better. Now with kubectl 1.14, kustomize's features have been pulled into that tool, something I have mixed feelings about, but at least the more declarative approach does seem to be the way the wind is blowing.
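
For anyone curious, a rough sketch of the shape of that pipeline (paths are simplified and illustrative, not our exact layout):

  # Base manifests live in the application repo, per-environment patches in a config repo:
  #
  #   app-repo/k8s/base/          deployment.yaml, service.yaml, kustomization.yaml
  #   config-repo/overlays/prod/  kustomization.yaml (points at the base), replica-patch.yaml
  #
  # Rendering an environment and applying it is then a single pipeline:
  kustomize build config-repo/overlays/prod | kubectl apply -f -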



Perhaps I don’t understand the design decisions behind Helm, but it’s always struck me as having a severe impedance mismatch with k8s itself. It defines another entirely different schema, and relies on an agent running in your cluster that’s also trying to reconcile desired state with actual state (which k8s itself is also doing). I’m skeptical that you could use it extensively without also understanding the k8s stuff underneath it.

kustomize came along just as it became untenable for us to copy/paste config to multiple environments. I like that it’s pretty much the simplest possible way to customize yaml, and I plan to dive in soon.


Once you start to see both Kustomize and Helm as templating tools, you'll realize that Kustomize doesn't cover many use cases and is intentionally limited in scope. There is a reason almost every major project in the Kubernetes ecosystem has a Helm chart, but not a Kustomize configuration file. That doesn't mean that Helm doesn't have its issues, because its implementation of Go templating is constrained, and Go templating is challenging in itself; however, it does a lot more than Kustomize can offer and covers a significantly larger number of use cases than Kustomize patch files.

Kustomize is great for getting a job done quickly if you don't mind some duplication of code and effort throughout your projects, but Helm is ideal for managing dependencies and templating across a large number of projects where you may want to reuse other charts.

I think it's best not just to read about these tools, but to actually spend a few hours using them, trying them out, and testing what they can and can't do.


You're right about these things in my view. There's a lot you can do in a helm chart that you can't do with kustomize and patches. The helm template functions are powerful in their own right; you have conditionals, loops over ranges, you can define your own helpers, and then you have the whole sprig library at your fingertips.

You can't do any of those things with kustomize. And that's really the (admittedly very opinionated) point. I think helm is perfect for the role the project assumed for itself, as a kubernetes package manager. It works well in that role. If you follow the conventions (as you must if you contribute to stable) then your thing has full config coverage with sensible defaults, you can install it multiple times and everything is namespaced.

Like any package manager it's somewhat difficult to write good packages that have all these attributes. And I think you have to admit that a well-written helm chart that covers all the bases is a lot less readable (albeit much more powerful, stipulated) than a simple yaml description of the object you want. It really does constitute a separate API between the developer and the system.

For our own deployable services we don't need all those package manager-like attributes. What we need is for the on-disk representation of our resources to be readable, and we want to be able to make simple changes to single files and not chase value substitution around through layers of indirection. We'll probably continue using helm for stable infrastructure things, but for our own services that are under continuous development and deployment it's come to feel like a mismatch.


> Like any package manager it's somewhat difficult to write good packages that have all these attributes. And I think you have to admit that a well-written helm chart that covers all the bases is a lot less readable (albeit much more powerful, stipulated) than a simple yaml description of the object you want.

I agree with that, but I think the situations where you have to write really flexible and complex charts are pretty rare. Most of the charts we maintain and use internally use very little templating, because they are tailored to our own environments, not every environment under the sun.


There are situations where it makes sense to create a shared chart that can be used by multiple teams to implement their services, yet is extensible enough to work with other common services. For example, cert-manager and external-dns. Or for a local environment, you may want to use Minio for AWS-compatible storage but Service Catalog and AWS Service Broker for other environments.


actually

> running in your cluster that’s also trying to reconcile desired state with actual state

is not needed anymore


Seconding this. We use helm just to render templates; we don't run tiller.

As I understand it, the upcoming version of helm ditches tiller completely.


Glad to hear I'm not the only one. I appreciate the helm design and I don't dispute people have a fine time using it. I just need something simple to render yaml from a template without regard to cluster state, so we use good old fashioned sed.

What's annoying about helm is that many cases would be fine as a plain yaml template/macro without any cluster state, but many public projects are packaged as helm charts, so to use them off the shelf you need to go full helm. Thankfully it's not too difficult to use helm template.
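
Roughly, the tiller-less flow for an off-the-shelf chart looks like this (Helm 2 syntax; chart and values file names are just for illustration):

  # Fetch a public chart, render it locally, and hand the output to kubectl.
  # Tiller never runs and no release state is stored in the cluster.
  helm fetch stable/redis --untar
  helm template ./redis --name myredis -f my-values.yaml | kubectl apply -f -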


Yeah, who needs those pesky 3-way merges to ensure that the actual state matches the one described in the chart.


Tiller actually doesn't even really look at the current state in the cluster. It basically just hard fails if the resource already exists when it shouldn't, but it doesn't attempt to reconcile manual edits to resources outside of the helm upgrade lifecycle.


We use Helm, but we really only use it for two things: Templating and atomic deploys/deletes.

Helm templating is pretty terrible. Whoever thought generating YAML as text was a good idea deserves a solid wedgie. But it gets us where we need to be. During our prototyping of our GKE environment, we had lots of individual YAML files, which was not tenable.

Atomic deploys/rollbacks are essential. What Helm brings to the table is a high-level way of tying multiple resources together into a group, allowing you both to identify everything that belongs together and to then atomically apply the next version (which will delete anything that's not supposed to be there anymore). Labels would be sufficient to track that, in principle, but you still need a tool to ensure that the label schema is enforced.

We don't use any of the other features of Helm -- they're just in the way. We don't use the package repo; we keep the chart for every app in the app's Git repo, so that it's versioned along with the code. We've written a nice wrapper around Helm so people just do "tool app deploy myapp -e staging", and it knows where to look for the chart, the values by environment, etc., and invokes the right commands. (It also does nice things like check the CI status, lint the Kubernetes resources for errors, show a diff of what commits this will deploy, etc.)

I've looked at Kustomize, and I don't think it's sufficient. For one, as far as I can see, it's not atomic.

I'm hoping a clear winner will emerge soon, but nothing stands out. My favourite so far is Kubecfg, which is similar to the unnecessarily complex Ksonnet project, which has apparently been abandoned. Kubecfg is a very simple wrapper that only does Jsonnet templating for you.

I'd be interested in how Google does these things with Borg. My suspicion is that they're using BCL (which Jsonnet is based on, last I checked) to describe their resources.


Kapitan (https://kapitan.dev) is on my radar as a possible sweet spot between Kustomize and Helm.

Until now I've used Jinja2 templates for our Kubernetes definitions with a variables file for each environment, but this is awfully manual.

I'd love Kustomize to be sufficient for us, as it's poised to become a standard thanks to now being part of kubectl.

Unfortunately, in some ways its YAML patching philosophy is too limited, and coming from a templating system it would be a step back even for relatively simple use cases: for example, you're very likely to need a few variables defined once and reused across k8s definitions (a host or domain name, project ID, etc.). You can't really do that in a DRY way with Kustomize.

AFAIK, it also currently doesn't have a good story for managing special resources like encrypted secrets: it used to be able to run arbitrary helper tools for handling custom types (I use Sealed Secrets), but this has been removed recently for security reasons, prior to the kubectl merge.

Kapitan seems to cover this ground, and it doesn't carry the weight of those Helm features that are useless for releasing internal software, but I'm still a bit worried about the complexity and learning curve for dev teams.

Is there anything else out there that goes a little further than Kustomize, is simpler than Kapitan and Helm, and fits well into a GitOps workflow?


> for example, you're very likely to need a few variables defined once and reused across k8s definitions (a host or domain name, project ID, etc). You can't really do that in a DRY way with Kustomize.

I agree this is one of the areas where you feel the pinch of kustomize's rather puritan design philosophy. We've been able to work around those things in ways that aren't exactly elegant, but don't cause physical discomfort. For shared variables we keep a patch on disk and generate specialized copies of it during deployment. It's a hack, but it retains some of the benefits of a declarative approach. We also still use substitution in a couple of places. It's hard to use kustomize to update an image tag that changes with each build for example.
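
Concretely, the substitution hack looks something like this (placeholder and file names are illustrative):

  # Specialize the on-disk patch template with the tag produced by the build;
  # kustomize then picks up the generated patch like any other file.
  sed "s|IMAGE_TAG_PLACEHOLDER|${IMAGE_TAG}|g" overlays/prod/image-patch.template.yaml \
    > overlays/prod/image-patch.yaml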


I've only looked briefly at Kapitan. It looks interesting, but I think what Helm gets right, and these other tools don't, is to have a real deployment story that developers can like. Helm doesn't excel here, but it's better than kubectl.

In short, I think the winning tool has to be as easy to use as Heroku. That means: The ability to deploy an app from Git with a single command.

It doesn't need to be by pushing to git. I built a small in-house tool that allows devs to deploy apps using a single command. Given some command line flags, it:

* Checks out a cached copy of the app from Git

* Finds the Git diff between what's deployed and current HEAD and pretty-prints it

* Checks the CI server for status

* Lints the Kubernetes config by building it with "helm template" plus a "kubectl apply --dry-run"

* Builds the Helm values from a set of YAML files (values.yml, values-production.yml etc.), some of which can be encrypted with GPG (secrets.yml.gpg) and which will be decrypted to build the final values.

* Calls "helm upgrade --install <release> <chart dir>" with the values to do the actual deploy.

The upshot is that a command such as "deploytool app deploy --red mybranch" does everything a developer would want in one go. That's what we need.
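
For the curious, the lint and deploy steps above boil down to roughly this (Helm 2 syntax; the release name, chart path and values files are illustrative):

  # Lint: render the chart and dry-run the apply without changing the cluster.
  helm template ./chart -f values.yml -f values-production.yml | kubectl apply --dry-run -f -
  # Deploy: install or upgrade the release with the same values.
  helm upgrade --install myapp ./chart -f values.yml -f values-production.yml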

The tool also supports deploying from your own local tree, in which case it has to bypass the CI and build and push the app's Docker image itself.

Our tool also has useful things like status and diff commands. They all rely on Helm to find the resources belonging to an app, and we did this because Helm looked like a good solution back when we first started. But we now see that we could just rely on kubectl behind the scenes, because Helm's release system just makes things more complicated. We only need the YAML templating part.

I hate YAML templating, though, so I think something like Kubecfg is the better choice there.


> The upshot is that a command such as "deploytool app deploy --red mybranch" does everything a developer would want in one go. That's what we need.

That tool for us is a gitlab pipeline, and I guess the logic in your tool is in our case split between the pipeline and some scaffolding in a build repo. The pipelines run on commit, the image is built, tested, audited, then the yaml is patched and linted as you describe before being cached in a build artifact. The deploy step is manual and tags/pushes the image and kubectl applies the yaml resources in a single doc so we can make one call. We recently added a step to check for a minimal set of ready pods and fail the pipe after x secs if they don't come up, but haven't actually started using it yet.
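
The manual deploy job ends up being roughly this shape (variable names and the readiness check are illustrative, not the exact script):

  # Tag and push the already-built image, then apply the pre-baked manifest document.
  docker tag "$APP_IMAGE:$CI_COMMIT_SHA" "$REGISTRY/$APP_IMAGE:$CI_COMMIT_SHA"
  docker push "$REGISTRY/$APP_IMAGE:$CI_COMMIT_SHA"
  kubectl apply -f manifests/all.yaml
  # The (not yet enabled) readiness gate: fail the job if the rollout doesn't finish in time.
  kubectl rollout status deployment/myapp --timeout=120s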


That sounds similar, except you prepare some of the steps in the pipeline. Sounds like you still need some client-side tool to support the manual deploy, though. That's my point -- no matter what you do, it's not practical to reduce the client story to a single command without a wrapper around kubectl.

Interesting idea to pre-bake the YAML manifest. Our tool allows deploying directly from a local repo, which makes small incremental/experimental tweaks to the YAML very fast and easy. Moving that to the pipeline will make that harder.

Also, you still have to do the YAML magic in the CI. We have lots of small apps that follow exactly the same system in terms of deploying. That's why a single tool with all the scaffolding built in is nice. I don't know if Gitlab pipelines can be shared among many identical apps? If not, maybe you can "docker run" a shared tool inside the CI pipeline to do common things like linting?


> I've looked at Kustomize, and I don't think it's sufficient. For one, as far as I can see, it's not atomic.

Kustomize just applies structured edits to yaml. We run it to apply all the patches and output a single manifest file with all the resources, then send that to the master with kubectl apply. I suspect it's as atomic as anything helm does, but I could be wrong.


The "atomicity" (a misleading term, I agree, but I couldn't think of a better one as I was writing the comment) I was referring to was its ability to do a destructive diff/patch. In other words, if you apply state (A+B+C), then (A+B), it will remove C.

With plain "kubectl apply", there's the "--prune" flag, which is supposed to be able to track upstream resources via annotations. But it's still considered experimental alpha functionality, at least according to the "kubectl --help" for Kubernetes 1.11.9.


Yeah I read your reply above and I do see your point. For our own services that we continuously deploy this really just doesn't come up. If we have an http or rpc service it's going to have a deployment, a service, and maybe an ingress for pretty much all of time. If we needed to remove a thing in that scenario it might be the ingress if we change architecture, but it would be a big enough deal that cleaning up manually wouldn't be an added burden.


Deletion is definitely less common, but we do this all the time. It keeps cruft from accumulating when people forget to delete resources.

It's also nice to be able to do "helm del myapp" and know that everything is wiped. You can do this with "kubectl delete -R -f", but I believe you need the original files. You can of course do something like "kubectl delete -l app=myapp", but this requires consistent use of a common label in all your resources.


You can also use kubectl patch to apply a label to a set of manifests locally before piping them into kubectl apply, e.g.:

  kubectl patch -f input.yaml --local --type merge -p '{"metadata": { "labels": {"key": "value"} } }' -o json | kubectl apply -f - --prune -l key=value


I might be misunderstanding something here, but is helm really atomic?

Sure, it'll manage sets, but will it really flip versions in an atomic way, and does this really matter when it's doing 3 rolling upgrades, without anything to manage which traffic goes where?

It's possible it does more than I think it does, but I'm also wondering if atomic is the right word here?


> I might be misunderstanding something here, but is helm really atomic?

No, the underlying Kubernetes API doesn't have multi-document transactions. What you're getting is closer to best-effort eventual consistency.


Sorry, "atomic" was not the best term here.

I don't know if there is a word for it — it comes up in a lot of situations — but given a set of resources, Helm will diff against Kubernetes and wipe out anything superfluous. So if I've deployed a chart that has (A, B, C) and then do a new deploy of (A, B), then C will be deleted. "Destructive diffing"? I don't know.

Kubernetes itself is not atomic right now. I believe Etcd supports multi-key transactions now, so it could be done.


Right.

But even if kubernetes and etcd supported multi-document transactions and thus gave you the ability to update data atomically, you'd also need to do a blue/green deploy and then atomically switch service calls, while making sure old network traffic/service calls go to old pods and new network traffic/service calls go to new pods.

Pretty complicated, and whilst it can be solved, I don't want us thinking that services will fully function during a helm or even a kubernetes update with major api changes between services. Likely your old service will call the new service and fail. This level of failure might be acceptable, or you can work around it by having retries or keeping APIs backwards compatible for several versions.

Apologies for blabbering; I would consider the current default state of deploys with rolling upgrades to be akin to eventual consistency, but it's possible it's more clever than that.


I believe Kubernetes itself supports that kind of a change - that is, re-applying a config and creating or deleting resources based on the update.

I would say that helm is batch, in that it's applying a bunch of stuff at once, but not doing it transactionally.


> I'd be interested in how Google does these things with Borg. My suspicion is that they're using BCL (which Jsonnet is based on, last I checked) to describe their resources.

Yes. Kubecfg is the closest equivalent for k8s. And it also works the best for me (but I might be biased).


A colleague (Dmitriy Kalinin) recently created this: https://get-kapp.io/

Which does the grouping-of-resources thing and explicitly leaves templating up to you.


Author here; yeah, helm is the part of our stack we’re least happy with tbh (it’s turned into a huge pile of templated yaml files for each project that seems like it might not be maintainable long-term). I’m curious about looking into kustomize, but the package management/rollback capabilities of helm are quite nice; is there a good “pre-baked” solution for that that doesn’t involve helm?


Great article, thanks for writing it.

> I’m curious about looking into kustomize, but the package management/rollback capabilities of helm are quite nice; is there a good “pre-baked” solution for that that doesn’t involve helm?

Not that I'm aware of. I'm only really familiar with helm, kustomize and ksonnet at this point, and of the three only helm took the approach of running a server-side stateful thing that could take responsibility for pseudo-resources called releases. I haven't followed work on the next version closely, but it will be interesting to see what changes as they ditch the client-server architecture. I assume it will be more like terraform with some sort of state file.

Rollbacks for us basically mean go back and redeploy the last good ref. The pipeline is completely deterministic from the commit. The underlying build system that produced the container might not be, but we don't have to rebuild that since it's still sitting in the registry.
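
In practice a rollback looks roughly like this (ref and overlay names are illustrative):

  # Check out the last known-good commit and rerun the deterministic deploy step;
  # the image it references is still in the registry, so nothing is rebuilt.
  git checkout "$LAST_GOOD_REF"
  kustomize build overlays/prod | kubectl apply -f -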


Excellent write-up, thank you. I was also looking to read about how the dev experience changed moving to k8s. Were they using the heroku cli? Did you create an equivalent experience in k8s or adopt a new one?


I had some really good experience with ArgoCD so far, it is agnostic to what you're using as a configuration tool (helm, pure yamls, etc) and it just works and has a nice UI.


Seconding this: helm is good for day-one operations of k8s. If you just want to have a redis deployment running right now, I think it's fine for that. But then managing helm charts becomes its own task. Kustomize seems to be a happy medium. Also, Helm v3 seems to be doubling down on that "second API" level with the addition of embedded Lua in its templates.


Managing Kustomize configuration files is even worse in most regards, and you are likely to have a significantly larger number of Kustomize files than Helm files, especially if you create a common chart and use dependencies correctly within Helm.


One intriguing new development is the possibility of leveraging Helm's chart ecosystem and Kustomize's patching mechanism—see Replicated Ship or the Kustomize Generators and Transformers KEP.


I think it’s important to separate the use cases of a) deploying off-the-shelf software like MySQL and Redis from b) deploying your own custom-built software.

Helm seems a weird choice for B.


Thank you for sharing this! I’m curious to learn more about your “mixed feelings” regarding built-in kustomize support in 1.14?


> Thank you for sharing this! I’m curious to learn more about your “mixed feelings” regarding built-in kustomize support in 1.14?

It's nothing too surprising :). Simply a preference for simple, composable tools. I personally feel like patching yaml client-side is outside the scope of a "ctl" tool.



