
They're both complex. But one of them has ten times as many components as the other, and requires you to use them. One of them is very difficult to install - so much so that there are a dozen different projects intended just to get it running - while the other is a single binary. And while one of them is built around containers (and all of the complexity that comes with interacting with them / between them), the other one doesn't have to use containers at all.


> But one of them has 10 times the components than the other

I've said this before: Kubernetes gives you a lot more, too. For example, in Nomad you don't have secrets management, so you need to set up Vault. Both Nomad and Vault need Consul for Enterprise setups, and Vault needs two Consul clusters in that configuration. So now you have 3 separate Consul clusters, a Vault cluster, and a Nomad cluster. So what did you gain, really?


Kubernetes' secrets management is nominal at best. It's basically just another data type that has K8S' standard ACL management around it. With K8S, the cluster admin has access to everything, including secrets objects. It's not encrypted at rest by default, and putting all the eggs in one basket (namely, etcd) means they're mixed in with all other control plane data. Most security practitioners believe secrets should be stored in a separate system, encrypted at rest, with strong auditing, authorization, and authentication mechanisms.
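To make the "not encrypted at rest" point concrete, here's a quick shell illustration (the secret value is made up). A Kubernetes Secret's `data` fields are only base64-encoded, so anyone who can read the object, or read etcd directly, recovers the plaintext trivially:

```shell
# What lands in etcd for a Secret value of "s3cr3t" (base64, not encryption):
echo -n 's3cr3t' | base64
# What any reader with get-access to the Secret recovers:
echo -n 'czNjcjN0' | base64 -d
# In practice: kubectl get secret my-secret -o jsonpath='{.data.password}' | base64 -d
```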


It's "good enough" for most and extension points allow for filling the gaps.

This also dodges the crux of GP's argument -- instead of running 1 cluster with 10 components, you now need a half dozen clusters with 1 component each, but oops they all need to talk to each other with all the same fun TLS/authn/authz setup as k8s components.


I'm a little confused. Why does the problem with K8S secrets necessitate having multiple clusters? One could take advantage of a more secure secrets system instead, such as Hashicorp Vault or AWS Secrets Manager.


The point is that once you're talking about comparable setups, you need all of Vault/Nomad/Consul and the complexity of the setup is much more than just "one binary" as hashi likes to put it.

> So now you have 3 separate Consul clusters, a Vault cluster, and a Nomad cluster. So what did you gain really?

GP's point was already talking about running Vault clusters, not sure you realized we aren't only talking about nomad.


The only thing I was trying to say is that although K8S offers secrets "for free," it's not best practice to consider the control plane to be a secure secrets store.


That's false. Vault has integrated storage and no longer needs Consul.

If you want to have the Enterprise versions (which aren't required), you just need 1 each of Nomad, Consul, Vault. Considering many people use Vault with Kubernetes anyway (due to the joke that is Kubernetes "secrets"), and Consul provides some very nice features and is quite popular itself, that's okay IMHO. Unix philosophy and all.


This is just false. I've run Vault in an Enterprise and unless something has changed in the last 12 months, Hashicorp's recommendation for Vault has been one Consul cluster for Vault's data store, and one for its (and other applications') service discovery.

Sure, Kubernetes's secrets are a joke by default, but they're easily substituted with something one actually considers a secret store.


https://www.vaultproject.io/docs/configuration/storage/raft

It's new, but I think it's quickly becoming the preferred option. I found that trying to set up nomad/consul/vault as described in the hashi docs creates some circular dependencies tbh (e.g. the steps to set up Nomad reference a Consul setup, the steps for Vault mention Nomad integration, but there's no clear path outside the dev server examples of getting there without reading ALL the docs/references). There are few good docs on bootstrapping everything in one shot from scratch the way most Kubernetes bootstrapping tools do.
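For reference, a minimal sketch of what integrated Raft storage looks like in a Vault server config (no Consul needed). The path, `node_id`, addresses, and TLS file locations are placeholders, and real clusters need proper certificates:

```shell
# Write a minimal Vault config using integrated (Raft) storage.
cat > vault.hcl <<'EOF'
storage "raft" {
  path    = "/opt/vault/data"
  node_id = "vault-1"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/etc/vault.d/tls/vault.crt"
  tls_key_file  = "/etc/vault.d/tls/vault.key"
}

api_addr     = "https://vault-1.example.internal:8200"
cluster_addr = "https://vault-1.example.internal:8201"
EOF
# vault server -config=vault.hcl   # requires the vault binary
```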

Setting up an HA Vault/Consul/Nomad setup from scratch isn't crazy, but I'd say it's comparable level to bootstrapping k8s in many ways.


Cool, so that's certainly new. But even then, you're dealing with the Raft protocol. The difference is it's built into Nomad, compared to Kubernetes where it's a separate service. I just don't see Nomad and Co being that much easier to run, if at all.

I think Nomad's biggest selling point is that it can run more than just containers. I'm still not convinced that it's much better. At best it's equal.


> you're dealing with the Raft protocol. The difference is it's built into Nomad, compared to Kubernetes where it's a separate service

I don't really follow this. etcd uses Raft for consensus, yes, and it's built in. But Kubernetes components don't use Raft across independent services; etcd is the only component that requires consensus through Raft. In the Hashi stack, Vault and Nomad (at least) both require consensus through Raft, so the effect is much bigger in that sense.

> I think Nomad's biggest selling point is that it can run more than just containers. I'm still not convinced that it's much better. At best it's equal.

Totally agree. The driver model was very forward looking compared to k8s. CRDs help, but it's putting a square peg in a round hole when you want to swap out Pods/containers.


It's not that circular - you start with Consul, add Vault and then Nomad, clustering them through Consul and configuring Nomad to use Vault and Consul for secrets and KV/SD respectively. And of course it can be done incrementally (you can deploy Nomad without pointing it to Consul or Vault, and just add that configuration later).
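Roughly, the incremental order looks like this. The flags are minimal, the hostnames/paths are placeholders, and TLS/ACL bootstrapping is omitted entirely; the commands are shown commented out since each needs its binary installed and runs as a long-lived daemon:

```shell
# 1. Consul first (service discovery + KV):
#      consul agent -server -bootstrap-expect=3 -data-dir=/opt/consul
#
# 2. Vault next, using Consul (or integrated Raft) as storage:
#      vault server -config=/etc/vault.d/vault.hcl
#
# 3. Nomad last; it auto-discovers a local Consul agent, and pointing it at
#    Vault is just a vault {} stanza added to its config later:
#      nomad agent -server -data-dir=/opt/nomad -config=/etc/nomad.d
```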


I don't mean a literal circular dependency. I mean the documentation doesn't clearly articulate how to get to having all 3x in a production ready configuration without bouncing around and piecing it together yourself.

For example, you mention starting with consul. But here's a doc on using Vault to bootstrap the Consul CA and server certificates: https://learn.hashicorp.com/tutorials/consul/vault-pki-consu...

So I need vault first. Which, oops, the recommended storage until recently for that was Consul. So you need to decide how you're going to bootstrap.

Vault's integrated Raft storage makes this a lot nicer, because you can start there and bootstrap Consul and Nomad after, and rely on Vault for production secret management, if you desire.


> This is just false.

No it isn’t.

> I've run Vault in an Enterprise

At this point I am starting to doubt that claim.


It has been longer than 12 months that Vault has had integrated storage.


Kubernetes native secrets management is not very good, so you're going to end up using Vault either way.


Also, Kubernetes can be just a single binary if you use k0s or k3s. And if you don't want to run it yourself you can use a managed k8s from AWS, Google, Digital Ocean, Oracle...
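On the single-binary point, both projects document a one-command install. Shown commented out since they need root and network access (run them on a throwaway VM, not your laptop):

```shell
# k3s, per its documented installer:
#   curl -sfL https://get.k3s.io | sh -
#
# k0s is similar:
#   curl -sSLf https://get.k0s.sh | sudo sh
```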


> Both Nomad and Vault need Consul for Enterprise set ups, of which Vault needs 2 Consul clusters for Enterprise setups. So now you have 3 separate Consul clusters, a Vault cluster, and a Nomad cluster.

This is incorrect. You don't need Consul for Enterprise. Vault doesn't need two Consul clusters (it doesn't need Consul at all, if you don't want it).


That surprises me. Does Google have a more complete secrets-management system for its in-house services?


IIUC, despite K8s having been started at Google by Go enthusiasts with good knowledge of Borg, the goal was never to write a Borg clone, much less a replacement for Borg.

And after so many years of independent development, I see no reason to believe that K8s resembles Borg any more than superficially.

This seems to be very much acknowledged by the Kubernetes authors. Current Borg users, please correct me if I'm wrong.


Thanks.


You gained the suffering of dealing with split-brains in Consul and Vault ;-)


Kubernetes has been a single binary with hyperkube for over 5 years. This argument is really tiring.


Which is which?


I believe that the one that requires containers is Kubernetes. Nomad doesn't require containers; it has a number of execution backends, some of which are container engines, some of which aren't.
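As a hedged sketch of a non-container backend, here's a Nomad job using the `exec` driver, which runs a plain binary with OS-level isolation and no container image at all. The job, group, and task names are made up:

```shell
cat > date.nomad <<'EOF'
job "date" {
  datacenters = ["dc1"]

  group "g" {
    task "print-date" {
      driver = "exec"        # fork/exec a binary, not a container engine
      config {
        command = "/bin/date"
      }
    }
  }
}
EOF
# nomad job run date.nomad   # requires a running Nomad agent
```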

Nomad is the single binary one, however this is a little disingenuous as Nomad alone has far fewer features than Kubernetes. You would need to install Nomad+Consul+Vault to match the featureset of Kubernetes, at which point there is less of a difference. Notwithstanding that, Kubernetes is very much harder to install on bare metal than Nomad, and realistically almost everyone without a dedicated operations team using Kubernetes does so via a managed Kubernetes service from a cloud provider.


From parent's comment:

k8s = 10x the components & difficult to install.

Nomad = single binary, works with but doesn't require containers.


k0s is a single binary.



