
Although I understand the fear that many have of being left behind the technology curve by not having the time - or the chance - to run Kubernetes in their day-to-day work, we really must appreciate that Kubernetes really is the future of infrastructure. For those looking at Kubernetes with suspicion, it is a natural instinct to think of K8s as a threat to all the knowledge we have built in the past few years, but it doesn't have to be that way. So many things that we would have built ourselves (deployments, upgrades, monitoring, etc.) can now be streamlined with K8s, and existing knowledge around those topics will only make our transition to Kubernetes faster (besides, we can still use most of our expertise within K8s anyway).
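
(To make the "streamlined deployments" point concrete, here is a minimal sketch using the official Kubernetes Python client - the "web" name and "nginx:1.25" image are placeholders, not from the thread. A single Deployment object buys you replica management and rolling upgrades.)

    from kubernetes import client, config

    config.load_kube_config()  # reads ~/.kube/config
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # K8s keeps 3 pods running, replacing any that die
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(name="web", image="nginx:1.25"),
                ]),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=deployment)
    # Updating the image and patching the Deployment triggers a rolling upgrade.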

Managed solutions make K8s easy to use, while we still benefit from being able to run our workloads on any cloud vendor. In one word: portability, which in this day and age of cloud vendor lock-in is to be protected at all costs.

I know that some organizations allow employees to allocate time to explore new technologies and learn new practices. If your organization has no policy around this, it is worth asking. Ultimately it will benefit the organization and the business as a whole, as they will build a solid foundation to more rapidly transition and execute on their digital products. Kubernetes is good for business.




> K8s as a threat to all the knowledge we have built in the past few years, but it doesn't have to be that way

All this knowledge is still relevant. Kubernetes is an extra layer on top of all the classical technologies out there. It is not a replacement - besides some fancy experiments, it doesn't replace GNU/Linux.

If you're going to run a Kubernetes cluster, you still have to know all the "classical" networking, GNU/Linux system administration and architecture, etc. Having a fancy task scheduler for containers, and a tool that sets up clever network bridges (or whatever your preferred CNI flavor does), and a bunch of other nice extras does not remove the requirement to know what's underneath.

And in any non-standard situation, chances are you'll have to build things from scratch. For example, I want to run a K8s cluster over a cjdns overlay network (which doesn't have IPv4, so Flannel or Weave won't work). I haven't figured this out yet.

And if you're not the one setting up and managing the infrastructure, it's about the same as it has always been - just a different user interface to manage your deployments/storage/networking.

> Managed solutions make K8s easy to use

I would disagree. "Using" is the same (that's the whole point of K8s). "Install and maintain" is easier, but only in the sense that you don't do it yourself. ;)


> All this knowledge is still relevant. Kubernetes is an extra layer on top of all the classical technologies out there. It is not a replacement - besides some fancy experiments, it doesn't replace GNU/Linux.

> If you're going to run a Kubernetes cluster, you still have to know all the "classical" networking, GNU/Linux system administration and architecture, etc. Having a fancy task scheduler for containers, and a tool that sets up clever network bridges (or whatever your preferred CNI flavor does), and a bunch of other nice extras does not remove the requirement to know what's underneath.

This, so much. I went from one year as a junior HPC cluster admin (a glorified title; I was a software builder who then focused on container usage) to a 6-month internship as part of an OpenShift team. I'm fairly good at maneuvering my way around a system and getting things working, but being thrown head first into that, I realized how little I actually understood about systems, particularly anything networking-related. I didn't have a lot of time, and I learned a lot about OpenShift and K8s in general, but I felt more like an advanced user who could explain things to the others trying to learn their way around, and build small tools, rather than an admin of the platform. Maybe I'm selling myself short and experiencing imposter syndrome mixed with being dumped into huge, pre-existing, and foreign infrastructure, but it was an eye-opening experience.

Since that ended, I'm at a new gig as a "standard" sysadmin. I'm using it to skill myself up and take the time to really understand as many of the layers, and how they work together, as I can, both on and off hours.

I'd love to get back into the K8s area - it's such a fascinating workflow and paradigm - but I have some personal progress to make first.


I am making my money with Kubernetes and the ecosystem around it, and I wholeheartedly disagree that K8s is the future.

However, I do agree about vendor lock-in, at least to a degree. KVM and Xen didn't really prevent it in cloud environments.

I do think the main takeaway from a lot of this is the way software is designed: it is becoming more self-contained. Docker in many respects is like a static binary; FatJARs are a similar approach, and Go in general seems to follow that path.

What Kubernetes really does is provide an agreed-upon standard for infrastructure, similar to what Docker gave us for software packages.

They enabled concepts like unikernels to at least become interesting, because they smoothed the way for thinking of software as something that should be self-contained.

I think the future really is one where Kubernetes and Docker are just annoying overhead, where we find it odd how something "emulates" them, just as terminal emulators emulate... well, terminals.

We are in a feedback loop where we put a tighter and tighter corset on the software we develop. First there were compute clouds, where developers learned it's bad to keep state around; then there was Docker; then Kubernetes. Practices that have been best practices for a long time are increasingly "forced" to be followed, especially because whoever provides your infrastructure and the developer are able to agree on the interface.
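
(As a concrete illustration of the kind of practice that gets "forced": twelve-factor style configuration, where the app keeps no local state and reads everything from its environment. A minimal sketch; the variable names are made up.)

    import os

    # Twelve-factor style: all config comes from the environment, so the
    # same container image runs unchanged in dev, staging, and production.
    DATABASE_URL = os.environ["DATABASE_URL"]            # injected by the platform
    CACHE_TTL = int(os.environ.get("CACHE_TTL", "60"))   # optional, with a default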

Docker and Kubernetes are standards due to their dominance, similar to Internet Explorer back in the day and Chrome today. As of now there are only minimal written specifications; most of the behavior is standardized by the implementation. Hopefully this will change some day, stabilizing the interface and opening the door to competing implementations, so more innovation can happen outside the boundaries of these projects.

Maybe this has a positive influence on complexity as well.


I'm sorry, but this reads like some mix of a sales pitch and religious preaching.

K8S doesn't eliminate the workflow for "deployments, upgrades, monitoring, etc." - it just black-boxes it. It also assumes out of the gate that everything needs HA, needs to scale to 1,000,000 instances/s, etc...

Over and over and over, people show examples (I'm guilty too) of running internet-scale applications on a single load-balanced system with no containers, orchestration, or anything.

So please stop preaching this as something for general computing applications - it's killing me because I've got people above me on my ass about why I haven't moved everything to Kubernetes yet.


> K8S doesn't eliminate the workflow for "deployments, upgrades, monitoring, etc." - it just black-boxes it.

Kubernetes does not black-box anything. At most it abstracts the computer cluster composed of heterogeneous COTS computers, as well as the heterogeneous networks they communicate over and the OSes they run on.

I'm starting to believe that the bulk of the criticism directed at Kubernetes is made up by arrogant developers who look at a sysadmin job, fail to understand or value it, and proceed to pin the blame on a tool just because their hubris doesn't allow them to acknowledge that they are not competent in a different domain. After all, if they are unable to get containerized applications to deploy, configure, and run on a cluster of COTS hardware communicating over a software-defined network spanning both intranet and internet, then of course the tool is the problem.


It's the exact opposite: I don't think that stuff should be abstracted away.


> It's the exact opposite: I don't think that stuff should be abstracted away.

Why not? The Kubernetes/serverless/DevOps people have a compelling argument: organizations can move faster when dev teams don't have to coordinate with an ops/sysadmin function to get anything done. If the ops/sysadmin/whatever team instead manages a Kubernetes cluster and devs can simply be self-service users of that cluster, then they can move faster. That's the sales pitch, and it seems reasonable (and I've seen it work in practice when our team transitioned from a traditional sysadmin/ops workflow to Fargate/DevOps). If you want to persuade me otherwise, tell me about the advantages of having an ops team assemble and gatekeep a bespoke platform, and why those advantages beat the k8s/serverless/DevOps position.


One of the things I see ignored in these discussions is the strategic timeline. Yes, dev teams can yeet out software like crazy without an ops team. But eventually you build up this giant mass of software the dev team is responsible for. Ops was never involved, until one day the mgmt chain for the dev team realizes it can free up a bunch of capacity by dumping those responsibilities onto ops.

IMO, some of these practices come from businesses with huge rivers of money that can hire and retain world-class talent. I'd like to see some case studies of how it works when your tiny DevOps team is spending 80% of their time managing a huge portfolio of small apps. How then do you deliver "new, shiny" business value and keep devs and business stakeholders engaged and on board?


I might be misunderstanding you, but this line makes me think you misunderstood the k8s/serverless/devops argument:

> your tiny DevOps team is spending 80% of their time managing a huge portfolio of small apps

In a DevOps world (the theory goes), the DevOps team supports the core infrastructure (k8s, in this case) while the dev teams own the CI pipelines, deployment, monitoring, etc. The dev teams operate their own applications (hence DevOps); the "DevOps team" just provides a platform that facilitates this model. Basically, tech like k8s, serverless, Docker, etc. frees dev teams from needing to manage VMs (bin-packing applications into VM images, configuring SSH, process management, centralized logging, monitoring, etc.) and from needing the sysadmin skillset required to do so well [^1]. You can disagree with the theory if you like, but your comment didn't seem to be addressing the theory (sincere apologies, and please correct me if I misunderstood your argument).

[^1] Someone will inevitably argue that appdevs should learn to "do it right" and pick up the sysadmin skillset, but such sysadmin/appdev employees are rare and expensive, and it's cheaper to have a few of them build out Kubernetes solutions that the rest of the non-sysadmin appdevs can use much more readily.


I approached K8s specifically from the ops point of view. It simplifies the story of supporting many applications immensely; in fact, my first production deployment was done explicitly for that reason. We have over 60 applications, and we can't really reduce that number without an unholy mess of rewrites that isn't certain to reduce it at all.

Come Kubernetes, we have a way to black-box developer excesses and push 12-factor practices onto them, and out of the 60+ apps we now really only care about maybe 5 classes of them, as they are commonalized enough that we can forget about them most of the time.

At a different job, we're pushing heavily towards standardized applications, to the point of ops writing frameworks for the devs to use. Thanks to k8s we can easily leverage that, instead of spending lots and lots of time on individual cases.


> It also assumes out of the gate that everything needs HA, needs to scale to 1,000,000 instances/s, etc...

k8s makes no assumptions about your workloads; it just gives you tools. And it's super useful even if you don't need HA or to scale to a million instances.

Most production apps still need to manage deployments and rollbacks, configuration, security credentials, and a whole bunch of non-scale related things. And k8s makes a lot of that significantly more manageable.
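
(For instance, configuration and credentials live in ConfigMaps and Secrets that any app or script can read through the API. A minimal sketch with the official Python client; the object names and keys are made up.)

    import base64
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside a pod
    v1 = client.CoreV1Api()

    # ConfigMap values arrive as plain strings...
    cm = v1.read_namespaced_config_map("app-config", "default")
    log_level = cm.data["log_level"]

    # ...while Secret values are base64-encoded by the API.
    secret = v1.read_namespaced_secret("db-credentials", "default")
    password = base64.b64decode(secret.data["password"]).decode()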

Of course, this is overkill for a single application, but as you start adding more applications that need to be managed, the benefits really start to add up.

If you give something like GKE a chance, you might be pleasantly surprised. :-)


> K8S doesn't eliminate the workflow for "deployments, upgrades, monitoring, etc." - it just black-boxes it. It also assumes out of the gate that everything needs HA, needs to scale to 1,000,000 instances/s, etc...

What I've seen, anecdotally, is that many ops-background people don't "get" why kubernetes is such a big deal. They rightfully assert that they can already do everything, they already know how to do everything, and they can do it without the overhead (both cognitive and in terms of resource utilization) of k8s.

But if you are writing and deploying code - especially if you're not in a terribly agile organization - k8s eases so many real pain points of the "old" models, pain points that ops teams may be only vaguely aware of. If you need a certain dependency, a new piece of software, an entirely new language or approach, or a new service, you now have the ability to do it directly and immediately.

I can't tell you what a big deal it is going to be for a developer at a random bigco to be able to run their code without waiting for ops to craft a VM with all the right bits for them.

k8s solves real problems. If you have a monolith and need to figure out how to scale it, that's not where k8s shines. But with lots of small workloads, or dynamic workloads, or existing dev-vs-ops organizational hurdles, it can really be a game changer.
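
(The "do it directly and immediately" part, sketched with the official Python client: a dev ships whatever dependencies they need inside an image and runs it as a one-off Job, no VM crafting involved. The image name and command are placeholders.)

    from kubernetes import client, config

    config.load_kube_config()
    batch = client.BatchV1Api()

    job = client.V1Job(
        metadata=client.V1ObjectMeta(name="one-off-task"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",  # run to completion, don't restart
                    containers=[client.V1Container(
                        name="task",
                        image="registry.example.com/team/task:latest",
                        command=["python", "run_task.py"],
                    )],
                ),
            ),
        ),
    )
    batch.create_namespaced_job(namespace="default", body=job)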



Nice post! But can you really compare AWS with k8s? Even with Kubernetes, most companies still won't want to run it themselves; they'll either use VMs or (where possible) a managed Kubernetes, and that's just one more product for AWS. In the end, not everything will be k8s - you still need DNS, object storage, block storage, etc. I don't see why AWS couldn't thrive in a scenario where everyone uses k8s.

Quite the opposite: k8s isn't easy to set up, run, or maintain. A large company running huge clusters is probably more capable of making it appear smooth than some small hoster with only a few servers.


> we really must appreciate that Kubernetes really is the future of infrastructure

No, it isn't. It's extra complexity most don't need.


Oh yeah? How are you (or your company) running your applications?



