CNCF Cloud Native Interactive Landscape (cncf.io)
76 points by pagade on Oct 4, 2020 | hide | past | favorite | 31 comments


The last KubeCon in person was exactly like this page: a few interesting talks, lots of vendor pitches, and vendors trying to convince you that cloud native is the opposite of vendor lock-in. Yeah, maybe for things like OpenTracing, where you have multiple export options.

But why the fuck are Splunk or vSphere here? Just because they support the cloud native unit, containers?

I unfollowed a lot of CNCF-related Twitter accounts and mailing lists to avoid vendor spam and have been happier since.

I don’t want to be that grumpy old man (I’m 30, kinda getting old for our sector, unfortunately), but all of this is too much.


It's even worse. Half the "open source" projects are mixed license these days. All the useful and interesting bits are paid-for extras which involve navigating half-assed vendors with sales teams that don't even understand their own product line. I've spoken to two major well-known players in CNCF and they were a total shit show of incompetence. I'm not sure if they have a maturity index, but I think it probably comes from tarot cards, a crystal ball, or the bottom of a tea cup.

What happened is we traded one vendor lock-in for the management of a hundred different vendors, all pushing incompatible bits of junk that cost a lot of money to stick together, and everyone points fingers at the other vendors. At least when I worked at an MSFT-only shop there was only one vendor to fight.

I'm considerably older than you, and the definition of "getting too old for our sector" is actually probably the tiresome experience we have. You see the forest, not the trees, and you realise the forest floor is made of the death of a thousand stupid ideas and mistakes remarketed in a different way.


Disclosure: I work on Google Cloud.

> Half the "open source" projects are mixed license these days.

I can sympathize though.

The folks who built companies around making something awesome and releasing it as open-source software did so on the implicit assumption that they’d be the natural ones people would pay for “support”. Other people were free to use the software, but outside of explicit “I make this freely available, it’s just a utility” software, the assumption was that people wouldn’t just fork it and build a business that wraps it (especially without contributing back).

Some folks tried open core (as you describe), but it’s pretty painful for both parties. It’s not fun as an engineer to know “this feature this person wants is just arbitrarily disabled in the community edition”, while knowing “Sigh, and because they aren’t willing to pay us, we need to make money somehow”.

I think the current battle over “who can turn it into a service” is actually a pretty interesting one. I found the AGPL to be a fiasco (the way it was written it could be interpreted as a legal virus, defeating even incidental DIY usage), but have more hope for seeing a license that says “Cloud providers can’t just monetize our work; people are still free to run it themselves though as DIY”.

For open source to continue to prosper, it’ll have to figure out the business model.


It doesn’t need a business model. Most good open source exists out of necessity. A lot of this stuff transcends necessity and is about convincing people it is necessary.


30 is the new 24; ESPECIALLY with covid economics =D


Simple explanation of what Cloud Native is or should be:

- Kubernetes is to AWS/Azure/other platforms what Linux was to Unix circa the 2000s: kind of the same, but with more choice, and with the original Unix vendors forced by competition to adopt Linux.

- Kubernetes and the ecosystem around it will make zero sense for you if you haven't adopted "devops culture" (meaning that, at the very least, devs care about operations and IT guys put their things in git) AND you use microservices, or at the very least more than ten different services or replicas.

- Vendor spam is real. YAGNI most of the time. But also different strokes for different folks.

- If you think that many cloud native things are just classic open source software needlessly rewritten in Golang and overengineered - you are probably right.


Except the CNCF ecosystem looks more like OpenStack these days: a collection of software that half works together and eats up the majority of your time doing undifferentiated ops.

Devs should be focusing on the business and growing business metrics, not spending their time learning a new cool tool that plugs into a different tool, all of which is replaced in 3 months with a newer tool.


The majority of CNCF projects are not aimed at business logic developers; they shouldn't need to understand any of this. This is for the DevOps folks supporting those business developers.


You could post this to Twitter as a joke (and people have) with no additional punchline required.

So, you've avoided Vendor lock-in. But at what cost?

"Cloud native" is a buzzword without a meaning. Any modern service, library, or framework understands that it's most likely going to be executed in a Cloud virtualized environment. After that, the differences are negligible.

The real power of the cloud isn't that it's "someone else's computers", it's the rich diverse ecosystem of different services that INTERPLAY with one another in "<provider>-native" ways.

Whether it's AWS, or Azure, or Google. Pick one, and invest your platform into it.

The CNCF landscape may let you migrate between providers more easily, but whatever discounts you might realize on raw virtual instance cost per hour will be MINUSCULE compared to A) the additional overhead of building cross-cloud, and B) the opportunity cost you leave on the table by not being able to use the specialized services from these providers that solve immediate problems you'll instead need to reimplement yourself.

Lastly, and now we're getting into personal-feelings territory instead of raw facts of "total cost of ownership", the whole CNCF leaves a rotten taste in my mouth by pretending to be an impartial, agenda-free organization (a la Apache) when in reality its origins and real mission are Google's attempted play at competing in the cloud wars from 3rd place.

There is no debate about this. At the core of everything in CNCF is Kubernetes. And while you can run Kubernetes on any server, in the cloud or otherwise, managing and maintaining Kubernetes infrastructure is a full-time job that anyone who tries it wants to immediately delegate to a provider. And here comes along Google saying "Well actually, we have a Managed Kubernetes service in GCP, and wouldn't you know it, it's the best one? I mean, you know Kubernetes came from Google originally, right, so it makes sense that we would build the best one."

And everyone nods and goes along with it. What percentage of CNCF platforms are running on GCP? 90%? Higher? And once you embrace one of those Managed Kubernetes services, you're just as locked in as if you had built on AWS Lambda or Azure Functions, only you've spent way more time and money for ZERO benefit to your actual underlying business.


There's a lot more to Kubernetes than avoiding lock-in. Pick your favorite managed provider (GKE?) and stick with it. Use all of the GCP-native tooling too, so that Google can manage your network ingress and load balancers, TLS certs, etc. You are 100% right that there's still lock-in.

But Kubernetes does more than avoid lock-in. Kubernetes creates a common interface so that if you choose GCP and I choose AWS and someone else likes DigitalOcean, we can all benefit from using the same tools across the stack. Very few tools on the linked landscape graphic are specific to a single provider.

If you choose Kubernetes solely to avoid vendor lock-in, then you should reconsider. But if you choose it so that you can easily use any of the tools on this landscape, regardless of where you host your cluster, that's a little different and has a lot more value.
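The "common interface" point is concrete: the same manifest applies unchanged against a GKE, EKS, or DigitalOcean cluster. A minimal sketch (the name and image here are just placeholders):

```yaml
# deployment.yaml -- works on any conformant Kubernetes cluster,
# regardless of which cloud (or bare metal) hosts it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.19
          ports:
            - containerPort: 80
```

You'd apply it with `kubectl apply -f deployment.yaml` against whichever cluster your kubeconfig points at; only the cluster credentials are provider-specific.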


I don't see it as Kubernetes being the common interface but containers.

So long as an organization uses a container-first philosophy or container-centric workflows for development, it can deploy pretty much anywhere.

Kubernetes gets the buzz but containers are the real key to flexibility.


There's way more to this than just the containers (the apps) themselves, though. Orchestration is a very big piece of the puzzle, and integrating into that layer is traditionally very cloud-specific. Without a common orchestration layer you end up reinventing the wheel a lot. Containers as a common interface is good, but having the orchestration layer offer a common interface (K8s) is also important.


Why would you end up reinventing the wheel, though, if the orchestration layer were not common? No, you would be using the one from your cloud provider. I think that was the point of your great-grandparent comment.


I do not understand these arguments that always come up. Everything Kubernetes gives you out of the box you'd otherwise have to reimplement yourself for a production-grade system: service discovery, networking, persistence and volume management, all the way to enforcing processes.

Maybe you have to spend some time learning those things, but it's much less, from a total cost of ownership perspective, than maintaining your own custom Ansible/Puppet/Chef/Terraform god-knows-what scripts.
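Service discovery is a good example of the "out of the box" claim: a Service object gives pods a stable cluster DNS name with no extra tooling. A minimal sketch (names and ports are placeholders):

```yaml
# service.yaml -- any pod in the same namespace can now reach the
# backing pods at the DNS name "api", load-balanced across replicas.
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api        # routes to pods labeled app=api
  ports:
    - port: 80       # port clients connect to
      targetPort: 8080  # port the container listens on
```

Rolling the equivalent yourself means wiring up a load balancer, health checks, and a registry like Consul, which is exactly the undifferentiated work being argued about.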


I'm not comparing Kubernetes vs rolling the same functionality yourself.

I'm comparing it vs. onboarding to the stack of load balancers/virtual networks/container registries/service discovery/etc. offered natively by the major cloud providers, which are going to be much better defined for solving real problems than K8s' generic platform.


Kubernetes doesn't provide networking. You have to use Cilium, or a flat layer 2, or fun BGP tricks to get the packets to go where it wants them. It doesn't provide storage either: you have to run Ceph or Gluster or vanilla NFS and hook it up.
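That split is visible in the storage API: Kubernetes defines the PersistentVolume abstraction, but you point it at a backend you run yourself. A sketch, assuming an NFS server you've set up (the address and export path are placeholders):

```yaml
# pv.yaml -- Kubernetes tracks and binds this volume, but the actual
# NFS server behind it is entirely your problem to run and maintain.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.5      # placeholder: your NFS server
    path: /exports/data   # placeholder: your export path
```

Managed offerings hide this by shipping a CSI driver for the provider's block storage, which is part of why self-hosting feels so much harder.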


Kubernetes is just an orchestrator. It's not a platform.


"Cloud Native" with tools like K8s is just sooo much more work.

I mean, even with services like EKS and Fargate.

I was a bit disappointed when I looked into serverless technology. It was sold to me as the savior, but it still leaves a lot of stuff I have to do myself.

But Kubernetes? That's a whole nother level!

I had to try K8s first to really appreciate what serverless technology gives me.


Disclosure: I work on Google Cloud.

Different people have different challenges.

For companies being built from scratch today, you get to pick your technologies and providers as you see fit. In that case, I’ve always been sad when I see folks doing least-common denominator and excusing it via “multi cloud” or “no lock in”. I remind them that “you don’t need to pick GCP, but your biggest challenge will be not going out of business. Focus on that. If you end up with a lock-in problem and you’re successful, that’s a good problem to have”.

However, there are many folks who end up in either a multi-cloud or at least hybrid cloud setup.

Any existing enterprise with sufficient infrastructure is going to start their migration to cloud as a hybrid affair, because they aren’t just going to burn their equipment to the ground.

Even companies “born in the cloud” eventually get larger and often acquire other companies. If the acquired company was on Azure but you’re an AWS shop, you are suddenly multicloud. If the acquired company is a serious enough fraction of your infrastructure, or it’s locked in economically via contracts, you have the same model as on-prem plus cloud. As with hybrid, some companies push themselves to get out of this state as quickly as possible. Others accept it as “we’ll always have bits and pieces everywhere”. It really depends on the company and their business values.

Finally, heavily regulated industries often have dual-sourcing or, as an example, “must be able to get off within 12 months” restrictions. Those folks are kind of in the “must always be either multi-cloud or at least hybrid” camp, and really appreciate a consistent “universe”.

tl;dr: you can impugn Craig’s motivation for creating the CNCF, but there are people and businesses that derive value from consistency and a theoretical ability to switch.


> If you end up with a lock-in problem and you’re successful, that’s a good problem to have

Damned if you do, damned if you don't. If you don't go out of business in the early stages, you become powerless against the cloud provider you are now locked into upping their rent-extraction and draining you until you go out of business.

"Your margin is my opportunity." — Jeff Bezos


It’s certainly a double edged sword, but as you grow it also makes the “reward” for moving higher.

“We can save $100k/yr if we do this large rewrite” is much less attractive than “We can save $10M/yr. Please give us a team of X people to do so”.

I also don’t think your rent-extraction hypothesis holds. Despite some obvious call-outs (like App Engine realizing in 2011, when it went from Preview to GA, that it should have been charging for time you were blocked on I/O, not just CPU cycles retired), prices are generally stable or decreasing. Mostly, this is a function of hardware trends and competition: the cost per thread or per byte goes down over time. So you aren’t likely to be drained out of business by decisions the providers make. It’s totally possible that your cost structure never worked out, but that isn’t “upping their rent-extraction”.


GCP has done more price increases than that.

Heck, it charges for in-use public IPs attached to GCE nodes. That was also a price increase.


One of my friends made a 1000-piece jigsaw puzzle[1] of the CNCF Landscape for a laugh (all sold out). It was fun and challenging to put all the pieces together, especially because my children were a little confused by the various logos.

[1] https://www.etsy.com/listing/854562986/cncf-landscape-puzzle


I feel dizzy now, thank you.


being clicky doesn't make it any less of a hellscape


Anyone know some resources for running Kubernetes outside the cloud, like guides/best practices/example setups?

Like 1-2 machine scenarios inside air-gapped companies: collecting and processing measurements and displaying them, used by managers at industrial companies who expect a one-click app install, etc.?

I really struggle with Kubernetes in that case, and there's barely any documentation/best practices.

I found it easy to run Kubernetes on Google, Amazon, Hetzner, but outside of that it feels like hell.


We use Rancher (actually just RKE wrapped in some simple Ansible) for this and it works really well.

They have a whole guide for running air-gapped [0]. However, we found that our security requirements (extremely heavily regulated, think nation state and identity) only come into play once the data is ON the systems. So we can happily build a new cluster/environment with internet connections to Docker Hub or Helm or whatever, and then, as soon as we have data coming in, we kill the connectivity -- usually you can wrangle something like this.

In either case, Rancher makes running on-prem a really nice experience for us, vs. the clusterfsck of kubespray and OpenShift it replaced.

YMMV but I would recommend giving it a shot.

ps -- I would probably avoid K3s if you actually need some redundancy, unless you're happy managing some kind of distributed MySQL setup for it.

0: https://rancher.com/docs/rancher/v2.x/en/installation/other-...


My company has deployment scenarios similar to what you're describing (basically building appliances that happen to use K8s as a platform), and we've hit similar problems.

It's not (yet?) a very common use case for K8s. Still, so far we haven't found too many problems, and we're using full-blown kubeadm for deployment. MicroK8s seemed like a massive dead end when we tried it out; it was very limited and crashy for some reason.

Still, we've had a few complicated networking problems that go against some core assumptions of K8S...


In that case you probably want to be looking at K3s and not full-blown Kubernetes.


We are already looking at K3s and MicroK8s.

Before that we used Rancher 1.6 and Docker, but Rancher 1.6 is dead and was a hell of a piece of software in its own right (Java, extremely high memory usage, obscure bugs, and a team that abandoned it pretty early once 2.0 was released).


> collecting and processing measurements and displaying them, used by managers of industry companies which expect a one click app install etc?

Kubernetes is just a scheduler/orchestrator. It's not the full-blown PaaS you seem to expect it to be. You are supposed to glue parts together and build a PaaS based on your needs.



