Hacker News | sethvargo's comments

Can you help me understand how these changes would be _more_ than EKS?


Sorry, misread the original post, it would be the same.


Attempting to capitalize on a competitor's announcement isn't cool and doesn't look great.


I don't work for Azure, and having used both GKE and AKS I can say GKE is superior. However, I don't understand why this comment isn't cool. It's not like they're capitalizing on your misery; it's a decision you made, and it's fair game IMO for them to emphasize their advantage over you.


Seems fine to me. Frankly, what isn't cool and doesn't look great is you not disclosing you work for Google in this comment.


It's actually appreciated. When a vendor makes a change to screw customers, customers like me appreciate hearing from the vendor's competitors.


Thank you for the feedback. The management fee is per cluster. You are not billed for replicated control planes. You can use the pricing calculator at https://cloud.google.com/products/calculator#tab=container to model pricing, but it should work out to $73/mo regardless of nodes or cluster size (again, because it's charged per-cluster).
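(The arithmetic: the fee is $0.10 per cluster per hour, and an average month has about 730 hours, so $0.10 × 730 ≈ $73.)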

There's also one completely free zonal cluster for hobbyist projects.


Seth — I appreciate you being here to take feedback, and the clarification as well. The very surprising email I received this morning is hazy on the details, and the docs linked from the email are not updated yet.

The main issue is that a free control plane and a paid control plane lead to two very different Kubernetes architectures, and per your docs, those decisions made at the start are very much set in stone. You cannot change your cluster from a regional cluster to a single-zone cluster, for example. So you have customers who built their stacks around your free control plane, and you're turning the screws by adding a cost for it, but they cannot change the type of their cluster to optimise their spend. That's entrapment.

You should keep existing clusters on the pricing model they were built under, and apply this change only to clusters created after today.

That said, many of us made a bet on GCP. For us in particular, we made that bet to the point that our SQL servers are on AWS, but we still switched to GCP for ‘better’ Kubernetes and for not being nickel-and-dimed, since AWS had a charge that looked designed to convey that they'd much rather have you use their own stuff than Kubernetes. It is a relatively trivial amount, but it makes a world of difference in how it feels, and you guys know more than anyone how many of these GCP vs AWS decisions are made not on data sheets but on the ‘general feel’, for lack of a better word.

AWS’ message is that they’re the staid, sort of old fashioned, but reliable business partner. GCP’s message, as of this morning, is stop using GCP.


Thank you <3. I apologize that the email was hazy on the details. I can't un-send it, but I'll work with the product teams to make sure future emails are crystal clear. I'd like to learn more about what you mean by outdated docs; the documentation I'm seeing appears to have been updated. Can you drop me a screenshot, maybe on Twitter? (Same username; DMs are open.)

These changes won't take effect until June - customers won't start getting billed immediately. I'm sorry that you feel trapped, that's not our intention.

> You should keep existing clusters in the pricing model they’ve been built in, and apply this change for clusters created after today.

This is great feedback, but clusters should be treated like cattle, not pets. I'd love to learn more about why your clusters must be so static.


> This is great feedback, but clusters should be treated like cattle, not pets. I'd love to learn more about why your clusters must be so static.

What's inside our clusters is indeed cattle, but the clusters themselves carry a lot of config that is set via the GCP UI for trivial things like firewall rules. Of course we could script and automate it, but your CLI tool also changes fast enough that tracking it becomes an ongoing maintenance burden shifted from DevOps to engineers. In other words, it will likely incur downtime due to unforeseen small issues.

It's also in your interest that we don't do this and that clusters stay as static as possible right now, since if we're going to risk downtime by moving clusters anyway, we're definitely moving that cluster back to AWS.


Hmm - have you considered a tool like Terraform or Deployment Manager for creating the clusters? In general, it's best practice to capture that configuration as code.


Managing clusters in Terraform is not enough to "treat clusters like cattle". Changing a cluster from zonal to regional in the Terraform configuration will, upon a terraform apply, first destroy the cluster and then re-create it. All workloads will be lost.

I'm sure there are tools out there to help with cluster migrations, but it is far from trivial.


I think it’s a bit optimistic to assume that your customers will just change their deployment model because you introduced a fee.

You provide a web interface, so it’s reasonable to assume people will use it.


Cloud providers assume everyone is just like them, or like Netflix: Load balancers in clusters balancing groups of clusters. Clusters of clusters. Many regions. Many availability zones. Anything can be lost, because an individual data centre is just 5% of the total, right?

Meanwhile most of my large government customers have a couple of tiny VMs for every website. That's it. That's already massive overkill because they see 10% max load, so they're wasting money on the rest of the resources that are there only for redundancy. Taking things to the next level would be almost absurd, but turning things off unnecessarily is still an outage.

This is why I don't believe any of the Cloud providers are ready for enterprise customers. None of you get it.


I think you're wrong -- containers aren't ready for legacy enterprise, VMs are a better choice of abstraction for an initial move to cloud.

Get your data centers all running VMWare, then VMDK import to AWS AMIs, then wrap them all in autoscaling groups, figure out where the SPOFs have moved to, and only then start moving to containers.

In the mean time, all new development happens on serverless.

Don't let anything new connect to a legacy database directly, only via API at worst, or preferably via events.


Have you considered Google App Engine in standard mode? That seems like a good fit based on your explanation, but I could be wrong.


I had a similar conversation with a government customer, saying that they should pool their web applications into a single shared Azure Web App Service Plan, because then instead of a bunch of small "basic" plans they could get a "premium" plan and save money.

They rejected it because it's "too complex to do internal chargebacks" in a shared cluster model.

This is what I mean: The cloud is for orgs with one main application, like Netflix. It's not ready for enterprises where the biggest concern is internal bureaucracy.


Why would one want lots of little GKE clusters, anyway? Google itself doesn't partition its clusters this way, AFAIK. I don't want a cluster of underutilized instances per application tier per project; I want a private Borg to run my instances on—a way to achieve the economies-of-scale of pod packing, with merely OS-policy-level isolation between my workloads, because they're all my workloads anyway.

(Or, really, I'd rather just run my workloads directly on some scale-free multitenant k8s cluster that resembled Borg itself—giving me something resembling a PaaS, but with my own custom resource controllers running in it. Y'know, the k8s equivalent to BigTable.)


We run lots of small clusters in our projects and identical infrastructure/projects for each of our environments.

Multiple clusters lets us easily firewall off communication to compute instances running in our account based on the allocated IP ranges for our various clusters (all our traffic is default-deny and has to be whitelisted). Multiple clusters lets us have a separate cluster for untrusted workloads that have no secrets/privileges/service accounts with access to gcloud.

Starting in June our monthly bill is going to go up by thousands. All regional clusters.


Namespaces handle most of these issues. A NetworkPolicy can prevent pods within a namespace from initiating or receiving connections from other namespaces, forcing all traffic through an egress gateway (which can have a well-known IP address, but you probably want mTLS which the ingress gateway on the other side can validate; Istio automates this and I believe comes set up for free in GKE.) Namespaces also isolate pods from the control plane; just run the pod with a service account that is missing the permissions that worry you, or prevent communication with the API server.
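
For a sketch of the starting point (namespace name invented), a default-deny NetworkPolicy is tiny; it blocks all ingress and egress for every pod in the namespace, and you then layer allow rules for same-namespace peers and the egress gateway on top:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny
      namespace: team-a        # hypothetical namespace
    spec:
      podSelector: {}          # empty selector = every pod in the namespace
      policyTypes:
      - Ingress
      - Egress
      # no ingress/egress rules are listed, so all traffic to and
      # from these pods is denied until explicitly allowed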

GKE has the ability to run pods with gVisor, which prevents the pod from communicating with the host kernel, even maliciously. (I think they call these sandboxes.)
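
As a sketch, assuming a node pool created with GKE Sandbox enabled, opting a pod into gVisor is a single field (pod name and image are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: untrusted-job               # hypothetical name
    spec:
      runtimeClassName: gvisor          # run under GKE Sandbox (gVisor)
      containers:
      - name: app
        image: example/untrusted:latest # placeholder image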

The only reason to use multiple clusters is if you want CPU isolation without the drawbacks of cgroups limits (i.e., awful 99%-ile latency when an app is being throttled), or you suspect bugs in the Linux kernel, gVisor, or the CNI. (Remember that you're in the cloud, and someone can easily have a hypervisor 0-day, and then you have no isolation from untrusted workloads.)

Cluster-scoped (non-namespaced) resources are also a problem, though not too prevalent.

Overall, the biggest problem I see with using multiple clusters is that you end up wasting a lot of resources because you can't pack pods as efficiently.


Aware of all of this, but we have a need to run things relatively identically in GKE/EKS/AKS and gVisor can't be run in EKS, for example.

We're okay with the waste as long as our software & deployment practices can treat any hosted Kubernetes service as essentially the same.



For those that didn't click through, I believe the parent is demonstrating that it is a best practice to have many clusters for a variety of reasons such as: "Create one cluster per project to reduce the risk of project-level configurations"


For robust configuration, yes. However, one can certainly collapse/shrink if having multiple clusters is going to be a burden cost-wise and operations-wise. These best practices were modeled on the most robust architecture.


This is it exactly.

Thank you.


Namespaces are not always well suited to hermetically isolating workloads.


It's probably not worth $75/month to prevent developer A's pod from interfering with developer B's pod due to an exploit in gVisor, the linux kernel, the hypervisor, or the CPU microcode. Those exploits do exist (remember Spectre and Meltdown), but probably aren't relevant to 99% of workloads.

Ultimately, all isolation has its limits. Traditional VMs suffer from hypervisor exploits. Dedicated machines suffer from network-level exploits (network card firmware bugs, ARP floods, malicious BGP "misconfigurations"), etc. You can spend an infinite amount of money while still not bringing the risk to zero, so you have to deploy your resources wisely.

Engineering is about balancing cost and benefit. It's not worth paying a team of CPU engineers to develop a new CPU for you because you're worried about Apache interfering with MySQL; the benefit is near-zero and the cost is astronomical. Similarly, it doesn't make sense to run the two applications in two separate Kubernetes clusters. It's going to cost you thousands of dollars a month in wasted CPUs sitting around, control plane costs, and management, while only protecting you against the very rare case of someone compromising Apache because they found a bug in MySQL that lets them escape the sandbox.

Meanwhile, people are sitting around writing IP whitelists for separate virtual machines because they haven't bothered to read the documentation for Istio or Linkerd, which they get for free and which actually add security, observability, and protection against misconfiguration.

Everyone on Hacker News is that 1% with an uncommon workload and an unlimited budget, but 99% of people are going to have a more enjoyable experience by just sharing a pool of machines and enforcing policy at the Kubernetes level.


It doesn't have to be malicious. File descriptors aren't part of the isolation offered by cgroups: a misconfigured pod can exhaust FDs on the entire underlying node and severely impact all other pods running on that node. Network isn't isolated either. You can saturate a node's network by downloading a large amount of data from, say, GCS/S3 and impact all pods on the node.

I agree with most of what you've said about gVisor providing sufficient security, but it's not just about security: noisy neighbors are a big issue in large clusters.


IOPS and disk bandwidth aren't currently well protected either.


RLIMIT_NOFILE seems to limit FDs, or am I missing something?


CRDs can't be safely namespaced at the moment, as I understand it.


We use Skaffold and it’s great. I’m talking about very minor unforeseen stuff that causes outages, not that we do it manually.


This is an interesting exchange if only for the thread developing instead of a single reply from a rep; it’s nice to see that level of engagement.

More importantly, this dialogue speaks volumes to Google’s stubbornness. Seth’s/Google’s position is: do it the Google way, sorry-not-sorry to all those that don’t fit into our model.

Like we haven’t heard of infrastructure as code? That can’t paper over basics like being unable to change your K8s cluster control plane. This is precisely the attitude that lands GCP as a distant #3 behind AWS and Azure.


Google stubbornly resists the idea that their platforms have actual users who depend on things not being broken for them constantly. It's cultural.

AWS has the complete opposite model.


Even still, it's not like it's trivial to just bring up and drop clusters. Just setting up peering with Cloud SQL or HTTPS certs with GKE ingress can be fraught with timing issues that can torpedo the whole provisioning process.


How is this helpful? Are they supposed to implement everything in Terraform or similar? Is that your suggestion? Why not completely remove the editable UI then, if whoever is using it is doing it wrong? What a typically arrogant, out-of-touch-with-customers Google response.


It could be totally unrelated, but having an "equivalent Terraform" option alongside the existing REST and command-line equivalents could dramatically speed up the configuration process.


Right now with k8s it is definitely 'ongoing maintenance'. We allocate around 0.5-2 points per week just for that. If we didn't, most of our stuff would already be outdated.

I already know too many people who are stuck on a certain k8s version. Don't let that happen to you!


> clusters should be treated like cattle, not pets.

Off-topic, but is this really how people do k8s these days? Years ago when I was at Google, each physical datacenter had at most several "clusters", which would have fifty thousand cores and run every job from every team. A single k8s cluster is already a task management system (with a lot of complexity), so what do people gain by having many clusters, other than more complexity?


The most common thing I've heard is "blast radius reduction", i.e. the general public are not yet smart enough to run large shared infrastructures. That seems like something that should be obviously true.

People had exactly the same experiences with Mesos and OpenStack, but k8s has decent tooling for turning up many clusters, so there is an easy workaround.


I still feel like that would only work in very niche cases.

I mean, if people aren't smart enough to run a large shared infrastructure, how can I trust them to run a large number of shared clusters, even if each cluster is small? The final scale is still the same.


Updating 100 clusters carries less risk than updating a single giant one.


And no SRE would allow you to run your application in a single cluster. Borg Cells were federated but not codependent - Google's biggest outages were due to the few components that did not sufficiently isolate clusters from one another.

Clusters are probably still pets to most orgs, but the lessons about how to manage complexity still apply. Each of my terraform state files is a pet and I treat it like such... but I also use change-control to assure that even though I don't regularly recreate it from scratch, I understand all that was there.


There are potentially quite a few benefits of being able to spin up clusters on demand [1]:

* Fully reproducible cluster builds and deployments.

* The type of cluster (can be) an implementation detail, making it easy to move between e.g. Minikube, Kops, EKS, etc. After all, K8s is just a runtime.

* Developers can create temporary dev environments or replicas of other clusters

* Promote code through multiple environments from local Minikube clusters to cloud environments

* Version your applications and dependent infrastructure code together

* Simplify upgrades by launching a brand new cluster, migrating traffic and tearing the old one down (blue/green)

* Test in-place upgrades by launching a replica of an existing cluster to test the upgrade before repeating it in production

* Increase agility by making it easier to rearchitect your systems - if you have a pet, modifying the overall architecture can be painful

* Frequently test your disaster recovery processes as a by-product for no extra effort (sans data)

* Reduced blast radius

[1] https://docs.sugarkube.io/#benefits-of-sugarkube


I think, for one, you cannot easily have masters span regions without the risk of them falling out of communication. Similarly, the workers should be located nearby. If there's a counterexample to this I'd love to see it.


> clusters should be treated like cattle, not pets

Heh... how many teams actually treat their clusters like cattle, though? Every time I advocate automation around cluster management, people start complaining that "you don't have to do that anymore, we have Kubernetes!"

Some people get it, yes, but even of that group, few have the political will/strength to make sure that automation is set up on the cluster level—especially to a point where you could migrate running production workloads between clusters without a potentially large outage / maintenance window.


> clusters should be treated like cattle, not pets

Sugarkube is designed to do exactly that.

[1] https://docs.sugarkube.io


For any real production system you have to use Terraform and its ilk to manage clusters, as you need to be spinning up and down dev/qa/prod clusters.

I don't know GCP, though. In the past I've seen kube cluster architectures that are very fragile as they spin up. If that's the case with GCP, I can see why you wouldn't do the above and would rather hand-hold cluster creation.


I would love if this happened in the real world, but for every well-architected automated cluster management setup I’ve seen using Terraform, Ansible, or even shell scripts and bubble gum, there are five that were hand-configured in the console and poorly (or not at all) documented, and might not be able to re-create without a substantial multi-day effort.


GKE makes it incredibly easy to spin up + tear down GKE clusters. UI/CLI/Terraform etc, all just work for 99% of the cases.


> but clusters should be treated like cattle, not pets

Ha. They should, but they are absolutely not. Customers typically ask "why should we spend time on automating cluster deployment when we are going to do it just once?" and when I explain that it's for when (or if) the cluster goes away, they say that's an acceptable risk.

The truth of the matter is, even in some huge international companies, they don't have the resources to keep up with tool development well enough to run truly phoenix servers. They just want to write automation once and have it work for the next 10 years, and that's definitely not how it plays out.


> This is great feedback, but clusters should be treated like cattle, not pets. I'd love to learn more about why your clusters must be so static.

Clusters often are not "cattle". If your operation is big enough, then yes, they might be. Usually they aren't; they are named systems and represent a mostly static entity, even if the components of said entity change every hour.

Personally, I'm running in production a cluster that by now has witnessed upgrades from 1.3 to 1.15, in GKE, with some deployments running nearly unchanged since then.

Treating it as cattle makes no sense, especially since at the API level, clusters aren't volatile elements.


I think our architects’ heads would explode if they were told we should treat them like cattle.

For us, Clusters are a promise to our developers. We can’t just spin up a new cluster because we feel like it. I must be missing something or maybe our culture is just different.


As an architect, I am currently working on our org's first cloud deployment initiative. Due to federal compliance / regulations, we have no write access in higher / production boundaries, and everything is automated via deployment jobs, IaC, etc. Given the experience of the teams involved, I took the opportunity (burden) of writing nearly all the automation. If your architects can't handle shooting sick cattle in prod, I'd say get new architects.


> If your architects can't handle shooting sick cattle in prod, I'd say get new architects.

For every 1 competent person who can develop a solution to fully automate everything, there are 99 others who can automate most of it, maybe minus a cluster or a DB or two, and another 500 who cannot do either, but can run a CentOS box at a reasonable service level.

You get to use great tools and your vast knowledge of k8s each day to do all that, and you have the support of your org, but those other folks may not have the tools, support, knowledge, or sometimes even the capability to attain the knowledge to do that. That doesn't mean they're useless to anyone, to be cast off at will!

The type of thinking that leads to, "get new engineers/developers/designers/architects if yours aren't perfect" needs to die, and needs to be replaced with, "let's do what we can to train and support our current employees to do a great job" because, frankly, there aren't enough "superstar" people who have your skills to do that at every org.

We need to work on accepting people for who they are-- helping them to strive to be a bit better each day of course-- and utilizing those skills in the right place, rather than trying to make everyone the same person with the same skills doing the same things.

Some applications don't need clusters which can be rebuilt and destroyed at will, so let's not make that the bar for every project.


That only means that the "named system" moves upwards, somewhere. It doesn't mean you don't have "pets".

P.S. I consider "pet vs cattle" to be a horrible metaphor that should be taken behind the barn and shot like the diseased plague bearer it is.


> I'm sorry that you feel trapped, that's not our intention.

Please don't do this. You can apologize for your actions and work to improve in the future, but you cannot apologize for how someone feels as a result of your actions.

Also, intent doesn't matter unless you plan to change your behavior to undo or mitigate the unintended result.


> clusters should be treated like cattle, not pets

So my understanding is that the official k8s way to upgrade your cluster is also to throw it away and start a new one (with some cloud-provider-proprietary alternatives).

Let's say there is something actually important, stateful, single-source-of-truth in my k8s cluster, like a relational DB that must not lose data. I don't want downtime for readers or writers, and I want at least one synchronous slave at all times (since the data is important). I also don't want to eat non-trivial latency overheads from setting up layers of indirection.

What's the recommended way of doing this?


In this case, one needs fault tolerance. One way to achieve it is through replication, where an extra copy (or perhaps a reconstruction recipe) of your DB instance runs somewhere else. Usually DBs achieve this through transactions. Additionally, you can have distributed DBs, which then use distributed transactions to achieve this.

I am no expert, but K8s handles task replication, and will either spawn or route a request to another task instance somewhere else. However, the application logic itself must handle the fault tolerance (by handling its state through transactions or something else) should an instance fail. K8s doesn't do that for you.


Distributed transactions are a non-starter – I already said I want to run master-slave with synchronous replication, which is basically what you want to do in >99% of cases where you have a DB with important stuff in it.


It is not really about what you want, but about how to migrate the DB* to another location while also minimizing its downtime/slowness.

You need to instantiate a secondary DB replica somewhere else and start the DB migration. Since there will be "two instances" of the same DB running, you will also need to set up a (temporary) proxy for routing and handling the DB requests w/ something like this:

1) If the data being requested has already been migrated, the request is handled by the (new) secondary replica. 2) Otherwise, the primary instance handles the request, and 2.1) the requested data should be migrated to the secondary replica (asynchronously, noting that a repeated request may invalidate a migration).

Turn the proxy router off once the whole state of your primary DB instance is fully migrated, making the secondary replica the primary one. That's really just a napkin recipe for completing a live migration, though.

* We are now getting into the distributed-transaction world because you can never be 100% sure that writing to 2 databases will succeed or fail at the same time. There is a talk from someone who deals with a similar problem to yours: http://www.aviransplace.com/2015/12/15/safe-database-migrati...


It's kind of revealing that there are zero replies to this, 10 hours later.


Thanks for replying to feedback Seth. Stuff like this - following the Google Maps API massive pricing increase, the G Suite pricing increase - is what makes me wary about building stuff on GCP: I'm afraid that Google will increase prices for stuff I rely on. AWS has made users expect pricing for cloud services to only go down.


Can someone explain to me the cattle vs. pets analogy? I'm not sure I get it.


"In the old way of doing things, we treat our servers like pets, for example Bob the mail server. If Bob goes down, it’s all hands on deck. The CEO can’t get his email and it’s the end of the world. In the new way, servers are numbered, like cattle in a herd. For example, www001 to www100. When one server goes down, it’s taken out back, shot, and replaced on the line."

http://cloudscaling.com/blog/cloud-computing/the-history-of-...


https://devops.stackexchange.com/questions/653/what-is-the-d...

This is a pretty good in-depth explanation, but at a high level: if your server dies and you are extremely upset about it (similar to if your pet died), you have put too many eggs in that single basket, with no secondary plan. Conversely, if you build your infra in such a way that a server dying is no worse to you than how a farmer sees one of his cattle dying (they are raised to be killed), you are much better prepared for the inevitable downtime and can recover very easily.


I think pets vs. corn would be a better analogy, as it's a high-value single instance vs. a commodity crop.


GKE can't offer financially backed SLOs without charging for the service. This is something that, I assume, significant customers want and that competitors already have:

https://aws.amazon.com/eks/sla/


Workers are not free and never were. So they were already charging.


Correct, but the control plane nodes _were_ free and had no SLA. This changes that. [edit: spelling]


_were_ free. (Emphasis yours.)


> and the docs linked from the email are not updated yet.

That about sums up most things Google does for developers.


I thought the standard advice for Google stuff was "there are always two systems - the undocumented one, and the deprecated one"


What I most frequently heard was "There are two solutions: the deprecated one, and the one that doesn't work yet."


That's a wonderful quote that applies to many companies. (I think that will resonate with the Unity developer community right now.)


I agree the rollout is a little bumpy but I'm curious what workloads you are using k8s for where a $74/mo (or $300/mo) bill isn't a rounding error in your capex?


Think about any medium-sized dev agency managing 3 environments for each of 20 customers. That's $50k/year out of the blue.
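(3 × 20 = 60 clusters; 60 × $73/mo × 12 months ≈ $52.5k/year.)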

My problem is that this fee doesn't look very "cloud" friendly. Sure the folks with big clusters won't even notice it, but others will sweat it.

The appeal of cloud is that costs increase as you go, and flat rates are typically there to add predictability (see BigQuery flat rate). This fee does the opposite.


It's charged per-cluster. GKE encouraged (and was great for) running multiple clusters for all kinds of isolation and security reasons.

This cost increases rapidly for those scenarios.


$3600/year is significant for a startup on a shoestring budget.


Then manage k8s yourself.

Or, better yet, don't use k8s. You don't need it, especially as a startup on a shoestring budget. You can migrate later if you decide you really need to, but a plain LAMP stack gets you 99% of the way.


But then you can't put k8s on your resume for when said startup implodes.


If there were a widely supported, lower-complexity way to deploy containerized apps, I think tons of people would go for it. Currently there's not much of a middle ground offered between Cloud Run and K8s. It's kind of absurd, honestly.


Google App Engine has been exactly this since 2008.


My impression of App Engine is that you have to use all the Cloud* services like SQL, cache, etc., which will make it significantly more expensive, even if it handles the app layer fine. Is that wrong?


It's wrong today. It was true in 2008, when GAE was Google's entire cloud offering (and there was no Docker or K8s).

Around the time "Google Cloud Platform" became a thing, Google changed GAE from an encapsulated bubble into a basic frontend management system that interacts with normal services through public APIs (either inside or outside GCP). It's more expensive than GCE, but it's fully managed and lets you skip the devops team.


So ask for Google Cloud for Startups? One free cluster is enough to get started.


> Google Cloud for Startups is designed to help companies that are backed by VCs, incubators, or accelerators, so it's less applicable for small businesses, services, consultancies, and dev shops.[1]

This makes it seem like Google Cloud for Startups is aimed at startups that aren't really on a shoestring budget.

[1]: https://cloud.google.com/developers/startups/


Like every "special offer for startups", it's a vulture waiting for funding round to close.


My boss viewed it as the main way to deploy containerized systems offered by cloud providers and figured we could run most of our internal-only things in it for a couple hundred a month - we don't really need the guarantees and scale, and he saw it as a way to avoid creating excess numbers of dedicated VMs, as Cloud Run isn't sufficient for our non-static stuff. This view has actually been quite accurate up until now because of the dedicated usage discounts.

So I guess the big question in my mind is how do you run containerized apps in the major clouds besides K8s if it's a bulldozer and you just need a cargo bike? Is there something simpler?



E.g. https://aws.amazon.com/ecs/ - that was quite nice. https://aws.amazon.com/fargate/ - haven't tried it.


You could consider the App Engine flexible environment.


How is it not a rounding error for Google?


I think you mean opex and not capex here.


Sorry for the technical tangent, but I'm curious. Your decision-making on GCP appears to optimize for best-of-breed + cost. But you put SQL Server on AWS? If you are saying SQL Server is better on AWS than on Azure, it would be interesting to learn why.


We need MySQL 8 because of window functions, which GCP's Cloud SQL does not offer. It is available on AWS.


My bad. A clever marketing decision made me read the capitalized SQL as SQL Server, since I'm used to people saying Postgres, MySQL, or just SQL.


I see. Curious about the latency between your GCP apps and the database on AWS - is it like 1 ms or 100ms? Does it affect the product?


About 4ms for us. However, we chose our data centres on both ends very carefully. There are tables of inter-cloud ping times online; one such is here: https://medium.com/@sachinkagarwal/public-cloud-inter-region...

However this means we are paying for egress on both sides. This was something we chose to eat due to GCP Kubernetes, but considering today’s changes, it probably no longer makes sense.


So you decide to eat egress costs in perpetuity, which will scale as you go, but a one time increase of $70 per month is enough to make you go back? What are you even trying to optimize for?


sheesh, the lengths you guys went to to build that monstrosity.


If this cost bothers you a great deal, why not just deploy a new cluster?


Hi Seth,

What about clusters that are used for lumpy work loads? Like data science pipelines? For example, our org has a few dozen clusters being used like that.

Each pipeline gets its own cluster instance as a way to enforce rough-and-ready isolation. Most of the time the clusters sit unused. To keep them alive we keep a small, cheap, preemptible node running on the idle cluster. When a new batch of data comes in, we fire up kube jobs, which then trigger GKE autoscaling to process the workload.

This pricing change means we're looking at thousands of dollars more in billing per month, without any tangible improvement in service. (The keepalive node hack only costs $5 a month per cluster.) We could consolidate the segmented cluster instances into a single cluster with separate namespaces, but that would also cost thousands in valuable developer time.

I don't know how common our use pattern is, but I think we would be a lot better served by a discounted management fee when the cluster is just being kept alive and not actually using any resources. At $0.01, maybe even $0.02, per hour we could justify it. But paying $0.10 to keep empty clusters alive is just egregious.


Those empty clusters that you get for free cost Google money. Perhaps it never should have been free, because that skewed incentives towards models like this.


Unfortunately, even if they switch to dynamically started clusters, the latency of spinning up a new cluster is much higher than the latency of adding a bunch of preemptible nodes to an existing node pool :/


Google are (were) not the only ones offering this free control plane model, though. My managed DigitalOcean k8s (DOKS) clusters tend toward unstable if they are used with too-small node pools. (I don't know why that is, but it seems like a good way to make sure I pay attention to the workloads and also spend at least $20/mo on each cluster I run with them.)

It will be interesting in any case to see if DigitalOcean and Azure are going to follow suit! I'd be very surprised if they do, (but I've also been wrong before, recently too.)


The term is "loss leader." GKE provides the master nodes and cluster management so that we don't have to, and in exchange you sell more compute, storage, network, and app services. This is some ex-Oracle, "what can we do to meet growth objectives," "how can we tax the people we own" thinking. They're customers, not assets, Tim. Your cloud portability play should be the last project you jerk them around on.


Keep in mind that GKE cluster management was paid in the original GKE. GCP only stopped billing for cluster management when EKS released free cluster management.


When did EKS release free cluster management?


On GKE, you can use a single cluster with multiple node pools to achieve a similar effect. Just set the right affinity on your job resources.
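
A minimal sketch of that affinity in its simplest form, a nodeSelector (pool and image names invented), using the node label GKE applies automatically:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: pipeline-batch                 # hypothetical job
    spec:
      template:
        spec:
          restartPolicy: Never
          nodeSelector:
            cloud.google.com/gke-nodepool: pipeline-pool  # GKE's built-in pool label
          containers:
          - name: worker
            image: example/pipeline:latest # placeholder image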


PTAL at doing Multi-Tenancy in GKE!

https://cloud.google.com/kubernetes-engine/docs/best-practic...

We don't recommend using node pools for isolation.


If it's only for workload isolation, why not?


For secure isolation, we learned it's not sufficient. It's good for resource isolation though.

PTAL at https://www.youtube.com/watch?v=6rMGRvcjvKc


That guide looks nice. Have you guys thought about releasing a Terraform module or even a Cloud Composer workflow that will set that up in a project?


Thanks! We actually did, and shipped it together with the best practices.

https://github.com/GoogleCloudPlatform/gke-enterprise-mt

Please give us feedback there in case you hit any issue!


Yes, this is the general approach. However it unfortunately has security implications as you are putting MT workloads on a pool with access back to a shared control plane. Dealing with customer uploaded code is a nightmare.


Here you go:

https://github.com/rcarmo/azure-k3s-cluster (this is an Azure template that I use precisely for testing that kind of workloads - spinning up one of these, master included, takes a couple of minutes at most).

(full disclosure: I work at Microsoft - Azure Kubernetes Service works fine, but I built the above because I wanted full control over scaling and a very very simple shared filesystem)


Likely this model is precisely why they are introducing this fee.

I guess they realized they couldn't make cluster management MT.


We currently spin up dev clusters with a single node. $73/mo is going to basically double the cost of all of these.


This highlights a sorta-weird consequence of this pricing change: suddenly pricing incentivizes you to use namespacing instead of clusters for separating environments.

(As a security person: ugh.)


That’s interesting - I think you’re right. We might move our staging cluster into our main production deployment.

More likely, though: AWS, or OpenShift running on bare metal on a beefy ATX tower in the office. We want production and staging as close to each other as possible, so this is an additional reason, and a P0 flag, to reduce our dependency on Google-specific bits of Kubernetes as much as possible, which is hopefully also useful for our exit strategy.


Kubespray works well for me for setting up a bare bones kubernetes cluster for the lab.

I'll use helm to install MetalLB for the load balancer, which you can then tie into whatever ingress controller you like to use.

For persistent storage, a simple NFS server is the bee's knees. It works very well, and an NFS provisioner is a helm install away. Very nice, especially over 10GbE. Do NOT dismiss NFSv4; it's actually very good for this sort of thing. I just use a small separate Linux box with software RAID for that.
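
To give a feel for how simple this is, a hand-written NFS-backed PersistentVolume is about this small (server IP and export path are placeholders; a provisioner just automates stamping these out):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: shared-data        # hypothetical name
    spec:
      capacity:
        storage: 100Gi
      accessModes:
      - ReadWriteMany          # NFS allows many pods to mount read-write
      nfs:
        server: 10.0.0.5       # placeholder NFS server address
        path: /exports/data    # placeholder export path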

If you want to have the cluster self-host storage or need high availability then GlusterFS works great, but it's more overhead to manage.

Then you just use normal helm install routines to install and setup logging, dashboards, and all that.

OpenShift is going to be a lot better for people who want to do multi-tenant stuff in a corporate enterprise environment, like when you have different teams of people, each with their own realm of responsibility. OpenShift's UI and general approach are pretty good about letting groups self-manage without impacting one another. The additional security is a double-edged sword: fantastic if you need it, but an annoying barrier to entry for users if you don't.

As far as AWS goes... EKS recently lowered their cost from 20 cents per hour to 10 cents, so costs for the cluster are on par with what Google is charging.

Azure doesn't charge for cluster management (yet), IIRC.


(replying to freedomben): NFS has worked fairly well for persistent file storage that doesn't require high performance for reads/writes (e.g. good for media storage for a website with a CDN fronting a lot of traffic, good for some kinds of other data storage). It would be a terrible solution for things like database storage or other high-performance needs (clustering and separate PVs with high IOPS storage would be better here).


It's good to have multiple options if you want to host databases in the cluster.

For example, you could use NFS for 90% of the storage needs: logging and sharing files between pods. Then use local storage, FCoE, or iSCSI-backed PVs for databases.

If you are doing bare hardware and your latency requirements are not too stringent, then not hosting databases in the cluster is also a good approach. Just use dedicated systems.

If you can get state out of the cluster then that makes things easier.

All of this depends on a huge number of other factors, of course.


> Have you used NFS for persistent storage in prod much?

I think NFS is heavily underrated. It's a good match for things like hosting VM images on a cluster and for Kubernetes.

In the past I really wanted to use things like iSCSI for hosting VM images and such, but I've found that NFS is actually a lot faster for many workloads. There are complications to NFS, of course, but they haven't caused me problems.

I would be happy to use it in production, and have recommended it, but it's not unconditional. It depends on a number of different factors.

The only problem with NFS is: how do you manage the actual NFS infrastructure? How much experience does your org have with NFS? Do you already have an existing file storage solution in production you can expand and use with Kubernetes?

Like if your organization already has a lot of servers running ZFS, then that is a nice thing to leverage for NFS persistent storage. Since you already have expertise in-house it would be a mistake not to take advantage of it. I wouldn't recommend this approach for people not already doing it, though.

If you can afford some sort of enterprise-grade storage appliance that takes care of dedupe, checksums, failovers, and all that happy stuff, then that's great. Use that and it'll solve your problems, especially if there is an NFS provisioner for it that Kubernetes supports.

The only place where I would say it's a 'Hard No' is if you have high-scalability requirements, like if you wanted to start a web hosting company or needed hundreds of nodes in a cluster. In that case, distributed file systems are what you need: self-hosted storage, aka "Hyper-Converged Infrastructure". The cost and overhead of managing these things is then relatively small compared to the size of the cluster and what you are trying to do.

It's scary to me to have a cluster self-host storage, because storage can use a huge amount of RAM and CPU at the worst times. You can go from a happy low-resource cluster to a node failing or another component taking a dive, and then, while everything is recovering and checksumming (and lord knows what else), resource usage goes through the roof right during a critical time. The 'perfect storm' scenarios.


I use an SMB file share for my node pools - here's how to set up a non-managed cluster on Azure that does that: http://github.com/rcarmo/azure-k3s-cluster


Have you used NFS for persistent storage in prod much? I know people do it, but numerous solutions architects have cautioned against it.


My experience with NFS over the years has taught me to avoid it. Yes, it mostly works. And then every once in a while you have a client that either panics or hangs, despite the versions of Linux, BSD, Solaris, and Windows changing over the decades. The server end is usually a lot more stable, but that's of little comfort when a client is down and, yes, the other clients are fine.

However, if you can tolerate client side failure then go for it.


What? Shouldn't you try to make the creation and deletion of your staging cluster cheap instead of moving it to somewhere else?

And if that is your central infrastructure, shouldn't it be worth the money?

I do get the issue of having cheap and beefy hardware somewhere else; I do that as well, but only for personal projects. My hourly salary spent (or wasted) on stuff like that costs the company more than just paying for an additional cluster with the same settings but perhaps far fewer nodes.

If more than one person is using it, the multiplied cost of suddenly unproductive people is much higher. That also decreases the per-head cost.


I suspect I'm in the minority on this, but I would love for k8s to have hierarchical namespaces. As much as they add complexity, there are a lot of cases where they're just reifying complexity that's already there, like when deployments are namespaced by environment (e.g. "dev-{service}", "prod-{service}", etc.) and so the hierarchy is already present but flattened into an inaccessible string representation. There are other solutions to this, but they all seem to extract their cost in terms of more manual fleet management.


Hey - I'm a member of the multitenancy working group (wg-multitenancy). We're working on a project called the Hierarchical Namespace Controller (aka HNC - read about it at http://bit.ly/38YYhE0). This tries to add some hierarchical behaviour to K8s without actually modifying k/k, which means we're still forced to have unique names for all namespaces in a cluster - e.g., you still need dev-service and prod-service. But it does add a consistent way to talk about hierarchy, some nice integrations and builtin behaviours.

Do you want to mention anything more about what you're hoping to get out of hierarchy? Is it just a management tool, is it for access control, metering/observability, etc...?

Thanks, A


Any reason why you put your link behind a URL shortener besides tracking number of clicks?

Since there are no character limits to worry about here unlike Twitter, better to put up the full URL so the community can decide for themselves if the domain linked to is worth clicking through or not.



Hey, thanks for asking! My interests in it are primarily for quota management -- in my experience, this is inevitably a hierarchical concern, in that you frequently run into the case of wanting to allot a certain cluster-wide quota to a large organizational unit, and similarly subdivide that quota between smaller organizational subunits. Being able to model that hierarchy with namespaces localizes changes more effectively: if you want to increase the larger unit's quota in a flat namespace world, for example, there's no way to talk about that unit's quota except as the sum of all of its constituent namespace quotas.
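
For context, the flat building block today is a per-namespace ResourceQuota like this (names and numbers invented); the larger unit's quota only exists as the sum of many of these:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: compute-quota
      namespace: ml-team-dev   # hypothetical subunit namespace
    spec:
      hard:
        requests.cpu: "20"     # total CPU requests allowed in this namespace
        requests.memory: 64Gi  # total memory requests allowed in this namespace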


Thanks! We're not currently planning on implementing a hierarchical resource quota in HNC, but HNC is trying to define a definition of hierarchy that could certainly be used to create a HRQ. Give me a shout if you're interested in contributing.


Yes, namespaces alone aren't sufficient for isolation. Would you be able to look at our latest multi-tenancy best practices?

https://cloud.google.com/kubernetes-engine/docs/best-practic...

It's a living product which comes with Terraform modules. We introduced various features to enable multi-tenancy as well (and more are on their way!)


I'm sorry if I am reading it wrong, but this guide to multi-tenancy seems to suggest not being multi-tenant and instead running a separate cluster per project. This seems more like scaling single-tenancy than multi-tenancy (no bin-packing opportunity, for instance). Or did I read it wrong?


Sorry if it was confusing. You need to read further into how to set up in-cluster multi-tenancy.

We do recommend robust configurations for a production setup (e.g. dev, staging and production), however you can certainly collapse and skip that if it's not necessary.

Thanks for the feedback though. We'll consider adding such notes explicitly.


>You need to read into more about how to set up in-cluster multi-tenancy.

I am trying to do that. Where would you suggest? Throughout the comments on this post, when people suggest namespace-, pod-, or node-level separation, you ask them to PTAL at the link, which suggests the single-tenant cluster-per-project approach (filed under "Multi-tenant cluster", confusingly). The link you sent talks about cluster-per-project, which is not multi-tenancy as I understand it. Perhaps a different name would be less confusing (robust federated cluster administration?)


"This guide provides best practices to safely and efficiently set up multiple multi-tenant clusters for an enterprise organization."

This "multiple" multi-tenant clusters part isn't coming through. Please do jump into "Securing the cluster" section to cut corners and learn what to do in a single cluster. We're fixing the sections to avoid the confusions. Thanks for the feedback!

https://cloud.google.com/kubernetes-engine/docs/best-practic...


You can dedicate nodes by namespace (taints and tolerations, roughly as sketched below), at which point the isolation is pretty strong.
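
A minimal sketch, assuming a node pool tainted with something like dedicated=team-a:NoSchedule and labeled to match (team name, namespace, and image are all invented):

    apiVersion: v1
    kind: Pod
    metadata:
      name: team-a-app          # hypothetical pod
      namespace: team-a         # hypothetical namespace
    spec:
      tolerations:
      - key: dedicated          # matches the node taint
        operator: Equal
        value: team-a
        effect: NoSchedule
      nodeSelector:
        dedicated: team-a       # node label applied alongside the taint
      containers:
      - name: app
        image: example/app:latest   # placeholder image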


Nit: we don't recommend dedicated nodes for isolation. PTAL at https://www.youtube.com/watch?v=6rMGRvcjvKc And the guidance from GKE is at https://cloud.google.com/kubernetes-engine/docs/best-practic...


* Assuming you also configure strong RBAC, network isolation and don't let persistent volumes cross-talk


As also a security person (:wave:), you can use dedicated node pools and workload identity to isolate workloads in the same cluster.


Workload identity is a GCP-specific beta feature for mapping to GCP IAM, right?



yes


Or move to minikube and friends. If it's a dev environment you can usually get away with such things.


Kubernetes consumes a lot of CPU even when idle, due to its polling design, which makes Minikube a really poor fit for developer machines. It's well known [1] to sit there eating 20-30% CPU, draining your battery and frustrating your life while doing absolutely nothing. This applies to all the Kubernetes distributions, including Kind and Docker Desktop. Not sure if the same applies to K3s, though.

[1] https://github.com/kubernetes/minikube/issues/3207, https://github.com/docker/for-mac/issues/3065, https://github.com/docker/for-mac/issues/3539, etc; there must be dozens and dozens of these


It does. My k3s setups (like https://github.com/rcarmo/azure-k3s-cluster) take up nearly 100% of the puny master node I allocate to them, and kill my Raspberry Pi SD cards as well.

Swarm, in comparison, is much friendlier (and you can use it for dev/test across multiple machines just fine)


Assuming you can do that, and your system is not using namespacing for its own purposes.


I know kubeflow can use namespaces for its own purposes, but otherwise I thought that was quite rare. Namespaces are intended for exactly this use case (isolating teams and/or workloads).

What kind of system have you seen where this isn't true?


We've seen plenty of examples where people do this. Sometimes it's different teams (e.g. the ML team is namespaced away from the primary customer flow) and sometimes it's for different customers. Really, it sounds like we're in agreement about why this happens; the confusion is just whether or not that normally happens within a company in the course of doing business?


Gotcha. If you're looking for subnamespacing, HNC offers self-service subnamespace creation, have you looked into that? https://github.com/kubernetes-sigs/multi-tenancy/tree/master...

(My apologies if we've chatted about this before in another venue, I'm losing track of whom I've already talked to)


I use namespaces to isolate names. So I can have a service called memcached in namespace A and also one in namespace B.
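(Each resolves at its own DNS name, e.g. memcached.a.svc.cluster.local vs. memcached.b.svc.cluster.local.)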


It's still billed by the minute. If you run your dev clusters 24x7, then apparently they're critical enough.


For a dev environment, why not host your own hardware? Especially if cost is a concern, it seems like a no brainer.


Genuinely curious: isn't Docker's Kubernetes an option?


It is - though especially on OSX it is very CPU- and memory-intensive.


> There's also one completely free zonal cluster for hobbyist projects.

Nice.


Hey everyone. Seth from Google here. There's another parallel thread at https://news.ycombinator.com/item?id=22485625 and I've responded to items there. I'll try to respond to items here as well.


Why did you change your profile from "Developer Advocate" to "Engineer" ?


You keep noting how "easy" it is to provision and manage a Kubernetes cluster. From experience, properly securing and maintaining a Kubernetes cluster is a multi-person full-time job.


We already do provision clusters, using the aforementioned tools. There is some setup involved, but once done, provisioning and upgrade is relatively simple. Indeed, we used to exclusively provision and upgrade via Terraform/Ansible. When we started using GCP, any data that could be stored by a US company without causing compliance issues was offloaded to GCP over other providers due to the auto provisioning/management at no cost.

If you guys find it hard to maintain/upgrade clusters, that's your business. All I am saying is that as a company giving you business, with this change, you are now no longer the cheapest, most reliable or most convenient. As a result, we will be moving to provision instances with other providers from now on.


It's slightly different than other cloud pricing because we include a free zonal cluster as a hobby tier.


Just doubling down on the free zonal cluster as a hobby tier. Folks seem to be missing that amid the announcement.


We offer one free zonal cluster, which is specifically designed for your use case :)


I have personal experience of a few companies in the UK where GCP are offering 90+% discounts to onboard. GCP are spending hundreds of millions to do this. K8S control-planes are a rounding error compared to this.

You could have grandfathered in current deployments but - nope. In the tech world this is up there with killing Google Reader.


I use more than one zone because I run TCP services which need low latency. I'll probably just switch to Digital Ocean.


I'm sorry to hear that. You can use namespaces and separate node pools to isolate workloads. We'd love to hear more about your use case for having many small GKE clusters.


Hey Seth, I know you used to work at HashiCorp on Vault. I think Vault recommends that if you want to deploy it on Kubernetes, it should have the cluster to itself.


That's correct. Vault Enterprise (at my last math) was ~$125k/yr, so that management cost is negligible :)
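(At $73/mo, the management fee is about $876/yr, well under 1% of that.)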


Thank you for the feedback.

> Google's inability to actually commit to long term support...

This is _exactly_ what Google is doing in this case. We are providing an SLA - a legal agreement of availability and support. These changes introduce a guaranteed availability of the management control plane.


He means that there was a sales pitch from all the GCP sales guys about not charging for this. 99.95% is not enough, IMO, to charge $73/mo.

As someone else noted, it breaks a lot of recommended architectures where you would have auto provisioning and a lot of clusters to separate concerns and keep costs down.

Finally, the pricing changes are starting to look like a pattern, every time Google deems the usage of a product is good enough, they will increase the price.

They are the Ryanair of the cloud.

Edit 1: moreover, it will increase the cost of Composer, and on top of that, of the recommended pattern where Composer is paired with a Kubernetes cluster for executing the workloads.


> They are the Ryanair of the cloud.

Isn't Ryanair literally the Ryanair of the cloud(s)?


EKS only gives you 99.9% uptime, and I'm uncertain as to whether you could achieve more than 99.9% uptime on your own by DIYing your cluster in a public cloud provider without doing multi-region.


To put that in perspective, three 9’s allows you about 9 hours of downtime a year, which will certainly require multi-region and a dedicated ops team.
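(A year has 8,760 hours; 0.1% of that is about 8.8 hours.)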

Two and a half 9’s is a whole different story. We achieved about 20 hours of downtime last year even without HA on k8s bare metal in Alibaba cloud. But I’m uncertain whether that’s a feat we can repeat this year.


> Finally, the pricing changes are starting to look like a pattern, every time Google deems the usage of a product is good enough, they will increase the price.

To be fair this is hardly new and by no means limited to Google. Any number of SaaS startups that have survived to at least moderate success have done similar things.

Look at UserVoice as an example: it started out with a free tier plus some reasonable paid tiers with transparent pricing, then a year or two back killed the free tier and moved to a non-transparent "enterprise" pricing model with absolutely exorbitant fees.

Plenty of other companies offer free to build their userbase and reach, then either water down the free tier, or remove it entirely. It's practically the SV modus operandi for the last decade.


>Any number of SaaS startups that have survived to at least moderate success have done similar things.

Google is not a startup, it is one of the largest companies on the planet.


I don't see how that's relevant: my point is that it's a tactic employed by a wide range of businesses including but not limited to startups and we shouldn't be surprised to see it here. It's not a pattern that has suddenly emerged.


Shouldn't that be opt-in? The management control plane is not something we consider critical to operations. I'd happily accept if it was unavailable for 1 and a half minutes a day versus these additional costs.


That's great feedback. I'll relay that to the product team. IANAL, but I think it would be legally challenging.


Hard to understand how it would be legally challenging. ISP's do it all the time when differentiating their business plans from residential. Both services run over the same infrastructure and you typically get the same/similar speeds, but a key difference is an SLA with the business plan.


IANAL either, but I don't see why it would be? Just have separate cluster types, e.g. SLA Zonal and SLA Regional. The SLA already differentiates the current cluster types. Anthos clusters are also not subject to any additional fees?

And having it opt-in would go over better with those users of GKE for whom an additional $73/mo is significant.


Opt-in for the SLA and the additional cluster cost would be fantastic. We run pretty small clusters and don't need any additional SLAs on top of what's already provided. Frankly, we couldn't care less about the control plane SLA.


btw. I would prefer something like a cost reduction if the cluster runs 24/7. We also don't currently need that level of SLA. We have a single cluster and are a really small customer that chose GKE precisely because of the absence of a management fee (nodes are really expensive compared to non-cloud providers). We never used more than one REGIONAL cluster (we also never spun up a new one; we only change workers). And now it will cost us money. What a shame.

P.S.: The German sites have the pricing wrong.


Sure, in this case I can see that. I was referring to those four points with respect to Google services in general. I'm sure I don't need to dig up a list of features and services that have been merged, shuttered, price hiked or moved into a different product suite over the years. Admittedly a lot of the issues are with the GSuite side of things, but it's sad to see this coming to GCP as well.

On a hopefully more constructive note: if this is the way it's going to be from now on, I would at least expect an exemption from such a management fee/SLA for clusters of preemptible nodes. Having an SLA and a management fee on a cluster whose nodes can be killed in a 30-second window without prior warning seems all but pointless.


Even if your worker nodes are pre-emptible, the master nodes are not. The management fee covers running those master nodes and many other core GCP integrations (like Stackdriver logging and other advanced functionality). Billing is computed on a per-second basis for each cluster. The total amount will be rounded to the nearest penny at the end of the month.


> We are providing an SLA - a legal agreement of availability and support.

Do I still have to pay the bill first, fill out forms, get account managers involved, eventually receive a partial credit, and repeat this until the delta between what I expected as the SLA credit and what I got is less than the cost of the time to fight another cycle?

