I think almost everyone was expecting this; still, it's great to see it happen.
Amazon truly listens to their customers and delivers what they want, even if they have their own competing in-house solution as well. I do think that for new projects, you'll see EKS being the more popular pick over ECS, which never reached quite the same mind-share as Kubernetes.
ECS will slowly be replaced; it doesn't make sense anymore, and AWS was smart to "kill" its own product. Either you do this, or you're out. Google and Azure have offered managed Kubernetes for months now... Companies increasingly want solutions that let them "easily" migrate from one cloud to another - basing everything on a proprietary solution is a no-go...
ECS always felt rushed to me. The semantics of day-to-day operations always felt really awkward. Common things like rolling updates were a 3-step operation (or they were for us), and node replacement was a pain. It could be that we were using it wrong, but I never felt motivated to find out the right way. Kubernetes came along, I played with it briefly, and never wanted to go back.
I think ECS will, and should, be retired. It was a lurch in the right general direction, but ultimately missed the mark.
It WAS rushed. Amazon had customers banging on their door demanding container support in 2014, and they got ECS out the door fast. Now it’s 2017 and Kubernetes seems to be the winner of the “orchestration wars”, so they’re pivoting to that. Smart move of them to ship something and patch it up later if needed, they successfully defended against GKE’s assault and are vying to stay on top.
> Smart move of them to ship something and patch it up later if needed, they successfully defended against GKE’s assault and are vying to stay on top.
If you had to release a product to support the open source front runner, you did not successfully defend against it; you conceded after your tooling adoption failed.
As long as Kubernetes leads the way, lock in at any cloud provider is prevented (can even move back on-prem when the winds shift again). Kudos to Google for enabling that, but they have their own motives (ie disrupting AWS uptake).
>As long as Kubernetes leads the way, lock in at any cloud provider is prevented
That might be a little strong. They still have lots of other proprietary offerings you might use along with K8S. Cloudwatch, various database services, Lambda, SQS, S3, etc.
Don’t rely on primitives without open alternatives unless you want to be chained to your vendor. Today it’s roasting Oracle during the keynote, tomorrow it’ll be today’s “underdog”. It’s easy to talk customer success when the money firehose flows.
MariaDB and PostgreSQL both work well outside of RDS.
I suppose, but there's some stuff that's just hard to avoid. Like tooling to break down cloud costs, or network configuration, monitoring, provisioning block storage, etc. You can get to less lock-in, but it's hard to get to none.
Last I saw, "pets" like databases were not optimal in K8S either. You can make it work, but it's higher effort.
Kubernetes wasn't the front runner when ECS was released. Nobody knew what would happen. There was a distinct chance ECS could own everything, or Mesos, or Docker’s orchestration, or something else altogether.
They resisted vehemently for a long time, even when it was painfully obvious it was wrong, and you’d just end up having these totally surreal conversations with their ECS product manager that would make your head spin they made so little sense. Glad that’s over.
That’s cause it’d be way better for them if ECS “won”, but it didn’t, so they adapted. Similar to how Docker now has had to add Kubernetes support after being combative and dragging their feet for years.
Does this just use a tool that runs on your laptop, to schedule and manage ECS clusters on Amazon? That's really not what I was looking for... I was hoping to prototype ECS-based solutions without spending money on cloud resources.
I have a laptop with 8GB+ RAM and a fast SSD, it doesn't have any trouble running fairly complex constellations on Minikube and I could later rebuild and/or install them on a production Kubernetes cluster, without any changes.
Can I do something like that with Blox, or is this another different way to consume ECS and spend money on EC2 nodes to run containers?
Edit: I would be satisfied if you told me I still need to consume some AWS services like SNS and CloudWatch to use this toolkit, but that with Blox I don't actually need to run my containers on ECS unless I want to expose them to the world.
I haven't found any tutorial or guide that indicates this is anything other than a different scheduler for ECS.
"ECS product manager" pretty much sums it up. They're gonna toe the ECS line till the ship sinks. A program manager would have been the real convo. Now the EKS product manager's star will shine.
Having done multiple K8s migrations, I can tell you that many of the problems around migrating to k8s have little to do with actually setting up the cluster. There's dockerizing all your apps, setting up the build->deploy pipeline for each app, and fixing all the hardcoded hacks where your apps aren't properly 12-factor (failing to take config from env vars, assuming you "always" deploy to a specific cloud, etc).
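To make the env-var point concrete, here's a minimal sketch of what "properly 12-factor" looks like once an app lands on Kubernetes - config injected through environment variables rather than hardcoded for one cloud. All the names (`my-app`, `DATABASE_URL`, the image, the ConfigMap) are hypothetical placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                       # hypothetical app name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0   # placeholder image
        env:
        - name: DATABASE_URL         # the app reads this at startup
          valueFrom:                 # instead of a value baked into the code
            configMapKeyRef:
              name: my-app-config    # hypothetical ConfigMap
              key: database-url
```

An app that already takes its config this way migrates between clusters (or clouds) without code changes; one with hardcoded endpoints has to be patched first, which is where the migration time goes.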
The other main component of my time in a k8s engagement revolves around logging, monitoring, alerting and backups of the k8s cluster, which hopefully EKS handles for you.
All told, actually starting the k8s cluster is probably less than 10% of my time.
> All told, actually starting k8s cluster is probably less than 10% of my time.
+1.
I've found myself (with Azure ACS) re-creating clusters quite often - as they don't support upgrades. This takes minutes with my deploy scripts, replicating the state of the cluster you're copying is the main bit of work.
I agree with this, but I think the responsibility for fixing all this stuff lies with the teams responsible for each app - they should be the ones dockerising apps, setting up monitoring, etc. That way you're distributing the devops work around the team, which is what you really want.
Keep going... EKS is still in preview, so you probably shouldn't rely on it in production just yet. However you could check out kops (https://github.com/kubernetes/kops), which makes provisioning a k8s cluster on AWS extremely easy. Good luck! :)
I'd highly, highly recommend kube-aws over kops for AWS. It's far more transparent, as it uses CloudFormation templates, though it does have a higher upfront time investment - probably two hours as opposed to the 30 minutes or so that kops requires.
That's how we've been migrating from our existing terraform-based infra to kubernetes. It made the transition of our staging environment relatively painless.
EKS is likely to be in preview/beta for most if not all of next year. Even after it exits preview, you'd probably want someone else to kick the production tires. Given how Kube on EC2 is doable/manageable (that's what we do) I don't see a reason to stop. If Kube is the right choice for you now, it's the right choice on EC2, and once EKS becomes a thing you can migrate to, you can just do it.
That is fine! Depending on your deadlines you can either go ahead and implement a small scale kubernetes cluster on EC2 then when EKS is ready, you can easily migrate your workloads to it which is essentially the benefit of using kubernetes in the first place.
Whether you're spawning your own cluster or using AKS, you still need to set up a build pipeline and have your applications in a containerizable state. And any configuration like Dockerfiles or Helm charts you can still use. Actually setting the cluster up isn't the big deal (with something like kops, at least).
Not surprising ever since AWS joined the CNCF. Glad this is happening... not operating a Kubernetes cluster will be nice and integration with IAM is a nice value add! Most excellent :)
They only allow one acronym in the name of the service.
Since "AWS" is already an acronym, they use "Amazon" for services whose names are themselves acronyms, e.g.:
Amazon SQS
but
AWS Kinesis
Amazon EC2
but
AWS Lambda
this is not an exception b/c DB is an acronym:
Amazon DynamoDB
If the service name is not an acronym, then they choose whichever sounds nicer.
Some companies are very reluctant to rely on "Amazon things" because they consider Amazon a key competitor. Especially in retail or e-commerce. Those companies are often more willing to use services that are specifically AWS-branded. It helps AWS make promises like "yes we will look at statistics of your usage to improve our services, but we won't share that with the retail side of Amazon."
Why are there at least 3 ways to configure an EMR cluster from a main file, with many of the property names not matching? (Data Pipeline, CloudFormation, and EMR's own syntax.)
How does one set up the workers? None of the announcements, documentation or FAQs explain how that's supposed to happen. It's pretty clear that Amazon manages the masters (any of the latest three public versions), but it's not clear what you'd do next: bring your own instances with kubelet and do TLS bootstrapping or similar?
So you're going to have to use a subset of possible distros/images. Then you have to figure out your node update strategy. These are the kinds of details I hoped they would discuss.
We'll provide AMIs with pre-configured images and Kubernetes resources. Customers have complete control over the worker nodes. With Fargate integration, node management will be pain-free.
We'll show how the images are built using Packer. You are welcome to bring your own images.
Worker nodes upgrade will be the usual k8s way - drain, update, bring it back.
Some teams will want custom cluster deployments for whatever reasons, but it will probably lose a lot of traction now... which is fine. Kops is a big, expansive tool with a little too much surface area for my liking in an infrastructure management tool.
I'll be interested to know if EKS will support custom CNI or if it will use kubenet.
They wrote their own CNI plugin to use AWS networking, much like kubenet. I haven't found details on whether everything is encrypted in transit, though; I REALLY hope it is.
I'm guessing if you want your k8s cluster to be cloud-neutral, it would still be a good idea to have a tool like kops. But it will certainly reduce the number of people using it... which isn't a bad thing, really.
Does anyone know what the implementation(s) are for the Ingress resource? Google uses their own custom implementation, I'm curious if Amazon's version will be more portable. [0]
At the very least, Kubernetes natively supports exposing Services (fronting an Ingress controller or otherwise) by configuring AWS ELBs when it's configured to run on AWS.
All you have to do is set the "type: LoadBalancer" field in your Service's manifest.
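For reference, a minimal sketch of such a manifest; on AWS, the cloud provider integration provisions an ELB for it automatically. The name, selector, and ports are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app             # hypothetical service name
spec:
  type: LoadBalancer       # on AWS this provisions an ELB for the Service
  selector:
    app: my-app            # pods labeled app=my-app receive the traffic
  ports:
  - port: 80               # port the load balancer listens on
    targetPort: 8080       # container port traffic is forwarded to
```

Once applied, `kubectl get service my-app` shows the ELB's hostname under EXTERNAL-IP when provisioning completes.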
GCP has great primitives that you can use to build your architecture. Things are generally faster, cheaper (with far simpler billing), and scale without any tuning.
However their biggest weakness is the overall alpha/beta stage of many APIs and the lack of managed services. AWS, especially with these recent announcements, has so many managed services that you can get started building your business product quicker than anywhere else, even if the offerings and billing are more complex.
GCP does have some things like Spanner and BigQuery that are unmatched though.
Right, Azure has an interesting offering for a global database with CosmosDB but AWS is leaping ahead of the game with global tables and multi-master/region Aurora, which are far more usable and productive than the others.
That's a different user. We run on GCP/GKE and aren't moving, but by managed services I'm talking about the multitude of products that AWS offer in a managed version, everything from NFS to a graph database now. It makes building products very easy since you have an answer for any infrastructure need.
Azure comes close but GCP is very far behind. What's there is nice and well designed, but leaves a lot of gaps to fill in.
Google's approach to auth is cumbersome and leaves a lot to be desired. Also, there's nothing like cloudformation for configuring supporting services (their equivalent is a pathetic joke).
The only thing I'd use GCP for today is bigquery. For everything else I'd rather use AWS.
I work for Google. Sorry to hear that your experience was poor, but thanks for the candid feedback!
Is the equivalent to CloudFormation on GCP supposed to be Deployment Manager? Would you mind elaborating what could have been done better?
I work closely with them so I can at least relay the feedback. If it's easy enough maybe I'll even get to help since I am interested in working on that project =D
Yes Deployment Manager. I can't remember now. I looked at it when I needed to automate something and found that I couldn't do it. IIRC it seemed too limited in its capabilities.
The GKE auth thing is so bad that we had to roll back from using service accounts to using normal API keys, because there was nowhere in Stackdriver to add a service-account key file. So the choice was either lose all our endpoint monitoring or just switch to API keys. When I opened a support ticket about this, the support guy seemed incompetent. He literally couldn't understand what I was saying despite my repeating myself 3 times in a way I struggled to make any clearer. It wasn't worth the hassle.
I'm running a gRPC/REST service on GKE with Endpoints and to add a new credential I needed to add the key to the service.yml file and update the endpoint. There's no way that scales. I can't wait to use AWS IAM for this instead. We had to backtrack and give out API keys instead of having anything better.
It's like GCP services weren't designed to work with each other. Just a hodgepodge of services that are fine if you can run CLI commands, but as soon as you want to get an ops team involved who want to do everything through a UI you're screwed.
Oh, and I can't tell you how frustrating it is for the k8s alpha clusters to just vanish on you. I'm a big boy. Let me decide when I want to kill an alpha cluster because, you know, I might know better than whichever engineer put that 30 day limit in.
Sorry for the rant, but as you can imagine I'm done with GCP and can't wait to head back to AWS land.
Terraform [1] should be preferred over both Google's and Amazon's offerings, as it supports cross-cloud provisioning and provides features such as the update plan, which are crucial for making low-risk changes to infra.
We ditched managed kubernetes on GCP and rolled our own using kops on AWS. Things are more stable. ELB works better than Google Cloud load balancer which had random issues and meltdowns when we were there. Also support is better.