Google does make dealing with dockerized services pretty easy. I'm CTO for a small startup. I actually know how to use Kubernetes and Terraform, and I've used configuration management tools like Puppet, Ansible, and Chef over the past fifteen years as well.
However, what these have in common is that using them properly can more or less become a full-time job for a few months. It's never simple or easy. I've been on multiple teams where somebody was doing that stuff full time for months on end. I don't have that kind of spare time: either I do product development or I do devops, but I can't do both, and I certainly can't take two months out of my schedule for this stuff. So I tend to look for solutions that reduce the amount of devops I need to do to the absolute bare minimum. Additionally, we are bootstrapped, which means I also look for cost savings.
Google Cloud Run is awesome for this. Last year I wanted to set up CI/CD for a simple service and have it run in Google Cloud so we could point our web app at it. This took around fifteen minutes from start to finish. My starting point was a git repository that already had a Dockerfile in it. It took me a few mouse clicks to get Cloud Run to generate a Cloud Build pipeline and deploy the first version in an autoscaling Cloud Run environment. Awesome. Love it. The best part is that this setup cost us close to $0/month while we were developing over the next six months: we had so few requests that we stayed within the free tier. It even does WebSockets now; we actually applied to get access to the beta of that.
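For anyone who prefers the command line to mouse clicks, the deploy step boils down to a single gcloud command. The project, service, and region names below are placeholders, not our actual setup:

```shell
# Deploy an already-built container image to Cloud Run.
# (Cloud Build can produce the image from a repo with a Dockerfile,
# e.g. via `gcloud builds submit`; the console can wire that up for you.)
gcloud run deploy my-service \
  --image gcr.io/my-project/my-service:latest \
  --region europe-west1 \
  --allow-unauthenticated
```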
Later we ran into some limitations with background threads on Cloud Run: it aggressively throttles the CPU of running containers when they are not serving a request. So I decided to spin up a VM with the same Docker container. Unlike on AWS, spinning up a VM that runs a Docker container is stupidly easy. I just took the same container image we used for Cloud Run, passed it to the VM configuration (in the UI), and it launched in one go on Container-Optimized OS; our container came up after a few seconds. Every UI screen in Google Cloud has a "copy this as a gcloud command line" option as well, so that became the basis for our CD (GitHub Actions): we simply defined a GitHub Action that updates the VM. Prototype what you want in the UI, then copy and adapt the command for automation. Great stuff.
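As a sketch of what that looks like on the command line (the instance name, image, and zone are made up; the exact flags the UI hands you may differ):

```shell
# Create a VM that runs a container on Container-Optimized OS.
gcloud compute instances create-with-container my-vm \
  --zone europe-west1-b \
  --container-image gcr.io/my-project/my-service:latest

# The CD step from GitHub Actions then just points the VM at a new image.
gcloud compute instances update-container my-vm \
  --zone europe-west1-b \
  --container-image gcr.io/my-project/my-service:v2
```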
Fast forward a year and I needed to finally turn this into a more proper production environment. So I bought a wildcard certificate, created a load balancer with it, defined an instance group with an instance template similar to the VM I had prototyped earlier, and ended up with a nice auto-scaling service that we can deploy with zero downtime. The hardest part was a bit of trial and error to figure out exactly what I needed to do; it took me about a day to get right. Again, we have a GitHub Action that updates this, so full CI/CD.
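The zero-downtime deploy is a rolling update of the managed instance group; roughly like this, again with illustrative names:

```shell
# Roll the group over to a new instance template; instances are
# replaced gradually, so the service stays up behind the load balancer.
gcloud compute instance-groups managed rolling-action start-update my-group \
  --zone europe-west1-b \
  --version template=my-template-v2
```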
If you run monoliths, this stuff goes a long way. There's more to our setup of course, but mostly I manage not to spend most of my days on devops topics. We will at some point outgrow our current setup and hire a full-time devops person to scale it a bit more responsibly. But actually, this setup already ticks most of my boxes. It's simple enough that I don't actually care about automating how it is created. It's flexible enough that it is easy to tweak. And it has things like logging, health checks, monitoring, and alerting that are relatively easy to manage as well, although maybe a little bare-bones.
If/when we move to Kubernetes, we'll end up roughly quadrupling our cost, and right now that would not really solve a problem I have. It's probably a valid next step to take eventually. Until then, monoliths and KISS are what we do.
Google Cloud Run is so underrated. The only things you absolutely need to know to deploy are to listen on the port assigned via the environment variable Cloud Run provides to your container, and to be aware that your container is stateless. Everything else is up to you. Like you point out, it's completely portable in a way that hardly anything on AWS is.
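A minimal sketch of the port rule in a container entrypoint (the server binary at the end is hypothetical):

```shell
#!/bin/sh
# Cloud Run injects the port to listen on via $PORT; defaulting to 8080
# keeps the same entrypoint working locally, outside Cloud Run.
PORT="${PORT:-8080}"
echo "listening on 0.0.0.0:${PORT}"
# exec my-server --listen "0.0.0.0:${PORT}"   # hypothetical server binary
```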