I'm a fan of boring technology too, but I would like to suggest to you that Serverless _is_ kind of boring.
Essentially you just upload a ZIP of your application, and register a handler function that takes a JSON payload.
Obviously this is quite a bit more boring than a K8s cluster, with a bunch of nodes, networking, Helm charts, etc.
I would posit that even compared to something like a DO Droplet, Serverless is still kind of boring. Everything is going to fit into the model of registering a handler function to accept a JSON payload. There's no debate about whether we're going to have Nginx, or what WSGI runtime we're using. It's just a function.
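To make that concrete, here's a minimal sketch of the kind of handler I mean (TypeScript on AWS Lambda behind API Gateway; the names and payload shape are just illustrative):

```typescript
// Minimal "it's just a function" sketch: take a JSON payload, return JSON.
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // API Gateway delivers the request body as a string (or null)
  const payload = event.body ? JSON.parse(event.body) : {};
  return {
    statusCode: 200,
    body: JSON.stringify({ message: "hello", echoed: payload }),
  };
};
```

That's the whole deployable unit: zip it up, point the platform at `handler`, done.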
And with Serverless, your cost for doing a couple million, 2-second-long requests is about four cents.
> I would like to suggest to you that Serverless _is_ kind of boring
It's not my kind of boring, although I sort of see what you're saying.
It presents a simple facade, but it's built on complex infrastructure you have no visibility into. So when things go wrong it's a nightmare.
To me boring tech means super simple, KISS to the extreme. Something I can diagnose fully when necessary without any layers of complexity (let alone proprietary complexity I can't access) standing in the way.
The challenge with serverless is building systems that rely on more complex backend processes and existing code, and doing things like testing against most of an existing codebase. Serverless is great for Node.js/JavaScript stacks that are database- and front-end-heavy and don't need extra complexity like queueing, event streaming, or more involved architectures. Once you do need that kind of architecture, serverless usually becomes a huge mess, and the developer experience turns into a giant catastrophe as well.
Here the OP kind of got caught by the terrible DX that is almost inherent to serverless, IMO.
I'd argue serverless handles those cases even better. SQS for queueing and Kinesis for streaming have you pretty much covered. Much easier to manage those than setting up a fleet of workers that are all long polling for messages and managing heartbeat signals or routing to a DLQ manually.
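A queue consumer is just another handler, too. Here's a rough sketch of an SQS-triggered Lambda (TypeScript; `processJob` is a hypothetical stand-in for your own logic, partial-batch failure reporting assumes you've enabled it on the event source mapping, and redrive to the DLQ is configured on the queue, not in code):

```typescript
import type { SQSEvent, SQSBatchResponse } from "aws-lambda";

export const handler = async (event: SQSEvent): Promise<SQSBatchResponse> => {
  const batchItemFailures: { itemIdentifier: string }[] = [];

  for (const record of event.Records) {
    try {
      const job = JSON.parse(record.body); // message body is plain JSON
      await processJob(job);               // hypothetical domain logic
    } catch {
      // Only the failed messages are retried; after maxReceiveCount
      // attempts SQS moves them to the dead-letter queue for you.
      batchItemFailures.push({ itemIdentifier: record.messageId });
    }
  }

  return { batchItemFailures };
};

async function processJob(job: unknown): Promise<void> {
  console.log("processing", job); // placeholder for real work
}
```

No long polling loop, no heartbeats, no worker fleet to babysit.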
It's admittedly a simplification and a best case. For AWS Lambda, the price is in GB-seconds, and the amount of CPU available to your function is itself a function of the memory allocated.
The on-demand price is $0.0000166667 for every GB-second on x86, so it looks like I've also misplaced a decimal. There's also a flat $0.20 per 1M requests on top of the compute charge.
Lambdas can be sized from 128MB to 10GB, and pricing depends on the resources allotted.
Ultimately, this pricing just represents the compute time for a function to complete. That function can do whatever it wants.
There are additional costs to put an API Gateway in front of it, for instance, to make it a publicly accessible API.
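As a rough back-of-the-envelope using the rate above (and ignoring the free tier, API Gateway, and data transfer):

```typescript
// Back-of-the-envelope Lambda cost at the on-demand x86 rate quoted above.
const GB_SECOND_RATE = 0.0000166667;      // $ per GB-second
const REQUEST_RATE = 0.20 / 1_000_000;    // flat $0.20 per 1M requests

function lambdaCost(requests: number, durationSec: number, memoryGb: number): number {
  const gbSeconds = requests * durationSec * memoryGb;
  return gbSeconds * GB_SECOND_RATE + requests * REQUEST_RATE;
}

// 1M requests x 2s x 1GB = 2,000,000 GB-seconds
// => ~$33.33 compute + $0.20 request charge
console.log(lambdaCost(1_000_000, 2, 1).toFixed(2)); // "33.53"
```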
I'm not day-to-day on cloud stuff and haven't been in a while, which is why I'm asking this; it's not intended to be passive-aggressive:
So if I had a "hello world" function that basically just returned a constant JSON payload... am I seriously looking at a pittance, pennies for millions of requests? I.e. $0.20?
No. You would also have to pay for memory usage and bandwidth.
Millions of requests for a static file are easily handled by a dedicated server in a few seconds if the file is small. $0.20 per million requests is very expensive by comparison.
To put things into perspective: if you buy a very cheap new server for $1K, you would be able to handle small static requests worth thousands of dollars each day according to AWS pricing. That's a nice return on investment!
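For the sake of the comparison, here's the arithmetic with an assumed throughput (the requests-per-second figure is a guess for a small static file behind nginx, not a benchmark, and this only counts Lambda's flat per-request charge, so if anything it understates the gap):

```typescript
const REQS_PER_SEC = 60_000;                    // assumed throughput of the $1K box
const REQS_PER_DAY = REQS_PER_SEC * 86_400;     // ~5.2 billion requests/day
const LAMBDA_REQUEST_PRICE = 0.20 / 1_000_000;  // $0.20 per 1M requests
console.log((REQS_PER_DAY * LAMBDA_REQUEST_PRICE).toFixed(0)); // ~"1037" dollars/day
```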
Exactly! If you make your app compatible with serverless by following some restrictions, deployment is pretty boring.
In my opinion, it is easier to switch from a serverless pattern to VM instances than the other way around; going from VMs to serverless is much more difficult without a major rewrite.