> But I am curious where you have seen modern runtimes fail and where the code was the issue (not tweaks to the JVM settings); any concrete examples where well written, best practice code worked on the laptop but failed in k8s?
Not sure about OP, but most of the times I have seen devs struggle with Kubernetes, it has been in tweaking the knobs around deployments, including security. Startup vs. readiness vs. liveness probes, rolling updates, auto-scaling, pod security policies and such are usually all new to developers, and each has a lot of different options. Most devs just want "give me the one that works, with good defaults", and need a higher-level abstraction.
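To make the probe distinction concrete, here is a minimal sketch (plain JDK, with hypothetical /livez and /readyz paths that are my own naming, not anything OP mentioned) of what the two endpoints look like from the application side; the Kubernetes side is then just pointing livenessProbe and readinessProbe at those paths:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.atomic.AtomicBoolean;

public class HealthEndpoints {
    // Flipped to true once caches are warm, connections are up, etc.
    static final AtomicBoolean ready = new AtomicBoolean(false);

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);

        // Liveness: "is the process alive?" Keep it dumb; a failing
        // liveness probe gets the pod restarted.
        server.createContext("/livez", exchange -> respond(exchange, 200, "ok"));

        // Readiness: "can I take traffic right now?" A failing readiness
        // probe only pulls the pod out of the Service; it does not restart it.
        server.createContext("/readyz", exchange ->
                respond(exchange, ready.get() ? 200 : 503, ready.get() ? "ready" : "warming up"));

        server.start();

        // Simulate slow startup (e.g. cache warm-up) before declaring readiness.
        Thread.sleep(5_000);
        ready.set(true);
    }

    private static void respond(HttpExchange ex, int code, String body) throws IOException {
        byte[] bytes = body.getBytes();
        ex.sendResponseHeaders(code, bytes.length);
        try (OutputStream os = ex.getResponseBody()) {
            os.write(bytes);
        }
    }
}
```

The point being: liveness failures restart the pod, readiness failures just take it out of rotation, and mixing those two up is where I have seen most of the pain.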
But at most companies I have seen, that is handled by people in specific roles who are also on the team. Not all devs on the team need this knowledge. Depending on the service, you need to sort out resourcing. We have monoliths and microservices running on ECS and EKS, and we have one person who does the knob-turning and one person (me) who can take over if need be. I see no need to burden the others with this, dare I say it, crap, because it is just not really useful or needed for writing the business functionality that our clients want and need and pay for.
OP seemed to imply that coders need to know this stuff because their code might not work otherwise. If that means turning knobs on the outside (runtimes/containers), then sure, but the devs themselves don't need to know it. Their comment about the JVM implies something else, though, and I am curious what that is.
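I can only guess at what OP meant, but the classic JVM case I have seen for "worked on the laptop, failed in k8s" is the runtime sizing its heap and thread pools from what it thinks the machine has rather than what the pod's cgroup actually grants. A quick sketch (nothing OP-specific) to see what the JVM believes it is running on:

```java
public class ContainerView {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();

        // On a laptop this reflects the whole machine; inside a pod with
        // resource limits it should reflect the cgroup limits. Container
        // support has been on by default since JDK 10 (and 8u191); older or
        // misconfigured JVMs size the heap from host memory and get
        // OOM-killed even though the code itself is fine.
        System.out.println("Available processors: " + rt.availableProcessors());
        System.out.println("Max heap (bytes):     " + rt.maxMemory());
    }
}
```

If the laptop and the pod disagree here, the fix is usually a runtime flag like -XX:MaxRAMPercentage or a bumped resource limit, not the business code.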