Not really. You can't meaningfully reduce the blast radius of crashes or bad deployments. You need the discipline of a good CI/CD pipeline instead of siloed but decoupled workflows.
Just keeping things neat doesn't go nearly as far as a separate process on separate machines. A monolith might be better, but I don't think it's a situation where you can have it all.
This argument always confuses me. It depends on what you're doing, but if you're doing web services, as most here are, the crash is limited to the request being served.
The blast radius is a single request, right?
In all likelihood it _is_ a separate process on a separate machine.
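To make that concrete, here's a minimal sketch assuming a Go net/http service (the handler names are just illustrative): the standard library recovers a panic inside a handler within that connection's goroutine, so the buggy request fails while the rest of the process keeps serving.

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()

        // Healthy endpoint: unaffected by whatever /buggy does.
        mux.HandleFunc("/ok", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "still serving")
        })

        // Buggy endpoint: net/http recovers the panic in this connection's
        // goroutine and drops only this connection; the process survives.
        mux.HandleFunc("/buggy", func(w http.ResponseWriter, r *http.Request) {
            panic("bug in one small feature")
        })

        http.ListenAndServe(":8080", mux)
    }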
If schema migrations take non-zero time and you want zero-downtime deployments, then both service versions need to be compatible with either the new schema or the old schema, regardless of service size.
The software should be compatible with both the old and new schema until all of your database servers have moved over to the new schema. All rollouts are going to be somewhat staggered even if you go straight to 100%.
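A rough sketch of what that compatibility looks like in practice, assuming Go with database/sql, Postgres-style placeholders, and a hypothetical users.email -> users.primary_email rename done expand/contract style: the transitional code writes both columns and reads whichever is populated, so old and new binaries both work against the in-between schema.

    package userstore

    import "database/sql"

    // Transitional version, deployed after the "expand" step has added
    // users.primary_email and before the "contract" step drops users.email.
    // Old binaries still read/write email; this version keeps both in sync.
    func updateEmail(db *sql.DB, userID int64, email string) error {
        _, err := db.Exec(
            `UPDATE users SET email = $1, primary_email = $1 WHERE id = $2`,
            email, userID,
        )
        return err
    }

    func readEmail(db *sql.DB, userID int64) (string, error) {
        var email string
        // Prefer the new column, falling back to the old one for rows
        // the backfill hasn't reached yet.
        err := db.QueryRow(
            `SELECT COALESCE(primary_email, email) FROM users WHERE id = $1`,
            userID,
        ).Scan(&email)
        return email, err
    }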
The same problem exists in microservice land, only now you have a load of separate teams managing their own databases, perhaps with different solutions.
I think you misunderstood what I said, which is that you still have databases and therefore still need to manage migrations and such, only now that typically falls to the individual teams.
That said, loads of folks do microservices with a shared DB.
Even for a web service, imagine a small feature causes the app to crash due to OOM, run out of file handles, or hit something else that takes out the whole process and not just the one request.
If you have long-running requests (something besides small REST calls, like large data transfers), then any others being served by that same process are taken down too.
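A hypothetical Go sketch of that failure mode (endpoint names made up): one leaky handler grows process-wide state until the kernel's OOM killer terminates the whole process, and every in-flight request on every other endpoint dies with it.

    package main

    import (
        "net/http"
        "sync"
    )

    // Process-wide state shared by every request.
    var (
        mu    sync.Mutex
        cache [][]byte
    )

    func main() {
        // The leaky feature: every call pins another ~10 MB for the
        // lifetime of the process.
        http.HandleFunc("/report", func(w http.ResponseWriter, r *http.Request) {
            mu.Lock()
            cache = append(cache, make([]byte, 10<<20))
            mu.Unlock()
            w.Write([]byte("ok"))
        })

        // A perfectly healthy endpoint, and any long-running transfers it
        // is serving, still die when the OOM killer kills the process.
        http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok"))
        })

        http.ListenAndServe(":8080", nil)
    }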