I work on low-code cloud ETL tools. We provide the flexibility for the customer to do stupid things. This means we have extremely high variance in resource utilization.
An on-demand button press can start a process that runs for multiple days, and this is expected. A job can do 100k API requests or read/transform/write millions of records from a database; this is also expected. Out-of-memory errors happen often and are expected. It's not our bad code, it's the customer's bad code.
Since jobs are run as microservices on isolated machines, this is all fine. A customer (or multiple at once) can set up something badly, run out of resources, and fail or go really slow, and nobody is affected but them.
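To make that isolation concrete, here's a minimal sketch of the idea, assuming a hypothetical job runner that caps each customer job in its own process. In our setup the limit is really a container/VM boundary on an isolated machine, not a local rlimit, and the job command and 2 GiB cap below are made up for illustration.

```python
import resource
import subprocess

# Hypothetical per-job memory cap (bytes); in production this would be a
# container/VM limit on an isolated machine, not a local rlimit.
JOB_MEMORY_LIMIT = 2 * 1024 ** 3  # 2 GiB

def _cap_memory():
    # Runs in the child process only, so the cap never touches the parent
    # runner or other customers' jobs.
    resource.setrlimit(resource.RLIMIT_AS, (JOB_MEMORY_LIMIT, JOB_MEMORY_LIMIT))

def run_customer_job(job_cmd: list[str]) -> int:
    """Run one customer's job in its own process with its own memory cap.

    If the job allocates past the cap it dies with an out-of-memory error,
    but nothing else in the system is affected.
    """
    proc = subprocess.run(job_cmd, preexec_fn=_cap_memory)
    return proc.returncode

if __name__ == "__main__":
    # "customer_job.py" is a placeholder for whatever the customer configured.
    code = run_customer_job(["python", "customer_job.py"])
    print(f"job exited with {code}")
```

The point is the blast radius: the badly configured job fails or crawls, and only that job's process pays for it.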
It's not automatic, but it has the potential for more isolation by definition.
If your service has a memory leak or crashes, it only takes down that service. It is still up to your system to handle such a failure gracefully. If that service is a critical dependency, then your system fails; but if it is not, the rest of your system can still partially function.
If your monolith has a memory leak or crashes, it takes down the whole monolith.
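To illustrate the "partially function" case, here's a minimal sketch, assuming a hypothetical non-critical recommendations service at a made-up internal URL. If that service has crashed or is OOMing, the caller degrades to a fallback instead of failing the whole request.

```python
import json
import urllib.error
import urllib.request

# Hypothetical non-critical dependency: an internal recommendations service.
RECS_URL = "http://recs-service.internal/recommendations?user=42"

def get_recommendations() -> list[str]:
    """Call the recommendations service, degrading gracefully if it's down.

    If the dependency has crashed (e.g. a memory leak killed it), the rest
    of the system keeps working with an empty fallback instead of failing.
    """
    try:
        with urllib.request.urlopen(RECS_URL, timeout=2) as resp:
            return json.load(resp)
    except (urllib.error.URLError, TimeoutError, json.JSONDecodeError):
        # Non-critical dependency failed: fall back rather than propagate.
        return []
```

If the dependency were critical, no amount of catching exceptions helps; that's the part that stays your system's responsibility either way.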
With a well-built monolith, a failure in one component won't bring down the whole system.
With poorly built microservices, a failure in one service absolutely does bring down the whole system.
Not sure I'm convinced that by adopting microservices your code automatically gets better isolation.