It's like the argument about excessive modularity in software design generally: you can split a system into so many little pieces that each one becomes deceptively simple, but in doing so you've also introduced a significant amount of extra complexity in the communication between those pieces.
Personally, I think modularity is good up to the extent that it reduces complexity by removing duplication, but beyond that it's an unnecessary abstraction that obfuscates more than simplifies.
The communication would've happened anyway. Now it just happens through a common mechanism with strong isolation. That all the most resilient systems, especially in the safety-critical space, are microkernels speaks for itself. For instance, MINIX 3 is already quite robust for a system that's had hardly any work at all on it. Windows and UNIX systems each took around a decade to approach that. Just the driver isolation by itself goes a long way.
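To make "driver isolation" concrete, here's a toy version of the mechanism in plain POSIX (not actual MINIX code; the supervisor stands in for MINIX 3's reincarnation server): the driver is an ordinary process, so when it crashes, only it dies and gets restarted.

    /* Toy illustration of MINIX-3-style driver isolation: the "driver"
     * runs as an ordinary process, and a tiny supervisor restarts it
     * when it dies. The rest of the system never stops. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    static void run_driver(void) {
        printf("driver %d: servicing requests\n", (int)getpid());
        sleep(1);
        abort();                      /* simulated driver crash */
    }

    int main(void) {
        for (int restarts = 0; restarts < 3; restarts++) {
            pid_t pid = fork();
            if (pid == 0) {           /* child: the isolated driver */
                run_driver();
                _exit(0);
            }
            int status;
            waitpid(pid, &status, 0); /* supervisor notices the crash... */
            printf("supervisor: driver died, restarting (#%d)\n", restarts + 1);
        }                             /* ...while everything else keeps running */
        return 0;
    }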
Now, I'd prefer an architecture where we can use regular programming languages and function calls. A number of past and present hardware architectures are designed to protect things such as pointers or control flow. The ones in production aren't, but they do have MMUs and at least two rings. So apps on them will get breached due to an inherently broken architecture, but they can also be isolated through a microkernel architecture with interface protections. So it's really a kludgy solution to a problem caused by stupid hardware.
There still hasn't been a single monolithic system to match their reliability, security, and maintainability without clustering, though.
>For instance, MINIX 3 is already quite robust for a system that's had hardly any work at all on it. Windows and UNIX systems each took around a decade to approach that. Just the driver isolation by itself goes a long way.
MINIX 3 also has hardly any work done WITH it, so I don't think we can compare it to Windows and UNIX systems regarding robustness, unless we subject it to the same wide range of scenarios, use cases, and workloads...
I'd like to see a battery of tests to find out where it truly stands. Yet there's still no MINIX-Haters Handbook or anything similar. That's more than UNIX in its early days could say. ;)
Communication would've happened, but probably between far fewer actors. So you have a communication channel that's orders of magnitude slower, and greater communication needs. Not good.
That said, on the reliability point I agree with you. If you're building a specialized system and reliability is your main concern, microkernels plus multiservers are the way to go (or perhaps virtualization with hardware extensions, but that's a pretty new technology for some industries).
You're probably going to need to add orthogonal persistence to the mix, or some alternative way to sync state, to be able to recover properly from a server failure; that will also have an impact on performance. But again, you're gaining reliability in exchange.
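By way of illustration, here's the manual version of that state sync in plain POSIX (made-up file names and state; orthogonal persistence would do this transparently): checkpoint after every update so a restarted server can pick up where the crashed one left off, with each write being exactly the performance cost I mean.

    /* Sketch: a server checkpoints its state so a restarted instance
     * can recover. The atomic rename keeps the checkpoint consistent
     * even if we crash mid-write. */
    #include <stdio.h>
    #include <string.h>

    struct server_state {
        long requests_served;
        long last_client_id;
    };

    static int save_checkpoint(const struct server_state *s) {
        FILE *f = fopen("state.tmp", "wb");
        if (!f) return -1;
        fwrite(s, sizeof *s, 1, f);
        fclose(f);
        return rename("state.tmp", "state.chk");  /* atomic on POSIX */
    }

    static int load_checkpoint(struct server_state *s) {
        FILE *f = fopen("state.chk", "rb");
        if (!f) { memset(s, 0, sizeof *s); return 0; }  /* fresh start */
        size_t n = fread(s, sizeof *s, 1, f);
        fclose(f);
        return n == 1 ? 0 : -1;
    }

    int main(void) {
        struct server_state st;
        if (load_checkpoint(&st) != 0) return 1;      /* recover after a restart */
        for (int i = 0; i < 5; i++) {
            st.requests_served++;                     /* handle a request... */
            if (save_checkpoint(&st) != 0) return 1;  /* ...then persist: the cost */
        }
        printf("served %ld requests total\n", st.requests_served);
        return 0;
    }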
The communication channel does get slower. The good news is that applications are often I/O bound: lots of comms can happen between such activity if the system is designed for that. One trick used in the '90s was to modify a processor to greatly reduce both context-switching and message-passing overhead. A similar thing could be done today.
Of course, if one can modify a CPU, I'd modify it to eliminate the need for message-passing microkernels. :)
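For a feel of how big the gap is, a crude way to measure it is to time a plain function call against a pipe round-trip between two processes, a rough stand-in for synchronous message passing and its context switches (POSIX only; numbers are machine-dependent):

    /* Crude comparison: direct function call vs. a synchronous
     * request/reply over pipes between two processes. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define ITERS 100000

    static long long now_ns(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    static int work(int x) { return x + 1; }   /* the "service" being called */

    int main(void) {
        /* Baseline: plain function calls. */
        long long t0 = now_ns();
        volatile int acc = 0;
        for (int i = 0; i < ITERS; i++) acc = work(acc);
        long long t1 = now_ns();

        /* Two pipes form a synchronous request/reply channel. */
        int req[2], rep[2];
        if (pipe(req) || pipe(rep)) return 1;
        pid_t pid = fork();
        if (pid == 0) {                       /* child is the "server" */
            close(req[1]); close(rep[0]);     /* keep only our ends */
            int x;
            while (read(req[0], &x, sizeof x) == (ssize_t)sizeof x) {
                x = work(x);
                write(rep[1], &x, sizeof x);
            }
            _exit(0);
        }
        close(req[0]); close(rep[1]);
        long long t2 = now_ns();
        int v = 0;
        for (int i = 0; i < ITERS; i++) {
            write(req[1], &v, sizeof v);      /* "send" */
            read(rep[0], &v, sizeof v);       /* block for the "reply" */
        }
        long long t3 = now_ns();
        close(req[1]);                        /* EOF lets the child exit */
        waitpid(pid, NULL, 0);

        printf("function call:   %lld ns/op\n", (t1 - t0) / ITERS);
        printf("pipe round-trip: %lld ns/op\n", (t3 - t2) / ITERS);
        return 0;
    }

On typical hardware the round-trip lands in the microseconds while the call is in nanoseconds, which is the "orders of magnitude" being traded against isolation.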
I think this is a good example of the law of conservation of complexity[1]. You can't reduce complexity; you can only change what's complex. In the case of monolithic kernels versus microkernels, it sounds like going to a microkernel moves the complexity from the overall design into the nuts and bolts of interprocess communication.