Back when I started engineering school, we tended to add more constraints to systems than they actually needed, believing we were making them more secure and "safer".
"This will make sure we cover edge cases we're not aware of", we thought.
Later we discovered that such systems are called "hyperstatic" and that they are actually more fragile and more prone to malfunction. What we should have aimed for were isostatic systems, where fewer constraints mean a more stable system.
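To make that concrete with a minimal sketch from basic planar statics (not anything specific to Boeing or aircraft): the degree of static indeterminacy of a 2D structure is

    n = r - 3

where r is the number of independent support reactions and 3 is the number of planar equilibrium equations. A simply supported beam (pin + roller) has r = 3, so n = 0: isostatic, and a small support settlement or thermal expansion is accommodated without internal stress. Add one more support and r = 4, so n = 1: hyperstatic, and the same settlement or expansion now forces stresses into the member. The "extra" constraint that was supposed to add safety is exactly what makes the system sensitive to small imperfections.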
I'm not saying Boeing engineers aren't aware of this. Of course they are. I just wanted to give an example of how trying to avoid mistakes *may* lead to less safe systems.
Sure, but this just assumes they don't know what they're doing (which, well, is probably true). It doesn't refute the point that you want to put people who are obsessive about safety in charge of safety.
I work for a healthcare company, and we definitely put people who stress about a patient coming to harm in charge of safety, not people who are so-so about it.
I read GP as talking about relocating people who were paralyzed by safety concerns.
E.g. the developers who never ship code because they always want to build the better version of the thing that they thought up while building the current version.
At some point you have to look at a less than perfect design and answer the question of whether it's good enough for the requirements at hand.
You can strive for perfection and still have a grounded outlook on tradeoffs. Those people would in fact be very good at engineering security-critical aspects IMHO. I don't think "striving for perfection" in itself implies an inability to accept a calculated risk, or some kind of paralysis.
I'd look at this the other way around: there are people who don't strive for perfection, simply delivering something once it meets the expected bar and giving no thought beyond that. I wouldn't want those people designing my safety systems; they'd leave possible improvements lying on the ground by never caring to think a little beyond the boundary of their "box".
Are there any layperson accessible books that explain these systems concepts? I’m curious if there’s a crossover application to organizational systems.
Not that I'm aware of. This was taught as an introductory course in mechanical systems design or something like that (it was many years ago).
But yeah, I frequently apply this concept well beyond its original scope, especially in managing human relations, both professional and personal, which sometimes closely resembles applying the illusion of choice.
I'll do some research to see if there are any books that I can recommend.
So why do you assume that they'll be sloppy and kill themselves? Why is that the only option? Couldn't someone simply make a mistake? Couldn't the person just be riddled with guilt and abandon their career?