Because if there is even one mistake, the profit losses from that mistake could easily eat up the savings.
Imagine having to tell management your optimization to save $400K cost $100K in engineer time and caused a $700K outage where the “Add to cart” button sometimes didn’t work. Great job. You’re possibly fired.
(Edit: Due to my misreading, add a few digits to the outage cost.)
That's not how you do risk analysis. You're proving too much [1]; you're proving that no company should ever change any working system because something could go wrong.
I don’t care if you can justify it in an academic sense; your company’s boardroom is going to say:
- Savings $400K
- Direct expenditures $100K
- Mistake expenditures $700K
- Net loss $400K; immediate loss $800K
And that’s it. You’re fired, replaced with someone who is better at not fixing things that ain’t broken, and who wouldn’t have made this mistake in the first place. And heaven help you if your code is deployed right before the Nintendo Switch 2 launch (or another major launch) when you make this mistake, or if you just ruined another company’s launch and your company’s contract to support it. Pointing to a Wikipedia article and musing about how risk analysis should be done isn’t going to save your skin.
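The boardroom ledger above is simple enough to sanity-check in a few lines. A sketch using the hypothetical figures from the comment (all numbers and names are illustrative, not real accounting):

```python
# Hypothetical figures from the ledger above.
savings = 400_000        # projected savings from the optimization
direct_cost = 100_000    # engineer time spent making the change
mistake_cost = 700_000   # outage caused by the change

# Net result once the mistake is counted against the savings.
net = savings - (direct_cost + mistake_cost)

# Cash spent before any savings are realized.
immediate_outlay = direct_cost + mistake_cost

print(f"Net: {net:+,}")                      # a $400K net loss
print(f"Immediate outlay: {immediate_outlay:,}")
```

The point of the arithmetic is that one sufficiently bad mistake flips the sign of the whole project: the savings line stays fixed while the mistake line has no upper bound.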
If mistakes cost that much, you need something in place to prevent them anyway, because I guarantee that eventually you will need some change made (tax laws change?) and you will face that same risk then.
And "zero" mistakes is not part of the goal. It clearly wasn't in the requirements during initial development, why would they add it afterwards?