I've heard (and sometimes pushed) this rhetoric before, but something should be well understood before it's automated. Things that happen very rarely should be backed by a playbook plus well-exercised general monitoring and tools. This puts human discretion in front of the tools' use and makes sure ops is watching for any secondary effects. Ops grimoires can gather disparate one-offs into common, tested tools, but they don't do anything to consolidate the reasons the tools might be needed.
To me that sounds like development and testing (i.e. figuring out what the steps are). Once you have that, it should be fully automated.
Too often people fall back on "well, we only do this once a month, so it's not worth automating". Literally, I script everything now, just in simple bash: if I type a command, I stick it into a script and then run the script. Over time you go back and improve said script, and eventually it turns into a more substantive application. At a certain point, around the time you have more than one loop or are branching on different error scenarios, it's probably time to rewrite it in another language.
The simplest thing this does for me is guarantee that all the required parameters are valid and present before continuing.
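A minimal sketch of what that up-front validation might look like; the `require_args` and `backup` names are hypothetical, just for illustration:

```shell
set -u  # treat unset variables as errors

require_args() {
  # Fail unless exactly the expected number of arguments is present.
  local expected="$1"; shift
  if [ "$#" -ne "$expected" ]; then
    echo "error: expected $expected args, got $#" >&2
    return 1
  fi
}

backup() {
  # Validate everything before doing any work.
  require_args 2 "$@" || return 1
  local src="$1" dest="$2"
  echo "would copy $src to $dest"
}
```

The point is that nothing destructive runs until every parameter has been checked; a missing argument fails loudly instead of silently doing the wrong thing.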
I've been doing it this way for years and it really, really works. Some shops have reservations about it, since its lack of formality is seen as "risky".
Though, an alternative to switching to another language is using xargs well. Writing bash with some immutability has been pretty invaluable for my workflows lately. For example