So, this happened somewhere I worked, and I disagreed with it too. It happened because the time-consuming processes were taking months for basic things, thanks to a wildly unreasonable and unqualified 'devops' guy who had a lock on the whole system.
What made a difference was getting the people on my team to stop doing it, and making it clear what had been requested, on what timescales, and why we were not able to deliver. When deadlines started getting missed, the guy came under a lot of pressure to change the processes, and eventually the business hired someone whose arrival ultimately diminished the original guy's role.
> What made a difference was getting the people on my team to stop doing it, and making it clear what had been requested, on what timescales, and why we were not able to deliver. When deadlines started getting missed, the guy came under a lot of pressure to change the processes, and eventually the business hired someone whose arrival ultimately diminished the original guy's role.
This is the way. If a process is slowing you down, let it slow you down to a grinding halt. BUT document and communicate the reason clearly, frequently and to the correct people.
If you can attach a clear cost to the process being slow, even better.
Coming from the ops side of the fence, the side that should be opposed to this, I can say it's also often the only way things get done.
I've worked at a couple of places that had an unspoken Catch-22 mantra: "ye shall not invest resources in apps that are not business critical".
So the only way to build something new was to skunkworks it until something major depended on it and you could get resources to actually productionize it.
A smart ops team should have a semi-standard "skunkworks to production" pipeline.
I had a production data sanity check that worked better than the one in production, but the architects were making it difficult to deploy, even temporarily. It was just a script.
My manager offered to give me a 2nd laptop so others could run the system as needed.
(and yes, we had kubernetes, all the fancy cloud stuff, etc)
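To illustrate how small the thing being blocked typically is, here's a minimal sketch of that kind of "just a script" sanity check. Everything in it is hypothetical: the table and column names are made up, and sqlite3 stands in for whatever datastore was actually involved.

    # Hypothetical data sanity check; table/column names are illustrative only.
    import sys
    import sqlite3  # stand-in for the real production datastore


    def check_orders(conn):
        """Return a list of human-readable problems found in the data."""
        problems = []
        cur = conn.cursor()

        # Example check: no order should have a negative total.
        cur.execute("SELECT COUNT(*) FROM orders WHERE total < 0")
        negative = cur.fetchone()[0]
        if negative:
            problems.append(f"{negative} orders with a negative total")

        # Example check: every order should reference an existing customer.
        cur.execute(
            "SELECT COUNT(*) FROM orders o "
            "LEFT JOIN customers c ON o.customer_id = c.id "
            "WHERE c.id IS NULL"
        )
        orphaned = cur.fetchone()[0]
        if orphaned:
            problems.append(f"{orphaned} orders with no matching customer")

        return problems


    if __name__ == "__main__":
        conn = sqlite3.connect(sys.argv[1] if len(sys.argv) > 1 else "production.db")
        problems = check_orders(conn)
        for p in problems:
            print("SANITY CHECK FAILED:", p)
        # A non-zero exit code makes this trivially usable from cron, CI,
        # or a Kubernetes CronJob (or, apparently, a second laptop).
        sys.exit(1 if problems else 0)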
Hard disagree. These are the people using their time wisely, and it's hilarious in some sense that the company shot itself in the foot by laying them off, ironically by using a tool to automate it. Is it suddenly a bad thing to help increase productivity? It was a poor decision by management that reflects a culture of treating workers like arbitrary units that can be created or destroyed with no consequence. The downside is that the rest of the team caught some of the fallout, but it's not like they aren't paid for the time anyway.
The failure here is on the manager/ops team for failing to notice the critical project running on the hack instance and to provide resources to stabilize it.
Hack instances are amazing for getting stuff off the ground or validating a use case when the proper channels are too burdensome to use, but they need to be migrated away from once something is critical.
> It's also possible the company has processes that make doing the right thing take so long it's infeasible.
If it caused outages and disruption, it wasn't the right thing.
I understand the temptation to just do my own thing, bypass everything to get a deliverable done, and be the hero. But in the long term that's never the responsible thing to do. Follow the process to avoid disruptions like this one. If the process seems inefficient, work to improve it so the benefits are consistent for everyone.
Those are by far the worst software devs, not understanding the implications of their actions. But it's also on his manager for not catching this mishap.