I want to agree with this post, but I don't. This person is in a position of power over their product.
Instead, I have found that decisions work by "path dependency". If something works, even if it is a poor implementation, the path to get there is holy ground, and the amount of effort to change a 10-minute decision made flippantly by two business people chatting at the urinals on a Wednesday is like trying to lay siege to the walls of Jericho.
There's no solving this problem because it is fundamentally human in nature. The egos are invested in the existing thing -- so improving the proof-of-concept-that-is-now-hastily-shoved-into-production problem requires that you unravel every person's thoughts and pride on the current thing, regardless of the seriousness of the problem. And so the problem cannot get solved and change cannot happen unless something catastrophic occurs.
Thus, the line-worker software engineer, who has no real organizational power, has to hope that the proverbial printer falls down a flight of stairs "accidentally" so that they can swap out the black ink cartridge for a new one that doesn't make weird bleed lines on the edge of every page. That way, when the new printer comes in and someone complains that the ink bleed is gone and how much they liked it, you can feign ignorance.
> it is fundamentally human in nature. The egos are invested in the existing thing
That is one thing that can happen, because it is part of human nature. But it is not the whole of human nature. If you have a culture of experimentation and two-way doors, people learn to invest not in one particular experiment, but in the process of experimentation. They invest their egos not in one particular outcome, but in the team and its long-term success.
I get that the dominant culture in American business is managerialism [1], where we all have to pretend that people currently holding power are brilliant, while they have pissing contests over their place in the corporate dominance hierarchy. But that's not the only way things can possibly work. I've lived different approaches, as has the author of the piece.
Sociocracy (from the Quakers) provides an interesting framework: consent-based decision-making instead of consensus or authority-based hierarchies.
To call hierarchical authority "human nature" is to ignore not only the theoretical viability of alternatives, but also their demonstrated present-day viability, as well as modern anthropological and archaeological understandings of the social arrangements actually explored in our past.
Sadly, most of us have bosses or subject others to being bossed, and we bind ourselves to these structures by legal contract and risk being labeled insubordinate if we try to do better.
Sociocratisch Centrum co-founder Reijmer has summarized the difference as follows: "By consensus, I must convince you that I'm right; by consent, you ask whether you can live with the decision".
This is so true. It still shocks me and I don’t know why. The most frustrating version for me is when the original author of a solution didn’t care all that much and would admit it was a hack, but they’re no longer around - and their successor thinks that what they’ve inherited is some sort of sacred object built on infinite and unknowable wisdom. So this obvious design flaw must actually be a carefully designed feature that we’re simply not capable of understanding. Maddening.
It's especially difficult, too, because the stagnancy is often (hopefully) rooted, for better or worse, in humility, which is broadly a good trait to have. Especially around Chesterton's Fence and product decisions, moving carefully and with respect for prior art is a generally safe and mature way to work.
But dangit, sometimes you need to trust that you have a better solution and just execute! But it's not that easy.
The first step to solving that problem is letting people dive deep into codebases so that they can understand the situation. That requires making the time available to them, rather than keeping them always on busywork. I think people sometimes lump this in with maintenance, but it is something different and worth putting into your estimates: add time to grok the existing solution first, not just time to build on top of it.
Devil's advocate: if nothing bad happens resulting from the path chosen with that 10-minute-decision solution, is it fair to call it poor from the business perspective? Or even if something "mildly bad" happens, if the flip side was spending 3 weeks doing research until finally arriving at a 24% better approach, which of those costs is higher?
If the team/org is too high-ego to actually treat 2-way decisions as 2-way decisions after they get made, I think that's a bit of a different problem. The org has to invest in the "so we can change it" part of the plan.
I mean, if it was a good decision, it was a good decision. But there are a lot of ways for a decision to be bad without legibly causing a legible problem—and the "business" might be happy with that—but that doesn't make it a good idea!
A common pattern I've seen is a team or organization getting in the habit of collecting little papercuts where no individual problem is bad enough to register but, over time, the codebase becomes slow and painful to work on. What's especially dangerous is that the state of the system informs people's expectations: what's easy, what's hard, what's possible at all. You get to the point where changes that should be easy are seen as inherently difficult and people start making strategic decisions that implicitly compensate for this, masking the accumulated problems even further.
It's a 20 year old codebase, so it's gotten so bad that we spend 2 weeks planning something that could take me 1 week to write in the current codebase or 1 day to write in a sane codebase, on a product that I could probably re-write from scratch in a month.
In my experience, a quick decision is often a poor decision. Most of the time it’s suboptimal, but occasionally it’s outright damaging. Most places I’ve worked at claim that decisions are reversible, but they aren’t. Unless a process or a piece of software is actively and visibly misbehaving, nobody refactors it.
There’s gotta be a balance between waterfall and fake agile, but I haven’t found it yet.
I am a big advocate of refactoring bad code, but what you say is SO important. Badly written code that is non-critical, rarely used, or where all the issues are known and easily worked around is not a good candidate for refactoring. What a lot of refactoring advocates fail to mention (or sometimes realize) is that the act of refactoring has hidden costs. There is an inherent risk that in refactoring something you are going to break something. If you are lucky, it breaks “hard” and the breakage is obvious. If you are unlucky, it soft-breaks and only shows up occasionally, or years later on edge cases. All of this must be taken into consideration when deciding whether to refactor a bit of code. Deciding to refactor should not be a casual decision.
I’m working with a distributed monolith right now. The system is misbehaving as a whole, but it’s difficult to point to one individual piece and say — this is the culprit.
A number of reversible decisions over the course of the years resulted in a terrible architecture.
Something is broken but nobody knows exactly what it is and there’s no organizational will to get it fixed.
While I agree with the sentiment, I can see 2 counter-points:
- "Nothing bad happens" might be the accumulation of risk until something _very_ bad happens.
- Something bad might actually be already happening, only silently. For example, teams suffering unnecessary difficulty whenever they need to work on some part of the codebase, but it's considered "normal" because that's life now.
The version that I’ve always experienced in practice is “we spent six months fixing bugs and it’s still broken, because we did not read the first page of the docs. No we do not want to read the docs now. Look I found a stack overflow post where some unknown, unqualified fool who is also incapable of reading the docs says to do it this way”.
Weeks of bug fixing can save you literally hours of planning.
> if something "mildly bad" happens, if the flip side was spending 3 weeks doing research until finally arriving at a 24% better approach, which of those costs is higher?
The flip side is often just to ask the people doing the work for input; there's a good chance they know what's best. There's often one group that can both make decisions and implement them, and another group that can only make decisions - and so, naturally, we divide the work so that those who cannot implement the decisions make them, and those who can do both never get to decide anything.
I agree completely. Path dependency is such a huge component of how the world is built and many people are oblivious to its effects.
Even the freest "two-way door" decision becomes one-way over time, because every day is a step farther away from that door, and another step you have to backtrack in order to go back through it.
The concept of line-worker software engineers seems like the problem in this parable. People need the ability to have some small amount of override and the ability to make decisions without risking their jobs, even if it's somewhat boxed to their experience level. Leaving everything up to management only is a way to ensure that little gets done and no one's happy doing it.
It took me a while to go from thinking that tech companies rely on their machines and algos, and that the best way to change or better them is to present a good technical solution, to the idea that organizations are made of people, and tech can only be a good argument after you establish that your advice is welcome on an emotional level.
None of us like to admit we were wrong, and the more power / responsibility we hold, the harder it gets. A lot of people feel insecure deep down because their position in the org was gained roughly by accident and they can’t reliably reproduce their success in another place, so their ego / reputation becomes an incredibly touchy subject.
We as devs have an incredible luxury of being able to pack up our stuff and deliver a working solution in another team / company / country / continent reliably. A lot of business people either don’t have that, or believe they don’t, same thing really.
So the way to approach it becomes understanding how vulnerable the people in power feel, helping them out and building alliances that way, and when they start trusting you, then you can bring to bear even outlandish proposals and be supported.
Sadly we are all still primates, building the most elaborate tree ever, and one has to take that into account.
The espoused Amazon groupthink seems to be from "earlier" Amazon which, to your point, was practically greenfield for everything.
IMO only now is Amazon starting to run into the long-term consequences of massive ad hoc in-house tech tools, probably only worsened by defensive coding techniques adopted to survive stack-rank evals.
I'd like to say that the most modern example of two way door decisions is hiring someone, because you can always just fire them immediately as the stack rank sacrificial lamb. See? Easily undone.
> There's no solving this problem because it is fundamentally human in nature. The egos are invested in the existing thing -- so improving the proof-of-concept-that-is-now-hastily-shoved-into-production problem requires that you unravel every person's thoughts and pride on the current thing, regardless of the seriousness of the problem. And so the problem cannot get solved and change cannot happen unless something catastrophic occurs.
Yes. I see this with internal home-grown tools. They were home-grown by early and influential employees and you ain’t getting rid of them. It is worse when they kind of work and capture a lot of organisational practice, such that a rewrite or off-the-shelf solution now runs into a “good enough is the enemy of perfection” type thing. But the problem with home-grown tools is that someone has to support them. They suck up a lot of time.
> There's no solving this problem because it is fundamentally human in nature.
The solution is unity. You are right that with someone in a position of power making decisions, it can be very difficult to change them, but when the whole team makes a decision, then the whole team can also change that decision.
What is important here is to develop the mindset that everyone's input to a decision is valuable. The challenge is that this requires trust, which needs to be developed.