Developers can focus on the business logic when they're close to the business. Look for places where you sit close to the business people, talk to them a lot, maybe even talk to customers. Where you're part of a team that has a responsibility of delivering product X to customers rather than a team that has a responsibility of doing things that use technology Y.
The more organisational layers you have between you and the customers - architects, business analysts, and the like - the more disconnected your work will be from the business value.
> The more organisational layers you have between you and the customers - architects, business analysts, and the like - the more disconnected your work will be from the business value.
In every company I have worked for in the last 10 years, the layer between business experts and software engineers has always been the manager (either a product manager or an eng. manager). It's a bottleneck. The flow usually goes like this:
business expert -> manager -> engineers -> manager -> business expert -> manager -> engineers -> ...
Whenever the manager comes with "requests" from the business side, we end up with tons of questions because everything is half-baked. The manager goes back to the business with our questions and comes back with some answers, but usually those just generate even more questions. In this way managers feel empowered. There's no way they can let the engineers just talk with the business (otherwise the managers would feel like they are not contributing anything... which is kinda right if the engineers are more or less professionals).
When I worked in disaster recovery and continuity, we had to map every business process.
We would make one map from the info the managers told us and one map of the actual processes by talking to every single employee involved.
We frequently found critical processes no manager knew about, and also that some random secretary was the focal point for entire departments (mundane processes no one else wanted).
You start with any documentation they already have (usually done for insurance or legal reasons) and start mapping much in the same way you would map data flow in a program.
Other sources are any existing ERP or other business process system.
Paperwork of any kind (invoices, quotes, POs, accounting) can also be used.
Even if a business thinks they are a flat org and have few processes, I guarantee there are ad hoc managers/leaders along with shadow processes.
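To make the data-flow analogy concrete, here's a toy sketch of the technique with entirely made-up process names: represent each hand-off as an edge in a directed graph, build one graph from what the managers tell you and one from the interviews, and diff them.

```python
# Toy sketch of process mapping as data-flow mapping (all names made up).
# Each edge: (who, what they hand off) -> who actually receives it.
official = {
    ("sales", "quote"): "accounting",
    ("accounting", "invoice"): "customer",
}

# The map you get by talking to every employee involved:
actual = {
    ("sales", "quote"): "secretary",              # undocumented focal point
    ("secretary", "vetted quote"): "accounting",
    ("accounting", "invoice"): "customer",
}

# Shadow processes: hand-offs that differ from, or are missing in, the official map.
shadow = {edge: dest for edge, dest in actual.items() if official.get(edge) != dest}
print(shadow)
```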
Well, being the middle man is kinda the manager's job. The situation you describe happens when the manager does a poor job and is both submissive to the business requests and unable to comprehend technical issues.
Having a competent person act as the middle man for most daily operations is still desirable, though.
This is it. I have had a few really good managers in my career. They were game changers. When they came to me, they’d really done their homework and thought about the problem, or they hadn’t done it at all and were handing it off to me and giving me the contacts and support I needed to do my own research and own a thing entirely myself.
Outside of those managers, managers have been worse than useless. They’ve been an impediment to both me and the business (from what I can see).
So, in my experience, a good manager is worth every penny. All other managers are a net negative.
In a lot of organisations the business side isn't necessarily incentivised to talk to tech, which causes a lot of issues. In good places, it's different and the development cycles work really well with an engaged business side. You still might want some manager in there to manage demands, conflicts, resources, etc.
Agreed. There is a role for a good manager to filter ideas from the business side and make sure all asks are building towards the product's future. If the funnel is "sales people => engineers", you're likely to get a lot of spurious development that doesn't make the product better.
Reminds me of the "if apple built every feature requested" meme.
You'd think this is non-controversial, but have worked at places where product and UX act as gatekeepers and don't want engineers involved with the problem solving. They'll "figure it out for you" and then spoon feed Jira tickets and Figma mockups to devs.
I've experienced this too at a bigco. A well-meaning group of UX people took on the role of conducting all user observation studies. And the studies were high-quality and statistically valid, but it took months to get observations I could have seen directly in minutes. So the recommendations were mostly about things I'd already fixed or changed.
It's much better to just talk to users and watch them work. A same-day feedback loop beats any improved study quality.
I also agree that I prefer to work this way, but it's crazy to see how many developers have an "I just want to code, I don't want to think about business problems" mentality.
In some ways I respect their devotion to pure technical craft. Personally, I'm not talented enough at the technical aspect to make up for ignoring business needs.
Often it's because getting involved on the business side means doing a lot of politics. Is this problem more urgent than that problem? Who will take credit for solving this problem? If I solve this problem, will they fire half the people who currently solve it manually? If I solve that problem, will it empower a certain scumbag VP who's been clamouring for it? Etc etc.
Codemonkey doesn't want to be involved with all that shit.
I don't think GP means getting involved with business side decisions, just understanding the "business logic". Where "business logic" is a generic term for anything, it could also be the rules of a sport, how the competition(s) are structured etc. if you are building a sports website...
Requirements engineering can be extremely draining regardless.
What's that? We're three years into this six months project and you just realized again that you don't understand your own job, don't understand the legal requirements placed on you, nor your position in the overall organizational architecture? Of course we can rewrite everything from scratch for you, again. What do you mean there's no budget and it has to just work?
My biggest frustration is the people who refuse to get involved with any of the process of defining requirements, but then when the requirements arrive moan about how they're stupid requirements and won't get the job done. You can't have it both ways.
It's because defining requirements is someone else's job, so if you get involved, you will be doing part of their job while you might already be working at your capacity.
Agile-style user stories often lack technical details because the author doesn't know enough about them, which means the developer is the one actually writing the requirements (usually as they are doing the work).
It boggles my mind how someone can think they can build anything useful without having a basic understanding of what the thing they are building will be used for... that's a sure-fire way to end up with software that will be a nightmare to use (and support).
I really like that “X vs Y” example. I was just on a team where we were estranged from the business folk who would supposedly be using our products, and it very much felt like a “Y”, where my boss cared much more about the tech stack than the value (if any) we could deliver to the users. Suffice it to say the team was cut in a round of layoffs. I had fun working with things that I normally wouldn’t work with (e.g. Neo4j) but to my surprise it was very demotivating not feeling any purpose behind what we were building, tech stack be damned.
I have come to not care much about the tech stack, I work with all sorts. But by extension I grow weary of dealing with technical problems that are only tangential to the business problems.
Incidental complexity, it can be called: problems outside the core business domain or problem you are solving. Like when you can't get the data in the right format because your version of a DB library doesn't support it, something like that. I want the tools to get out of the way so I can focus on the problems.
I bring that up because I find certain ecosystems respect that a lot more than others. Some people are busy building Jenga towers of abstractions because coding is fun, but many of us are knee deep in the business domain and just want the tools be clear and easy.
Well said. Every opportunity I’ve had in my career, as an engineer, where I get facetime with customers has been a good use of time. I usually come away with a better understanding of their needs, new ideas, or take back issues to my team to investigate.
> The more organisational layers you have between you and the customers - architects, business analysts, and the like - the more disconnected your work will be from the business value.
Totally agree. However, be aware that there is a tradeoff. I have seen too many projects where business logic was informally embedded throughout the codebase, such that only the original developer could make sense of it.
Where I've seen this problem to be the most acute, it's not because of organizational disconnect -- it's the lack of domain knowledge on the part of the programmers.
Medical software, for example, is rife with examples of how programmers are completely clueless about the clinical applications and implications of whatever they are doing. There's probably no other field where programming was misapplied as often and failed in such hilarious (and devastating) ways. For example, there's a lot of programming happening around various imaging modalities. Often you'd hear about programs that do X better than a radiologist, only to discover that the programmers who wrote the program had no idea what the radiologist was even doing.
Just at the start of the COVID pandemic there were countless programs written to identify the virus in chest X-rays. A lot claimed success... while the clinical truth is that X-rays on their own cannot tell you if a patient has COVID; it's indistinguishable from pneumonia or a bunch of other things. The diagnosis must be made using labwork. Sometimes it's possible to be moderately confident relying on both the X-ray and the presentation, but not judging by the X-ray alone.
Similarly, I had a chance to work on complicated budgeting and banking software. It was a joke how nobody knew what the hell the program was supposed to do and how the QA team was driven up the wall by not being able to get reliable information from the partners about how certain aspects of the program should perform. Forget the developers, who at best got a written spec for their tiny fraction of the product and had no idea how it fit into the larger picture or what exactly it was supposed to do.
Sadly, a lot of programmers have this unwarranted confidence that they just need to read the problem description and they'll be able to program a solution for it. For complex problems that require a lot of expertise this, at best, results in solutions that are extremely convoluted and painful to use for the end user.
Until very recently, I used to work in the healthcare industry before switching to a different one. I only got into it because it was hip during covid, until it wasn’t.
The pay is bad and developers are treated like commodities by the upper management. You are expected to understand all the context around building products and at the same time, chase deadlines in every sprint. This usually happens because the business people have no idea how to run a tech company.
Also, the products are usually fraught with technical debt and no one wants to work with the codebase because working on that crap for a longer period of time will just make them unemployable. Healthcare tech is garbage for a reason.
There’s a disconnect between product owners and the engineers working on the product. Smaller teams can usually avoid it if less cross team communication is required to build and maintain the product.
I work in consulting and corporate dysfunction is how we make money. The disconnect between "business" and "IT" is at the core of most of these engagements. Because of the relationships between funding and ownership, the relationships often become downright antagonistic. In the latest example, we were working with the IT side of a company trying to get connections to the business side because we know that solving those sorts of issues is the best path for real improvement. During this process the business went out and hired their own consulting company to work on things instead. It's absolutely insane what happens inside of multi-billion dollar companies.
>The more organisational layers you have between you and the customers - architects, business analysts, and the like - the more disconnected your work will be from the business value.
But what about software middle managers? What's going to happen to them??? Honestly, for over a decade that is how I have worked on professional software projects. I never saw the need for a manager.
And software development works better when there are NO middle managers, just customers and developers. Of course, I'm aware most developers don't care about the business logic.
Seems like management stuff in software is just for show. Corps are just trying to impose their process on software developers, and it doesn't really work.
>> There must be a middle ground where developers can focus on the core business logic that yields the most value without incurring technical debt and making the development process a nightmare. I don’t have an answer for that, nor have I worked at a company that found the perfect balance. Plus, I’m not a technical lead, manager, or business owner. So if you are one of them, I’d love to hear how you or your organization plans to tackle this
I'm a technical lead, business owner, and ex-manager. So I guess I qualify to answer :)
The short answer is that we have only ever had a very (very) small development team. Typically around 3 or 4 people.
Secondly, we're using "old" tech, but with up-to-date tooling. Our product contains code written in 1996 and iterated on since.
I'm lazy, and I'm not interested in writing it all again in some new language. So we haven't done that. I'm lazy, so I'm not interested in re-architecting it every 5 minutes. I'm lazy, so I prefer simple, maintainable, easy-to-read code over cleverness.
Most of all, as a startup (bootstrapped) I didn't get paid if we didn't make sales. (That happened a lot in the early years.) So shipping is a priority. Business is a priority. But since I'm lazy I avoid solutions that'll break things, that'll cause unnecessary work in the future.
I have enough autonomy to dictate the pace of delivery. I have enough incentive to move the business forward (in the short and long term). I don't need to resume-pad (this is the first, and last, job I'll ever have).
Alas none of this advice is transferable. What works for me likely won't work for you. Our context is likely too different.
So yeah, there are businesses in between the extremes. Ones with competent developers. Ones with competent managers. Ones where all the business interests are kept in balance. They are not easy to find. Good luck.
As a dev turned entrepreneur, one thing I will do when selling projects is talk to clients about software as a depreciating asset. While there are some limitations to the comparison, this framing makes sense to business people in general, works especially well with finance or operations departments, and can work with internal stakeholders too.
The sources of the depreciation are tech debt and general turnover/churn in the method and process within an industry/vertical (like if you built a web app in 2002 based on ASP.NET, it's pretty tough for that to be a lively project today).
We know there are systems out there that have been running well and doing their job for 30 years, and there are systems that need to be replaced/rewritten from scratch every couple of years. Which one do you want to buy/fund/budget for?
If the customer or stakeholder only cares about a two-year horizon, you approach the project one way. If they say they want this thing to be firing on all cylinders 10 years from now, then we approach it and price or budget for it differently. When you talk like this, it doesn't sound too weird to introduce the idea that we should do an annual tune-up on a system that they don't want to have to replace until 2035. The tune-up is how you get 15 years of life out of your intellectual property instead of 10; this is how you sell time to pay down tech debt. The eventual rebuild is also part of the discussion (not if it will happen, because it will, but when it will happen is something we can influence, so the framing can be: would you rather do an expensive rebuild every 10 years, or every 15-20 because you paid for regular maintenance along the way?).
Boom now in their head this software you're writing is like a car. Everyone understands cars. A car has a lifespan. If you didn't go to the mechanic and do your annual scheduled maintenance and the car breaks down, that's what you get for being cheap. Like I said big ops and finance departments actually really like the idea of a tuneup that prevents the car from breaking down. They're used to thinking about that with lots of physical assets anyway. A marketing department at a startup maybe not so much but their time horizon is usually short anyway.
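The napkin math behind that pitch is simple, too. With entirely made-up numbers:

```python
# Illustrative numbers only: annualized cost with and without tune-ups.
rebuild = 500_000   # cost of a from-scratch rebuild
tuneup = 10_000     # annual maintenance budget

no_maintenance = rebuild / 10                 # rebuild every 10 years: 50,000/yr
with_tuneups = (rebuild + 15 * tuneup) / 15   # rebuild every 15 years: ~43,333/yr
```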
This is very valid and well written. One point I like to stress is the external context I often observed for why it depreciates. As a company grows, initial architectural decisions become more of a burden. So even though some software is running just fine for the specific business case, your new stack can do 80% of it but also a lot more. This is my way of selling modular solutions, where you can easily take things away and put new things in place.
Yes, that's a great point. Once we think of a software system as a depreciating asset we start to see a number of factors which are beyond our control. Another example when we're building web apps is that the competitive landscape has changed -
* 15 years ago no one expected their website to work well on a phone, or serve up much in the way of streaming video.
* 10 years ago no one was expecting that a server should respond in under a second, aka Lighthouse's TTFB, and the concept of say a "Largest Contentful Paint" did not exist.
We can go on forever, so what has happened is that all the practices and expectations within the industry have been redefined (occasionally even for good reason!). To stick with the car analogy, you don't expect much in the way of self-driving features from an older car... hell, power windows didn't even become ubiquitous until the 1990s.
So maybe we just can't get a great TTFB and LCP out of our old dependency stack which existed before those concepts were really a thing, and there you have an example of why a rebuild will probably continue to be a "when" rather than an "if" for years to come.
Now we can frame the discussion as "let's partner up to ensure we make the best use out of this system and extend its life as much as is reasonable," which doesn't have to be an adversarial discussion.
I remember reading an article (linked from here, I believe), written by a former PornHub developer.
He talked about how the whole thing relies on "boring" tech, like PHP.
Probably second only to Google, for hits per second. Uptime, robustness, and ease of maintenance were a really big deal. Very mercenary, and very practical.
>>> I'm lazy, and I'm not interested in writing it all again in some new language. So we haven't done that. I'm lazy, so I'm not interested in re-architecting it every 5 minutes. I'm lazy, so I prefer simple, maintainable, easy-to-read code over cleverness.
> There must be a middle ground where developers can focus on the core business logic that yields the most value without incurring technical debt and making the development process a nightmare. I don’t have an answer for that, nor have I worked at a company that found the perfect balance.
Personally speaking, the best place that I've worked at that mostly solved that balance was Pivotal Labs. Technically, it was Cloud Foundry, but same people. The way they used Pivotal Tracker, and the process around it, was mind-blowing. Unfortunately, it is really one of those things that you can't just read about and understand. You really had to be part of their culture. It was a cross between cultish and military precision. You can google around and read up on many blog posts. The PT documentation [0] spells a bit of it out too. It was intense, but I really enjoyed my time there.
I worked at a health insurance company 6 years ago where they brought in a group from Pivotal for a week and we worked with them to see how they ran a Scrum team.
I was amazed. The Scrum master wasn't just some busybody manager who only ran stand-ups; he was constantly floating around throughout the day helping people out. We had 4 developers and 4 consultants, so we mostly paired up and that was super productive. Pivotal Tracker was better than Jira. They had a strong focus on getting something working and collaborating. It was eye-opening.
Unfortunately, the company was picking 4 random employee developers from across the organization each week. Most of the organization was contractors, including everyone on my team aside from me. My team used Jira and had no option to use something else. Back on my team, the contractor Scrum master and contractor product owner continued to run the team incredibly ineffectively.
I can't tell if those few consultants I worked with were representative of Pivotal as a whole, but I will say that that week was a bright spot in an otherwise dismal part of my career.
The way Pivotal Labs works is that people who don't buy into the whole culture tend to migrate out pretty quickly. The ones that do buy in, stay around for a long time, learn things inside and out, and end up getting farmed out to places like yours. So you generally get the best of the best on consulting agreements like that.
I've heard some horror stories about Pivotal as well. Not everyone or every project was perfect. Their pricing is also insane. I'm sure your company paid through the roof for what you got.
But overall, the general education that I got there for how to run projects and build products is hands down the best thing I've ever learned in software engineering. It is funny, they don't teach that sort of stuff in schools as much as they focus on just teaching you how to code.
Oh and PT is hands down better than Jira, but that is a low bar since Jira is really shit. That said, it isn't necessarily about the tool, but how you use it. PT can be used for good as well as evil and knowing how to write stories, point them properly and manage them, is a learned skill.
> Unfortunately, the company was picking 4 random employee developers from across the organization each week.
Oh god. I worked at Pivotal, and I saw something like this happen on one project. Four Pivotal Labs devs, four client devs; each client dev was with us for two weeks, so each week we started with two new people, and two people with one week's experience on the project. Absolute madness. There was some weird politics, and I think the people running the project on the client side wanted it to fail, so this may have been an attempt at sabotage. The client devs were very sharp, and fully cooperative, so we did get some work and some training done despite all that. Common consulting L.
> constantly floating around throughout the day helping people out
I unironically love being distracted by other people's problems. Usually the person giving me the work I'm really supposed to be doing gets annoyed at me for it; I'd love it to be my actual job.
The people at Pivotal Labs I got to know were amazing and I learned so much from them. However, there was also a ton of cargo culting going on among them, and the code was far from brilliant.
In the end they couldn't change much of my company's approach to software development. Managers mostly want to be fast like a Silicon Valley startup, but only if it does not mean they need to change anything in the organisation. Anyway, years of semi-successful software projects are more likely to bring a change...
Definitely cultish and yes, I agree... a lot of the engineers were brilliant, but their code was sometimes a bit crazy. I worked with one guy who meta programmed everything in Ruby to the point where even the tests were meta programmed and none of the stack traces made any sense when things failed. The complexity level was just too far over the top for the benefits of the meta programming.
I've also tried to 'change' companies towards the Pivotal process. It never really works. People are more than willing to listen, but when it comes down to the practice of things, it just doesn't happen for one reason or another. I think that is why Pivotal Labs works so well... everyone working there is already on the same page and sticks around for many years... and anyone who isn't on the same page migrates out relatively quickly.
I think this is also a bit why 'agile' tends to get a lot of hate/fear on HN. You really have to be military and cultish about it from day one, like PL is. Few people are willing to buy into that. Even I was skeptical of it when I first started there and it took me a while to relax into it.
It’s humorous to see your company’s seemingly-wise desire to do Agile properly shadowed by the classic attitude of “Contractors? The people we pay to do a job same as FTE? Fuck ‘em!”
Can I just take a minute to thank you for CloudFoundry? It seems like it was what PaaS really should be, and when one of the organizations I was contracted with moved to AWS, one of the things we realized was that there really was nothing like CF anywhere else. Thanks for your work; if you worked on CloudFoundry then you probably know some of the people on my team.
That's really nice to hear. When I was there, it was super early into the acquisition and CF was a shit show. For context, this was even before Docker was announced. The people who stayed on eventually rewrote most of it and turned it into a much better product. This was a great example of using the Pivotal process to do good.
In principle, at least for business applications, the middle ground is something like low code: data access, UI, user management, authentication, logging, auditing... all taken care of already; you make the data model and business logic. In practice this works up to a point: when you have requirements that do not fit the model or capabilities that the creators of the platform had in mind, and you have to start fighting the platform, you are right back in WTF-code land. When you decided which platform to use, you of course evaluated this and made a good choice, and building the first version worked nicely. But now you have a running system and want to extend it; now you are getting the more unusual requirements that were not important for version one and therefore not taken into account when making the platform choice; now your nightmare starts.
That is why talking is important. Get all (or a good sample of) the people who are going to use your software and discuss a draft/UI mockup. Get them in the mood where they go wild and creative and spit out all kinds of ideas. Rinse and repeat till you get a coherent picture and a list of likely additional requirements. Add potential future requirements that you can think of and keep them in the back of your mind. When everything is done you can offer those.
This might not catch all the surprise requirements, but it might catch the worst ones.
A sure way to get surprised is to do the opposite:
- don't talk to the people who use your software, but some middle-men
- just take the customer's first idea as gospel and assume they dissected the problem space themselves
- keep everything constrained and never let anybody get wild or crazy with ideas
- don't anticipate potential future requirements and don't let them influence your decisions ever
Of course those things don't come for free, but it is important to realize that the initial phase of a project is crucial also for the impression your shop will leave behind. If people feel they have been heard and had a chance to come to a shared understanding with you, they are less likely to view your software in an unfairly harsh light later on. They will remember this and will hire you for the next thing.
This is in general hopeless, you will not be able to anticipate where the business moves over the coming years. Does not even have to be the business itself, could also be the regulatory environment or whatnot. When you decide to not write your application from scratch, you will always have the risk of running into limitations eventually. If it is low level stuff like your O/R mapper or your logging framework, that is no big deal, but if it is some high-level application or business framework, then things might get tricky.
In one of the projects I worked on, there suddenly came the requirement for metrics on how long people were working on different screens, completely unrelated to the actual business. No big deal in general: just put some code in a base class to collect the current time on enter and on leave. But the application framework we used just did not allow this; you could not modify the base classes used for screens, so you had to add code for the enter and leave events on each screen individually. It just never occurred to the people that made the framework that you might want to do the same thing on all your screens, and nobody expected that we might ever need this either, until we did.
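For illustration, the change we wanted was roughly this; the names (Screen, on_enter/on_leave, record_metric) are hypothetical, not the real framework's API:

```python
# What we wanted: one timing hook in a shared base class, instead of wiring
# enter/leave handlers into every screen by hand. All names are hypothetical.
import time

def record_metric(screen: str, seconds: float) -> None:
    print(f"{screen}: {seconds:.2f}s")  # stand-in for the real metrics sink

class Screen:
    def on_enter(self):
        self._entered_at = time.monotonic()

    def on_leave(self):
        record_metric(type(self).__name__, time.monotonic() - self._entered_at)

class InvoiceScreen(Screen):
    pass  # every screen inherits the metrics for free

s = InvoiceScreen()
s.on_enter()
s.on_leave()
```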
Yeah this sort of hypothetical paper prototyping is really useful because you can map out the whole territory you're operating in, instead of just finding a single point to work towards. I think the approach actually applies beyond just UI mockups - if you can mock out whole workflows in the business with the right stakeholders/personas represented, you can also start to build up a loose dataflow model right there, and also understand some of the likely articulation points of the resulting systems, just by having conversations like "what if we needed this human process to happen first" or "what if we eventually built a model to automate this process" etc.
I specifically did say you won't catch all future requirements this way. But you want a mechanism to catch the reasonable and likely ones.
And for the unreasonable ones you can always say: "The system we originally intended wasn't meant to do this technically. If you want to have this feature we can help you, but it will cost $X and this is an entirely new project."
If a graphic designer designs a logo and the customer in the end has the idea that they want that logo as a stamp or in a black and white version ANY graphic designer worth their salt will have anticipated this. If they want a 3D animated video of the whole thing rotating that is a different thing.
This is no problem for the initial version, but businesses and the environment they operate in change; you are in general not going to figure out what you will need ten years down the road. And then it really depends: maybe everything is fine, but maybe you also made choices early on that allowed you to save time at the price of limited flexibility, and this eventually bites you.
I'm not sure what you're responding to. I'm suggesting you specifically model how things might change to give you a better chance of identifying the right articulation points for your architecture - as in, what if I have to drop one side of this because our assumptions were wrong, but I don't want to lose _everything_? You can't predict the future, but you can build at least _some_ flex into your business and platforms.
What you say is correct. But there is literally no way to figure out what will be needed 10 years down the road using any kind of methodology. If you are clever you try to use stable things, technology that is likely to be an ok choice years down the line and maintain an architecture that is easy to maintain and modify.
But if your software managed to survive a decade of good use without any major issues I'd already count that as a job well done.
I think there is a tradeoff, you can pick some customizable off the shelf software or some low code platform that fits your initial needs, that will give you speed at the risk of not being flexible enough later. Or you fire up your IDE and start from scratch, do everything on your own, that will make you slow, at the very least in the beginning, but you get all the flexibility and also all the opportunities to screw it up.
... and this is when the development company (architect, technical director, ...) decides to extend the "data" models to the point where you've invented a programming language in XML. A programming language too advanced to use even for the programmers that invented it, let alone the "no code" people it was intended for. A programming language with all of the complexities and none of the tooling that you would expect from a programming language, where failures are caught at runtime since you have no compile-time checks of the XML that you throw at it.
Next step is, of course, to extend the XML validation to make sure it conforms to your application's expectations. And suddenly you have invented a compiler as well.
But isn't that something of the past? I am mostly working with systems that handle the bulk of the problem, providing process, data model, etc. Extensions are done via a specific programming language, ABAP for SAP, Apex for Salesforce, but in the end it all ends up being like any other software development project, with compile-time checks, etc.
I most certainly hope so. But I've been involved in extending a system as I described as recently as a few years back. Everything was supposed to be configurable (by no-code customers) and XML was the language of choice, so the XML was... complicated.
Yes, reminds me of about 25/30 years ago when low code was at its height. Back then there were strict guidelines about where to put business logic (client, server, or both) and how to name things. Everything had to originate from the model (model-driven development).
Since then the developers took over and killed waterfall and 4GL in the process.
I've forgotten most of that; I consider this 'enterprise period' of my life pretty boring.
There was one product really popular then, but forgot the name. Sorry.
A product I personally know is Oracle Designer. It could generate fairly complicated applications (like master-detail-detail layouts) based on the ERD.
Another product, whose name I've also forgotten, was considered very good but was used only within the IBM ecosystem. Fun fact: I was the product owner for a project where the IBM team vastly outperformed the web team (think half the size, twice the speed).
There is no "perfect balance", but you need tech leads and PMs to understand that you are aiming for a balance where your tech debt is under control and you are continuously and consistently delivering business value.
So far, I've failed to verbalise exactly how I achieve that in my org, but it's a combination of strategies and tactics like timeboxing refactorings, setting new tech tryouts as experiments that you re-evaluate, iterative development, and recognizing that you are never getting perfect code, nor a perfect balance.
An interesting challenge I've found with the term "technical debt" is that engineers can use it as an excuse to work on all kinds of things that might not genuinely be paying down technical debt, taking advantage of situations where the people making the prioritization decisions don't have the hands-on technical experience of the codebase to evaluate if the proposed "improvement" is a good investment of effort or not.
AKA someone needs to be able to push back against "we'll rewrite it in Rust to reduce our technical debt".
The most effective place I've worked had a fixed allocation to technical debt work (25%) and let engineering agree how to prioritise that. Want to rewrite it in Rust? If you get buy-in from the rest of the technical team to say that that's more important than sorting out the database schema or whatever you can do that, but you'd better be able to do it piece by piece and keep delivering business value the whole time.
Theoretically, someone who understood both the business and the technical side of things ought to be able to weigh up individual tasks from both sides and figure out which was higher priority. But in practice that seems very hard: you'd need intimate knowledge of both domains and immunity to organisational politics. Explicitly adjusting that high-level dial (so maybe the 25% becomes 50% when it's a codebase you want to publish/reuse, or 0% when it's an EOL project that you're just running until it falls apart) might be the best you can do in practice.
Worked in a similar setup as a PM and really liked it. As you said, it can be somewhat apples-to-oranges comparing new functionality with a stream of tech debt opportunities, so it’s helpful to target a ratio, more like stocks/bonds in a portfolio.
We called it “product health” so it could encompass tech debt but also UX debt, performance, cosmetics, etc.
I've been quite frustrated that the term "technical debt" is so misused. It feels like project managers immediately assume technical debt means "the developer wants to play around". I only use it when it means "the features you want can't be implemented correctly under the time constraints".
For what it's worth, I shy away from the term and talk more about approaches and outcomes. It's still baffling when I give a project manager the options:
1. Rewrite and have a working prototype in 2 weeks and solid product in two months. We will be able to use better tech, learn some new things, and the product will match the feature needs.
2. Refactor this month and get the feature you want in a couple months after that. We won't learn as much, but we'll have a cleaner technical product and we'll get some features.
3. Try to implement the feature now. It will be late, broken, and other features will break. We will spend all of next quarter patching it. We won't learn anything new. It will slow us down on the next feature.
Some of this is a product of job-hopping culture and of having boom-time teams (technical and management) with too many early-career members. Some of it is also just the perpetual back and forth of competing interests.
That said, for those who do good work and develop long-term trust with long-term colleagues, addressing technical debt is often “poke at this while I sip coffee” work that get chipped away at gradually and without a lot of paperwork, or that gets quietly slipped in alongside other feature work.
When you’re having to pitch the work to someone, you’ve often already failed the pitch. The debt payment is already due.
But if your peers come to trust you, then they learn not to reject a PR that cleaned up this module a little more than strictly necessary or that included a commit that revisited an adjacent abstraction.
To get there, though, you need to work on healthy, stable teams with experienced colleagues rather than in the mad churn that seems to dominate a lot of work cultures these days.
> I've never had an issue where a coworker rejected a PR, even when doing a massive refactor that isn't strictly necessary.
As an engineer, I have rejected massive refactors in unrelated PRs: heck, I have rejected massive PRs period.
They always:
- are hard to review
- bundle a bunch of unrelated changes together
- are hard to accept piecemeal
- are risky to test and deploy
- are hard to revert
- are hard to pivot according to new learnings along the way
- are hard to improve
- go through lots of review iterations
- slow down all the other work (conflicts, re-merging effort for all the other changes in flight...)
- are usually "one-way doors" (related to being hard to revert)
When these are done as small incremental steps where we improve one thing at a time, none of the above hold, and coupled with a good CI/CD pipeline, take less time.
I know that many engineers believe there are things that can't be done incrementally like that, but I've always been able to give them a plan for any "impossible-to-split" refactor/rewrite.
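For example, one common shape for such a plan is branch-by-abstraction, sketched here with made-up pricing functions: put a seam in front of the old code, land the rewrite in small reviewable pieces behind it, compare the two in production, ramp traffic, then delete the old path.

```python
import random

def log_mismatch(order, old, new):
    print("mismatch:", order, old, new)  # stand-in for real alerting

def price_order_legacy(order):
    return sum(item["qty"] * item["price"] for item in order)  # untouched

def price_order_new(order):
    return sum(item["qty"] * item["price"] for item in order)  # lands piecemeal

def price_order(order, rollout=0.05):
    old, new = price_order_legacy(order), price_order_new(order)
    if old != new:
        log_mismatch(order, old, new)  # verify parity on real traffic
    # Serve the new path to a small fraction of traffic, ramp up over time.
    return new if random.random() < rollout else old

print(price_order([{"qty": 2, "price": 9.99}]))
```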
I think you are offering bad options: refactoring is something that comes with any new work. You don't have to justify it, but include it in your estimates.
Obviously, the risk here is that you include unnecessary refactoring work in it, and then people stop trusting you.
And finally, there is a hack-it-together approach, but I always try to keep that outside the core product to make it clear this is throwaway effort (if you can have another deployment, that's ideal).
But honestly, it is the engineers' job to find that hard-to-reach balance: keep improving the code, and keep delivering value.
That's the hard part of software engineering, and we should all embrace it.
> refactoring is something that comes with any new work. You don't have to justify it, but include it in your estimates.
Sometimes it's too much work to do ad hoc. Oftentimes people won't go out of their way to refactor. Having the fair discussion can make it real and important.
If the team can't talk about refactoring, it's an unhealthy team. Managers who want to act like maintenance of a project isn't something that should ever be their concern don't deserve a paycheck.
> But honestly, it is engineers job to find that hard to reach balance
This attitude is bullshit. It's everyone's job. High-level balance is more of a concern for management. Low-level balance is more of a concern for engineers. High- and low-level balances can work for or against each other. Management that just pushes their responsibilities down the hierarchy aren't pulling their weight.
Sure, it is everyone's job and they should certainly openly talk about it, but no manager can go and do it for an engineer.
A great engineer can find incremental value in any refactoring they do: otherwise, they are extremely likely to refactor for the wrong future. I've seen this play out a number of times.
And the root cause is always exactly the same: engineers can't design code for the future that's not here today or at most, tomorrow. When they think they've done it, a new future comes and that code is even harder to refactor because it prematurely catered to cases that never materialized.
But that's exactly why managers need to understand and accept that refactoring is software engineering, and engineers need to do it continuously and keep delivering value while they do it.
And while CTOs, Eng Directors, architects and technical leaders might be "managers" in a sense, to me they are still all engineers, and they are the ones ensuring technical direction enables a healthy project while satisfying business goals.
Non-technical managers are there to bring clarity to business requirements, but they don't need to know exactly how sustainable technical excellence (or at least health) is achieved, the same way engineers don't need to know how the user research or user testing that proves whether something works is performed.
The issue here is that offering up #3, “It will be late, broken and other features will break”, means you are not delivering anything of value, so it is not an option. Offering #3 just communicates “this can be done at a cost that is not relevant to you” as opposed to “there is no way to achieve the stated outcomes”.
Offering #3 is absolutely not a business or management problem, this is something you are doing wrong.
If engineers do that, then they don't need to use the term "technical debt" anyways. They can just claim a feature takes twice as long and then play around 50% of the time.
So in reality, it's not "technical debt" that's the issue, it's that engineers don't have the right incentives from the beginning.
Well, when developer productivity is measured by PR rate or number of commits, I guess this is what happens. Also, it’s debatable whether someone should be managing a team of engineers if they don’t have a honed bullshit detector to catch this sort of thing.
It's often not a matter of verbalizing if the decision makers don't want to believe what you're saying.
You have to be working with people who are more interested in finding the best outcome for the organization/team/people than the most convenient decision for personal or political reasons.
I would like to start by teaching developers what business logic IS, and how to separate it from the other "layers" of software design.
I can't count the number of applications I've seen that have no clearly defined "place" for business logic. So you see it buried in views, state management, persistence, databases (I'm looking at you, Postgres functions and stored procs), services, controllers ... anywhere and everywhere. Impossible to test, impossible to reuse, waiting for someone to extract and isolate it but no one seems capable of even recognizing it for what it is.
I think if you were to ask 100 developers to define "business logic" you would get 100 different answers. That's the first problem to solve.
I mostly agree with you. But I'd guess that at least 10 of those 100 developers might say something like "business logic belongs in framework-agnostic pure functions, unless you have a good reason not to." And many of the remaining developers would say "what is a pure function?"
I'm not claiming that FP is always the answer, but it's a great technique that somehow is still very under-utilized despite being enshrined in popular frameworks like React (I think). I don't mean to criticize developers who are unaware of FP - I graduated in 2008 but didn't learn FP until 2014, and I still can't believe that I was ignorant of it for so long. And I can't believe how many developers are still ignorant of it. Unless I'm wrong and FP actually sucks, there is an education problem.
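As a minimal sketch of what "framework-agnostic pure functions" buys you (the Order/discount rule is made up):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    subtotal: float
    customer_years: int

def loyalty_discount(order: Order) -> float:
    """Pure business rule: no framework, no database, trivially testable."""
    if order.customer_years >= 5:
        return order.subtotal * 0.10
    if order.customer_years >= 2:
        return order.subtotal * 0.05
    return 0.0

# Controllers/views just call it; tests need no mocks:
assert loyalty_discount(Order(subtotal=100.0, customer_years=5)) == 10.0
```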
> But I'd guess that at least 10 of those 100 developers might say something like "business logic belongs in framework-agnostic pure functions, unless you have a good reason not to."
Those 10 developers might have a decent answer to the question "Where should business logic live?" (decent depending on other design considerations).
But that does not answer the question I asked them, which was "What is business logic?"
You're right. And I realize I can't answer the question. But I wonder how much it matters? If a colleague and I disagree on whether some authorization logic counts as business logic or not, what is at stake?
Business logic is often the type of code that changes least. If it is well designed, isolated, and organized, your business logic can survive UI changes, framework changes, stack changes, database vendor changes, and all sorts of other "infrastructure" change. If you can't identify that "layer" among the others, then business logic will make its way into all sorts of areas that make change harder. And the ability to change is the very value that software has to offer in the first place. If it didn't need to be resilient to change then we could stick with fixed circuits, which are far less expensive to build and maintain.
I love FP, at least some aspects of it. But it's not popular because it's not the panacea that the evangelists want you to believe. Also, the community loves engaging in pseudo-intellectual ramblings that most people find useless.
Yeah, that kind of stuff can definitely scare people away. I don't know what a monad is and I've stopped pretending to care. For me, immutability and pure functions are the only important concepts; everything else is a bridge too far for practical corporate use. Still, while null is commonly called "the billion dollar mistake", I think that going back in time and making C# and Java immutable-by-default would have had a bigger impact than adding null-safety. Maybe not a panacea, but FP does seem to set a pretty acceptable lower bound on how terrible a codebase can get, in my experience.
Having a sacred place for business logic requires a nice interface, and maintaining that has an overhead compared to "can't we just intercept the request at ... "
I’m on a sabbatical because I find myself very jaded by the industry, more or less for the reasons stated in this article. What I tell people is something like “I LOVE building stuff with code, but loathe EVERYTHING ELSE.” Writing code may be better suited as a hobby for me… after all, I have the most fun with side projects… but then, as most of you can probably empathize with, how will I make the sweet, sweet six figures? I’ve looked into being an AWS Solutions Architect, but they want someone with more sales experience (???). You’d think it would be the other way around; that’s certainly what we wished for when interacting with them as “customers.” It was always “hmmm, good question, let me get back to you on that” with the equivalent of LMGTFY in my inbox days later, like that wasn’t the first article I read on the topic. Anything else so far feels like a waste of my talent (“talent” - I mean I don’t want to be installing Microsoft Office on people’s computers or replacing ink cartridges).
I could probably handle the “bleh” of our industry if I didn’t have to bathe in it. Even 40 hours/week seems like too much, and why do I have to pretend I work 40? Just pay me less and let me go yellow on Teams at 1pm every day.
Sounds like freelancing/consulting might be right up your alley. Maybe building mvps or working on relatively new products or codebases. Charge for your time and no pretend work.
Yes, you’ll have to find clients. No, there is no perfect alternative to the “sweet six figures”, something’s gotta give, but you gotta start somewhere
Yep, I’ve done some freelancing over the last few years. I like it, but it’s hard to do that and a full-time job at the same time. Obviously the issue is getting enough work when you’re not normally employed, and when you are normally employed, the issue is not getting too much work. Clients like stuff ASAP and telling them an MVP will take 3 months because you’ll only be working on it 8 hours/week doesn’t go over well. The clients I’ve had are understanding but they all eventually want me to quit my job and come work for them, either as FTE or still a freelancer.
I hate both types of organizations. I think I hate the rapid tech switching even more than the business-focused one. There's a lot to hate about languages like COBOL, but honestly it looks awesome to create neat code with actual documentation and design documents that will just work for decades with minor upgrades and changes.
Durable systems are gone. Longterm support is 12-18 months. Iterate or die is how it works now, I guess.
I see no point in automating anything if I'm expected to re-write it every year. If they are going to pay me to do that, they might as well just pay someone to do the work manually. It would probably be faster and cheaper than the illusion of time savings we're getting from reimplementing the same solutions year after year against our will.
The best things I've done are the things I've never told management about. I make something, it works, people use it, and I don't have to touch it again for years. When I do, it's a minor change that takes 5 minutes. Anything coming from management is a multi-year project that requires hundreds of hours of meetings and at least 60 devs across a dozen teams... all to make something that 1 person could do in a few weeks if they were left alone and didn't have to conform to their overly complex frameworks.
A good balance comes when engineers care about business decisions, e.g. "is it worth spending 3 months of the team's time on X?", and, vice versa, non-engineers are aware of technical debt and why teams are trying to 'pay it down'. The ideal situation is when there is someone, or a group of people, who can take in all the information and make decent business decisions. After all, the codebase and its maintainability are part of the business (just as a well-maintained oil rig vs. a badly maintained one impacts the business). It is just not as visible: managers can't walk around and eye up the code as easily as they can see cracks and rust in the building work.
One thing I noticed when consulting on SAP or Salesforce implementations, where the tech stack is a given and the focus switches only to the business logic, is that people forget how important tech is. Reporting starts to be included, millions of validations are running, bringing the system down. As the tech is simply a given, limitations are not to be overcome, but rather accepted. Which often turns out funny, because those obscure limitations have very real-world consequences.
High level architects should focus primarily on creating a system architecture that is designed to incrementally and composably add functionality.
This often means adopting some sort of overarching architecture that nudges people towards composable implementations. For write-heavy, highly stateful systems with complex business logic, this means using something like a workflow engine where you can simply declaratively add tasks and conditions to pre-existing DAGs while being confident the existing workflow will not fundamentally change. To create new functionality, it's often enough to use existing functionality as a drop-in template. Duplicate and add / remove - no need to worry about what's there because you're not touching it.
For read heavy systems thinking about composition at the very highest levels is also very important. This allows engineers to easily add functionality without being concerned about breaking what's already there.
Always favor additive models over models where updating existing functionality is the norm. This means junior engineers can "color within the lines" so to speak, and the risk to the rest of the system is low.
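As a toy sketch of that additive style (illustrative only, not a real workflow engine): new behavior is a new node wired into the DAG, not an edit to existing nodes.

```python
tasks = {}  # name -> (dependencies, function)

def task(name, depends_on=()):
    def register(fn):
        tasks[name] = (tuple(depends_on), fn)
        return fn
    return register

@task("validate")
def validate(ctx):
    ctx["valid"] = True

@task("charge", depends_on=["validate"])
def charge(ctx):
    ctx["charged"] = ctx["valid"]

# Added later: touches nothing above, just declares its place in the DAG.
@task("fraud_check", depends_on=["validate"])
def fraud_check(ctx):
    ctx["fraud_score"] = 0.1

def run(ctx):
    done = set()
    while len(done) < len(tasks):  # naive topological execution
        for name, (deps, fn) in tasks.items():
            if name not in done and all(d in done for d in deps):
                fn(ctx)
                done.add(name)
    return ctx

print(run({}))
```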
Would also emphasize that while unit testing is often useful for the individual developer working on a piece of code, as far as ROI goes, end-to-end integration testing has the most bang for the buck. If you have the full endpoint tested, for instance, from end-to-end, including database writes, your confidence level goes up by an order of magnitude when you have to modify that functionality. If you have to choose between investing in extensive unit testing and extensive end-to-end API or contract tests, always choose the latter.
> Would also emphasize that while unit testing is often useful for the individual developer working on a piece of code, as far as ROI goes, end-to-end integration testing has the most bang for the buck. If you have the full endpoint tested, for instance, from end-to-end, including database writes, your confidence level goes up by an order of magnitude when you have to modify that functionality. If you have to choose between investing in extensive unit testing and extensive end-to-end API or contract tests, always choose the latter.
It's not so clear-cut, unfortunately. Extensive end-to-end tests can be very hard to set up, can be flaky, can take forever to run (especially if you run them on every change), etc.
I agree you should have some layer of automated end-to-end testing (and not enough people do), but in the end, I think you have to work towards making the system as compositional as possible, so you can also test things extensively in isolation - the end-to-end tests then serve to make certain that the individual pieces fit together, but don't hit all the edge cases.
I can think of few better investments than to have a reliable suite of end-to-end tests running as part of your merge pipeline even if they're difficult to set up. Sleep quality improves once you have this in place. Your tests don't have to be brittle or flaky. If your tests are brittle it is definitely a good investment to fix whatever's causing the brittleness rather than accept it as a fact of life.
Having the tests run against an isolated data and infrastructure environment without additional noise from shared activity is a good first step.
Distributed systems automatically bring challenges that make it very hard to create reliable test suites. You can reduce the flakiness, but I don't think you can ever completely eliminate it.
Beyond that, you haven't addressed the fact that a comprehensive end-to-end test suite in a complex system is really, really slow.
Just to clarify, I'm talking about integration testing the service itself - posting a payload, saving to the database, producing messages to a mock queue, etc. Test the entire service in isolation from other services and validate that it is behaving correctly. Not end to end across services. You should be mocking out all service dependencies and testing against the contracts for those systems.
Our API tests run flawlessly every time because they write against an isolated database with well-defined endpoint and messaging contracts. They also execute all remote operations against a mock API that conforms to those contracts. This is perfectly achievable.
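For what it's worth, here's a minimal sketch of that style: real database writes (a throwaway in-memory SQLite standing in for the isolated test database), real handler code, and a fake queue that records what would be published. All the names (`create_order`, `FakeQueue`) are illustrative, not any particular framework's API:

```python
# Service-in-isolation test: exercise the handler through persistence and
# messaging, with external dependencies replaced by contract-conforming fakes.
import sqlite3

class FakeQueue:
    """Stand-in for the message broker; records publishes for assertions."""
    def __init__(self): self.messages = []
    def publish(self, topic, payload): self.messages.append((topic, payload))

def create_order(db, queue, payload):
    """The handler under test: persist the order, then emit an event."""
    db.execute("INSERT INTO orders (sku, qty) VALUES (?, ?)",
               (payload["sku"], payload["qty"]))
    db.commit()
    queue.publish("order.created", payload)
    return {"status": "created"}

def test_create_order_end_to_end():
    db = sqlite3.connect(":memory:")  # isolated, throwaway database
    db.execute("CREATE TABLE orders (sku TEXT, qty INTEGER)")
    queue = FakeQueue()

    resp = create_order(db, queue, {"sku": "ABC", "qty": 2})

    # Validate the response, the database write, and the emitted message.
    assert resp == {"status": "created"}
    assert db.execute("SELECT sku, qty FROM orders").fetchall() == [("ABC", 2)]
    assert queue.messages == [("order.created", {"sku": "ABC", "qty": 2})]

test_create_order_end_to_end()
```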
I agree that this is achievable but it seems to contradict your original assertion. Mocking out other services doesn't give you the same assurances as end to end testing the whole stack.
To me it's a classic tradeoff: the more you integrate in your tests, the more meaningful they are - but also the harder to write and maintain.
There is no perfect balance, but if you're aware of the extremes you can adjust course. A little goes a long way.
If you see that your organization is too chaotic, you should put some structure and discipline in place to curb the tech debt and chaos. You’d be surprised by how far adding basic linting and teaching the team to use it will get you. You can add automated testing after that, and move on to other initiatives.
Similarly if you are in a resume-oriented development organization, bring attention to business outcomes through little initiatives. Make your engineers attend at least one sales or support call per month. Explain the company budget. Show the true costs of going with the flavor of the month.
You can foster change bit by bit. Things will get more balanced if you continually do it.
It is tough though. You are fighting a culture battle. Changing people’s mindsets and habits is a long process. You won’t get overt support and you won’t see immediate results. But it’s worth it.
I've only read the abstract, but hasn't this been achievable for at least the last twenty years with dependency injection? I've seen time and time again developers conflate microservices with the general concepts of service abstraction and separation of concerns. These things existed before microservices, and will continue to exist.
A good DI implementation allows for one to define these business services as interfaces and implementations within an initial monolith, and then trivially reimplement those interfaces against an API or RPC when it comes time to scale. Interfaces can be separated out into a common library which can be referenced in all parts of the system. I am simply curious as to why this has been so staunchly avoided over the past decade of microservices hype.
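As a sketch of what that looks like (the service name and classes here are invented for illustration): the business logic depends only on the interface, and swapping the in-process implementation for an RPC-backed one is a wiring change, not a rewrite.

```python
# DI-style service abstraction: one interface, two interchangeable
# implementations -- in-process for the monolith, remote when you split.
from typing import Protocol

class PricingService(Protocol):
    def price_for(self, sku: str) -> int: ...

class LocalPricingService:
    """In-process implementation used while everything is one monolith."""
    def __init__(self, table: dict[str, int]): self.table = table
    def price_for(self, sku: str) -> int: return self.table[sku]

class RemotePricingService:
    """Drop-in replacement calling a (hypothetical) pricing API over HTTP."""
    def __init__(self, base_url: str): self.base_url = base_url
    def price_for(self, sku: str) -> int:
        # e.g. GET {base_url}/prices/{sku} with your HTTP client of choice
        raise NotImplementedError("wire up your HTTP client here")

def checkout_total(pricing: PricingService, skus: list[str]) -> int:
    """Business logic depends only on the interface, never the transport."""
    return sum(pricing.price_for(s) for s in skus)

print(checkout_total(LocalPricingService({"ABC": 500}), ["ABC", "ABC"]))  # 1000
```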
DLQ and retry are complementary strategies. DLQ is where the message goes after retries are exhausted. Whether you need one depends on your SLA, whether the message is still relevant later, whether the data is going to be reconciled through some other channel, etc. It's not an aesthetic preference.
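A toy sketch of how the two compose (a plain list stands in for the real broker; with SQS, RabbitMQ, or Kafka-with-a-retry-topic the same shape is usually configuration rather than code):

```python
# Retries absorb transient failures; the DLQ catches messages that exhaust
# them, so they can be inspected or reconciled later instead of being lost.
MAX_ATTEMPTS = 3
dead_letter_queue = []

def handle_with_retry(message, process):
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            process(message)
            return True
        except Exception as err:
            last_error = err
    # Retries exhausted: park the message rather than dropping or blocking.
    dead_letter_queue.append({"message": message, "error": str(last_error)})
    return False

def always_down(msg):
    raise RuntimeError("downstream unavailable")

handle_with_retry({"id": 1}, always_down)
print(dead_letter_queue)
# [{'message': {'id': 1}, 'error': 'downstream unavailable'}]
```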
I agree with this, but the article feels like it's talking about a world that no longer exists (or is rapidly ceasing to exist). Questions like "should salaried engineers focus on code quality at the expense of business goals" are a ZIRP phenomenon, i.e. relevant at a time when companies were hiring engineers just to prevent competitors from hiring them and when graduating from a boot camp was all it took to land a junior SWE role. Now that the engineering job market is, alas, looking a lot like other job markets, this feels like it's from a different reality.
You’ll be amazed at what goes on in some of these multi-billion-dollar companies. Why do you think the mass tech layoffs happened? There are still a ton of coasters in large companies who didn’t get affected by the last layoff spree but are responsible for the next one.
"Domain work is messy and demands a lot of complicated new knowledge that doesn’t seem to add to a computer scientist’s capabilities. Instead, the technical talent goes to work on elaborate frameworks, trying to solve domain problems with technology. Learning about and modeling the domain is left to others. Complexity in the heart of software has to be tackled head-on. To do otherwise is to risk irrelevance." - Eric Evans
Just note that any organization is quite complex: at different times, different people in an organization have different purposes, and they may have different understandings even when they share the same purposes. So it is pretty common that everybody's actions, understandings, and purposes are not coordinated.
The purpose of an organization is never simple. From Herbert Simon's *Administrative Behavior*:
> The survival and success of organizations depend on their providing sufficient incentives to their members to secure the contributions that are needed to carry out the organizations' tasks
I've found that small, focused teams are generally better at this, especially when they're close to the stakeholders and can arrange a meeting to get some feedback if need be.
Unfortunately, companies tend to throw people at the problem, so you get 10+ person teams where at most 30% do any productive work, because the rest are either too caught up in their personal lives or too junior to pull their own weight.
I've been both part of that 30% and part of the rest on different projects, and I've managed to flip one dev over to the better side once, so it's not typically an inherent feature of a person.
There are several levels of relationships in a company: technical, business, and social. Often, management hires consultants to push for organisational changes, and most of the time the social relationships are ignored because, in contrast with the other two, they're mostly unwritten and implicit.
It's sort of sad and funny to see people pushing for organisational change through deploying new internal applications or new procedures and workflows, without ever tackling the human interaction factor.
I've found that really inquisitive, good devs tend to get bored if stuck with the same stack for a while. This in my experience is what often drives adopting shiny new things which are not needed and cause progress to grind to a halt.
I was one of these developers until I saw my shiny decisions lead to failure. But it's a tricky problem with good devs who haven't learned that lesson yet, as you don't want to lose them.
> Raking in absurd sums to tweak linters or buttons may not be the worst thing in the world, if it also didn’t lead these bored people to dream of becoming architecture astronauts by introducing absurdly complex tools to solve imaginary problems.
I'm forced to sit back and admire a sentence that's as well wrought, funny, and true as this.
I'm a manager who tries to take this middle road. But it's frustrating to see other teams on either side of the spectrum celebrated, and hard to get recognition for your small team that just gets the job done, while trying to keep an acceptable level of quality.
It never bodes well when an essay opens with a strawman.
In the opening paragraph, the author describes one work style that’s on the verge of seizing up, and sets it against a work style that apparently produces nothing of business value at all?
They then work to criticize the latter as impractical, even though it’s just an imaginary business archetype that couldn’t possibly exist.
The reality is that most operating workplaces are already situated somewhere in the middle, navigating some version of the compromise the author spends the rest of the essay trying to lay out.
If the author is earnest and really worked at places that appeared as paralyzed as their strawman archetype, then either the business was already evaporating as they were hired into it or (more likely) they didn’t really understand what was going on and probably don’t have the insight to be writing about it.
I think the author was using hyperbole to illustrate two problematic organizational tendencies. I could say something like "I see two kinds of drivers in the northeast: ones that drive like they're fleeing a bank robbery in a stolen police car and others driving like they're piloting a parade float carrying a human pyramid." I think it's obvious I'm not saying literally all drivers in the northeast fit into one of those two categories, rather, I'm setting up the scene to talk about two extremes in a common premise.
I’m sure that’s what they were trying to do, but their argument only carries weight in the context of those hyperboles.
In your example, it’d be like continuing the essay to argue that people should really try driving like they were just in a plain old consumer car. The thing is: that’s essentially what everybody is already doing in the real world, and so you’ve not contributed anything with your argument.
> I’m sure that’s what they were trying to do, but their argument only carries weight in the context of those hyperboles.
I think this is a bit uncharitable. Yes, nominally everyone is trying to do this, but in practice this comes down to a lot of tradeoffs in terms of timeframes, team sizes, and maintainability/scalability. Even if you have balanced perspectives, what usually ends up happening in larger corporations is some group has a louder voice and more power, and ends up influencing decisions in a way that is globally suboptimal because executives are too busy, and local managers are too focused on their own silos/perf evaluation.
For example, complexity is often built into the product and systems according to the resourcing allocated to a particular problem at a certain point without a full understanding of the implications over time. However, when that complexity is found to have a poor ROI there is then a natural aversion to removing it because at that point it's hard to quantify how exactly the business depends on it. This creates a tax on the entire org, which many can observe, but structuring incentives to fix is fiendishly difficult, especially in comparison to new growth opportunities that get executives, board members and investors excited.
People aren't that self-aware, though. Many people assume they're being normal and rational when in reality their behavior tends towards a problematic extreme. Explicitly showing which behaviors are extreme and why they're problematic - versus saying "drive safely," which everybody thinks they're already doing - is the purpose of this sort of rhetoric.
The startup (bootstrapped, non-US based) I work for creates a developer productivity tool which gives you high quality architecture “for free”, allowing you to instead focus on those bespoke un-automatable business rules. We essentially aim to solve the author’s stated issue here.
I'm very skeptical that's high quality for all use cases. Architecture is a huge trade off space and what's right for a particular product is most certainly wrong for another.
Absolutely, you choose or create the architecture or architectural components which make sense for your particular situation/requirements. After all, there is no one size fits all architecture, there is instead a choice of trade-offs.
The whole software development industry has all of a sudden been relegated to what it has always been: just a tool. Like the hammer. Those who promoted the hammer as the holy grail have been reading this article and commenting whatever their alcohol-imbued brains are capable of producing with zero thought. If you get huge sums to work on a linter for years and use that money to fry your brain instead of developing yourself, you end up a cripple. Just like the software development industry has become: a cripple that no one wants to meet, touch, or see.