Disagree - "investing" is an activity that can be deferred without anything but opportunity cost; "paying off debt" expresses that not doing it has real consequences.
Well, paying off debt can often be deferred indefinitely too, as long as you keep paying the interest.
Instead of thinking of it as "debt payments" you could just think of it as high operating expenses for the development team, and invest in reducing them.
You don't even have to pay indefinitely, just as long as the code is still in use (for the subset of technical debt that has externally visible effects, such as performance issues) or even just while the code is still routinely being worked on (for the rest). At least it looks like that at first glance, which makes not paying so attractive.
But your alternative mental model is spot on: it's a costly hindrance to development, not a cost imposed by the code itself. And "only until you stop working on that code" is unfortunately only half the truth, because a team that has gotten used to living with a problematic style instead of dealing with it can easily end up repeating the very patterns they abhor when given a chance to greenfield, because they don't know the alternative well enough.
I sort of agree with the author, for a similar reason that I don't like the concept of code rotting. Digital artifacts left alone don't change. They don't become more broken over time. Technical debt in code you're not currently modifying isn't incurring interest - and a debt without interest is one that you can safely leave unpaid indefinitely.
Both "code rot" and "technical debt" are concepts that, unlike their real-world analogs, are only problematic in context of ever changing environment. I feel this is a crucial and unmentioned difference that makes the analogy bad.
> Both "code rot" and "technical debt" are concepts that, unlike their real-world analogs, are only problematic in context of ever changing environment.
It should be taken as a given that code (and everything else) exists in an ever changing environment. Computers are physical machines; they run down over time. Capacitors need to be replaced, and you've got to blow the dust out of your fans once in a while.
Especially in web development, the environment changes quickly. Who could have foreseen that the authors of the JavaScript spec would introduce their own "remove" function? Not me: I left my code unchanged, and it broke. Or, more technically, it rotted.
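To make the failure mode concrete, here's a rough sketch of the mechanism (the details are simplified and hypothetical, but this is the shape of what bit me):

```javascript
// An old script extends a built-in prototype, guarded by the then-sensible
// "only define it if the platform doesn't already have one" check.
if (!Array.prototype.remove) {
  Array.prototype.remove = function (item) {
    var i = this.indexOf(item);
    if (i !== -1) this.splice(i, 1);
    return this; // callers rely on getting the array back for chaining
  };
}

// If the platform later ships its own Array.prototype.remove with different
// semantics (say, returning the removed element instead of the array), the
// guard above silently skips the old definition and every call site picks up
// the new behavior - without a single line of this file having changed.
```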
Let's be honest - web development is crazy land. It's an outlier, by far the fastest moving area of our industry. And that's still only if you follow the "best practices".
Maybe I should've put it in a different way: one of the biggest benefits of putting things in a digital form is that the digital form is immune to decay, as long as someone maintains the infrastructure for copying and reading that digital data. The default state of digital data - including code - is not rotting. We do a lot of extra work to make it rot. So the question should be, why are we doing it, and shouldn't we embrace this feature of permanence that we get for free? It's not impossible [0][1].
Version pinning, and only doing updates in explicit, planned steps, is a start. The runtime environment can be virtualized and thus made to outlive the hardware (virtualization is a way to turn hardware into software, making the former immune to decay). Web folks have Babel, which today is used to evolve the language faster than the browsers can keep up with, but it could be used in reverse - instead of compiling ES1234 into ES5, it could also be made to compile ES5 into whatever backwards-incompatible ES-whatever the browsers will be running 5 years from now. This is how you restore code's pristine, undecaying state.

The nice thing about the digital medium is that it only requires maintenance at the boundaries of change, and we get to move those boundaries - we could opt to keep them concentrated in a few places (like virtualization software, transpilers, hardware abstraction layers), instead of spreading them out everywhere and then saying that the code "rots".
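To make "boundaries of change" concrete, this is roughly what the boundary looks like with today's tooling (the package and target below are just an example, and today it only runs in the down-compile direction; the reverse direction I describe would need a different tool, but the principle is the same):

```javascript
// babel.config.js - a minimal sketch. The transpiler is the one explicit,
// versioned place where "the language moved" gets absorbed; the rest of the
// codebase stays as written.
module.exports = {
  presets: [
    [
      '@babel/preset-env',
      {
        // Pin the output target explicitly, instead of tracking whatever
        // browsers happen to ship this month.
        targets: { ie: '11' },
      },
    ],
  ],
};
```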
--
[0] - Common Lisp has many warts that make it suboptimal for modern software development, but one of the best things about it is that CL code doesn't rot much. When you pull in a CL library that's 15-20 years old, you expect it to run without changes. With a 25+ year old codebase, you might need some fixes, mostly because the language wasn't standardized until 1994.
[1] - Maintaining backwards compatibility matters too. I can take a Windows .exe compiled 15 years ago and expect it to run on a modern Windows machine just fine. Arguably, this is one of the main reasons for the success of Windows - Microsoft put in the work to ensure old software doesn't break on new Windows versions.
Physical rot and rust are unavoidable even on the small scale without special effort. Code does not "rot" on a small scale; it only starts to matter once you get on the IT industry fashion treadmill along with the world at large.
Have to disagree with this. Unmaintained code is a massive security risk. Eventually someone will find a flaw in some part of your stack and your code is dead.
No, your code still isn't dead. It's revealed to be potentially exploitable by someone, somewhen, maybe. Code is not accruing danger or becoming dead by just sitting there, untouched.
Unless you've been particularly careful with your build chain, your ability to actually do anything with an untouched codebase degrades over time. There may well come a point where just getting the damn thing to build so you can release a version with whatever vulnerability patched can cost more than the remaining lifetime value of leaving it running. In that case, it's already dead, it just doesn't know it yet.
What you say "being particularly careful with your build chain" was the status quo just few years ago, and I'd argue is still good engineering. You should pin your dependencies, and updates of everything - including build tools - should be explicit steps in the process.
This should be trivial on the small scale (just pin versions and disable automatic updates of your build tools / package repos), and with the recent proliferation of cheap VMs and containers, it should stay simple over time - you should be able to pull a 5 year old image full of old software and rebuild the original binary.
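Something like this, for instance (the image tag, commands, and the existence of a lockfile and a build script are all illustrative assumptions, not a recipe):

```dockerfile
# A sketch of the "old image, old tools" idea - note that nothing says
# "latest". Rebuilding this in five years should pull the same toolchain and
# the same dependencies it pulls today (assuming the registry and the package
# mirror keep serving them).
FROM node:10.16.0
WORKDIR /app
COPY package.json package-lock.json ./
# "npm ci" installs exactly what the lockfile records, nothing newer.
RUN npm ci
COPY . .
RUN npm run build
```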
> There may well come a point where just getting the damn thing to build so you can release a version with whatever vulnerability patched can cost more than the remaining lifetime value of leaving it running. In that case, it's already dead, it just doesn't know it yet.
Vulnerabilities aren't sins, they are things that can be fine to leave unfixed. The software isn't dead, it'll just keep running with a known exploitable issue.
> You should pin your dependencies, and updates of everything - including build tools - should be explicit steps in the process.
Yes, and in the not uncommon case where part of the build environment is left unspecified, the untouched code accrues debt over time. As you say, it's possible not to take on that debt by making a significant investment up front. This isn't an argument against the debt metaphor, it's an argument for it.
> Vulnerabilities aren't sins, they are things that can be fine to leave unfixed.
My point doesn't change if you s/vulnerability/sufficiently serious vulnerability/.
It does, though. A single line might not change - but the world around it is always shifting. Libraries and frameworks, operating systems, drivers, external data structures - nothing stays static in the environment our code lives in. As long as code is running, it needs to be maintained. (Most of the time the maintenance is negligible, sometimes it's serious...)
Most of the things you mentioned are a choice, not something inherent to the code as an artifact. If the external systems your code interfaces with change, you need to change your code. Other than that, updating your OS, libraries, drivers - these are all choices. You can choose not to update them, and the code will run exactly as well as it did when originally written. You can virtualize the environment to mitigate the risk of the underlying hardware platform going away from under you. Code decay, to me, is mostly self-inflicted.
It's a feature of the dependency system and speed of change of that ecosystem. A large number of systems have been written with these 'high maintenance' stacks in recent years.
I agree the users of these stacks are often short-sighted and you could call that bad engineering. However, it's very much a feature of those platforms, not so much of individual decisions made on them.
Personally, I prefer ecosystems with large, highly stable, cohesive common libraries to avoid those problems... but the reality is that so much software is built on fragile, high-maintenance stacks these days, with npm and React being the worst, but not the only, examples of extreme code rot from time decay of the ecosystem. Try updating a React app from two years ago to really understand.
That's how I think about it, and that's hopefully how a lot of people think about it, but I suspect for most people it isn't. For a lot of people debt isn't nearly as scary as it ought to be, and the idea of spending money to theoretically make more money, instead of paying off debt, seems like much more of a sure thing than it actually is.
Perfect example: I'm in SV, and I bet there's a not insignificant slice of the population around me who think that the cure to budget deficits (i.e., debt) is to reduce income by cutting taxes, based on a mystical belief never seen in reality that the growth fueled by the tax cuts will result in both lower taxes for the population and higher tax revenues for the government (i.e., investment).
I'm almost 50 years old, and there has never been one iota of truth to this notion in my lifetime. Not one teeny tiny iota. But some people just can't resist going back to that well over and over again with the same results every time. The notion that you can kill two birds with one stone, and both ignore debt and create wealth at the same time via magic Austrian fairy dust must be very powerful. It's like a black hole of self-evident idiocy that people keep flying into for some reason. It must be a form of cognitive bias, but if so I don't know what to call it.