People often assume that code is knowledge. They want "self-explanatory" or "well-documented" code. Companies and managers often treat developers as interchangeable. But therein lies the mistake.
Knowledge exists in mental models and team structures. Small components and systems can be understood by a person, but team structure also embodies knowledge of larger systems. People will need complementary mental models to understand a large system together.
That's why adding manpower to a late project makes it later. That's why maintenance is hard and handover is harder. That's why systems devolve into big balls of mud. Because companies and managers do not respect the fact that you need people and teams who have good mental models and that mental models take time to build and share.
No amount or quality of code can make up for this fact. And simulacra of ownership - having "product owners" or whatever - won't cut it. You need people to own their systems, understand them deeply. Moving people around, churn, treating developers as interchangeable, substituting rituals for deep work, accumulating 'technical debt' (deferred work as in ship now and think later) etc are all detrimental to building and sharing sound mental models.
Managers who know what they're paid for also want code with a bus factor of more than 1, just like in other engineering disciplines. Having code well documented is a feature in that sense; like unit tests it won't fix all problems but it goes a long way.
Speaking of tests, I've many times learnt more about how some code is supposed to work from the tests than from the documentation. Yet another reason to test everything you can.
Unit tests often double as documentation. They show the expected behavior of a function.
And if you take the time to write a series of high level cases, they can show the full expected behavior of a process. E.g: "Don't accept another request on the same object while we have another request on that object in the queue."
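To make that concrete, here's a minimal pytest-style sketch (RequestQueue, DuplicateRequestError, and the test name are invented placeholders, not from any real codebase): a test like this states the rule at least as plainly as a paragraph of documentation would.

```python
import pytest


class DuplicateRequestError(Exception):
    """Raised when an object already has a request waiting in the queue."""


class RequestQueue:
    def __init__(self):
        self._pending = set()

    def enqueue(self, object_id, payload):
        # Reject a second request for an object that already has one queued.
        if object_id in self._pending:
            raise DuplicateRequestError(object_id)
        self._pending.add(object_id)


def test_rejects_second_request_while_first_is_still_queued():
    # Documents the rule: don't accept another request on the same object
    # while one is already in the queue.
    queue = RequestQueue()
    queue.enqueue("order-42", {"action": "update"})
    with pytest.raises(DuplicateRequestError):
        queue.enqueue("order-42", {"action": "cancel"})
```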
A unit test is great, but I've seen people delete unit tests rather than try to understand what's going on.
I've seen code like this:
- 10 lines
- every line has a different author in git blame
- there are conditional branches inside conditional branches
- last but not least: all conditional branches have the same behavior!
How does it end up like this? Why doesn't the last committer just delete everything and write it in a single line instead?
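Something like this hypothetical Python reconstruction (all names invented for illustration): each branch was presumably added by a different person, and every path ends up doing the same thing.

```python
from dataclasses import dataclass


@dataclass
class Event:
    kind: str
    retries: int = 0


def process(event: Event) -> str:
    return f"processed {event.kind}"


# Nested branches accreted over time; every path has identical behavior.
def handle(event: Event, is_admin: bool) -> str:
    if event.kind == "update":
        if is_admin:
            return process(event)
        else:
            return process(event)
    else:
        if event.retries > 0:
            return process(event)
        else:
            return process(event)


# Which is why the whole thing could collapse to a single line:
def handle_collapsed(event: Event, is_admin: bool) -> str:
    return process(event)
```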
Treating code as a means to an end, which is what I think you're describing, is just as bad as treating it as an end in itself. It should be neither an opaque black box nor a transparent 'bicycle of the mind'. In my opinion it cannot be lumped all in one category. It is an aspect of software systems. It may have many features, depending on where it is and how it is used.
Code can embody knowledge, but it is not the embodiment of knowledge. It can express functionality, but it is not a functional component of a system. I think aspect is the best description: when you look at a system from the source code, you see some of it. It is not a projection of the system onto a set of dimensions, as some people seem to treat it, nor a textual description of the system. It is the part of the system you can see when you come at it from that side.
It's true that the 10-line code I was describing did provide some extra information, like the fact that there are different cases that are, have been, will be or could have been different... I agree that the code isn't the end result if that's what you're saying.
But the resistance to change is everywhere, not only in the code, and it locks projects into what they are. Only expansion is allowed. I'm not saying it's bad; it's just what it is.
I've been contracting and taking care of legacy software this year. My initial parallel steps are to ask questions, read the code, write tests and literally any kind of documentation, all while trying to implement features. You can spend months trying to understand some software, and never really get anywhere because there's no requirements/documentation, your manager doesn't know, and no one who wrote that code works there any longer, or they're just too busy. It's a hot mess.
(I'm not even a Test-Driven Development evangelist. There's just no other way to "prove" that things kind of, almost, sort of, work in a possible environment.)
There's no replacement for knowledge and sound mental models. Documentation, tests, they can help but they cannot replace having a knowledgeable person around.
So I constantly had to fight management about paying my people well when I moved to IT. Management saw them as replaceable by anyone who knew the software we used. I saw them as domain experts who knew HOW we used that software; the software was secondary to knowing the company (and basically how every job was done in the manufacture of a 30,000+ part product). When I was a dev, we were partnered with industry domain experts so that we understood how the software was implemented to a level that 'self-documenting code' never will.
Software is a cog. Your code can't be so self-documenting that it becomes a domain expert for the domain it is trying to fill. That's like documenting how to train for a marathon by looking at running shoes.
Unlikely. LLMs only understand the code they're looking at, but not in the context of the complex interactions. E.g. LLMs won't understand how a particular line of code fixed a system outage that occurred last year.
I mean, it might be able to, as could a junior software engineer. That's beside the point.
It's rare that just reading the code will actually capture the spirit of what it means, that you could skip the step of asking the folks who wrote it why things are the way they are or the step of experimenting with it yourself to get a feel for it.
Or in other words, it doesn't really matter who reads the code. You still don't get to skip the knowledge building.