There's an obvious comprehensibility complexity to code, apparent to anyone who has spent almost any time whatsoever trying to make something happen in software. However, we've got essentially zero academic work or theory around it.
Just 'best practices' (ie a thing other people are known to do so if things go wrong we can deflect blame).
And code smells (ie the code makes your tummy feel bad. yay objective measures).
And dogma (ie "only ONE return point per function" or "TDD or criminal neglect charges").
Sure, please do something for QA, because it'll be better than nothing. But we're probably a few decades away from theoretical underpinnings that would actually allow us to make objective tradeoffs and measurements.
There is plenty of academic work on it, as real engineers, those who studied Software Engineering or Informatics Engineering rather than getting fake engineering titles from bootcamps, should be aware.
Usually available as optional lectures during the degree, or later as Msc and PhD subjects.
Although, so far I've only bumped into cyclomatic complexity (with some studies showing that it has worse predictive power than plain lines of code) and lines of code.
I don't know. I was hoping for something like: "We know inheritance is bad because when we convert the typical example over to this special graph it forms a non-compact metric space" Or something like that.
Even though I find cyclomatic complexity uncompelling, it can at the very least slurp up code and return a value. Nicely objective, just not particularly useful or insightful as to whether or not things are easy to understand.
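For concreteness, here's a rough sketch of the "slurp up code, return a value" idea: approximating McCabe-style cyclomatic complexity for Python source with the ast module, with lines of code alongside for comparison with the studies mentioned above. Which nodes count as decision points is my own assumption; real tools draw the line differently.

```python
import ast

# Illustrative only: which AST nodes count as "decision points" is a rough
# assumption here; real tools (radon, lizard, etc.) make finer-grained choices.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """1 + number of decision points, in the spirit of McCabe's metric."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

def lines_of_code(source: str) -> int:
    """Non-blank, non-comment lines: the 'dumb' baseline the studies compare against."""
    return sum(1 for line in source.splitlines()
               if line.strip() and not line.strip().startswith("#"))

snippet = '''
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
'''
print("cyclomatic:", cyclomatic_complexity(snippet))  # 3 (the if and the elif)
print("loc:", lines_of_code(snippet))                 # 6
```

Both numbers are perfectly objective, and neither says much about how hard the code is to understand, which is exactly the complaint.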
The provided link looks suspiciously like they're going to talk about the difference between system, integration, and unit tests. The importance of bug trackers. And linters / theorem provers maybe.
I don't think these are bad things, but it's kind of a statistical approach to software quality. The software is bad because the bug chart looks bad. Okay, maybe, but maybe you just have really inexperienced people working on the project. Technically, the business doesn't need to know the difference, but I would like to.
I don't suppose you know where I can get their list of references without hitting a paywall? Specifically [16] and [24].
EDIT: [For anyone following along]
The linked paper is Measuring Complexity of Object Oriented Programs. Although, the paper isn't free. They reference several other papers which they assert talk about OO complexity metrics as well as procedural cognitive complexity, but unfortunately the references aren't included in the preview.
Apparently, there's also a list of Weyuker's 9 Properties, which looks easier to find information on. But these look like meta-properties describing what properties a complexity measurement system would need to have [interesting, but they don't really seem to comment on whether or not such measurement is even possible].
It looks like a lot of this research is coming out of Turkey, and has been maybe floating around since the early 2000s.
EDIT EDIT: References are included at the bottom of the preview.
EDIT EDIT EDIT: Kind of interesting, but I'm not sure this is going to yield anything different than cyclomatic complexity. Like, is this still an area of active research or did it all go by the wayside back in the early 2000s when it showed up? The fact that all the papers are showing up from Turkey makes me concerned it was a momentary fad and the reason it didn't spread to other countries was because it doesn't accomplish anything. Although, I suppose it could be a best kept secret of Turkey.
Renamed programs are defined to have identical complexity, which is pretty intuitively untrue, so I've got my concerns.
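To make that renaming concern concrete: any metric that is invariant under renaming has to score these two structurally identical functions the same, even though one is obviously harder to follow (names invented for illustration).

```python
# Two structurally identical functions. Any metric that treats renamed
# programs as equally complex must give these the same score, yet the
# second is clearly harder for a human to follow.

def net_price(gross_price, discount_rate, tax_rate):
    discounted = gross_price * (1 - discount_rate)
    return discounted * (1 + tax_rate)

def f(a, b, c):
    d = a * (1 - b)
    return d * (1 + c)

assert net_price(100, 0.10, 0.20) == f(100, 0.10, 0.20)  # 108.0 either way
```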
EDIT ^ 4: Doesn't seem to be able to take data complexity into account. So if you're dividing by input, some inputs are going to cause division by zero, etc. You might be able to jury-rig it to handle the complexity of exceptions, but it looks like it can mostly handle static code. I'm not sure it's really going to handle dynamically called code that throws very well. I also don't think it handles complexity from mutable shared references.
Nice try, but unless there's a bunch of compelling research showing that, no, actually, this is useful, I'm not sure it's going to cut it. And at the moment the only research I'm finding more or less just defines functions that qualify as a cognitive measure under the Weyuker principles. I'm not seeing anyone even pointing it at existing code to see if it matches intuition or experience. Happy to be proven wrong here, though.
The scientific groundwork for excellent testing, anyway, has already been done, but not in the realm of computer science. This is because computer scientists are singularly ill-equipped to study what computer scientists do. In other words, if you want to understand testing, you have to watch testers at work, and that is social science research. CS does not take social science seriously.
An example of such research done well can be found in Exploring Science, by Klahr. The author and his colleagues look very closely at how people interact with a system and experiment with it, leading to wonderful insights about testing processes. I've incorporated those lessons into my classes on software testing, for instance.
You may still not buy into it, but note that single exit was established for languages like C where an early exit can make it difficult to ensure that all resources are freed. It isn't meant for every language – and, indeed, languages that are not bound by such constraints usually promote multiple exit because of the reasons you bring up.
And even that is wrong: single entrance/exit was originally about subroutines designed to be goto'd into at different points for different behavior, and which would goto different points outside the subroutine as the exit.
There are pretty much no languages left today where it's even possible to violate this principle without really trying. It's not about having a single return; it's about all functions starting at the top and return statements always taking you back to the same place in the code.
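To illustrate the resource-freeing point a couple of comments up: in C, the single-exit (or goto-cleanup) style exists so that every path releases what it acquired, whereas languages with structured cleanup make early returns safe, which is why they tend to promote multiple exits. A minimal sketch, assuming nothing beyond the standard library and an invented file-scanning example:

```python
# Minimal sketch (invented example) of why early returns are safe in languages
# with structured cleanup: the finally/with machinery runs on every exit path,
# so there is no C-style risk of leaking a resource by returning before the
# cleanup code at the bottom of the function.

def first_matching_line(path, needle):
    f = open(path)
    try:
        for line in f:
            if needle in line:
                return line.rstrip("\n")   # early exit: file still gets closed
        return None                        # normal exit: file still gets closed
    finally:
        f.close()                          # runs on every return path

# Equivalent and more idiomatic with a context manager:
def first_matching_line_with(path, needle):
    with open(path) as f:
        for line in f:
            if needle in line:
                return line.rstrip("\n")
    return None
```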
I wish more people felt this way. What a compliment people pay themselves when I hear them say "write clean code", as if they knew its address and had dinner with clean code just last night.
I was thinking there should be some metric around d(code)/dt. That is, as the software is used, 'bad' code will tend to change a lot but add no functionality, while 'good' code will change little even as it's used more.
d(code)/dt isn't a very good metric though. Think of the Linux kernel. Drivers get some of the least maintenance work and are broadly the lowest-quality part of the kernel. arch/ is busier than drivers/, but anything you find in the parts being touched is also significantly higher quality.
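For what it's worth, the d(code)/dt idea is easy to approximate from version control. A rough sketch, assuming a local git checkout and treating added-plus-deleted lines from `git log --numstat` as "change"; as the kernel example shows, low churn can mean stable or merely unmaintained, so the number still needs interpretation:

```python
# Rough churn-per-file sketch: sum added+deleted lines per path over a window
# of history via `git log --numstat`. Only a crude proxy for d(code)/dt.
import subprocess
from collections import Counter

def churn_by_file(repo_dir, since="1 year ago"):
    out = subprocess.run(
        ["git", "-C", repo_dir, "log", f"--since={since}", "--numstat", "--format="],
        capture_output=True, text=True, check=True,
    ).stdout
    churn = Counter()
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) != 3:
            continue
        added, deleted, path = parts
        if added.isdigit() and deleted.isdigit():  # numstat prints '-' for binary files
            churn[path] += int(added) + int(deleted)
    return churn

if __name__ == "__main__":
    # "." is a placeholder; point it at any local git checkout.
    for path, changed in churn_by_file(".").most_common(10):
        print(f"{changed:8d}  {path}")
```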
> you cannot be taught what nobody knows how to do
It's worse than that. No one can agree what "quality" means.
Mostly, the word is used as a weapon.
The pointy end of the weapon is what management pokes you with whenever anything unexpected happens. Typically they do this instead of making sure that problems do not happen (a.k.a. "management").
The weapon's handle is sometimes flourished by process gatekeepers who insist on slowing everything down and asserting their self-worth. This is not good for throughput, anyone else's mood, or eventually even for the gatekeepers.
People usually refuse to talk about quality in terms of actual business metrics because if anything unexpected happens that's not covered by the metrics, there will be fault-finding. And the fingers pointed for wrong metrics are typically pointed at middle management.
or maybe, those who know how to do this are just unable to spread the knowledge... something about how they think their private secret codes are the source of their wealth
when in fact, it's merely the scheme by which they maintain an advantageous capacity to extract energy from those seeking to learn how to build quality software