I'll take a stab. Quality software is software that is testable, can adapt to new features, and is architected to match the current organizational structure of the software team, so that communication and dependencies don't have an impedance mismatch.
- Why is testable software higher quality? Does it add value to the software? I'd venture that untestable software has the same value as (if not more than) testable software, due to time-to-market. You can write software that is 'obviously correct' and "high quality" at the same time, without any tests.
- Why does software that can adapt to new features increase the quality? If that is the case, we must argue that WordPress is extremely high-quality software. Or SAP.
- How does architecture influence quality? If that is the case, then there isn't any need for different architectural styles since there should be "one true style" that has the best quality software.
Testable software is usually of better quality because you can automate some parts of quality assurance.
Sacrificing quality assurance to favour other aspects is common, but the quality usually suffers.
A company favouring time to market over testability is likely to release buggy software. They can get away with it.
Adaptability is a common quality attribute, but you can find counterexamples: WordPress and SAP are successful software that may not check all the quality boxes.
Some architectures are for sure worse than others, and there isn’t one good architecture for all kinds of problems.
> Why is testable software higher quality? Does it add value to the software? I'd venture that untestable software has the same value as (if not more than) testable software, due to time-to-market. You can write software that is 'obviously correct' and "high quality" at the same time, without any tests.
Note I said testable software, not software with tests (there is a difference!)...I'd agree that software with tests (which is by definition testable software) has a huge developer cost that may not always be in the company's best interest (like you said, time to market might be important). But in my experience, writing code in a way that can be tested later is only marginally more costly (time-wise) than writing code that can't be. A good example of this is writing modules that communicate via message passing and use state machines, rather than direct function calls. The former has a slightly higher cost in dev time, but you can always retro-fit tests once you've achieved market penetration. You can't always do that with direct function calls.
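To make that concrete, here's a minimal sketch of the message-passing + state-machine style I mean (the names `DoorController` and `Msg` are made up for illustration, not from any real codebase): the module's only entry point is `handle()`, so a test suite can be bolted on long after shipping just by sending messages and checking the resulting state.

```python
from dataclasses import dataclass
from enum import Enum, auto


class State(Enum):
    CLOSED = auto()
    OPENING = auto()
    OPEN = auto()


@dataclass
class Msg:
    kind: str  # e.g. "open_requested", "fully_open"


class DoorController:
    """A module other code talks to only by sending it messages."""

    def __init__(self) -> None:
        self.state = State.CLOSED
        self.outbox: list[Msg] = []  # messages this module would send onward

    def handle(self, msg: Msg) -> None:
        # Behaviour is a (state, message) transition table, not a web of direct
        # calls into other modules, so tests can be retro-fitted later.
        if self.state is State.CLOSED and msg.kind == "open_requested":
            self.state = State.OPENING
            self.outbox.append(Msg("motor_on"))
        elif self.state is State.OPENING and msg.kind == "fully_open":
            self.state = State.OPEN
            self.outbox.append(Msg("motor_off"))


# A test written long after the module shipped: messages in, state and messages out.
def test_open_sequence():
    door = DoorController()
    door.handle(Msg("open_requested"))
    door.handle(Msg("fully_open"))
    assert door.state is State.OPEN
    assert [m.kind for m in door.outbox] == ["motor_on", "motor_off"]
```

Had `DoorController` instead reached directly into other modules with function calls, you'd have to stand all of them up (or the real hardware) before you could test anything.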
> Why does software that can adapt to new features increase the quality? If that is the case, we must argue that WordPress is extremely high-quality software. Or SAP.
This is a good point that you bring up. I think what we are ultimately getting at is that quality and value are distinct things. Software can have high value without being high quality. My (admittedly vague) standard is being able to provide the business with new value-producing functionality without causing a spike in bug reports.
> How does architecture influence quality? If that is the case, then there isn't any need for different architectural styles since there should be "one true style" that has the best quality software.
Architecture has to match how the software teams communicate with each other. Like actually communicate, not how the org chart is drawn (see Conway's Law). So my point is that if there are two separate teams, your code should be split into two "modules" that communicate across an interface. Just like real life. It would be silly to implement a full microservice architecture here. That's why Amazon's SOA design works for them: it matches how their teams are organized.
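As a sketch (the billing and notifications teams and all the names here are hypothetical): if two teams actually coordinate through a negotiated contract, the code can mirror that with a plain interface between two modules. Whether that boundary later becomes a separate service is a different decision.

```python
from abc import ABC, abstractmethod


class NotificationService(ABC):
    """The contract the (hypothetical) notifications team owns and publishes."""

    @abstractmethod
    def send_receipt(self, customer_id: str, amount_cents: int) -> None: ...


class EmailNotificationService(NotificationService):
    """Implemented and evolved by the notifications team behind the interface."""

    def send_receipt(self, customer_id: str, amount_cents: int) -> None:
        print(f"emailing a receipt for {amount_cents} cents to customer {customer_id}")


class BillingModule:
    """Owned by the billing team; it only knows the interface, not the implementation."""

    def __init__(self, notifier: NotificationService) -> None:
        self.notifier = notifier

    def charge(self, customer_id: str, amount_cents: int) -> None:
        # ...charge the card here...
        self.notifier.send_receipt(customer_id, amount_cents)
```

The module boundary sits exactly where the team boundary sits, which is the whole point.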
Good start, but too broad and open to interpretation.
- Who gets to define testability?
- I want to add a coffee maker to my crash test dummy; is the lack of room for the filter and water tank a sign of a bad design? Or not flexible enough for my feature?
- (cue meme) "You guys have organizational structure?"
- Who gets to claim the impedance mismatch? What are the consequences? Wait, where are the dependencies defined again outside of the software?
I do (just kidding!)...Testability is the ability to add testing at a later point. There is no hard definition of this, but if you can't test at least 75% of your public-facing functions, then I'd say you don't have testability. Remember, testability means you can have a tighter feedback loop, which means you don't have to test in production or in the physical world. This means you get where you want to go faster.
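Roughly what I mean by a public-facing function being testable, as a sketch (the thermostat URL and function names are invented): the first version can only be checked against a live device, the second can be checked in a unit test in milliseconds.

```python
import urllib.request


def adjust_heating_untestable() -> str:
    # Hidden dependency on a real device: the only way to verify this is to run it
    # against live hardware, i.e. testing in the physical world.
    reading = urllib.request.urlopen("http://thermostat.local/temp").read()
    return "heat_on" if float(reading) < 18.0 else "heat_off"


def adjust_heating(temperature_c: float) -> str:
    # Same decision, but the reading comes in as an argument, so a test can be
    # retro-fitted later with no hardware in the loop.
    return "heat_on" if temperature_c < 18.0 else "heat_off"


def test_adjust_heating():
    assert adjust_heating(15.0) == "heat_on"
    assert adjust_heating(21.0) == "heat_off"
```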
> - I want to add a coffee maker to my crash test dummy; is the lack of room for the filter and water tank a sign of a bad design? Or not flexible enough for my feature?
I know you are joking, but imagine for a second that your business did in fact invent a brand new way to test crashes and that coffee makers were the key to breaking into that market. If the dummy can't accommodate that then...yes! It is a bad design, even if it was previously a good design.
> - (cue meme) "You guys have organizational structure?"
Remember: there always is an organizational structure, with or without a formal hierarchy. You want to match your software to the real one.
> - Who gets to claim the impedance mismatch? What are the consequences? Wait, where are the dependencies defined again outside of the software?
There are no "the company blew up" consequences with this type of failure mode. Instead you get a lot of "knock on" effects: high turnover, developer frustration, long time to complete basic features and high bug re-introduction rates. This is because software is inherently a human endeavor: you need to match how it is written to how requirements and features are communicated.
My entry: one that is easy to refactor regardless of code size.
If this is a given, every other metric (features, bugs, performance) scales roughly linearly with development resources (except maybe documentation, but that is kind of an externality).