So, this I agree with too, in contrast with my other comment. There are particular industries (engineering, banking, aerospace, defense, to name a few) where it -has- to be correct. These are also the industries where at least the final product should probably be waterfall or spiral, and generally you need formalism all the way down.
The problem comes when people bring lessons learned from that to companies that don't need that level of assurance. Documentation that's never read again is a waste. Spending time to make tests replayable if they'll never be replayed again is a waste. And so forth.
As for bugs being literally unknown, I'm jealous--it'd be nice to claim I ever guided a project to that state. But keep in mind that's different from no bugs, period--it just means no bugs in what was tested or used.
If you're delivering a product with very specific use cases, flows and a tight scope with strong bumpers around anything unintended, it's pretty straightforward to guarantee all that. But if you're delivering a sandbox product on the internet to a bunch of randoms, much less so.
After all, ten checkboxes on a dialog are 2^10 = 1024 testcases. That's not even accounting for free-entry fields. You'll never cover all combinations of all uses of the whole app interface, ever, period, and equivalence classes and fuzz testing only get you so far. That's a simple (if slightly fanciful) example from UI testing, but the same basic thing applies to most other kinds of testing. At the end of the day you often have to pick your battles.
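To make the blowup concrete, here's a tiny Python sketch (the checkbox and field names are made up for illustration):

    from itertools import product

    # Ten independent checkboxes: each is on or off, so 2**10 states.
    checkboxes = [f"option_{i}" for i in range(10)]
    all_states = list(product([False, True], repeat=len(checkboxes)))
    print(len(all_states))  # 1024

    # Add one free-entry field with even a modest set of equivalence
    # classes and the state space multiplies again:
    text_classes = ["", "short", "x" * 10_000, "ünïcode", "'; DROP TABLE--"]
    print(len(all_states) * len(text_classes))  # 5120

And that's one dialog; every additional control multiplies the count again.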
So I really disagree that better means more. There is absolutely a point of diminishing returns. It just so happens that for what -you- were doing, that point is pretty high.
>> The problem comes when people bring lessons learned from that to companies that don't need that level of assurance.
It goes the other way as well: bringing agile methods to a situation in which getting a requirement changed can take three months, or in which your "customer" is a faceless defence ministry (or two) with the level of responsiveness you'd expect. Processes are tools; use the right tools for the job.
>> As for bugs being literally unknown, I'm jealous--it'd be nice to claim I ever guided a project to that state.
I certainly can't make that claim. I was only there for five years; I think I saw three staged releases (which were for the purpose of allowing integration testing with other components). Maybe two. :)
Yeah, totally agree. That's what I meant when I said certain industries should be waterfall. It's turned into a bad word, but waterfall or cyclic/spiral is tailor-made for situations where requirements are both known up front and necessarily rigid.
Quality is defined in terms of customer expectations. Certain customers demand more, or are willing to pay more, for higher quality; certain ones aren't.
However, I think most consumer or enterprise software companies greatly undervalue quality and its impact on their competitive value proposition, and especially its impact on development efficiency and speed.
Even if the customer isn't willing to pay for higher quality, your developers are still paying for it with their time and ability to innovate; and your competitors will happily eat your lunch while you frantically hire "better developers" and use up all your time putting out fires.
There are diminishing returns on the level of quality and the amount of QA done; you need to do what's appropriate for your market and your customer expectations. I just don't believe that most companies are anywhere near the appropriate level--not just in after-the-fact testing and best practices, but also in up-front built-in quality, design investment, and debt management.
I basically agree, but in many cases I know it's not so much a question of the amount of effort as of spending that effort on things that either sound "tried and true" or happen to be quantifiable, at the expense of the things that would actually make quality better.
It's another way of taking the easy way out, in that a potentially more effective context-driven strategy takes more effort to justify, and to maintain faith in, up the chain. But not adopting a strategy specific to your context can easily mean spending a lot of effort to get only marginally better results than unit testing, dogfooding, and nothing else.
Yes, that's the critical paradox: the things that are more measurable, or easier to measure, are the things that get done, despite having less impact on quality than the things that are difficult or impossible to measure.
Deming said it: "The most important things cannot be measured." How right he was.
I like the old joke about the guy searching around the lamppost at the corner. Another guy asks him what's up, and he says "I dropped my keys." When asked where on the corner he might have dropped them, he replies "Oh, I actually dropped them down the street a couple of blocks, but it's way too dark over there."
I find that joke to frequently be all too relevant to current quality practices.
>>The problem comes when people bring lessons learned from that to companies that don't need that level of assurance. Documentation that's never read again is a waste. Spending time to make tests replayable if they'll never be replayed again is a waste. And so forth.
To me, writing thorough documentation and solid tests from the beginning serves a very important purpose that is often overlooked: it establishes the right kind of culture. I think of it as laying the foundation of a building. If you skimp on materials, the whole thing will be unstable and prone to collapse. And going back later to fix it is often a lot more costly than doing it right in the first place.
You would never make the argument that a solid foundation is unnecessary if an earthquake never hits, right?
After doing quality for 15 years, following another 5 as an application developer, I'm prepared to say that it's an awfully expensive and ineffective way to try to set tone. Mostly, all it does is get the rest of the company to assume you've got quality all handled and start chucking stuff over the wall at you. It doesn't inspire them to a higher standard.
If QA groups spent half as much time actually testing or helping stabilize development processes as they do maintaining brittle docs, they'd probably be considerably more effective. At the end of the day, most verbose test docs should have been one set of canonical product docs plus a list of inputs/strategies. Instead, we repeat and fragment the same info across a bunch of disconnected test docs and then let them rot under the weight of maintenance.
The problem is everyone thinks they have a "one true way." We know that concept is bullshit in software dev and that the approach should be tailored to the problem at hand. We understand how important YAGNI and SPOT (Single Point of Truth) are, and why sometimes insisting on a perfect architecture is a bad thing in the real world.
Why people think it's any different for QA is beyond me. The same principles largely apply.
Late reply, but on reread I wanted to clarify: solid tests, yes. At the very least, a pass needs to mean a pass, though you can tolerate some level of spurious failure in some kinds of tests. But if you can't trust a pass, for the scope the test actually covers, it's better not to have the test at all than to give false confidence.
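As a sketch of what tolerating spurious failure without devaluing a pass can look like, here's a hypothetical retry wrapper that retries only a designated infrastructure error, never an assertion failure (the names are mine, not from any particular framework):

    import time

    class InfraError(Exception):
        """Known-spurious failure: network blip, slow VM, and so on."""

    def run_with_retries(test_fn, attempts=3, delay=1.0):
        # Retry only on InfraError; a real assertion failure fails fast,
        # so a green result still means the assertions actually passed.
        for attempt in range(1, attempts + 1):
            try:
                return test_fn()
            except InfraError:
                if attempt == attempts:
                    raise  # persistent infra trouble is worth surfacing
                time.sleep(delay)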
It was the replayability part on old packages I was pushing back on. It's certainly a nice-to-have, but you're more likely to refer back to old test results than to actually run the tests again, so having to architect mechanisms to cache and restore old packages, etc., can be overkill. Just cache the results.
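Something like this is all I mean by caching the results: persist enough context with each run that later questions can usually be answered from the archive instead of by replaying against an old package (the field names here are just illustrative):

    import json, time
    from pathlib import Path

    def archive_run(results_dir, build_id, test_results):
        # Store one run's outcomes plus enough metadata to interpret
        # them later without re-running against the old package.
        record = {
            "build_id": build_id,
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "results": test_results,  # e.g. {"test_login": "pass", ...}
        }
        path = Path(results_dir) / f"{build_id}.json"
        path.write_text(json.dumps(record, indent=2))
        return path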
As for docs, it's a matter of level of detail. Most QA groups I've been part of, or had visibility into, overshoot and generate a lot of docs that aren't particularly useful for any real process. YAGNI applies to docs too.
Further, the docs themselves aren't modular--they usually end up mixing product-documentation concerns with testing concerns, and in doing so make themselves extremely brittle to product changes, whether or not those changes should really matter to the tests. UI step/verify docs are the absolute worst about this, and they're the most prevalent kind of test documentation.
I've come to the conclusion that most QA groups would be better off just writing one set of canonical docs for the product, if they don't already exist, and then pointing to that with extra info about what inputs or strategy to use for a given test. It's the same SPOT argument for modularizing automation (or any other kind of code), and has the advantage that others can reuse the product docs.
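For illustration, a test record under that scheme might carry little more than a pointer into the canonical docs plus the test-specific inputs; the doc path and fields here are hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class TestSpec:
        # References the canonical product doc instead of restating it,
        # so product-behavior changes get edited in one place (SPOT).
        name: str
        canonical_doc: str  # e.g. "docs/checkout.md#payment-flow"
        inputs: dict = field(default_factory=dict)
        strategy: str = ""  # what this test adds beyond the doc

    spec = TestSpec(
        name="checkout_declined_card",
        canonical_doc="docs/checkout.md#payment-flow",
        inputs={"card": "<declined test card>", "expect": "decline message"},
        strategy="boundary: declined card on the otherwise happy path",
    )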
Re: the foundation/earthquake argument, there's something to be said for knowing what you can't live without in a disaster, what you can live without with a little pain, and what you can generate JIT if you need it. The first set is usually pretty small. The middle set is where your educated bets lie. The last set you should probably not build up front, at least until you've found yourselves having to scramble a couple of times.