QA = Time and Money. How much should you invest? (rainforestqa.com)
89 points by ThatMightBePaul on May 18, 2015 | 55 comments


QA is interesting, sure. I'm more interested in the Q part, and you don't get that just by having a role or department checking work before it hits the customer.

This article does a great job of describing the minimum level of quality allowable to scrape past certain stages of growth.

The question you should be asking is: what level of quality, built in from the beginning, intentionally and with full support, will create a product so valuable to the customer as to generate a wild feedback loop of success—not just scraping by at a minimum acceptable level. What level of quality will generate returns, not just reduce your debt to an acceptable level? And more importantly, how do you achieve it?

Quality is your value to the customer. If you think of it systemically, you won't need a QA. As Dr. Deming said, "Inspection does not improve the quality, nor guarantee quality. Inspection is too late. The quality, good or bad, is already in the product." And Harold F. Dodge: "You can not inspect quality into a product." Deming advocated instead for a holistic understanding of the factors driving quality, including management and leadership, the processes used, and continuous improvement of systems.

This is the type of discussion I always find missing from conversations about software QA. On the manufacturing timeline, it's like we're in the 1920s. It's all very realistic, but we could be so much further along.


What you describe is QA. Testing is technically QC: Quality Control. It's only one facet of QA, though the terms are usually conflated. Most companies only have a QC department.


You're proposing a definition that is out of line with how 99.99% of people use it.

Even if your definition is more logical, this isn't how language works.


I'm not proposing anything. This is a well-known concept among quality professionals. I'm going to guess you have a different specialty.

Google "quality assurance control" and you'll find a lot more, but here's one source:

http://whatis.techtarget.com/definition/quality-control-QC


You mentioned 'level of quality'. That can be pretty difficult to quantify. In the absence of well defined metrics, a lot of thinking about QA is qualitative.

"I feel like we have a high level of quality here".

That makes conversations about QA pretty challenging. Metrics is the one area I'd LOVE to see move forward.


Heh, maybe that's because the nature of quality is qualitative rather than quantitative.

There are things that can be measured that might be correlated to quality ("bugs", sales, support, etc.) but ultimately, classical quality seems to me to be a marriage of what was expected or desired with what was actually there. As a developer, my take is that quality usually stems from "developing" those expectations as much as from making the relatively concrete thing to compare them to. When anyone can quantify the first half of this equation, please let me know at once.


That's a really good question, and it requires some more context.

First, you have to understand your company from a systemic viewpoint. Break it down, and understand the inputs and outputs of every part of your company. You've got your individuals, your various teams, the people they work with, the systems they build, and the product itself. You can measure the input and output of each of these systems in meaningful ways: number of tickets opened, number of defects reported, raw performance metrics, uptime, availability, quantitative user feedback like NPS, and engagement metrics like product usage or feature usage.

But there are caveats: you can't turn these metrics into goals. You need to lead toward the improvement of the systems responsible for those metrics, with the hypothesis of improving them and ultimately the end goal of providing a high quality product of value to your customers. The metrics are valuable information about how the system operates, not the end goal.

There's another problem: you're absolutely right about quality being fundamentally qualitative. It is, after all, the root of the word "qualitative." That's significant. Quality is the humanity of the engineering game: while you can measure a lot of things toward it, ultimately, as W. Edwards Deming said, "The most important things cannot be measured." It sucks, but it is closer to reality than defining any metric or set of metrics. Instead, it requires leadership and a comfort with doubt and complexity. Here's my answer to a Quora question expressing discomfort with this paradox: http://qr.ae/f5iUg

So it's a bit of a Bermuda triangle. First, there are significant metrics you can measure, and you need to decide which ones mean quality to you and your customers—I promise they exist. Second, you can't aim directly for your metrics, since any large enough organization is highly complex and over-optimizing for a single metric or set of metrics can cause significant unintended side-effects. Third, the most important things critical to quality actually can't be measured; or measuring them would cause more harm than good. This makes for a challenge, and it's why this is such a difficult problem to solve in an organization.

What I'd LOVE to see moving forward is instead a profound comfort with complexity, and an understanding of why numeric metrics aren't always the most vital ingredient to success.


Can someone please explain why one prefers unit testing to integration / acceptance testing when you're a seed stage company? I mean, your primary objective as an early stage company should be getting an operational product out to the market, so it makes sense that you test that your users can operate your product - aka integration / acceptance test with capybara or phantomjs. On the other hand, if you're just unit testing you're bound to miss the bigger picture (usually this shows up in the form of misspellings in your html views, some CSRF bullshit, external apis misbehaving, etc.), and will wind up with tests that all pass but a product that still doesn't work.


Unit testing is easier to do with the developer resources on hand, since they probably already know how to write unit tests; it's probably necessary for a quick pace in dev anyway; and it gives you a large portion of the risk mitigation that testing in general will give you. You also check in unit tests with the code if you do it right, whereas integration or acceptance tests require additional passes once all the code covered is checked in, so they're more painful to the schedule.

Integration tests also require more development to allow integration while still being hermetic (isolating out any external factors, since you might have to architect to share injected dependencies between modules) -or- more maintenance if you don't make them hermetic and they have to track external dependencies. They require more time to triage failures as well if they're not hermetic, as external failure is a possibility; you can't just trust the test results. And final acceptance tests should never be hermetic--they should accept based on realistic environments--so they're almost always a little high-maintenance.
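
To make the hermetic part concrete, here's a rough sketch (Python, all names hypothetical): two modules share one injected fake standing in for the external service, so the integration test runs with no network involved and no external failures to triage.

    # Sketch only: a fake external gateway injected into two modules so their
    # integration test is hermetic (no network, deterministic results).
    class FakePaymentGateway:
        def __init__(self):
            self.charges = []

        def charge(self, customer_id, amount_cents):
            # Records the call instead of hitting the real external API.
            self.charges.append((customer_id, amount_cents))
            return {"status": "ok", "charge_id": len(self.charges)}

    class Checkout:
        def __init__(self, gateway):
            self.gateway = gateway            # dependency injected, not constructed here

        def place_order(self, customer_id, amount_cents):
            return self.gateway.charge(customer_id, amount_cents)

    class Invoicing:
        def __init__(self, gateway):
            self.gateway = gateway            # the same injected instance is shared

        def charges_recorded(self):
            return len(self.gateway.charges)

    def test_checkout_and_invoicing_together():
        gateway = FakePaymentGateway()        # one fake shared across both modules
        checkout = Checkout(gateway)
        invoicing = Invoicing(gateway)

        result = checkout.place_order("cust-1", 4200)

        assert result["status"] == "ok"
        assert invoicing.charges_recorded() == 1

The cost is exactly that extra architecture for sharing the injected dependency; skip it and the same test has to talk to the real service, with all the triage noise that brings.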

So while the value is there, the cost is much higher and it'd be more of a distraction from primary development. Cost and benefit have to be considered.

So in this model the integration/acceptance part is handled by dogfooding, which is the lowest cost way to get a decent chunk of it, even though it only walks happy-path for the most part.

But that's why I said I'd add exploratory to B2C seed: to get back some of the integration testing on non-happy paths and obscure paths.

Edit: look up the test pyramid and consider that the shrinking number of tests as you go up also implies you take on somewhat less technical debt by skipping the upper layers. The shrinkage is a function of the cost vs. benefit--you're just taking it to its logical conclusion of omission.


This. Unit testing is better for a lot of reasons. Dev familiarity is definitely a big reason. I find they're a bit more robust, too.

Integration tests are more brittle, which can be good ("Hey, there's a bug here..."). For an early product, your app is probably simple enough that you get a good enough feel while dogfooding. 100% correct that this also includes some informal exploratory work :D

RE: Complexity & the relative need for testing, I really like the perspective 'Out of the Tar Pit' takes. http://shaffner.us/cs/papers/tarpit.pdf


In my line of accessibility work, I find QA fills the role of testing to educate. If the engineers don't have the experience, looping issues into the backlog gives them the experience to train up the skill set. They might have had verified experience before hiring, but team collaboration helps mature new people or smooth out the rough skill sets of a dev who might be awesome in one particular area but meh in others.

I suspect this could be the case for any subject matter expert whose skill set is viewed as fringe, or where the team isn't expected to have that exact knowledge before hire.


Yes I came here to say this. I think it's primarily a terminology problem, but integration testing should be in place well before unit testing.

I would go so far as to say that genuine unit testing is never useful in web application development, unless by coincidence you write a test that happens to fulfil the requirements of a unit test (and before everyone says how many unit tests they've written: just because your framework of choice has a class called UnitTest that you use when writing tests doesn't mean you're writing unit tests, and yes, some frameworks make it easier to get closer to the idealised notion of a true unit test than others).


There are a few reasons to do unit tests first, over and above my direct reply to GP.

The first is that integration tests can be (not always, but can be) too inclusive and thus fail too much, especially early on. A good test process requires tests to stay green most of the time, otherwise you learn to ignore them as "known" and distrust and eventually dismiss the results. So you turn off or xfail failing tests while you triage. But if you turn off or dismiss an integration test as a known failure, you lose a lot of coverage.

The second is isolation. Even if you don't have full unit coverage, having -something- beneath the integration tests lets you tell a lot more from the combo of failures. Integration covers ABC and fails, have unit test for B and C, must be A.

The third is that they're for different purposes. Unit tests are there to tell you that what you thought you were guaranteeing with your code has actually been guaranteed. That's a really important step in knowing that all the other stuff you -didn't- cover with integration testing (remembering that as you add moving parts, you have the combinatorial explosion of all the ways they move--you won't cover it all) is probably correct. And, of course, it's the only thing that will ever let you refactor while still knowing your interface is correct.

Finally, there's simple terminology differences as to what's a unit and we might be on the same side. There's debate in the community, but I'm on the side that you always unit test to a public interface, period. Don't test within something not exported or exposed. That may mean you only test a module instead of component classes if they're all private implementation classes. It's still a unit test for that module.

Also, a political reason: once you start down that route it's super-tempting (especially for your boss or your boss's boss) to say "we'll test everything full stack, that gives us all of the coverage." In reality, it only gives you some of it, and you probably have no idea which parts, even with code coverage tools (remember that paths are ultimately what you're testing, not lines or branches). And it's incredibly indirect, since you're trying to find externally driven cases to make the internals do different things. It's usually easier just to manipulate the internals directly via the nearest interface.

One more point: YAGNI is fine and well, but we all know modules or classes usually have behavior that isn't used yet because it's part of an obviously complete API (like, D from your CRUD API when nobody's had to remove anything yet). Unit test that stuff, since there is no integration test that'll cover it. Otherwise it's landmines waiting to be found. The unit tests -are- the client code at that point.
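
A tiny sketch of both points at once (hypothetical names, Python): delete() has no production caller yet, so its unit test is effectively its only client, and it's exercised strictly through the public interface.

    # Sketch: unit-testing the unused "D" of a CRUD-ish repository via its
    # public interface only. Names are made up for illustration.
    class UserRepository:
        def __init__(self):
            self._users = {}                  # private detail; tests don't poke at it

        def create(self, user_id, name):
            self._users[user_id] = name

        def get(self, user_id):
            return self._users.get(user_id)

        def delete(self, user_id):
            # No production caller yet, but it's part of the obvious API surface.
            self._users.pop(user_id, None)

    def test_delete_removes_user():
        repo = UserRepository()
        repo.create(42, "Ada")
        repo.delete(42)                       # the test is the only client of delete()
        assert repo.get(42) is None           # verified through the public interface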

So one or two high level acceptance/integration sanity tests, sure. But I'd never say that you should have "integration testing in place" first. That implies a level of completeness or formality that I think would probably be counterproductive in almost all cases.


> The first is that integration tests can be (not always, but can be) too inclusive and thus fail too much, especially early on. A good test process requires tests to stay green most of the time, otherwise you learn to ignore them as "known" and distrust and eventually dismiss the results. So you turn off or xfail failing tests while you triage. But if you turn off or dismiss an integration test as a known failure, you lose a lot of coverage.

I have never done that, and no-one on my team has ever done that. All tests need to pass when we deploy, every time. The fact is that the causes of regressions are usually so trivial that having unit tests doesn't add any value. All you need to know is that something is broken, and see some error message, and you can track the problem down in a matter of minutes. Starting with high level tests pays dividends on day one. Unit tests have much longer payback times, and a much higher chance that the system will still fail even if your tests pass.

> The second is isolation. Even if you don't have full unit coverage, having -something- beneath the integration tests lets you tell a lot more from the combo of failures. Integration covers ABC and fails, have unit test for B and C, must be A.

I've never failed to identify the cause of a failure in a matter of minutes when a test fails. The concept of isolating the system under test is taken to extremes in XUnit frameworks and the amount of work you have to do to properly isolate the system under test is cost prohibitive. Given the fact that an integration test is by definition a test of any system not sufficiently isolated to XUnit standards, really what you need to have is an integration test that is "as isolated as practical". On greenfield developments, it's typically just a test that makes sure the thing runs at all. Once you start fixing bugs, you write tests that are more focused, but worrying about isolating systems under test (with mock objects and that kind of malarky) kills productivity.
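
For what that first greenfield test looks like, something on the order of this is plenty (Flask here is purely an example stack; swap in whatever you actually run):

    # A "does it run at all" test: boot the real app, hit one route, no mocks.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def home():
        return "ok"

    def test_app_boots_and_serves_homepage():
        client = app.test_client()            # real app object, no isolation at all
        response = client.get("/")
        assert response.status_code == 200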

> Also, a political reason: once you start down that route it's super-tempting (especially for your boss or your boss's boss) to say "we'll test everything full stack, that gives us all of the coverage." In reality, it only gives you some of it, and you probably have no idea which parts, even with code coverage tools (remember that paths are ultimately what you're testing, not lines or branches). And it's incredibly indirect, since you're trying to find externally driven cases to make the internals do different things. It's usually easier just to manipulate the internals directly via the nearest interface.

Yep, I say "we'll test everything full stack until the QC team finds bugs, then when we fix bugs we'll add specific tests" (I'm the boss). The philosophy we operate under is that it's the job of QC to find bugs, it's the job of automated tests to ensure we only fix each bug once and to reduce the cost of QC (more here: https://github.com/iaindooley/Murphy).

We use a mix of backend integration tests (using Murphy) and frontend integration tests using this: http://monkeytestjs.io/ but the main tenet is: don't waste time testing components on a system you're developing, only test components once they have been shown to fail. Even then, isolating the system under test might happen by coincidence, but is not a requirement of the tests.
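
As a made-up but representative example of "only test components once they have been shown to fail": QC reports that a 100% discount breaks the order total, the fix ships, and exactly one focused test pins that bug so we never fix it twice.

    # Hypothetical regression test written only because QC found the failure.
    def order_total(subtotal_cents, discount_percent):
        # Fix for the reported bug: clamp the discount so a 100% discount
        # yields zero instead of misbehaving.
        discount_percent = min(max(discount_percent, 0), 100)
        return subtotal_cents * (100 - discount_percent) // 100

    def test_full_discount_returns_zero():
        assert order_total(5000, 100) == 0    # the exact scenario from the bug report

    def test_partial_discount_still_works():
        assert order_total(5000, 20) == 4000  # guard against over-correcting the fix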

If someone gets it wrong, and creates a test that passes even though the bug is still present, it's the job of QC to catch that -- the only requirement is that this happens infrequently enough that it doesn't make QC cost prohibitive.

> So one or two high level acceptance/integration sanity tests, sure. But I'd never say that you should have "integration testing in place" first. That implies a level of completeness or formality that I think would probably be counterproductive in almost all cases.

I've found exactly the opposite in all cases. In fact, I've never seen productive Unit Tests (real ones) -- only integration tests that people called unit tests, and in all cases I've seen, people started with high ideals and then all testing was abandoned when the shit hit the fan, mostly because they were too ambitious and low level with test coverage to begin with.


This is probably the most honest article I've seen, ever, regarding how much QA/testing companies at various stages need.

I'm impressed, because there's a real tendency within QA culture to say "the ideal is all the things all the time, we're the gatekeepers," which has forced quality into a bad all-or-nothing situation. We all know intuitively that it's not all needed a lot of the time and the success stories are clear, so the trust in any of it is undermined. So then you get a dev culture saying "QA is unnecessary, we can handle all of it," which is generally true at the beginning but can send your company off a cliff if you don't manage the transition.

The sad part is that even once quality processes get around that, the all-or-nothing attitude leads to a pyrrhic victory. Quality teams still end up only doing what can be afforded (because math) but everyone thinks they should be doing more. That leads to the pervasive opinion of ineffective quality teams. And, unfortunately, the scramble of trying to do everything at once (and therefore mastering none of it) often makes that a self-fulfilling prophecy.

And I love that it stresses that QA is about hedging bets. Bugs get out, period. The gatekeeper attitude is what leads to the do all the things attitude in the first place. "At any cost" is generally a bad way to strategize.

As for this chart, I'd have added exploratory/informal to seed B2C--exploratory is biggest bang for buck for finding new bugs, and incubating products are -all- new bugs--but I suspect they're lumping what I'd have recommended under dogfooding.

So yeah, very nice. Of course, the best thing to do in any situation is consider the context of your company, product, and market. Think about how maintainable it really needs to be and what would cause the most damage: losing customer money, leaking their data, embarrassing you in the market, eroding their trust. Are there only a few customers to lose or are you mass market and can afford a round of sufficiently obscure failure?

Those are the things to prioritize when you start picking your battles. But this is such a great set of guidelines for where to start that conversation.

(Edits for typos only)


Thanks for the good word (I'm the author). I 100% agree about setting the right culture / the difficulty of managing the transition.

It takes a ton of work to establish the right infrastructure + culture. Specifically for the reason you mentioned: a lot of folks see QA in black and white.

It's doubly hard when you're growing fast.


As a QA Engineer, I had to create an account to reply to this.

Where are these people saying "the ideal is all the things all the time, we're the gatekeepers,"?

If anyone knows the limitations of QA, it's the people of QA themselves. Trust me, we know that the common and nonsensical expectations of "full coverage" and "test everything" are a scourge.

Maybe I'm just lucky to not have worked with the types you describe, but in my experience a QA will be pushing a message of priority-based tradeoffs rather than an unrealistic all-or-nothing approach to quality.

Regarding the article itself, the fact that it advocates being thoughtful about what you test makes me think that maybe it was written by someone who actually understands QA.


Wow, crazy timing. We interviewed Fred at Rainforest on the Talking Code podcast – and that interview launches tonight at midnight (PDT).

I'm going back through and picking out my favorite quotes right this moment in preparation for the episode launch. Some of my favorites:

"How did we get to the place where the generally accepted best practice is not very good?" (referring to automated testing)

"The visual representation is only somewhat semi linked to the actual underlying code that represents it."

"A really good QA person can actually start to QA the requirements themselves."

"Good QA people are the customer before the customer."

"Shipping a bug at Facebook scale is pretty catastrophic."

"For tiny company X, a YC company, shipping a bug – while painful – is not going to be the end of the world."

"For 95% of people, QA is not a competitive advantage, so it doesn’t make sense to have it in house."

This has honestly been one of my favorite interviews ever, and I'm just really lucky it happened on my own podcast.

http://talkingcode.com/podcast/episode-6-fred-stevens-smith/


Look forward to hearing this. Will Venmo $5 in exchange for the outtakes of Fred cursing ;)


"QA the requirements".

Safety-critical software does not commonly have bugs. The problem it faces is incorrect requirements.


Zero QA, or at least zero QA by dedicated QA people.

Developers end up using QA as a crutch. They end up not testing their code completely, and sometimes they don't test it at all. Too many times I've seen code checked in, after going through code review, that just doesn't work. I'm not talking edge cases; it does not function at all.

Get rid of the crutch, and force developers to be responsible for their code. Developers should do all the QA on their code.

If you must have QA, don't waste them on checking whether features are implemented correctly, repeating a job developers should do. Instead, have them actually check quality: have them look at the whole app or site holistically. Have them try to find the bugs that developers won't be able to find.


This article, as is typical around here, addresses a very small part of the industry. Here's a view from a different part; the part where a project has a lifecycle of a decade or so.

I used to work for a software consultancy that had been acquired by a US defence company (second tier). Over the previous 15 years, the software consultancy had elevated their QA to the point that bugs in the delivered product were literally unknown. Nobody ever saw any in the delivered product, over a decade. Every requirement could be traced to the design that would meet it, to the code that implemented it, to the test that verified it. The test documentation was beautiful, and anyone could (and did) execute a complete set of tests, writing their own personal signature on every confirmation of test success. The signed test specs were scanned before being sealed in an envelope, in case the customer ever chose to inspect them. QA had (and exercised) veto power over requirements, design, and tests.

We were working with two other major, globally known defence names, and the customer had learned through experience that when something went wrong, it should come to us last. Sometimes the customer came to us to ask us to work out which of the other companies involved had screwed up (because sometimes the other companies were themselves unable to work it out). Once, they sued one of the other names for incompetence and gave us their work because we knew how to fix it better than they did, because our interface tests highlighted everything that their software didn't do in accordance with the spec.

We could replay any test, at any version, at any time, and compare the results now with the signed results then from the sealed envelope. The time savings we had because we got things right the first time, by doing it properly, were immense.

Where this is going is that this was priceless. We had the customer's absolute trust, based on experience. Nobody ever found a bug in our delivered software (I'm not saying there weren't any; I'm saying that our QA was thorough enough that the customer was never going to do anything with it that we hadn't tested already). Code coverage, all warnings on, static analysis, valgrind, cross-platform compilation and testing, we did everything. Customers outright told us that we were worth paying extra because of the peace of mind and the reduced risk, so they paid us more and we spent less doing it.

QA is time and money. The better your QA, the more time you will have and the more money you will make.


So, this I agree with too, in contrast with my other comment. There are particular industries (engineering, banking, aerospace, defense, for some examples) where it -has- to be correct. These are also the industries where at least the final product should probably be waterfall or spiral, and generally you need formalism all the way down.

The problem comes when people bring lessons learned from that to companies that don't need that level of assurance. Documentation that's never read again is a waste. Spending time to make tests replayable if they'll never be replayed again is a waste. And so forth.

As for bugs being literally unknown, I'm jealous--it'd be nice to claim I ever guided a project to that state. But keep in mind that's different from no bugs, period; it just means no bugs in what was tested or used.

If you're delivering a product with very specific use cases, flows and a tight scope with strong bumpers around anything unintended, it's pretty straightforward to guarantee all that. But if you're delivering a sandbox product on the internet to a bunch of randoms, much less so.

After all, ten checkboxes on a dialog are 1024 testcases. That's not even accounting for free-entry fields. You'll never cover all combinations of all uses of the whole app interface, ever, period, and equivalence classes and fuzz testing only get you so far. That's a simple (if slightly fantastic) example for UI testing, but the same basic thing applies to most other kinds of testing. At the end of the day you often have to pick your battles.
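
The arithmetic, if anyone wants to see how fast it blows up (the dropdown is a hypothetical addition):

    # Every independent boolean checkbox doubles the state space.
    from itertools import product

    checkbox_states = list(product([True, False], repeat=10))
    print(len(checkbox_states))       # 1024, i.e. 2**10

    # Add one hypothetical three-option dropdown and it triples again:
    print(len(checkbox_states) * 3)   # 3072 combinations for a single dialog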

So I really disagree with "the better your QA, the more money you'll make." There is absolutely a diminishing return. It just so happens that for what -you- were doing, that point of diminishing return is pretty high.


> The problem comes when people bring lessons learned from that to companies that don't need that level of assurance.

It goes the other way as well. Bringing agile methods to a situation in which having a requirement changed can take three months. Situations in which your "customer" is a faceless defence ministry (or two) with the level of responsiveness you'd expect. Processes are tools; use the right tools for the job.

> As for bugs being literally unknown, I'm jealous--it'd be nice to claim I ever guided a project to that state.

I certainly can't make that claim. I was only there for five years; I think I saw three staged releases (which were for the purpose of allowing integration testing with other components). Maybe two. :)


Yeah, totally agree. That's what I meant when I said certain industries should be waterfall. It's turned into a bad word, but waterfall or cyclic/spiral is tailor-made for situations where requirements are both known up front and necessarily rigid.


Quality is defined in terms of customer expectations. Certain customers demand more, or are willing to pay more, for higher quality; certain ones aren't.

However, I think most consumer or enterprise software companies greatly undervalue quality and its impact on their competitive value proposition, and especially its impact on development efficiency and speed.

Even if the customer isn't willing to pay for higher quality, your developers are still paying for it with their time and ability to innovate; and your competitors will happily eat your lunch while you frantically hire "better developers" and use up all your time putting out fires.

There is a diminishing return on level of quality and level of QA done; you need to do what's appropriate for your market and your customer expectations. I just don't believe that most companies are anywhere near the appropriate level—not just of after-the-fact testing and best practices, but also in up-front built-in quality, design investment, and debt management.


I basically agree, but in many cases I know it's not so much a question of amount of effort as it is spending effort doing things that either sound "tried and true" or happen to be quantifiable, at the expense of the things that would actually make quality better.

It's another way of taking the easy way out, in that a potentially more effective context-driven strategy takes more effort to justify and to maintain faith in up the chain. But not adopting a strategy specific to your context can easily result in spending a lot of effort to get only marginally better results than unit testing, dogfooding, and nothing else.


Yes, that's the critical paradox: the things that are more measurable or easier to measure are the things that are done, despite them having less impact on quality than things that are difficult to measure or unmeasurable.

Deming said it: "The most important things cannot be measured." How right he was.


I like the old joke about the guy searching around the lamppost at the corner. Another guy asks him what's up, and he says "I dropped my keys." When asked where on the corner he might have dropped them, he replies "Oh, I actually dropped them down the street a couple of blocks, but it's way too dark over there."

I find that joke to frequently be all too relevant to current quality practices.


>>The problem comes when people bring lessons learned from that to companies that don't need that level of assurance. Documentation that's never read again is a waste. Spending time to make tests replayable if they'll never be replayed again is a waste. And so forth.

To me, writing thorough documentation and solid tests from the beginning serves a very important purpose that is often overlooked: it establishes the right type of culture. I think of it as laying the foundation of a building. If you skimp out on materials, the whole thing will be unstable and prone to collapse. And going back later to fix it is often times a lot more costly than doing it right in the first place.

You would never make the argument that a solid foundation is unnecessary if an earthquake never hits, right?


After doing quality for 15 years, following another 5 as an application developer, I'm prepared to say that it's an awfully expensive and ineffective way to try to set tone. All it mostly does is get the rest of the company to assume you've got quality all handled and start chucking stuff over the wall at you. It doesn't inspire them to a higher standard.

If QA groups spent half as much time actually testing or helping stabilize development processes as they did maintaining brittle docs they'd probably be considerably more effective. At the end of the day most verbose test docs should have been one set of canonical product docs and a list of inputs/strategies. Instead we repeat and fragment the same info across a bunch of disconnected test docs and then let them rot under the weight of maintenance.

The problem is everyone thinks they have a "one true way." We know that concept is bullshit in software dev and that the approach should be tailored to the problem at hand. We understand how important YAGNI and SPOT are, and why sometimes insisting on a perfect architecture is a bad thing in the real world.

Why people think it's any different for QA is beyond me. The same principles largely apply.


Late reply, but on reread I wanted to clarify: solid tests, yes. At the very least, pass needs to mean pass, though you can tolerate some level of spurious failure in some kinds of tests. But if you can't trust a pass, for the scope the test actually tests, better to not have the test and not give false confidence.

It was the replayability part on old packages I was pushing back on. It's certainly a nice to have, but you're more likely to refer back to old test results than actually run the tests again, so having to architect mechanisms to cache and restore old packages, etc, can be overkill. Just cache the results.

As for docs, it's a matter of level of detail. Most QA groups I've been part of or had visibility in overshoot, and generate a lot of docs that aren't particularly useful for any real process. YAGNI applies to docs too.

Further, the docs themselves aren't modular--they usually end up mixing concerns between product documentation and the testing, and in doing so make themselves extremely brittle to product changes whether or not they should really matter to the tests. UI step/verify docs are the absolute worst about this and the most prevalent kind of test documentation.

I've come to the conclusion that most QA groups would be better off just writing one set of canonical docs for the product, if they don't already exist, and then pointing to that with extra info about what inputs or strategy to use for a given test. It's the same SPOT argument for modularizing automation (or any other kind of code), and has the advantage that others can reuse the product docs.

Re: foundation/earthquake argument, there's something to be said for knowing what you can't live without in the case of a disaster, what you can live without with a little bit of pain, and what you can generate JIT if you need it. The former set is usually pretty small. The middle set is where your educated bets lie. The latter set, you should probably never do up front until you've found yourselves having to scramble a couple of times.


> QA is time and money. The better your QA, the more time you will have and the more money you will make.

Just to caveat again that this is for your part of the industry. That level of detail gets you few benefits if you were Facebook, for example, or a very small startup still doing product validation.

Also to say that "the better your QA the more money you'll make" is a very strong position to take, even in your industry.


> Also to say that "the better your QA the more money you'll make" is a very strong position to take, even in your industry.

It is a bit strong, isn't it? I felt that to caveat it might detract from the effectiveness of the point. There does come a point where the software is effectively flawless from the customer's point of view, and doing anything to improve it (including more/better QA) from there doesn't make any more money. But if you reach that point, you know :)


More than that, better QA or better software doesn't actually make you money. As in, you can't make your software twice as good and make twice as much money. It's a lot more nuanced and much more indirect than that.

So certainly you can say "investing X QA hours per month should help reduce customer incidents by 50%, which will reduce churn rate by 30%, increasing MRR by 100% over 1 year, given current churn rates". Or whatever other metrics affect your actual revenue.
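
A toy model of that chain of reasoning, with every number invented purely for illustration; the only real point is that churn compounds monthly, so halving it moves MRR a lot over a year:

    # All inputs are made-up illustration values, not claims about any real product.
    def mrr_after_months(starting_mrr, monthly_churn, monthly_new_mrr, months=12):
        mrr = starting_mrr
        for _ in range(months):
            mrr = mrr * (1 - monthly_churn) + monthly_new_mrr
        return mrr

    baseline = mrr_after_months(100_000, monthly_churn=0.05, monthly_new_mrr=8_000)
    with_qa  = mrr_after_months(100_000, monthly_churn=0.025, monthly_new_mrr=8_000)

    print(round(baseline))   # roughly 128k after a year at 5% monthly churn
    print(round(with_qa))    # roughly 158k after a year at 2.5% monthly churn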

Overall, you need to draw a much stronger connection between QA and money, esp if you want to claim a continuous function between them.


Woah! Thanks for sharing. Love hearing about the other side. All of the projects I've been involved in operate on either a continuous release, or a one to two week release cycle.

So, I'm definitely ignorant to the standards at spots with months-long release cycles.


I've done two government contracts, just one for defense.

I won't ever take a defense contract again because the primary contractor selected the wrong part, then hired me as a subcontractor based on my expertise with a part they should not have selected.

It actually got to the point that I spent six weeks furiously trying to find a workaround while they burned a bunch of money making prototype assemblies that did not work, then delivering them to "The Client" with the expectation that they would integrate my firmware later.


I am not sure you want to delay penetration testing until you are post IPO.


Agreed.

Caveat: do you need to do pen testing every release? I'm guessing, not. YMMV.

If you're a defense contractor (as some other commenters mentioned), your priorities are probably quite different.


I head infosec for a “Series A - C” B2B company and a fairly standard request from a potential customer is to see not only our own penetration test reports but third party penetration test results, as well. As a result, we run automated pen tests on weekends and before major releases. We also work with an application security firm every 6-12 months. For what it’s worth, we don’t do anything nearly as intense as defense contracting or handling financial info.

That said, I liked the article - thanks for sharing.


Np! Thanks for sharing.

Glad to hear security is taking the front seat some places. Anecdotes like this help me expand my world view. <3


If I were handling private customer data or money, I'd absolutely pen test early. Comes back to that "what do you have to lose" consideration re: context. If the worst that happens is my own service goes down, I might delay it.


thoughtful article


As a consultant, I used to ask my clients to QA my deliveries. Few of them were willing to, and to the extent they did, they could not or would not provide meaningful bug reports.

This led to my adoption of test-driven development - mostly but not all automated testing - with the eventual result that I advertised that my final deliverables would be ready to ship to end-users, without any requirement for QA by my clients.


TDD is great. It should be a given on any project that isn't just a spike.

I'm curious what kind of products you were building? Simple websites, or full fledged applications?

100% unit test coverage can still miss a host of gnarly bugs. A lot of the ugliness of QA comes from how A interacts with B. Or how A + B + C + D work together. Or at least, that's my 2 cents :D


You also have to qualify 100% coverage: SUT lines, branches, or paths? Those are all very different levels of rigor.
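
A small made-up example of why those differ: two tests can hit 100% of the lines and branches below while exercising only two of the four paths.

    # Two independent ifs: full line and branch coverage needs only two tests,
    # but there are four distinct paths through the function.
    def describe(n):
        parts = []
        if n < 0:
            parts.append("negative")
        if n % 2 == 0:
            parts.append("even")
        return parts

    def test_negative_odd():
        assert describe(-3) == ["negative"]   # first branch taken, second skipped

    def test_positive_even():
        assert describe(4) == ["even"]        # first branch skipped, second taken

    # Paths never exercised: negative-and-even, positive-and-odd.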

The two most useful code coverage stats I've found are 0 and "less than yesterday." The former tells me something pretty important about the culture of the team who owns that code; and having rules against the latter is about the only way to ensure a test-on-checkin policy.


I love this metric, and am totally stealing it.


All yours!


I've done a wide variety of projects - cross-platform GUI, a windows database kernel implemented as a dll - that is, not a server - embedded storage firmware, mac os x device drivers (kernel extensions).

For some but not all of my projects, failure was not an option. Imagine I lost a disk sector in my storage firmware.


Test driven development usually refers to unit testing or integration testing. It would be interesting to know if you went beyond that and the nature of contract work you were doing (testing for an internal service with a RESTful API being very different from a mobile app, or a website).


TDD, in my view, starts from the outermost layer -- the end user -- and moves inwards progressively to the unit tests.


John Lakos differs from you. In his book "Large-Scale C++ Software Design" he advocates starting at the lowest level of your own code, that is, subroutines that only make system calls or call libraries that are provided by the system.

Then you unit test the second level - subroutines that only call that first level, or that call the system or libraries.

main() sits at the top level.

Not every program is straightforward to levelize. His point is that one should do so.

Also, the unit tests for any one level should focus on what is new at that specific level, with the assumption that all the lower levels are flawless. In general they won't really be, but when that's the case, you write a unit test for the lower level.

This keeps the LOC on the tests about the same as the LOC in the deliverable.
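
A rough sketch of that levelization in Python (hypothetical routines; Lakos works in C++, but the layering idea is the same): level 1 only wraps the system, level 2 only calls level 1, and level 2's test checks only what is new at its level.

    import json, os, tempfile

    # Level 1: touches only the system (filesystem).
    def read_bytes(path):
        with open(path, "rb") as f:
            return f.read()

    # Level 2: calls only level 1 (plus the standard library). Its test assumes
    # level 1 is flawless and focuses on what is new here: decoding the config.
    def read_config(path):
        return json.loads(read_bytes(path).decode("utf-8"))

    def test_level1_read_bytes():
        with tempfile.NamedTemporaryFile(delete=False) as f:
            f.write(b"hello")
        assert read_bytes(f.name) == b"hello"
        os.unlink(f.name)

    def test_level2_read_config():
        with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as f:
            json.dump({"retries": 3}, f)
        assert read_config(f.name) == {"retries": 3}
        os.unlink(f.name)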

It is unfortunate that his book focusses on C++; really he should have written a separate book on testing that was language-agnostic. There is very little of that section that's really specific to C++.

In addition to unit tests I do integration tests. If there's a file format involved I create lots of input files that contain various edge cases.


With the caveat I haven't read Lakos' book, there's a pretty big backlash going on in the industry against hyper-granular unit testing. Comes down to people realizing that when their implementation needs to change, they often have to change a bunch of unit tests that were written to the implementation.

One problem is that modularity should let you change implementation without friction; that's the whole point of modularity to begin with. So there's not much profit in writing a bunch of client code (tests) that intentionally break modularity and put friction back on the process. It's not quite as bad as just random breakage, because at least you know where it all is, but it's still painful.

Another problem is that if you're not careful, you end up with a set of tests that don't tell you when something works as expected or not, it just tells you when something changed. Having a test that tells you something isn't coded as expected anymore is pretty useless. You know you changed it.

And still another problem is that the testing itself has become too invasive. We're architecting things for dependency injection that would never really need it if it weren't for invasive tests. It's fine and well to drop some testing hooks in, but if you're having to completely invert control in your code to do it and that makes things significantly more complex, that may not be great.

I think there were some fairly recent Martin Fowler posts on this, but I believe his point was that we're very possibly doing it wrong: if the point is to guarantee an interface then tests should be to the interface. And maybe injecting test doubles is good in some cases but too invasive in others--especially when it's done to test an implementation--and so forth.

So not sure where Lakos is on the scale there, but what you describe about putting in layers upon granular layers of tests strikes me as setting yourself up for these issues. I'm a pretty big believer in testing to the interface, and do try to draw a line between functions that are there to serve a particular implementation and functions that describe something more abstract. I test to the latter.

Modularity is the key to maintenance; the biggest benefit of this type of testing to me is to validate your modularity, even more than validating the code within the modules.


Outstanding answer - thanks for the pointer to the Lakos book.


I've been taking this approach. I'm the sole developer on my company's API, so it's important to me not to waste time fixing issues that get brought up by the UI team when they start working with the API.

To that end, I write integration tests, in that all my tests call the REST endpoint and then verify the result of the API call. My tests treat the API call as a black box.

A single endpoint can have dozens or hundreds of tests. By the time the API gets to the UI guys, I know it's going to work correctly (except in those cases where I missed testing certain scenarios).
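
A stripped-down version of what one of those tests looks like; the endpoint, the fields, and the use of Python's requests library here are all just illustration, not my actual stack:

    # Black-box test: hit the running API over HTTP and check only the response.
    import requests

    BASE_URL = "http://localhost:8000"        # assumed local instance under test

    def test_create_widget_returns_created_resource():
        payload = {"name": "left-handed widget", "quantity": 3}
        response = requests.post(BASE_URL + "/widgets", json=payload, timeout=5)

        assert response.status_code == 201
        body = response.json()
        assert body["name"] == payload["name"]    # server echoes what we sent
        assert "id" in body                       # and assigned an identifier

    def test_missing_name_is_rejected():
        response = requests.post(BASE_URL + "/widgets", json={"quantity": 3}, timeout=5)
        assert response.status_code == 400        # validation happens behind the black box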

Not only does it save me from bug reports, but it makes me look good because everything I ship works.



