We have an excellent (and big) QA department, but 13 years ago, when I started at this company, we were only just beginning to hire dedicated testers. We had a mature product, a communication handset, which worked well and was stable. Our software engineers had pressed every button they could think of in every menu and there weren't any problems.
Then we hired Kevin.
Kevin had the handset for 40 minutes before piping up, "Crashed it." The lead came over to have the sequence explained to her and said, "Huh, nice edge case." Half an hour later: "Crashed it again" (in a completely different way). He explained the sequence to the lead again. An hour later it happened a third time, and as he walked her through the sequence she finally burst out, "Why would you even do that?! How did you think of pressing those buttons like that, with that timing?!"
Good testers just think differently than software engineers.
I'm pretty sure I subconsciously try to use my software safely and as intended because I don't want to crash it. Saying it out loud, I obviously know this is dumb, but why would I want to break something I created?!
There aren't any bugs as long as I don't look for them!
A good tester looks at all the expected cases and then infers the cases that exist in between them. They explore the negative space around what we're supposed to do.
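That "explore the space beyond the expected inputs" instinct is roughly what property-based testing tries to automate. A minimal sketch, assuming Python with the `hypothesis` library; `parse_duration` is a made-up example function, not anything from the handset in the story:

```python
# Minimal property-based testing sketch using the hypothesis library.
# parse_duration() is a hypothetical toy function invented for this example.
from hypothesis import given, strategies as st


def parse_duration(s: str) -> int:
    """Toy example: parse strings like '5m' or '30s' into seconds."""
    if s.endswith("m"):
        return int(s[:-1]) * 60
    if s.endswith("s"):
        return int(s[:-1])
    raise ValueError(f"unrecognised duration: {s!r}")


# Instead of only checking the inputs a developer would think of ('5m', '30s'),
# let the framework generate arbitrary strings and assert that the function
# either returns an int or fails with the documented ValueError -- any other
# crash is a bug the "Kevin" approach would have found by hand.
@given(st.text())
def test_parse_duration_never_crashes_unexpectedly(s):
    try:
        result = parse_duration(s)
    except ValueError:
        pass  # an expected, documented failure mode
    else:
        assert isinstance(result, int)
```

Run it with pytest and the framework will hammer the function with strings no engineer would type on purpose, which is exactly the point of the anecdote.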
> Good testers just think differently than software engineers.
I'm sure in some cases it's useful to detect all possible crashes, e.g. to make an app as secure as possible. In other cases I'd watch out for diminishing returns; perhaps instead of "think differently than software engineers" it would be enough to "think in a similar way to product users".
There's also the number of users to consider. If 100k users are using your product heavily, then, much like the "million monkeys" idea, they're going to find bugs by accident.
Software engineers tend to use products in a consistent way (based on how they know it's meant to be used) whereas good testers explore the space of possible inputs in a much more 'creative' way.
> good testers explore the space of possible inputs in a much more 'creative' way.
My point is that the extent of testing should depend on the actual product; eliminating all bugs is not the primary goal of every team.
Some companies might decide on other goals and prioritize, say, paying users (giving them better support to resolve issues), or acquiring new users, or something else.