Actually, it happens as long as your "stricter hiring practices" increase your false negative percentage by a lot more than they decrease your false positive percentage.
Try it out with some numbers.
10100 candidates, 100 are "good".
Suppose you have 2% false positives and 1% false negatives.
You hire 99 good candidates and 200 bad candidates.
Suppose now you have 0.5% false positives and 90% false negatives. (You decreased your false positive rate by 4x but increased your false negative rate by 90x. This is typical for employers who look for every little excuse to reject someone.)
You hire 10 good candidates and 50 bad candidates. Your "good hire" percentage went down, and you're churning through a lot more candidates to meet your hiring quota!
So, "it is better to pass on a good candidate than hire a bad candidate" is FALSE if you wind up being too picky on passing on good candidates.
Assuming you can identify losers and fire them after a year or two (with decent severance to be fair), you're actually better off hiring more leniently.
It's also even worse when you realize that the candidate pool is more like:
10200 candidates, 100 are "good", 100 are "toxic", and the toxic people excel at pretending to be "good".
Also, the rules for hiring are different for a famous employer and a no-name employer. Google and Facebook are going to have everyone competent applying. If you're a no-name startup, you'll be lucky to have 1 or 2 highly skilled candidates in your hiring pool.
Also, what makes this mistake common is the feedback you get.
When you make a false negative, you never find out that you passed on someone amazing.
When you make a false positive, it's professional embarrassment for the boss when he's forced to admit he made a mistake and fire them.
So the incentive for the boss is to minimize false positives, even at the expense of too many false negatives. The boss is looking out for his personal interests, and not what's best for the business.
> Try it out with some numbers. 10100 candidates, 100 are "good".
What you're attempting to do works well for hypothetical drug testing[1] or terrorist screening, but not for hiring developers (or anyone else). With the numbers you used, you're proposing that less than 1% of all candidates are "good" - nobody would reasonably set the "good" threshold to include only the top 1% of developers.
You are assuming that the people interviewing for a job are a good representation of the general programming population. That's not true at all.
First, unless you really think we are terrible at hiring as an industry, the good developers will find jobs faster. Even if, on a given day, every developer who starts looking for a job has a skill level that matches the average of the population, the good ones get hired after applying in a couple of places at most, leaving the 4th, 5th and 6th applications to the developers who didn't manage to get hired. So yes, your talent pool on any particular day, just due to this effect, is far worse than the average talent in the industry.
Then there's the fact that bad developers get fired or laid off more often than good ones, so they are added back to the pool more often. Companies typically make bigger efforts to keep good developers happy than to keep the ones they consider hiring mistakes.
And then there's the issue with the very top of the market being a lot about references and networking. In this town, no place that does not know me would give me the kind of compensation that places that do know me would. I'll interview well, but nobody will want to spend top dollar on someone based just on an interview. In contrast, if one of their most senior devs says that so-and-so is really top talent, then offers that would not normally be made start popping up. The one exception is 'anchor developers', people who have a huge level of visibility, and you still won't get them to send you a resume at random. You will have to go look for them, at a conference, user group or something, and convince them that you want them in the first place.
My current employer has a 5% hire rate from people interviewing off the street, and that's not because our talent is top 5%, but because you go through a lot of candidates before you find someone competent. We've actually tested this: interviewers are not told which candidates were referred by other employees. But, as if by magic, when there's a referral, the interviewee is almost always graded as a hire.
I completely agree most applicants are going to be ones you wouldn't want to hire (this is true for any job) but it's not going to be as low as only 1% are worth hiring (which is what the comment I was replying to was suggesting). Even your 5% number seems suspect (i.e. that sounds like your company doesn't have good screening to determine who to interview...you shouldn't need to interview twenty people to fill a position).
Even if you set the "good" percentage to 10%, too high a false positive rate will still ruin your results.
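For example, with a 10% "good" pool and the false negative rate held at 1% (both just assumed numbers for illustration), the good-hire fraction depends heavily on the false positive rate:

    # 1000 candidates, 100 good (10%), 900 bad; false negatives fixed at 1%
    good_hires = 100 * (1 - 0.01)           # ~99 good candidates hired
    for fp in (0.02, 0.10, 0.20):           # false positive rate of the screen
        bad_hires = 900 * fp
        print(f"{fp:.0%} false positives -> {good_hires / (good_hires + bad_hires):.0%} good hires")
    # 2% -> ~85%, 10% -> ~52%, 20% -> ~35%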
Based on the people I've worked with over the years, I say that the actual skill distribution is:
5% toxic - These are the people who will ruin your business while deflecting blame to other people.
25% subtractors - These are the people who need more attention and help than the amount of work they get done. In the right environment, they can be useful. (Also, this is mostly independent of experience level. I know some really experienced people who were subtractors.)
60% average - These people are competent but not brilliant. These are solid performers.
9% above average - They can get 2x-5x the work done of someone average.
1% brilliant - These are the mythical 10x-100x programmers. These are the people who can write your MVP by themselves in 1-3 months and it'll be amazing.
You first have to decide if you're targeting brilliant, above average, or average. For most businesses, average is good enough.
If you incorrectly weed out the rare brilliant person, you might wind up instead with someone average, above average, or (even worse) toxic.
Actually, when my employer was interviewing, I was surprised that the candidates were so strong. There was one brilliant guy and one above-average guy (my coworkers didn't like them; they failed the technical screening, which makes me distrust technical screening even more now). They wound up hiring one of the weakest candidates, a subtractor, and after working with him for a couple of months my assessment of him hasn't changed.
There is no reasonable definition of average that would only allow for 9% above that (or 10% including the 1% you marked as brilliant). Average is usually considered as either the 50th percentile (in which case you would have ~50% above this) or some middle range (e.g. 25th - 75th percentile).
Since you said 60% are average we'll consider an appropriate range as average, the 20th - 80th percentile. That leaves you with 20% of applicants below average and 20% above. Your math falls apart real quick when we're dealing with distributions like 20%/60%/20% instead of 99.5%/0.5%.
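To put a number on that, here's the earlier calculation rerun with an 80% "hireable" base rate (treating average and above as hireable in the 20%/60%/20% split, and carrying over the false positive/negative rates from the original example purely for illustration):

    # 1000 candidates, 800 hireable (80%), 200 not
    for fp, fn in [(0.02, 0.01), (0.005, 0.90)]:   # (false positive, false negative) rates
        good_hires = 800 * (1 - fn)                # hireable candidates you accept
        bad_hires = 200 * fp                       # non-hireable candidates you accept
        print(good_hires, bad_hires, good_hires / (good_hires + bad_hires))
    # lenient: ~792 good, 4 bad -> ~99.5% of hires are fine
    # strict:    80 good, 1 bad -> ~98.8% of hires are fine, but ~10x more interviews per hire

With a base rate that high, the precision gap between the two policies nearly disappears; the main remaining cost of being strict is how many candidates you churn through.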
[As an aside, the toxics and brilliants are outliers; they should be fairly obvious to a competent interviewer (and as someone who previously spent a decade in an industry where nobody conducts interviews without adequate training, I'll be the first to say most interviewers in our industry are not competent).]
The problem is that programming skill is not normally distributed. There are some big outliers at the brilliant end, and some big outliers at the toxic end.
So "average" is not really a meaningful term. I mean "average programmer" as "can be trusted with routine tasks".
Behind every successful startup, there was one 10x or 100x outlier who did the hard work, even if he was not the person who got public credit for the startup's success.
If you're at a large corporation and trying to minimize risk, hiring a large number of average people is the most stable path. You'll get something that sort of mostly works. If you're at a startup and trying to succeed, you need that 10x or 100x person.
I didn't say it was impossible to construct a set that would yield only 10% as above average, I said there "is no reasonable definition of average" - if you feel the above set accurately represents the distribution of the caliber of developers then we clearly have very different opinions of what's "reasonable."
That would depend on what set of developers we're looking at:
All developers - this will be very bottom-heavy; people [usually] get better with experience, and there are obviously a lot fewer people who have been doing this for 20 years than have been doing it for two. Additionally, people who are bad at a profession are more likely to change careers than those who are good (this is by no means an absolute - I wouldn't even go as far as to say most bad engineers change professions, I'm just saying they're more likely to - further contributing to higher caliber corresponding well to years of experience).
Developers with similar experience - this is much more useful as there's not much point comparing someone who's been doing something for decades with someone on their first job. I would expect this to be a fairly normal distribution.
Developers interviewing for a particular position - applicants will largely self-select (and the initial screening process would further refine that) so this group will largely have similar experience (i.e. you're typically not interviewing someone with no experience and someone with 25 for the same job). But it won't match the previous distribution because, as someone else commented, the bad ones are looking for work more often (and for a longer period of time). Do the interviewees you wouldn't hire outnumber the ones you would? Yes, definitely. Do they outnumber them by a factor of a hundred to one? Definitely not. Ten to one? Probably not - if they do, it probably represents a flawed screening process causing you to interview people you shouldn't (or not interview the people you should) rather than an indication that only one out of every ten developers is worth hiring.
I'm not OP, but it feels like you're arguing semantics. YES, that's the technical definition of "average," no argument, but I don't think he/she meant mathematically average.
If you substitute with these terms:
- 5% toxic
- 25% subtractors
- 60% competent
- 9% exceptional
- 1% brilliant
...then there's no reason to apply (or defend!) the mathematical definition of "average." And I think those numbers actually seem somewhat reasonable, based on my own exposure to working developers in various industries. What this doesn't account for is the "FizzBuzz effect," where ~95% of the people who are interviewing at any one time (in a tight market) tend to be from the bottom end of the spectrum.
Even within the broader pool of programmers, the line between subtractors and competent is very project-dependent, in my opinion. For some levels of project complexity, the line might actually invert to 60% subtractors and 25% competent, while for far less complex projects, it might be 5% subtractors to 80% competent.
In the former case I'd want an exceptional developer, while in the latter the exceptional developer probably wouldn't even apply, or would quit out of boredom.
This is reasoning from something that's harder to estimate (how good is the hiring pool and your interview process) to something you know more about (how good are the company's employees). It seems like you should be working backwards instead?
For example, if you assume that 90% of your employees are "good" and 1% are "toxic", what does that tell you about the candidate pool and/or your interview process?
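One way to make that concrete: with precision = p*(1-fn) / (p*(1-fn) + (1-p)*fp), where p is the fraction of good candidates in the pool, you can solve for the false positive rate a process would need to end up with 90% good employees. A rough sketch (all the pool and rate values here are made-up for illustration):

    # Solve precision = p*(1-fn) / (p*(1-fn) + (1-p)*fp) for fp:
    #   fp = p*(1-fn)*(1-precision) / (precision*(1-p))
    target = 0.90                        # 90% of employees turn out "good"
    for p in (0.01, 0.10, 0.50):         # fraction of good candidates in the pool
        for fn in (0.10, 0.50):          # fraction of good candidates you reject
            fp = p * (1 - fn) * (1 - target) / (target * (1 - p))
            print(f"pool {p:.0%} good, {fn:.0%} false negatives -> need {fp:.2%} false positives")

If only 1% of candidates are good, a 90%-good workforce implies a false positive rate around a tenth of a percent; if half the candidates are good, the screen can afford false positive rates in the 5-10% range.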
It's my crude estimate based on the places I've worked over the years, and the people I've seen come in when my employers were interviewing.
If I was the boss and had a "toxic" employee, I'd just dump them rather than waiting. I've been forced to work with toxic people because I'm not the boss, and I've noticed that toxic people are really good at pretending to be brilliant.
Over the years, I've also worked with a couple of people who singlehandedly wrote all of the employer's key software. I also worked with several people who wrote a garbage system but conned everyone else into thinking it was brilliant.
If 90% of candidates are "good", then why waste time with a detailed technical screening at all? Just interview a couple and pick the ones you like the best.