The interviewer is not thinking logically. How does he know it's a good interview problem? Let's look at the data:
>The fastest I had a candidate solve the entirety of the problem with all bells and whistles was in 20 minutes, by a freshman from the University of Waterloo, the type who did high-school competitions.
>The most depressing failure I saw was a PhD graduate from University of Toronto who could not produce working code for the first section of the problem in 45 minutes.
>I once had a candidate with 15-years work experience give me lots of attitude for asking them such a simple question, while at the same time struggling at it.
All of this data suggests the question may not be good. A PhD graduate and a person with 15 years of experience rejected in favor of someone who practices programming for competitions? What gets me is that the data paints an obvious picture here. A very obvious picture: we don't actually know what makes a good or a bad interview question.
But the problem is that most people looking at this completely miss it. It's not obvious to the interviewer, and it's not obvious to a lot of people who like Google-style questions. We have very little data and very little science backing any of this up.
It's an illustration of how biased humans are, and of how extra biased interviewing for software positions is. If there's anything more unknowingly biased than the replication crisis in science, it's technical interviews at companies. There need to be real feedback loops that correlate passing an interview question with actual job performance.
Google is in a good position to gather this data, but I'm not sure they are doing so, given how they just okayed this guy's gut decision to go against the grain and use this question. I'm not against the question, but calling it great in the face of the conflicting data that he himself gathered and listed in his post is a perfect illustration of the extent of human blindness.
The reality of what's going on here is that the interviewer is getting off on dominating other people with a hard question. It's not on purpose; he's doing it without realizing it. The blog post itself is a bit showy. It reads like "I can answer this question but a PhD graduate can't."
As a hiring manager, I’ve noticed most peer hiring managers vastly overestimate their ability to hire.
What I’ve seen happen is that they flip between two narratives: “I hired this great person and they’re doing amazing, so I am amazing,” or “I hired this person and they aren’t doing great, so I fired them, so I am amazing.”
Of course, they also tend to be terrible at assessing who is actually good/bad.
I call it the “arbitrary onion”: layer after layer of plausible claims that each turn out to be just as arbitrary as the outer claim.