I do almost all of the technical phone screens for my company. The process starts with a "where-do-you-see-yourself-in-5-years" personality screen, with an H.R. drone or outside recruiter. It moves on to me, and then on to a brief "homework" coding exercise that we ask people to write and submit. If we like their code sample, then they come in for the panel of "whiteboard-exercise" people who conduct the face to face round. At my stage, I seldom bother with certification style questions about the programming language, or with highly abstract thought exercises, because other people will do those things in the face to face stage.
Typically, I ask questions that give me an idea about the candidate's depth of experience and awareness. For example, "I see that you've spent X years using Subversion for source control. What are your opinions on trunk-first development vs. branch-first development?" I ask questions that speak to practical experience and design ability, without getting into too much depth. For example, "Pretend that you're using an OO language to build an application for <insert purpose here>. Just off the top of your head, what are some classes that you would expect to see in your class diagram?". Etc.
I find that this level of questioning is a much better screening tool than trick questions about programming language quirks or the minutiae of frameworks, or cliche puzzles about how many golf balls fit on an airplane. However, I groan and roll my eyes when I hear people challenge the need for technical interviews at all. Yes, they are necessary.
Having performed a thousand interviews by now, I am awestruck by how poor the software development talent pool is. I am aware that the Bay Area is overrepresented in HN's readership, and that crowd tends to take for granted the talent level found in the technical equivalent of Mecca. However, I assure you that the rest of the planet is dominated by sleepy line-of-business developers... who have all the passion beaten out of them in the first 5-10 years, and spend the rest of their career just phoning it in and not growing.
I sometimes ask candidates what the letters "MVC" stand for. The successful response rate is around 50/50. I ask candidates to briefly explain the advantages of the Model View Controller pattern, and only 10-20% can field the question. We bring in Java and C# candidates who have been working with their respective language for 10 or 15 years, and they get COMPLETELY EXPOSED during the face to face round when asked a series of basic certification exam style questions. Nothing tricky, just core fundamentals.
People who post here are not the norm. The "norm" is atrocious. So yes, unfortunately we all must endure technical interviews... to filter out people who have enough confidence or personality to excel in the other interview segments, yet are utterly useless.
Let's put it this way: the more experience I've had with interviewing, the more selective I've been in my own job searching, and the more aggressive I've been in my salary negotiations. If you are really good, and are located outside of San Francisco, then you are worth your weight in gold and should value yourself accordingly. You wield tremendous leverage once you make a strong showing in a technical interview.
> "I see that you've spent X years using Subversion for source control. What are your opinions on trunk-first development vs. branch-first development?"
Actually, this demonstrates a problem right here. It's not clear to me what you mean by these terms. A quick Google search ("trunk-first vs branch-first svn") suggests that you're not using common terminology.
After thinking about it for a minute or two, what I _think_ you're asking is "should all development happen on a common branch, or should developers create separate branches for individual features/fixes, merging back into the mainline when finished". And, indeed, I'd be happy to have a conversation with you about this.
But if it takes me a couple of minutes to figure this out while just sitting here at home, in the pressure of an interview, I'm probably going to fumble, or say "I don't understand what you mean", which will make me wonder if I've blown the entire interview. This despite the fact that I've been programming for 20 years and have used a number of version control systems (CVS, ClearCase, P4, Subversion, and most recently Git).
In an actual interview, I would only pull out that specific example for a candidate who's worked with CVS and Subversion almost exclusively. The actual question is also a bit more verbose:
Have you worked for a company that strives to do as much development as possible in 'trunk', creating a release branch near deployment time for production bugfixes? Have you worked for a company that preferred to branch at the outset of new development, merging back to 'trunk' periodically? What did you find to be the strengths and weaknesses of each approach?
Their answer lets me read between the lines and glean much more information about their work history. Have they worked in settings where multiple work streams were in development simultaneously? Do they have substantial experience in collaborating without trampling on shared resources? If so, then they usually mention the different pain points in merging. I don't consider there to be a "right" or "wrong" answer to this question, and not having a clever answer certainly doesn't disqualify someone from being a strong programmer. But it does help to level-set, and identify candidates who think like team leads or might be well suited for responsibility beyond raw coding.
However, I didn't want to inject that much text parenthetically into an already long sentence. So you get "trunk-first vs. branch-first". :)
Exactly why would you make a screening decision based on version control practices? Of the many things you'll need to ramp candidates up on, this seems like one of the very easiest. Not only that, but because most firms do release management a little differently from each other, there's some VC ramp-up effort you'll need even with people who have mastered your VC tool.
I feel like dev interviews are full of questions like this, things that require some expertise and experience to answer, but do virtually nothing to predict on-the-job performance.
When you ask an interview question, you are pricing candidates. That's obvious when you think about it: you're screening, and so your questions alter the supply of candidates that will hit the bottom of your screening processes. Fewer selectees -> poorer employer BATNA -> higher prices.
Do you really want to price candidates based on how they use version control tools? How much are you willing to pay extra for people who have a lot of experience with different VC methodologies? Are you sure the weight of your questions about VC match up with the (hopefully minimal) premium you're hoping to pay for VC expertise?
I'm not sure that you fully read the parent comment. I couldn't care less about their Subversion expertise. My company doesn't even use Subversion anywhere.
However, if you list "10 years experience with <Technology X>" on your resume, then you should absolutely be prepared to discuss your experience with <Technology X> at a high level. Not anal minutiae or contrived trick questions, but certainly you should be able to respond to an open-ended question about the basics with enough context to show that you weren't lying to pad your resume.
More importantly, as I explained in the parent comment, I am interested to see if their response reflects experience with multiple teams working on parallel or overlapping efforts within the same codebase simultaneously. Everyone says that they have experience like that, but if you poke a bit deeper you find that half the time it's exaggerated. They may have worked in a context with multiple teams, but not teams affecting the same area of the application or system. If you have had legit experience of this kind, or at least show a high level of insight in talking through the issues that can arise, then you might be considered for a team lead role sooner than you otherwise might have been. As I said earlier, it's not a "right or wrong" question that can disqualify you from being a capable programmer... it's a "level setting" question that helps gauge which level of responsibility you might start out with.
There is no less productive form of interview than the "resume validation interview". Resumes are practically useless in the best case. Here, you seem to propose paying a premium for candidates who can properly estimate their facility with VCS systems and then cogently discuss that estimation in an interview. That can't possibly be a skill relevant to building software on your team!
>Here, you seem to propose paying a premium for candidates who can properly estimate their facility with VCS systems and then cogently discuss that estimation in an interview. That can't possibly be a skill relevant to building software on your team!
I think that's an unfair interpretation of what StevePerkins is saying.
I don't get the impression from him that VCS knowledge in particular will make-or-break a candidate. He's saying that IF the candidate put it on his resume, then the candidate himself is the one who opened that door for discussion.
Out of the infinite list of programming topics to discuss, what are some options to narrow it down? Well, the candidate (through his own volition) put <Topic X> on his resume... so... let's talk about Topic X! It doesn't matter what Topic X happens to be (whether it's "svn", "parallel algorithms", "Ruby", "TDD", "cloud scaling", whatever). What matters is that the candidate is the one who thought it important enough to highlight it. From there, it's reasonable to think it's something the candidate is already comfortable with discussing in depth. If not, he shouldn't put it on the resume.
Your response makes it seem like StevePerkins is playing Alex Trebek with random Jeopardy topics and asking "gotcha" questions. It's not random -- the source is the candidate's resume. It certainly seems fair and reasonable to discuss any topics the candidate put on his resume. Imo, it's also fair to augment with questions that are not represented on his resume (but that's a different discussion from StevePerkins's example.)
Personally I think it's an excellent question. He's probing into a candidate's understanding of why you use version control systems and why different approaches might be used.
If you have a significant amount of experience then hopefully you have seen enough different situations to be aware of the advantages and disadvantages of each approach.
It's really a question to figure out whether or not you can reason about high level concepts.
Personally, I think the SVN question is a pretty good example of "can you talk intelligently about any aspect of any complex system you've ever worked with?". If a candidate can explain one, I'll recommend hiring them. The resume provides a good list of possible subjects to discuss; otherwise I just cast about randomly and try to chase threads in the conversation until I find something with some depth.
I completely agree with this - asking about VCS is a cross-cutting question that nearly all candidates have experience with, unlike, for example, graphics, desktop, or cloud experience. So, asking the candidate to explain their VCS workflow is a way to generate a practical and relevant data point to use when evaluating multiple candidates.
For example, the 1-in-a-million candidate that has used git with a CI/CD configuration (note: I'm also outside SV) versus the candidate who uses TFS ("git? You mean Github?" Yes).
I've carefully read every comment on this subthread, and here's what I think:
This question is a trifecta of ineffective candidate screening tactics:
(a) It's a technical screening question, one a strong candidate could get wrong, based on a technical aptitude that is trivial to teach on the job and thus rarely worth paying a premium for.
(b) It's a subjective technical question, for which reasonable engineers can have differing opinions, which means it's an outlet for interviewer subconscious bias. Did the interviewer just eat lunch? Candidates will do better on this question if they have.
(c) It's a tea-leaf-reading question about engineering/team management: it's superficially and overtly about technology, but subtextually about a bunch of other things. Let's hope the candidate realizes that.
A typical interview lasts about 60 minutes. Let's say it takes 10 minutes to pursue this particular line of inquiry. That's roughly 17% of your interview you're spending on a question that greatly rewards people who are good at talking about technology. Worse, if you ask that question early to a quiet but excellent candidate, you can psych them out, which means you pay for that version control question in every other question you ask.
It is totally reasonable to assess soft skills and team compatibility. But you have to design an interview that does it. You can't improv it based on candidate resumes.
Avoid questions like this.
This thread started with someone saying they're pretty good at interviews. It turns out that they try to assess soft skills with technology questions based on the luck-of-the-draw of candidate resumes. Candidates with effective resumes will have an easy time passing these interviews. From the comments on this thread, that obviously sounds reasonable to some people.
I submit: those people are not competing for talent. They may think they are, but the real contenders in this market won't be OK with letting good candidates slip past because their job-hunting skills aren't finely tuned. In fact, they'll do the opposite: those candidates are steals in this market.
The original commenter acknowledged that when he said that his experience was that the market was full of poor candidates. If that's the case, you especially can't afford to filter out effective devs because they fail to impress you when they explain how they use version control, or when their resume overstates their facility with version control.
Thanks for being open about your perspective on the interviewing process, and for taking the time to follow up on these threads.
Do you believe that technical phone pre-screens are ineffective in general? From what I've read Matasano doesn't pre-screen candidates, but provides complementary study materials instead. Is that because the subject matter is specialized? Would you approach hiring web dev roles differently?
From your remarks, it sounds like you would reject the practice of asking open-ended interview questions (e.g. describe your workflow, describe a typical day, describe a recent project) due to interference from the interviewer's bias. What, if any, value do you place on open-ended questions?
In general, my feeling is that the Matasano process (which I currently manage) works outstandingly well where there isn't a flood of qualified candidates. If you have a glut of folks who are ready to start working, you can get away with a terrible process.
We do free-form technical interviews, but only to try to detect candidates who really aren't ready for the work-sample challenges. Our in-person interviews are standardized and try to evaluate consulting/architecture skills that are hard (impossible?) to measure without people. These involve open-ended intermediary questions, but the final answers are structured.
The bottom line is this: you cannot compare candidates using free-form interviews, and you must be able to compare candidates who are going to be doing similar work. Thus, free-form interviews have no evaluative value.
It's highly relevant to getting a job on a software team, but only because most teams are assembled via interviews, and verbally relating stories from your past in a face-to-face meeting is the most effective interviewing technique.
And it's relevant to a job like sales. (That's unsurprising because a job interview is actually a sales meeting.) And many management jobs do have a sales aspect -- you have to justify budgets, sell work inside the company, et cetera.
But if explaining our work to outsiders were a particularly important or routine skill for programmers, we wouldn't be so bad at it. And, on average, we are bad at it. Because our actual on-the-job communication, which we practice all the time, is largely written and asynchronous, taking place on media like Slack or Github. It relies on plenty of job-specific shared knowledge, domain experience, and jargon, and it all happens in the shadow of a job-specific shared codebase that is supposed to speak for itself -- the whole point of software is to build something that works by itself -- but is also perpetually unfinished.
There are social skills that are important to have on a software team, but it's difficult to judge them in an interview. Interviews are staged events.
Challenging a candidate to defend a resume in an interview is like asking them to do improv comedy, and selects for many of the same factors: Verbal gracefulness, comfort in the spotlight, the ability to seamlessly change the subject, and the amount of time spent in rehearsal. Good candidates rehearse their resumes. We get to write them ourselves, after all, and with practice we learn to design them with hooks that lead into our best material.
I would say the whole point of software is to build something that people can use. If someone builds a module, but can't tell me how to use it, it's not of much use to me.
Oh, but surely they'll write documentation? Programmers, by and large, seem to suck at that too, which is why we have tech writers. But somebody still needs to explain to the tech writer what's happening! And only a few companies seem to carry tech writers for internal only products.
To put it simply, if I ask somebody how their code works and they say "Go away for an hour while I write documents" I think I'd rather not work with them. Or worse, they ask me for help debugging but can't tell me what they're trying to do. No thank you.
Your questions are good and relevant ones, I agree. Let me rephrase them a little:
"Hello, coworker! Did you enjoy the cake we both got to eat the other day in the company cafeteria?"
"By the way, I'd like to ask you a question, and don't worry: This isn't an interview or anything, so if you can't answer me right away, or if your answer lacks grace, it's not as if you'll lose your job."
"Anyway, coworker: I found this code, which you wrote while working for my company, under the direction of my company's management, and which solves a problem that my company actually has, and which builds upon my team's platforms, languages, and coding standards, and which might even link directly to my code, and which both of us have had a moment to read and think about and which is right in front of us on this monitor. How does this code work?"
"Also, can you help me debug this code I have here? It builds atop the code I showed you last week, and is written in the same language that we all use, and attempts to solve a problem you've seen before – which is not a coincidence, because you were the person who asked me to solve the problem."
These questions are incredibly relevant to our work, but interviews can't cover them. Candidates are not our coworkers and they share none of our context. Instead, interviews are, at best, an exercise in prediction. In practice, they are often an exercise in magical thinking.
During the workday, people aren't being constantly judged. They don't implement functions on whiteboards without unit tests, solve brainteasers out loud during stand-up meetings, or implement quicksort from memory. They do have to explain code to coworkers, but not to people who don't understand the problem space, the language, the background, or the constraints. These rarely-exercised feats of skill are valuable -- sales is valuable -- and our gut feeling is that such feats are somehow related to relevant job skills. But gut feelings are often wrong. And not every job is in sales.
That's not what I think is going on here. He's not asking "Tell me the commands that I have to type to do X, Y and Z in GIT, SVN and Mercurial", he's asking a higher level question about how one can utilise VCS (and DCVS) tools to work in different ways.
Depending on the experience of the candidates he's trying to acquire/interview, this may be appropriate to indicate that they have either experienced different ways of developing at different companies, or have an interest in software development practices wider than just "The way I've done it is the way I was told."
I'd consider it an 'indicator' question, indicating that the candidate has either experience or interest in software delivery and the way that different organisations work/operate. It's probably not a make-or-break question, but if a candidate has a good number of years under his belt and hasn't at least heard of some different practices, then they might be the "do what I'm told but no more" coder that the organisation wants to avoid.
There is a subtle but crucial difference between "candidates with experience working on teams delivering enterprise software" and "candidates with aptitude for working on teams delivering enterprise software". Worse, many in the former set aren't in the latter.
You might want to build an interview process that selects candidates who belong to the latter set. I'm a little baffled why you'd want to select from the former set in preference to the latter.
> Actually, this demonstrates a problem right here. It's not clear to me what you mean by these terms.
I'm not trying to get in an ego stroking contest here, I'm just providing anecdata.
I worked with SVN for ~4 years and git for ~4 more. What StevePerkins was asking was perfectly clear to me after about three seconds of thinking. Of course, in an interview, I would be sure to parrot back my understanding of the question. :)
FWIW, you and I understood his question to mean the same thing.
I feel the same way, with approximately the same number of years of experience with each. And I definitely have strong opinions on the answer to the question :)
I got a bit baffled by that too, but came to the same conclusion as you did after a minute or two. In an actual interview, I suppose I would just start by clarifying what is meant by those terms before going on with a discussion about it?
I don't see the issue to be honest; it's an interview, not a written exam. You can think aloud, the interviewer can guide you if you're confused by the terminology and you can ask clarification questions.
Unless a software developer is tasked with deciding which version control system to use or for some odd reason has to do a deep dive into the philosophy/design of subversion, why on earth would they bother to know this? You might as well ask if they prefer Cherry MX Blue or Brown. I don't know SVN but if it's anything like Git, there are 100 ways to use it, exactly 5 of which are useful to 90% of developers on any given day.
> A quick Google search ("trunk-first vs branch-first svn") suggests that you're not using common terminology.
It's not a terminology question, it's a concept question. A good candidate should be able to recognize abstract concepts regardless of the words used to describe them. Even in an interview setting.
Why? What makes being able to persuasively discuss this particular concept an attribute of a good candidate? And, when you identify the answer to that question, can you then answer: is this question the best means I have of assessing that attribute?
I agree with you that this type of technical interview is a great way to fail to hire ideal candidates. I've experienced this myself. I was commenting only on the specific complaint about "terminology," and that one should be able to see through unfamiliar terminology in an interview, not that such an interview is the best way to hire excellent developers.
Although I believe everything you say, I'd like to provide a counterpoint.
I think I'm a pretty good programmer and I've got a reasonable body of work on GitHub to back it up, but I've often failed technical interviews because I go to pieces under the pressure and my brain just stops working. I've been a dev for > 15 years but if anything I've got worse at interviewing over time. I only apply for positions that I genuinely think I'd be good at, but the technical interview gives the impression that I'm a clueless idiot. From my (admittedly selfish) point of view, the approach proposed in the article seems much better than the process you describe here.
I felt bad about my nervousness in interview situations too, so I started taking improv classes. Now interviews don't give me any performance anxiety at all; once you've performed in front of a large audience with no idea what you're going to say or do, it makes interviews a cakewalk.
And remember, even though the primary skill you're interviewing for is coding, they are also testing you for communications ability. Being able to clearly and concisely explain what you're doing is at least as important as being able to get things done.
I've been doing improv for 10+ years and had never realised this, but it may well be a large part of the reason why I don't get nervous in interviews. Thanks for helping me make the connection!
Plus, as one other commenter points out elsewhere in this thread, once you've been on the other side of the table for a while, you get a much better idea of where your own strengths lie (and what you're up against, which can instil a lot more confidence than you might think!).
Personally, I think (or at least hope!) that I'm pretty good at differentiating between nervousness vs. simply not having an answer. With the MVC example that I gave earlier, sometimes people fumble along and provide a lot of information even though they can't pull it together in any coherent way. Other times, they flat-out shrug and tell me that they don't know. Protip: I recommend saying something and at least making an attempt at the former.
I like to ask people about side projects, and what is the latest interesting thing that they've been learning in their own time. However, my issue with the "Your-GitHub-Is-Your-Resume" nonsense is that it falls apart past the age of 30. Before I got married and had kids, I used to spend almost every waking moment of personal time (and about half of my employer's time, to be honest) working on areas of personal interest. I wrote a handful of minor open source frameworks that are too obsolete to be worth mentioning, a set of Java bindings to the wxWidgets library, authored a book, and served as technical reviewer for a stack of other books.
Then I got married, and became a father.
Now, quite frankly, I don't do that anymore. I just can't. It isn't a matter of passion, it's a matter of physics. My GitHub account includes some small personal apps that I tinker with from time to time (e.g. a diet and exercise tracking application), but I would be MORTIFIED if someone thought that was a representative sample for job application purposes. It's a fairly trivial web app, that I could just as well have written 10 years ago.
In fact, the type of work that you do later in your career really doesn't lend itself to showcasing in a small GitHub portfolio. You work on Node.js websites, with whichever client-side data binding framework is popular this week? Awesome, GitHub would be perfect for showcasing an example of that. But your most recent projects include integrating a dozen microservices with an AMQP broker and Apache Camel, or using Spark to crawl a mass of data in a Cassandra cluster? Large scale development just doesn't lend itself as well to representation in a personal GitHub portfolio.
So side projects may be a useful screening tool with junior level candidates for web developer positions, but I think it's a naive suggestion for more senior level candidates in more complex domains.
The notion of having an on-site coding exercise is better in my opinion, but that's not a panacea either. Anything worthwhile would probably take a matter of hours, turning your on-site interview into an all day affair for which the candidate would have to take a full day of PTO from their current job. There is NO WAY that I would submit to something like that unless I was already very far into the recruiting process with a particular company, and very much interested in working there already. You're not going to reach that point unless you already have technical screening tools in an earlier stage of the process, so there you are right back at the original problem.
I do agree with the article, as well as many of the comments here, that "whiteboard exercises" are a fairly pointless tool (e.g. "write a recursive function to traverse this tree", or "walk me through your process for guessing how many golf balls could fit in an airplane"). Even though my company does a bit of that too, I grumble about it and refuse to incorporate any of that into my stage of the process. However, almost all of the alternatives that I've heard suggested seem to be proposed by very young junior-level devs, who lack perspective on what their career path and mindset are going to look like 10 years down the line.
I think you're correct, but I also think it's a real shame that live performance skills are important to excel at a career that has nothing to do with live performance.
I wonder if there are any careers where you only have to get good at the thing you're doing rather than a bunch of meta-things.
Live performance skills are absolutely unnecessary to excel in development; this comment thread is discussing how certain people can improve their interviewing performance through the practice of being spontaneous and present. A very helpful trait that some people have naturally, and that others do not.
I think it's great when the practice of some discipline or craft has beneficial effects in other areas of our lives.
Hey I used to be in the same boat as you but recently I seem to have gotten over it and get lots of offers =D
If you're interested, the way I got over it was by just accepting the fact the interview process is not perfect and in the end I have no idea what these people are looking for. That being the case, I just do my best and enjoy the fact I get to work on algorithmic questions usually much more interesting than what I would get to see while working. I guess I try to just have fun!
It really seems like a lot of people freak out because they are afraid of rejection. I don't want to get stereotypical about how nerds are antisocial or whatever, but I definitely found that once I stopped caring whether I was going to get hired or not, I started doing and feeling a lot better. I actually started enjoying interviews because I get to see interesting questions, see what other people are working on, and because I am junior, learn about more technologies and why they decided to use them.
With respect, I think you might just be making (what we should now call) the "Fizzbuzz fallacy": "99% of applicants can't answer basic questions, therefore the talent pool is poor".
Really, it's more like "99% of the people still searching, who don't have a documented contribution history, and don't have a network that lets them find good jobs in a heartbeat, who must resort to this to find a job[1], can't answer basic questions."
Most of the talent pool is working and has to be pried away; they won't show up through HR channels.
[1] Not a diss, by the way; I count myself in that set, though am still not "on the market", and still don't fail at fizzbuzz style questions.
If you are outside any big tech hub, most jobs are filled, and most candidates found, through recruiters or ordinary job postings. I've been developing professionally for about 15 years, have changed jobs a lot of times, and never once through a referral or network. Most of the people I know are the same. And yes, having also done interviews and sorted through thousands of resumes, there are so many out-of-touch folks out there that I wouldn't hire anyone nowadays without at least 1-2 small tests to validate their knowledge.
Yes. Living in the midwest, I've done software development for nearly 30 years. I've had more than half a dozen employers and I have never been given any kind of "technical test" in an interview, such as writing code on the spot, diagramming an algorithm, or solving a puzzle. I don't know where this happens, but it's not everywhere.
I saw it in London once. I failed the test so badly that the interviewer refused to believe that I had written the demo project which got me a first interview. He said it was one of the best he had ever seen, but then said I couldn't have written it because I was failing the interview so badly. People who don't get that nervous just don't understand how debilitating being really nervous is. Also, the whiteboard code challenge was a total waste of time.
(I'm very late to this discussion, so I may try this question another time).
But I've been thinking about the marginal value of each additional technical question. In other words, how much additional relevant information about a candidate do you gain with each increasing level of difficulty?
For instance, suppose you ask fizzbuzz and the candidate has an easy time of it. Then you ask about building/searching a binary tree, which the developer manages, but only after fumbling round a bit. Then you get into finding cycles in linked lists or graphs, and the developer takes a crack at it, but would need to look it up. Or maybe the developer gets it, and the interviewer ask about finding all permutations of a string...
How much more do you learn by going from fizzbuzz to binary trees? How much more do you get by asking about cycles in linked lists? And so on…
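To make that gradient concrete, here is a rough Python sketch of the two ends of the spectrum mentioned above, FizzBuzz and linked-list cycle detection; this is my own illustrative code, not something from the thread.

    # FizzBuzz: the low end of the difficulty spectrum.
    def fizzbuzz(n):
        for i in range(1, n + 1):
            if i % 15 == 0:
                print("FizzBuzz")
            elif i % 3 == 0:
                print("Fizz")
            elif i % 5 == 0:
                print("Buzz")
            else:
                print(i)

    # Cycle detection in a linked list (Floyd's tortoise and hare),
    # a couple of rungs up the ladder from FizzBuzz.
    class Node:
        def __init__(self, value, next_node=None):
            self.value = value
            self.next = next_node

    def has_cycle(head):
        slow = fast = head
        while fast is not None and fast.next is not None:
            slow = slow.next
            fast = fast.next.next
            if slow is fast:
                return True
        return False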
Just for the record, I can't stand technical interviews, and I dislike them so much that I'm considerably less inclined to apply for and interview for new jobs because I feel that I've studied for my data structures and algorithms midterm one time too many. I just don't want to re-load merge sort into short term memory again.
However, in spite of all this, I'm still somewhat sympathetic to interviewers[1]. The truth is, if you really don't have a lot of direct evidence about a developer, you truly are at risk of hiring someone who can't code.
[1] My problem with silicon valley hiring practices isn't that they have a process that leads to a high false negative rate. They should do what they feel is best for their company. My problem is that they do this while complaining about a critical shortage of developers so severe that it endangers the entire tech economy.
As a self-taught person beginning to write software who is not in SF, your description of someone who cannot explain the advantages of the MVC pattern makes me incredibly nervous. I don't really know the MVC pattern in detail, or what its advantages are. I do know about Big O complexity, data structures, loops, registers, as well as logic gates, and the basic structure of computer processors. I can code small programs in Python and Java, and have written some basic Flask apps.
Am I totally unfit for developing software?
If not, what is the correct answer that will prevent me from getting COMPLETELY EXPOSED (in your words) if you ask me this MVC question in an interview?
>If not, what is the correct answer that will prevent me from getting COMPLETELY EXPOSED (in your words) if you ask me this MVC question in an interview?
Every interviewer has their own pet question that the candidate is COMPLETELY UNQUALIFIED if they don't know.
My short answer to "What is MVC?" is that it's a popular web development trend, but without much substance to back it up. Every MVC project that I've been supporting/maintaining was a nightmare, taking 3x-5x longer to get stuff done than what I consider reasonable.
MVC = Model-View-Controller. It's usually for web development. You structure your code in a way that separates the data logic (Model), the user presentation logic (View), and the glue that holds the bits together (Controller).
There are many popular MVC frameworks that let you churn out a lot of mediocre websites quickly (angular.js, Ruby on Rails, Zend, Django (Python), and many others). The MVC framework does a lot of routine coding for you, but it comes at a price. The code can become a nightmare to maintain, especially if you use the MVC framework incorrectly. If you do something outside what the MVC framework provides, it can become a handicap rather than a help. The MVC framework usually demands you write your whole application in its preferred style, such as spreading out your code across various directories with forced naming conventions. MVC frameworks also tend to be poorly documented compared to the underlying language (for example, compare the Zend documentation to php.net).
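For concreteness, here is a tiny hypothetical sketch of that Model/View/Controller split using Flask, which is mentioned elsewhere in the thread; the route, template name, and Article class are invented for illustration only.

    # Hypothetical Flask sketch of the MVC split described above.
    from flask import Flask, render_template

    app = Flask(__name__)

    # Model: data and business logic; knows nothing about HTTP or HTML.
    class Article:
        def __init__(self, title, body):
            self.title = title
            self.body = body

    ARTICLES = [Article("Hello", "First post"), Article("MVC", "Second post")]

    # Controller: receives the request, gathers data, picks a view.
    @app.route("/articles")
    def list_articles():
        return render_template("articles.html", articles=ARTICLES)

    # View: templates/articles.html contains only presentation markup,
    # looping over the articles it is handed; it does no data access.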
Funny thing... MVC is a pattern for GUIs (Windows and Mac desktop applications). It was later imitated for the web, and none of those you mention, with the possible exception of Angular, allows a real MVC pattern in a web application.
The point of it is two-fold: separation of concerns, and DRY. But the reason it is so ugly on the web, when it produced elegant code for GUIs, is that web "controller" schemes are not really MVC.
> Funny thing... MVC is a pattern for GUIs (Windows and Mac desktop applications).
Errm, it's a pattern often used for UI (CLI and GUI) programming that was first introduced in the Smalltalk-76 environment at Xerox PARC in the late 1970s. [This predates Windows and the Mac by several years.]
MVC can be done -with varying degrees of difficulty- in any computer language. You don't even need a web development framework to implement it! ;-)
Just because it can be done, doesn't mean it can be done well.
iOS/OS X seems to do real MVC, with lightweight reusable view objects managed by ready-made controller classes. The obvious benefits are minimal memory and streamlined data flow. You only ever make/use the view objects you really need, you can recycle them to save memory, and - in OS X especially - controller objects include useful ready-made convenience methods.
Web frameworks often seem to do something that looks like MVC if you squint and don't think too hard. But when you're forced to use separate languages for logic (js), markup (HTML), and view design (CSS), and handle separate client/server environments, and there's inevitable overlap between all of the above (jquery etc) because stuff doesn't "just work", and you probably have yet another layer as a DB driver, it gets very complicated very quickly - to no great benefit.
The web has become a snarly ball of warty epicycles. That's why native is so popular - you get one common language for logic, views, and data, with a clean-ish interface to a remote server if you need one.
When everything works together like that, you can think seriously about MVC.
When the core concerns are all over the place already, it's a mess before you even start.
Which is not to say you can't work with it and build cool stuff - more that you can't just airdrop in a design pattern from a different tradition without thinking really, really hard about what, why, how, and what happens a year from now.
Actually, it never worked in Windows GUIs either. [1] It's just a broken concept.
Separation of concerns and DRY are both good. MVC doesn't succeed at either, except in idealized corner cases.
[1] A lot of widgets in Windows, for instance text fields, encapsulate the view (it's where the text is displayed), the model (which keeps the text in the control), and the controller (the widget handles the UI for entering text, catches and handles the mouse, etc.).
Yeah, I know, angular.js is MVVM and not MVC. It still sucked. The boss chose angular because he wanted to pad his resume with angular.js experience, and not because it was the right tool. That startup is still stumbling along, but wasn't as successful as it should have been.
(In angular, when you edit data in the view, the model is immediately updated, which is why it's MVVM and not MVC.)
Once you get beyond small programs, you run into software architecture questions, which touch issues of maintainability and complexity. One of the techniques we have to manage this complexity is to break the program into components, smaller pieces that couple to each other loosely. This limits the impact of a change, since a change in one component is less likely to require changes in other components. At least that's the hope.
There's lots of ways to decompose a program, each with its own strengths and weaknesses. MVC is a particular decomposition that is useful for interactive applications. The components are named Model, View, and Controller. An example of an advantage of this decomposition is that you can apply a different user interface (e.g. a command line one) by exchanging the View, without needing to change the Model.
Other answers here have focused on MVC as it applies to web development. But MVC is from the 70s, predating the web by many years! If you look beyond the trend, you see ideas rooted in an effort to architect software well, and see MVC as just one design, one particular decomposition. The key idea is not MVC itself, but the principles behind it. Being able to place ideas into larger contexts shows mastery, in my view.
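As a minimal sketch of the "swap the View without changing the Model" point (my own illustrative Python, with invented class names, not code from any comment here):

    # The Model holds state and rules; it knows nothing about presentation.
    class CounterModel:
        def __init__(self):
            self.count = 0
        def increment(self):
            self.count += 1

    # Two interchangeable Views: a console one and an HTML-ish one.
    class ConsoleView:
        def render(self, model):
            print("Count is", model.count)

    class HtmlView:
        def render(self, model):
            print("<p>Count is {}</p>".format(model.count))

    # The Controller wires user input to the Model and asks the View to render.
    class Controller:
        def __init__(self, model, view):
            self.model = model
            self.view = view
        def handle_click(self):
            self.model.increment()
            self.view.render(self.model)

    # Exchanging the View requires no change to the Model (or the Controller).
    Controller(CounterModel(), ConsoleView()).handle_click()
    Controller(CounterModel(), HtmlView()).handle_click()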
As a complete beginner? No, I wouldn't call you totally unfit for developing software. I'd call you entry-level, but that's totally okay (as long as you're applying for entry-level jobs and have realistic salary expectations). I didn't specify this in my comment above, but I've never really had to interview for entry-level req's. The positions that I described involved candidates with years of experience... so the fact that THEY don't have a clue about the basic design of virtually every web framework for the past two decades (i.e. MVC) is a glaring red flag.
To be honest, I'm not entirely sure that the MVC example would be a good question to ask to a younger candidate. If you had been around during the early days of web development (i.e. CGI scripts based on Perl or raw PHP, or Java where all of your logic is crammed into a JSP page), then you could say all sorts of things about the shift toward MVC being the norm. However, if you came along after that shift, and every web framework you've ever seen has been some variation on MVC principles, then it's a bit like asking a fish to describe water.
Anyway, if you're just getting started in the field, and haven't had much interview experience yet, then try not to let it get to you. Interviewing involves a TON of rejection, and that never really stops. If interviews ever start feeling easy, then it means you're selling yourself short... and you should be interviewing for more senior positions with higher salaries.
To get started though, I would suggest simply doing a Google search on "programmer interview questions", or something more specific to your background (e.g. "java interview questions"). This is exactly what the majority of your lazy interviewers are doing to come up with their questions anyway. As you encounter questions that you can't answer, then look them up and read about it (e.g. http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93cont...).
As an engineer with ten years of software dev experience who interviews quite a lot of candidates for my current team, I'd say don't overthink this.
Big O, data structures, and having a good idea of how computers work and what matters for performance, from both a theoretical and a practical perspective, matter infinitely more to us than knowing how to play object-oriented BS bingo or subversion usage patterns.
If you know what MVC stands for, you know enough about OO patterns for me.
Not going to argue that we got it the right way(tm), just wanted to give you another perspective.
The MVC question is probably for candidates who have some experience in developing software. Not knowing it does not mean you are unfit for developing software. But, even if you have a little experience, you should probably know about MVC, even if you have never used it. This shows that you have familiarity with current technology. Not being familiar with current technology is definitely bad.
If you are applying for a beginner position, not knowing about MVC should not be a big minus. You will only be tested on algorithms and data structures.
If you don't know the answer, it is better to inform the interviewer of that and inform them of the topics you know about.
As a total curiosity, what's the biggest from-scratch codebase you've ever made? In my experience so far, things like MVC don't always make a whole lot of sense until:
1) You've got a code base large enough that they matter;
2) You wrote it initially without using something like MVC;
3) And then it grew into a ridiculous monstrosity, which you then cleaned up by breaking it into models, views, and controllers.
At that point, if it turns out well, you start to "know it in your heart" instead of it being just a nebulous concept that you read in a blog/textbook/whatever somewhere.
>>I sometimes ask candidates what the letters "MVC" stand for. The successful response rate is around 50/50. I ask candidates to briefly explain the advantages of the Model View Controller pattern,
It's good you don't ask for a definition. Even the inventor of MVC thinks it's rarely been applied as intended [1] (see the foreword by Trygve Reenskaug).
If you look at the answers, aside from a few that are really poorly phrased, they all fundamentally say the same thing.
If I were interviewing someone and asking them about polymorphism, I would expect a definition which shows understanding and perhaps an example which explains to me that they really do understand it in concrete terms. (I'd probably give them bonus points for mentioning what the Greek words mean, but only because someone who knows that is likely to be someone I'd enjoy working with.)
Inheritance is certainly a dicier subject. I'm not 100% sure I'd ask about either polymorphism or inheritance in an interview nowadays (though I certainly have in the past), but if I were asking about the latter, I'd probably phrase it in the specific context of a particular language - or perhaps request a comparison of how inheritance differs between two languages. I guess, really, I'd actually want a discussion about inheritance versus composition - but, having asked for just that in the past, it was quite surprising to me (at the time) how many people had literally no idea what the term "composition" meant (despite, I'm sure, using it on a daily basis).
> (I'd probably give them bonus points for mentioning what the Greek words mean, but only because someone who knows that is likely to be someone I'd enjoy working with.)
I know what the Greek words mean, but it would never in a million years occur to me to mention it in an interview; I would have no way of knowing without being told that you would assign positive value to such.
I'm certain the vast majority of interviewers wouldn't care in the slightest if you demonstrated that knowledge; I just happen to have a degree in classics and thus a bias towards such things! (Not that knowing the meaning of two fairly common Greek roots really demonstrates much knowledge of classical languages, but there we are.)
Steve works for a relatively unknown company making products for businesses, mostly using Java/C#, whose technical staff seems to be based in Atlanta, which is not a super attractive city for tech workers.
Your experience comes from Matasano, a company where people get paid to hack, a dream job for many techies, with offices in New York, Chicago, and Silicon Valley.
Is it surprising that your experience is exactly the opposite? I would imagine there's a huge self-selection factor at play here with regards to who applies to work for each company.
(None of this is to say Steve's company isn't awesome - I'm sure it is - just that recruiting is going to be much more difficult).
p.s. - I agree with you that tech interviews are terrible - I just don't think anyone has got it figured out yet, and what worked or didn't work for you at Matasano is not guaranteed to work or not work for the rest of the industry.
If you read Thomas's recent post on Matasano's hiring process [1], they were looking to funnel people in. Steve's process seems pretty clearly designed to filter people out.
Compare the idea of sending would-be candidates books to study and the idea of an interview process that COMPLETELY EXPOSES candidates. One is trying to bring people in, the other is trying to drive them out.
Somehow tech hiring has often become an adversarial process. It shouldn't be shocking to us that approach makes it hard to find good people.
I'm still not sure how giving candidates a book to read and then testing them on it is a good process. If I'm going to spend a couple months of free time learning something, I'd pick something like Python or Android development or HTML5 app development, rather than learning a skill that will only be useful for 1-3 employers.
How can I know if an employer is worth 1-3 months of my free time until I meet my potential future coworkers? Demanding a candidate invest 1-3 months of free time before you meet them seems like an insult to me.
If your job uses a niche skill, why demand candidates be an expert already before you hire them?
I did not have a problem with passing on candidates who weren't sold on learning our subspecialty.
I do have a problem with passing on candidates who were sold on our subspecialty, had an aptitude for it, but could not pass an interview on it "cold".
To me, that sounds like a convincing argument that your specialty is not worth learning. I'd rather work someplace that wants to invest in their employees, rather than expecting them to already be experts in some obscure techniques.
Are you hiring people who are brilliant, or people who are so desperate that they'll spend a couple months preparing for one interview?
Let's not flee to abstraction. Matasano/NCC does software security. We were willing to invest some time and money to bring people up to speed in software security and exploit development.
Not your cup of tea? Totally fine. Not everyone is interested in doing security.
@tptacek, I'd be interested in your thoughts on how one could adapt your methods from Matasano to, say, a startup doing web dev or to an enterprise software company. I've done some thinking based on some of your writings and the idea that the best way to test someone for suitability to do a job is to have them do the job, and haven't really come up with any good ideas.
2. Package it, with all of the assets and utilities needed to get it running with "vagrant up".
3. Carve out some feature/features from the application, and replace them with stub functionality.
4. Deliver the vagrant app and a functional spec to candidates. Have them implement the missing feature.
5. Devise a scoring rubric (unit test coverage, lines of code, algorithms used, safe/unsafe APIs, performance, whatever). Mechanically evaluate candidate submissions (a rough sketch of such a rubric appears after this list).
6. (Optional) Devise a 15-20 minute on-site interview component to verify that the candidate actually did the work. We didn't bother with this, and multiplied the size of our team (NCC is the largest software security firm in North America) and had 100% retention. But it's a big concern for some people.
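To illustrate step 5, here is a minimal sketch of what a mechanical scoring pass over submissions could look like; the criteria and weights are invented for illustration and are not Matasano's actual rubric.

    # Hypothetical weighted rubric for scoring work-sample submissions.
    # Criteria and weights are invented; a real rubric would be tuned to the role.
    RUBRIC = {
        "tests_pass_fraction": 0.4,   # fraction of the spec's tests that pass
        "unit_test_coverage":  0.2,   # coverage of the candidate's own code
        "unsafe_api_penalty":  0.2,   # 1.0 if no flagged unsafe calls, lower otherwise
        "performance_score":   0.2,   # normalized benchmark result, 0..1
    }

    def score(metrics):
        """metrics maps each criterion name to a value in [0, 1]."""
        return sum(weight * metrics.get(name, 0.0)
                   for name, weight in RUBRIC.items())

    # Every submission gets the same mechanical evaluation, so scores are comparable.
    example = {"tests_pass_fraction": 0.9, "unit_test_coverage": 0.7,
               "unsafe_api_penalty": 1.0, "performance_score": 0.6}
    print(round(score(example), 2))   # 0.82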
I remember back when I was a teenager looking at colleges seeing a private tech college with a high acceptance rate and a public state college with a low acceptance rate. The Princeton Review had rated the private tech college as much more selective than the public state college. Teenage me was confused - how was that possible? - the state school had a significantly lower acceptance rate!
The reason is, of course, self-selection bias. On average, the people who applied to the tech college were just stronger students than the people who applied to the state college. This is likely because applicants to the tech college were more likely to be people whose primary goal was to pursue an academic discipline that interested them, while the state school had a good number of students applying who wanted to party and attend football games. If you were to look at students who applied to BOTH schools, you would actually see a lower acceptance rate for the tech school than the state school.
I suspect that the same factor is at play here. I suspect that the average quality of a candidate who applies for a software security engineering position is much higher than the average quality of a candidate who applies for an enterprise software development position. The software security engineering position is an esoteric position that is more likely to attract applicants who are enthusiasts or at least very interested in the field. The enterprise software development position is more likely to attract anyone with a tech background looking to phone it in and collect a paycheck.
If this is true, while both Thomas and Steve have a difficult time hiring developers, they have a difficult time for fundamentally different reasons. Thomas is like the tech school - his difficulty is with getting candidates to apply. Steve is like the state school - his problem is with separating qualified candidates from "phone it in and collect a paycheck" candidates.
This is why I am skeptical of Thomas thinking he's got a much better way of hiring figured out. Maybe he does for companies that focus on security - judging by his Linkedin profile, he's only worked for companies focused on security. This is certainly valuable, but he has no idea what it is like for Steve.
By the way, I think many of us have seen this self-selection bias at play even within our own companies. Post a primarily Java job and a primarily Scala job and then try to tell me that the same principles apply for filling both positions.
I seldom post lengthy comments on HN, and when doing so I tend to include qualifiers such as, "I know that the Bay Area is overrepresented among HN readers, and the preceding opinion is based on a perspective from elsewhere." ... "I know that 20-somethings are overrepresented among HN readers, and the preceding opinion is based on an older perspective." Etc.
Invariably, one of the top replies boils down to, "Hey! I'm a 20-something in the valley, and I see it completely different!".
"After a couple years of this I could tell which companies to worry about and which not to. The more of an IT flavor the job descriptions had, the less dangerous the company was. The safest kind were the ones that wanted Oracle experience. You never had to worry about those. You were also safe if they said they wanted C++ or Java developers. If they wanted Perl or Python programmers, that would be a bit frightening-- that's starting to sound like a company where the technical side, at least, is run by real hackers. If I had ever seen a job posting looking for Lisp hackers, I would have been really worried."
I would believe that if (a) the kinds of people we're looking for weren't so generally hard to find in the market and (b) we were tending to hire people with some kind of professional track record. Neither were true. If my pipeline of candidates was "people who would otherwise have gone to work for Dropbox", there'd be something more to this argument.
Atlanta is a pleasant enough city. I'm from there originally, though I moved to L.A. about 15 years ago.
I left because I wanted to live in a big city, not in suburbia. Atlanta has some of the plusses of a city, but they're relatively limited compared to LA, SF, NY, DC. It's more a collection of suburbs than a city.
If suburban life is what you want, Atlanta is fine for that. Honestly, most of Silicon Valley is suburbs, outside of SF itself. And they aren't as appealing as Atlanta's suburbs, which have many more trees and a lot more space. If that's what someone wants, Atlanta is a better choice than pretty much anywhere in California.
One of the best things about California, particularly in LA and the Bay Area, is that there are more people living their lives according to their own unique desires than anywhere else I've ever been. Having "odd" choices about your lifestyle or having crazy dreams and aspirations is the norm here, and that's something I want around me. It's a plus for me to be surrounded by that kind of thing. There's some of that in Atlanta, but it's more localized to certain areas. Here it's everywhere, just how things are.
You should come back sometime! The Beltline has really reinvigorated a lot of intown areas. Midtown is booming, as is the Old Fourth Ward and lots of other intown neighborhoods.
But I do get your point about "odd" lifestyles. We're certainly getting weirder here, but I don't think it can yet compare to what you see in NYC or the Bay Area. Still, there's a lot to like here, and cost of living alone is a great argument for Atlanta.
Heh, I kinda let that dig slide, because the rest of the comment was very well thought out and on-point. However, of course I disagreed.
Atlanta is by far the technology hub of the southeastern United States. It has no competition to the west until you reach Houston (and even that is mostly specific to the energy industry). It faces only mild competition from the Research Triangle region of North Carolina, and beyond that it's a technology DMZ all the way to Chicago or the northeast.
It's home to Georgia Tech, perennially one of the top 5 engineering schools in the nation, and pumps out thousands of new programmers per year in addition to the thousands who migrate here. It has a highly diverse business base, not tied to any one predominant industry, and its Buckhead and Midtown districts have one of the most thriving startup scenes that you'll find outside of SV or NY.
Of course, if a Georgia Tech grad is in the uppermost percentiles, then odds are he or she would be enticed by Google, or a west coast startup. I'm certainly not saying that Atlanta is in the same league as San Francisco, or even New York (at least not its finance/quant opportunities). But my earlier point was that these environments are extreme outliers, and virtually no other city is comparable to that either.
Sorry for not phrasing that better. I would definitely portray the scenario the same way you do, but I didn't want to distract from the main point. I probably should have said "not globally attractive" instead of the negatively toned "not super attractive," but even that probably isn't nuanced enough.
Hey now, we're doing some pretty kick-ass things over here in Augusta. We may not be as big as Atlanta (yet), but we're working our butts off. You should come down some time and take a look at what we're doing at http://theclubhou.se
I'm not entirely convinced that the CS program at Tech is phenomenal. I wish it were, since it's my alma mater (ECE/PHYS 08). I've tried to hire people out of there for ebaynyc and now shopspring.com, but they don't seem nearly as prepared as the northeast college graduates. Tech seems way too old school. They only teach Java (I'm pretty sure they don't even offer a functional programming class!), and all of the candidates that I talked to had no experience even building a simple web app.
For what it's worth, I think these are great interview questions. You can quickly pick up how much experience a candidate has in a particular area by how in-depth their thoughts on the topic are. Asking a candidate to compare two languages, data structures or technologies in general usually yields good results too.
I think what some of the replies to your post are forgetting is that there are no right or wrong answers to these questions. The questions are about gauging the experience of the candidate so you can make an appropriate offer if you want to hire them. It's probably not a huge deal if you couldn't describe MVC but gave an in-depth answer to the OOP question, for example. It is a problem, however, if several questions reveal major holes in your knowledge and experience.
It's interesting that so few people can define, or fully explain, MVC.
I wonder how the success rate would change if you defined "model," "view" and "controller," and asked why developers might choose to organize projects in such a way.
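For illustration, a minimal, made-up sketch of that separation (invented names, Python just for compactness; not tied to any particular framework):

    # Minimal, hypothetical MVC sketch -- invented names, illustration only.
    class TaskModel:
        # Model: owns the data and the rules for changing it.
        def __init__(self):
            self.tasks = []
        def add(self, title):
            if not title.strip():
                raise ValueError("task title must not be empty")
            self.tasks.append(title.strip())

    class TaskView:
        # View: only knows how to present the data.
        def render(self, tasks):
            return "\n".join("- " + t for t in tasks)

    class TaskController:
        # Controller: turns user input into model updates, then asks the view to render.
        def __init__(self, model, view):
            self.model, self.view = model, view
        def handle_add(self, raw):
            self.model.add(raw)
            return self.view.render(self.model.tasks)

    print(TaskController(TaskModel(), TaskView()).handle_add("prepare interview questions"))

Nothing framework-specific here; the point is just that data rules, presentation, and input handling live in separate places.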
My guess is that most working developers would get the concept right away and might even be able to relate it to other named models we've seen--we might just all exist in a web bubble where MVC is trendy because of the levels of abstraction that exist in our common toolsets.
Even Cobol's ancient "divisions" seem to have a similar organizational structure (http://en.wikipedia.org/wiki/COBOL#Features) and so, I'm sure, did a lot of old-school, green screen terminal apps.
Our computers are generally fast enough, and our tools good enough for what we're asking them to do, that we can often just define data models and user views fairly abstractly--index this, make that an input box with rounded corners and indented text--and we can often avoid delving into the low-level details of how data is stored or how pixels are laid out on the screen. I'm sure it would be possible to describe an operating system, or a video game engine, or a DBMS in terms of which sections of code model data, handle the meat-and-potatoes "business logic," and communicate with the outside world, but you'd probably need deeper layers of abstraction to actually plan how the code would be organized.
We also have enough fast memory that we can basically pass data from one level to another without issue and don't have to deal with abstraction-breaking optimizations like processing data while a disk head or spinning drum is in the right place or before an old video game system's tiny RAM chip is full.
But I think programmers who worked with those kinds of systems could quickly jump into web and app development, and if they couldn't, it's probably not because they don't know what MVC stands for (although it's possible it's a useful proxy).
"We bring in Java and C# candidates who have been working with their respective language for 10 or 15 years, and they get COMPLETELY EXPOSED during the face to face round when asked a series of basic certification exam style questions. Nothing tricky, just core fundamentals."
Let me guess, they work for a "big box software factory/consultancy".
They don't program, they just click buttons on Eclipse.
What this process seems to be tuned for is excluding potential candidates. And if the end result of many, many interviews is the idea that the talent pool outside Silicon Valley is poor, perhaps that reflects the focus of the approach.
I suspect that it is very likely that this process ends up excluding very high-level producers.
Interesting report! Did you factor this in: bad developers are likely to interview many times, so they appear to you more often? Talented people either get referred into a new job or stay where they are, because many shops know they are valuable, I'd think.
> Did you factor this in: bad developers are likely to interview many times..
Which implies, almost by definition, that they will become much better at the process over time and eventually become very good at gaming it, therefore making it even less reliable.
I like the style/approach of your questions--a nice optimum that minimizes false positives and false negatives.
And I agree with you that threshold competence in the general applicant pool is so low that some form of technical screening is absolutely essential. My problem is how best to do it. For one thing, it has taken me far too long to recognize that the pool of interview questions and plausible answers is widely available and apparently studied by many applicants prior to technical onsite interviews. Their answers come quickly and are so polished that we bring them on, then fairly soon find that answers to those interview questions are pretty much all they know.
False negatives are a lot worse than most interviewers think. (False negative = you pass on a good candidate, false positive = you hire someone who turned out to be unqualified)
If a "good" candidate is a 1-in-100 find, then each false negative means you have to look at another 100 candidates.
Also, if you increase your false negative rate by more than you decrease your false positive rate, you're actually hiring a higher proportion of bad candidates (while spending more time interviewing). I.e., every time you pass on a good candidate, that gives you another chance to make a mistake and hire someone bad.
Your odds of hiring one specific "bad" candidate may be small, but if most candidates are "bad", that actually makes it more likely to hire someone bad each time you pass on someone good.
No. Many companies have a hiring rate of 10% [1], which means hiring 1 candidate may cost you 10 person-days, assuming full-day interviews for each candidate [2]. You can probably double that to cover the cost of phone screens, so let's say 20 person-days of work per good hire. Assume you had 4 false negatives just because you are so aggressive, so a good hire ended up costing you 100 person-days instead (a rough sketch of this arithmetic appears after the footnotes). This cost is still tiny compared to the damage that a bad hire (a false positive) would cause otherwise, i.e.:
(1) loss of an entire person-year or even two, because annual reviews need to accumulate evidence for HR to fire (maybe less time if you were a startup without a real "HR")
(2) amount of cleanup other people have to do after that new bad hire
(3) loss of morale for good people on your team, who now perceive your company's hiring process as "broken"
(4) delays + bugs introduced in the product because actual work probably didn't get done, or was done badly, despite you filling up your headcount
(5) amount of money lost in salaries, signing bonuses, office space and benefits (typically > $200K)
(6) amount of productivity lost because of the time good people on the team wasted trying to "ramp up" your bad hire
(7) emotional stress caused to good people, who now wonder about their own job stability, and to managers who wasted months on paperwork and a lot of explaining
(8) emotional stress caused to the bad hire being fired, who may have moved across the country for you, bought a house on a mortgage, and has 3 school-going children
(9) most likely, if you are a big company, the bad hire didn't actually get fired because the hiring manager never wanted to admit it. S/he was encouraged to join another team or role, or even learned political tricks to get promoted, contributing to an ongoing bozo explosion[3]
(10) I could go on and easily justify a 3X-10X loss compared to the case where the bad hire was avoided
Footnotes:
1 - Many companies report their hiring rate as a fraction of total resumes received, which is misleading. I'll use 10% of the total number of full interviews that needed to be conducted, which is in the middle of the 5%-15% range at most companies.
2 - This is bad math. Assuming random trials, it would actually be about 5 person-days on average, but the intuitive approach doesn't produce entirely bad results here, so we'll go with it.
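Rough sketch of the interviewing-cost arithmetic above (all inputs are the assumptions stated in the comment, not measured data):

    # Sketch of the cost arithmetic above; every input is an assumption.
    hire_rate = 0.10              # fraction of full onsite interviews that end in a hire
    onsite_cost_days = 1.0        # person-days spent per onsite interview
    phone_screen_multiplier = 2   # double the onsite cost to cover phone screens

    cost_per_hire = (1 / hire_rate) * onsite_cost_days * phone_screen_multiplier  # ~20 days
    false_negatives = 4           # good candidates rejected before one is finally hired
    print(cost_per_hire, cost_per_hire * (1 + false_negatives))                   # 20.0 100.0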
You didn't actually respond to his comment. Assuming, for example, you have:
* 99% of applicants are bad
* 50% false negative (you pass over about one good developer for every good developer you hire)
* 1% false positive (one out of a hundred bad devs can snooker you into hiring)
In that scenario, you're twice as likely to hire a bad dev as a good one. And if you halve your false positive rate by increasing your false negative rate by 50%, you're still twice as likely to hire a bad dev, it will just take you twice as much work.
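A quick back-of-the-envelope check of those hypothetical numbers:

    # Back-of-the-envelope check of the hypothetical rates above.
    p_good = 0.01      # 1% of applicants are good
    fn_rate = 0.50     # half of the good applicants are wrongly rejected
    fp_rate = 0.01     # 1% of the bad applicants are wrongly accepted

    p_hire_good = p_good * (1 - fn_rate)    # 0.005
    p_hire_bad = (1 - p_good) * fp_rate     # 0.0099
    print(p_hire_bad / p_hire_good)         # ~2: a bad hire is about twice as likely as a good one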
I see what you're saying. I was pointing out the dangers of false negatives, but he responded that he hires 10% of the people he invites for an onsite interview.
Even if you hire 10% of the candidates you on-site interview, that says nothing about your actual false positive or false negative rate. For all I know, he could be weeding out all the top candidates at the pre-interview stage, and then hiring the best of a mediocre group of people.
It's easy to measure your false positive rate, people you are forced to fire (or wish you could fire if not for corporate bureaucracy).
It's harder to measure your false negative rate. The only way you could measure your false negative rate is to pick a random sample of people who fail your interview, AND HIRE THEM ANYWAY. (However, that could be a lawsuit risk. It would be unfair to the people hired despite failing the interview. A small business couldn't afford to do it; only some huge corporation could do the experiment.)
Also, I doubt the ability of most businesses to identify the best performers AFTER THEY ARE HIRED and working there for a couple of years.
No, I did not say I hire 10% of candidates I interview :). What I said was that's a fair estimate in industry.
I feel you are truly confused about FP and FN. Whether 99% of developers out there are bad, or whether you hire 10% of the candidates you interview - both of these quantities are independent of FN and FP. FN says that you are turning away X good people, and it's again independent of FP, which ultimately decides how many bad developers you would eventually end up hiring, regardless of the other 3 quantities I mentioned. See here: http://en.wikipedia.org/wiki/Confusion_matrix
It's not easy to measure FN, FP, TN or TP. Even good people fail for different reasons, like a bad manager, and bad people may succeed despite mediocre skills. Looking at who you had to fire or who got promoted doesn't give accurate measurements at all, although it may serve as a weak proxy. The scenario I described was hypothetical, to point out that the cost of FP is far higher than the additional hiring cost due to FN.
Maybe I'm completely missing something here, but my understanding is this: FP = (good hires you made) / (all hires you made). Your likelihood of making a bad hire is 1-FP. If you hired 100 people and your FP was 10%, then on average you would have 10 bad hires on your team. So FP determines the number of bad hires you would eventually have. FN has nothing to do with it - it only determines how long it takes before you make a good hire; it doesn't influence the actual number of bad hires you will make.
I'm using standard terminology here. There are plenty of textbooks and articles on the confusion matrix, precision, recall, ROC, etc. Not sure what definitions you are using to arrive at the conclusion that FN increases the number of bad hires (it only increases effort).
Sorry, I did mix up precision in my reply. I just got time to think about this whole debate more carefully, and I realize you are actually right if we fix up some of the terminology you have used. The mis-statements and confusion on my part occurred due to these terminology differences.
First, FP and FN are not probabilities; they are just counts, not rates. This may feel pedantic, but in a moment I'll show you why it's critical. Let me draw the confusion matrix first (G = good candidates, B = bad candidates, H = hired, NH = not hired):
            H     NH
        --------------
    G |   TP     FN
    B |   FP     TN
What you are referring to as probabilities are actually the False Positive Rate (FPR) and the False Negative Rate (FNR), which are defined as follows (with G = TP + FN and B = FP + TN):

    FPR = FP / B   (fraction of bad candidates you hire)
    FNR = FN / G   (fraction of good candidates you turn away)
Now, the quantity you are after is the probability that, given you hired someone, you ended up with a good candidate, which is nothing but precision:
precision = P(G|H) = TP/H
So how do we get TP and H to calculate precision if we only know FPR, FNR, G and B? A little equation gymnastics using the above gives:
TP = G - G*FNR = G*(1 - FNR)
H = TP + FP = TP + FPR*B
So now you can plug these into the above equation for precision and find that, as you increase FNR while keeping FPR constant, precision goes down. So you are actually correct. Although it might look like an unnecessary exercise versus just following intuition, I think the above equations can actually help calculate the exact drop in precision; multiply that by the cost of FP vs FN to find the operating sweet spot. On my part, I need to do some soul searching to figure out why this didn't occur to me before :).
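If it helps, the same algebra in a few lines of Python (a sketch; G, B, FPR and FNR are the quantities defined above):

    # Precision as a function of the pool and the error rates, per the algebra above.
    def precision(G, B, fpr, fnr):
        tp = G * (1 - fnr)     # good candidates you actually hire
        fp = B * fpr           # bad candidates you mistakenly hire
        return tp / (tp + fp)  # P(good | hired)

    # Holding FPR fixed, raising FNR lowers precision:
    print(precision(G=100, B=900, fpr=0.05, fnr=0.1))  # ~0.67
    print(precision(G=100, B=900, fpr=0.05, fnr=0.9))  # ~0.18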
The following comment by @fsk, which I think is what you are describing, is inaccurate:
if you increase your false negative rate by more than you decrease your false positive rate, you're actually hiring a higher proportion of bad candidates
First, false negatives (FN) and false positives (FP) are independent of each other. FP estimates how many bad developers you would end up with, regardless of your FN. FN determines how many good developers you would turn away, regardless of your FP. If you are confused about this, well, these numbers are part of the appropriately named "confusion matrix". I would highly recommend reading up on Wikipedia (http://en.wikipedia.org/wiki/Confusion_matrix) or any textbook before you jump in commenting and throw Bayesian equations around, because you are certainly not using the right terminology. Also, both of these are again independent of the actual % of bad developers out there (i.e., whether the market is 99% bad or 1% bad doesn't matter; FP solely determines how many bad developers you would end up with).
Next, it might actually be easier for you to think in terms of precision and recall instead of FP/FN. The interviewing process is essentially a classification problem, and P/R is the standard way to measure its performance. Again, Wikipedia is your friend to brush up on that.
A classic situation in classifier performance is the precision-recall tradeoff. You can plot it on a curve (such as an ROC or precision-recall curve) and choose your operating point. The way you typically do that is by quantifying how much you would get hurt by a loss in precision (~ more FP) compared to an increase in recall (~ fewer FN). You plug the costs into the equation and decide your operating point. For companies that can rapidly deal with FP, increasing recall may make sense, and the other way around. However, in most cases the other costs I listed should typically prevent you from lowering your precision too much.
Hiring rate of 10%? You mean you hire 10% of all people who submit a resume? Or 10% of all people who come in for an on-site interview? Those are two different things.
Why should it take 2 years to fire someone? That sounds like a corporate bureaucracy problem.
10% of the total number of full interviews that needed to be conducted. At most publicly listed companies, firing can't be done without accumulating enough evidence (what HR usually refers to as a "paper trail"). This is true even when employment is "at will," mainly because of legal liability (for example, a fired employee can claim he was a victim of XYZ) and the bad PR it can generate. I think Facebook is (or was) probably a rare exception in aggressive firing, and I'm not sure how they managed it. Most companies also require an annual review to be in place before firing occurs. Typically a hiring manager will avoid firing within the first year because that usually looks very bad on them. Most bad hires don't get fired until the hiring manager changes or years pass by. At startups things are obviously different. Resources are scarce, and a bad hire probably won't survive beyond 6 months, or in the worst case beyond a year. But that's still a significant period, long enough for a bad hire to cause plenty of hemorrhaging.
Arg! I'm on my phone and I meant to upvote! Apologies, this was a very insightful comment.
I had never thought of it this way before but this is an instance of Bayes's rule. If the false negative rate goes too high and the percentage of good programmers is small then yes, the process could actually increase the odds of a bad hire.
Actually, it happens as long as your "stricter hiring practices" increase your false negative percentage by a lot more than they decrease your false positive percentage.
Try it out with some numbers.
10100 candidates, 100 are "good".
Suppose you have 2% false positives and 1% false negatives.
You hire 99 good candidates and 200 bad candidates.
Suppose now you have 0.5% false positives and 90% false negatives. (You decreased your false positive rate by 4x but increased your false negative rate by 90x. This is typical for employers who look for every little excuse to reject someone.)
You hire 10 good candidates and 50 bad candidates. Your "good hire" percentage went down, and you're churning through a lot more candidates to meet your hiring quota!
So, "it is better to pass on a good candidate than hire a bad candidate" is FALSE if you wind up being so picky that you pass on too many good candidates.
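Those two scenarios are easy to reproduce (same hypothetical numbers as above):

    # Reproducing the two hypothetical scenarios above.
    good, bad = 100, 10000                      # 10100 candidates total

    def hires(fp_rate, fn_rate):
        return good * (1 - fn_rate), bad * fp_rate   # (good hires, bad hires)

    print(hires(fp_rate=0.02, fn_rate=0.01))    # (99.0, 200.0)  -> about 33% of hires are good
    print(hires(fp_rate=0.005, fn_rate=0.90))   # (10.0, 50.0)   -> about 17% of hires are good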
Assuming you can identify losers and fire them after a year or two (with decent severance to be fair), you're actually better off hiring more leniently.
It's also even worse when you realize that the candidate pool is more like:
10200 candidates, 100 are "good", 100 are "toxic", and the toxic people excel at pretending to be "good".
Also, the rules for hiring are different for a famous employer and a no-name employer. Google and Facebook are going to have everyone competent applying. If you're a no-name startup, you'll be lucky to have 1 or 2 highly skilled candidates in your hiring pool.
Also, what makes this mistake common is the feedback you get.
When you make a false negative, you never find out that you passed on someone amazing.
When you make a false positive, it's professional embarrassment for the boss when he's forced to admit he made a mistake and fire them.
So the incentive for the boss is to minimize false positives, even at the expense of too many false negatives. The boss is looking out for his personal interests, and not what's best for the business.
> Try it out with some numbers. 10100 candidates, 100 are "good".
What you're attempting to do works well for hypothetical drug testing[1] or terrorists but not for hiring developers (or anyone else). With the numbers you used you're proposing that less than 1% of all candidates are "good" - nobody would reasonably set the "good" threshold to include only the top 1% of developers.
You are assuming that the percentage of people interviewing for a job are a good representation of the general programming population. That's not true at all.
First, unless you really think we are terrible at hiring as an industry, good developers get hired quickly. So even if, on a given day, all the developers who start looking for a job have skill levels that match the average of the population, the good developers will find jobs faster, leaving the 4th, 5th and 6th job applications to the developers who did not manage to get hired after applying in a couple of places at most. So yes, your talent pool on any particular day, just due to this effect, is far worse than the average talent in the industry.
Then there's how bad developers are fired or laid off more often than the good ones, so they are added to the pool more often. Typically companies make bigger efforts to keep good developers happy than those that they considered hiring mistakes.
And then there's the issue that the very top of the market is a lot about references and networking. In this town, no place that does not know me would give me the kind of compensation that places that do know me would. I'll interview well, but nobody will want to spend top dollar on someone based just on an interview. In contrast, if one of their most senior devs says that so-and-so is really top talent, then offers that would not normally be made start popping up. The one exception is 'anchor developers', people who have a huge level of visibility, and you still won't get them to send you a resume at random. You will have to go look for them, at a conference, user group or something, and convince them that you want them in the first place.
My current employer has a 5% hire rate from people interviewing off the street, and that's not because our talent is top 5%, but because you go through a lot of candidates before you find someone competent. We've actually tested this: interviewers don't know the candidates, or whether they were referred by other employees. But, as if by magic, when there's a referral, the interviewee is almost always graded as a hire.
I completely agree most applicants are going to be ones you wouldn't want to hire (this is true for any job) but it's not going to be as low as only 1% are worth hiring (which is what the comment I was replying to was suggesting). Even your 5% number seems suspect (i.e. that sounds like your company doesn't have good screening to determine who to interview...you shouldn't need to interview twenty people to fill a position).
Even if you set the "good" percentage to 10%, too high a false positive rate will still ruin your results.
Based on the people I've worked with over the years, I say that the actual skill distribution is:
5% toxic - These are the people who will ruin your business while deflecting blame to other people.
25% subtractors - These are the people who need more attention and help than the amount of work they get done. In the right environment, they can be useful. (Also, this is mostly independent of experience level. I know some really experienced people who were subtractors.)
60% average - These people are competent but not brilliant. These are solid performers.
9% above average - They can get 2x-5x the work done of someone average.
1% brilliant - These are the mythical 10x-100x programmers. These are the people who can write your MVP by themselves in 1-3 months and it'll be amazing.
You first have to decide if you're targeting brilliant, above average, or average. For most businesses, average is good enough.
If you incorrectly weed out the rare brilliant person, you might wind up instead with someone average, above average, or (even worse) toxic.
Actually, when my employer was interviewing, I was surprised that the candidates were so strong. There was one brilliant guy and one above-average guy (My coworkers didn't like them; they failed the technical screening, which makes me distrust technical screening even more now). They wound up hiring one of the weakest candidates, a subtractor, and having worked with him for a couple of months my analysis of him hasn't changed.
There is no reasonable definition of average that would only allow for 9% above that (or 10% including the 1% you marked as brilliant). Average is usually considered as either the 50th percentile (in which case you would have ~50% above this) or some middle range (e.g. 25th - 75th percentile).
Since you said 60% are average we'll consider an appropriate range as average, the 20th - 80th percentile. That leaves you with 20% of applicants below average and 20% above. Your math falls apart real quick when we're dealing with distributions like 20%/60%/20% instead of 99.5%/0.5%.
[As an aside, the toxics and brilliants are outliers; they should be fairly obvious to a competent interviewer (and as someone who previously spent a decade in an industry where nobody conducts interviews without adequate training, I'll be the first to say most interviewers in our industry are not competent).]
The problem is that programming skill is not normally distributed. There are some big outliers at the brilliant end, and some big outliers at the toxic end.
So "average" is not really a meaningful term. I mean "average programmer" as "can be trusted with routine tasks".
Behind every successful startup, there was one 10x or 100x outlier who did the hard work, even if he was not the person who got public credit for the startup's success.
If you're at a large corporation and trying to minimize risk, hiring a large number of average people is the most stable path. You'll get something that sort of mostly works. If you're at a startup and trying to succeed, you need that 10x or 100x person.
I didn't say it was impossible to construct a set that would yield only 10% as above average, I said there "is no reasonable definition of average" - if you feel the above set accurately represents the distribution of the caliber of developers then we clearly have very different opinions of what's "reasonable."
That would depend on what set of developers we're looking at:
All developers - this will be very bottom-heavy; people [usually] get better with experience, and there are obviously far fewer people who have been doing this for 20 years than who have been doing it for two. Additionally, people who are bad at a profession are more likely to change careers than those who are good (this is by no means an absolute - I wouldn't even go as far as to say most bad engineers change professions, I'm just saying they're more likely to - which further contributes to higher caliber corresponding well to years of experience).
Developers with similar experience - this is much more useful as there's not much point comparing someone who's been doing something for decades with someone on their first job. I would expect this to be a fairly normal distribution.
Developers interviewing for a particular position - applicants will largely self-select (and the initial screening process will further refine that), so this group will largely have similar experience (i.e. you're typically not interviewing someone with no experience and someone with 25 years for the same job). But it won't match the previous distribution because, as someone else commented, the bad ones are looking for work more often (and for longer periods of time). Do the interviewees you wouldn't hire outnumber the ones you would? Yes, definitely. Do they outnumber them by a factor of a hundred to one? Definitely not. Ten to one? Probably not - and if they do, it probably represents a flawed screening process causing you to interview people you shouldn't (or not interview the people you should), rather than an indication that only one out of every ten developers is worth hiring.
I'm not OP, but it feels like you're arguing semantics. YES, that's the technical definition of "average," no argument, but I don't think he/she meant mathematically average.
If you substitute with these terms:
- 5% toxic
- 25% subtractors
- 60% competent
- 9% exceptional
- 1% brilliant
...then there's no reason to apply (or defend!) the mathematical definition of "average." And I think those numbers actually seem somewhat reasonable, based on my own exposure to working developers in various industries. What this doesn't count is the "FizzBuzz effect," where ~95% of the people who are interviewing at any one time (in a tight market) tend to be from the bottom end of the spectrum.
Even within the broader pool of programmers, the line between subtractors and competent is very project-dependent, in my opinion. For some levels of project complexity, the line might actually invert to 60% subtractors and 25% competent, while for far less complex projects, it might be 5% subtractors to 80% competent.
In the former case I'd want an exceptional developer, while in the latter the exceptional developer probably wouldn't even apply, or would quit out of boredom.
This is reasoning from something that's harder to estimate (how good is the hiring pool and your interview process) to something you know more about (how good are the company's employees). It seems like you should be working backwards instead?
For example, if you assume that 90% of your employees are "good" and 1% are "toxic", what does that tell you about the candidate pool and/or your interview process?
It's my crude estimate based on the places I've worked over the years, and the people I've seen come in when my employers were interviewing.
If I was the boss and had a "toxic" employee, I'd just dump them rather than waiting. I've been forced to work with toxic people because I'm not the boss, and I've noticed that toxic people are really good at pretending to be brilliant.
Over the years, I've also worked with a couple of people who singlehandedly wrote all of the employer's key software. I also worked with several people who wrote a garbage system but conned everyone else into thinking it was brilliant.
If 90% of candidates are "good", then why waste time with a detailed technical screening at all? Just interview a couple and pick the ones you like the best.
I am a self taught PHP programmer. Been working in tech as a freelancer since early 2000s. Built real applications.
I thought I was pretty good so a few years ago I moved to Silicon Valley to get a startup job. Boy was I wrong. Even though I had created real applications and knew MVC, etc. etc. I was basically told I was worthless because I didn't know algorithms, unit testing, etc. etc.
So I moved back home (Tel Aviv) and started my own company (lead gen market). Programmed everything myself. Last year I did over $2 mil (70% margins) and this year I am on track for $3.5-$4 mil in revs.
The technical interview was the best thing ever for me because if I had passed I wouldn't be where I am today.
Both could be right. You might not have had the skills they were looking for, but could build a successful company (lead generation isn't algorithm intensive, is it?). I've seen exactly that happen more than once. Shipping something and running the business matters far more in many cases because the technical challenge in most companies is very little.
Nice story and surprisingly not that rare. The idea that someone who can develop fairly complex software somehow can't learn something mundane like unit testing is insane. There is a terrible plug-and-play mentality that acts like smart, hard-working people are somehow not able to learn new things to fill in minor knowledge gaps.
I really like seeing this. I also write PHP for my projects, and I know it's terrible. I have no delusions about this. I use mysql_query despite knowing it's deprecated and possibly insecure, and if I'm building something that doesn't store any sensitive data and the worst case scenario of someone finding an exploit is that I have to restore from a snapshot, I really just don't care. I focus on building the product quickly and efficiently. Let the programmers of the world thumb their noses at me, at least I'm building something of value for me instead of some company.
The thing is, when the company gets big and it's not just you, you're going to wish your original language was something more scalable.
For example, YouTube engineers probably curse that the thing was made in Python, because in a 1000-engineer org, making changes can break things elsewhere that you wouldn't be aware of, whereas something more statically typed like Java or Go would break at the compile step instead of at run time.
Python was fine when the project was 1 - 20 engineers, but it became a liability later on.
That is why people want you to spend a week or two to learn something better and more maintainable, so you won't curse your future self later.
But if you're making small build-utility-type things, then it's fine. Contact websites for small firms. Or if it's for your quick test project, etc.
Look, if you want to talk about code, the easiest and best way to sidestep this issue is to have people bring code (somewhere between 200-1000 lines) to the interview. While they may not have a github presence, they've probably written something on a computer outside of business at some point.
I really don't understand why more people don't do this.
It doesn't even matter if it is their own code. The fact that they will have to talk intelligently about it means that it will accomplish its task.
I used to get this issue interviewing with VLSI chip companies all the time. "Do you know Perl?"--"Yes, I do, but I don't know YOUR particular favorite subset of that write-only language" doesn't go over well in interviews.
After getting tired of getting dinged on questions about a language that I had used almost daily for 5 years and that I abandoned for good reason, I took an old program of mine in Python and ported it to Perl. I bring both.
Now, I have something concrete to talk about, can actually compare the points of two languages, and most interviewers realize that "Gee, you're probably better at this than I am."
This shifts the conversation from "Technical Jeopardy!" to "Ah, you know how to program, let's look.", "I did this. Here's why.", "Oh, that's interesting. What does that do?", etc.
I tried this with the first few people I interviewed. None of them had any code from outside work to bring. (This includes a brilliant and hardworking coworker that I practice-interviewed -- he just hadn't coded outside work in a very long time.)
We wound up switching to giving code challenges instead.
Why not pull down a decent example of code that fits their resume's skill set and have them go through it with you line by line to tell you what they would do differently or why they like it, etc.?
>It doesn't even matter if it is their own code. The fact that they will have to talk intelligently about it means that it will accomplish its task.
I think you missed this part. Anybody can go on github and grab a repo and pick a specific section of code to talk about. This still accomplishes the goal of finding out if the candidate can understand code and communicate about it.
I lose about 20 points of IQ in a technical interview. It's nerve-racking (I wear black shirts to hide the sweat), and it cost me a job working for a well-known VR company. I passed the 'practical', which was a 2-hour coding test to generate a working game application, but the tech interview probably did me in. I've been programming games for almost 20 years, and as far as I know I'm one of the few who has done lead/senior-level work on games that were simultaneously the #1 and #2 chart-toppers, but I still couldn't get the job. Their loss I guess, but it still sucks, as I was really enthused about what they were doing and could have contributed significantly. I was about 10 years older than their typical engineer, however, so maybe ageism played a part. I'll never know. I just wish companies would look at my past body of work, see that I'm friendly and un-abrasive, and bring me aboard.
Have you thought about jumping into the world of Serious Games? My company is usually looking for experienced game developers with industry talent. The "Serious Games" side of the house is (to me) twice as rewarding as the other side. We don't live and die on the finicky sales cycle of the public, and we get to focus on delivering cool products that are already paid for by one customer.
If you want to chat more about it, hit me up. me (at) ericharrison.info
In my organization, we start by looking at what the candidate has actually done. We ask for a github profile, and any side projects they might have worked on. If the CV and body of work look interesting, we go on to a phone screen - general questions, clarifying points on the CV, explaining the job, answering questions.
For the on-site interview, we ask candidates to bring their laptop. We advise them to use a typical, comfortable development environment. We've seen candidates struggle with the latest Ubuntu, installed that morning to impress us. So we don't want that.
During the hands-on interviews, we ask to see a side project, or any code they are mostly responsible for, and familiar with. We ask them to explain code, maybe change something, refactor a test, etc.
What we don't tell candidates is that this part of the interview is also about how they use their tools. It's important to see how good they are with their editor and command-line tools, whether they can type well, whether they get easily distracted, and so on.
One of the most effective interviews we do is for project planning. A problem is explained in detail and the candidate is asked to design a solution. Not in code, but to talk it through in detail, drawing or writing docs/stories if needed. This phase helps show us how they break down a project, ask questions, negotiate features, and look for opportunities to reduce complexity. Bonus points for making a pen-and-paper wireframe or throwaway prototype.
What we refuse to do is the "puzzle" problems, whiteboard code (which makes no sense), or tricky technical questions. We instead want to find people that use best practices, don't re-invent the wheel, tackle problems pragmatically, and are good with their tools.
Over time, this approach seems to work well. However, we also discovered that we have to re-train and test our own interviewers. Without that step, the process can change unexpectedly, become inconsistent, or unfair. Don't just assume your staff is interviewing well - take time to check it out and help them get better.
> In my organization, we start by looking at what the candidate has actually done. We ask for a github profile, and any side projects they might have worked on.
Fail. You just killed the quality of your pool.
You've just negatively screened against people with a life (experienced, 30+ years old, generally with a family) and screened for people with no life (aka single, male 20-somethings).
For example, if John Carmack hadn't been able to release his employer's source code (like many people), your process would screen him out (no github or side-project presence).
> You've just negatively screened against people with a life
That kind of works both ways, though, doesn't it? As a 30 year old developer with a family, I don't want to waste time interviewing somewhere that has the expectation I'll work outrageous hours. If I get filtered out early in that process because I'm not involved in multiple OSS projects or whatever, all the better for me.
That being said, resumes are a really shitty way to get an idea of someone's experience and talent. If I'm hiring someone and they can show me a github profile or blog entries or a Stack Exchange profile or slides from a local user group presentation or anything besides their resume, it really helps me get to know them better, and -- all else being equal -- will probably set them apart.
I agree with this. I'm a reasonable example - I've spent the last year working on relatively internal applications that don't show up in github (only the most recent year is shown there for some reason). I've also contributed to Rails[1], Rubygems, and a bunch of other projects.
I hope you would promote your contributions when applying for a job, it definitely makes a difference. Looking at a public profile is not everything, you're right.
There are plenty of 30+ developers with families in our organization, and all of them have projects, hacks, or ideas of their own. I would argue that this is part of the culture we have. If someone comes to an interview and can't show any past code at all, that's a worrying sign.
On the other hand, we've also hired people that have not much happening with their public github profile. We still ask to see code they might have worked on (job-related). If they can't show that, we try to have them pair with us on something simple. We have hired people in this situation, but I'm sure we've missed others that may have worked out.
If a young Carmack came through our interview process, I'm pretty sure our team would recognize his talents when asking him to go through someone else's code.
Wow, this is my worst nightmare. I freeze anytime a co worker is near my desk because I worry they will judge how I type and get around the IDE. Of course that just makes the vicious cycle worse.
I've been on both sides of the process and in both cases I prefer the whiteboard.
As an interviewer, I simply don't have an hour or two to evaluate someone's project. I want to see a strong resume and ask a few questions to understand whether the resume is real or not. A simple whiteboard question or two is great for that. You can always see whether the person is capable of writing code and it always brings up questions around the choices they made.
As an interviewee, I'd rather not be engaged in a live coding session: the whiteboard is a very efficient tool for getting an idea across, without sweating over syntax errors, searching for API documentation and whatnot. Test projects - I have a job, thank you very much.
If you don't have an hour or two as an interviewer, you probably aren't getting very good results. This is a pretty small investment of time considering what is at stake.
"As an interviewer, I simply don't have an hour or two to evaluate someone's project."
These are people with whom you'll be working together with for years. If you can't spare a few hours to make sure you are finding the right person, what does that say to a candidate?
The interview is a tool to quickly evaluate the candidate. Not every candidate reaches the point where they could even be considered a person I'll "be working together with for years". There's more than one candidate at any given time, and I still have my day job to do.
I don't know what this says to a candidate, but I surely hope they're mature enough to understand that there's nothing personal about it. They will be treated with dignity and courtesy during the whole process, but that's all I can promise.
The thing is that they're also interviewing you. If I expect a candidate to spend time prepping and then taking time off of work to come in and talk to us, the least I can do is take the time to properly prep for them.
I know that when I'm on the candidate side of the fence it makes a big difference to me if a company/interviewers seem like they've actually taken the time to get to know who I am instead of just walking in a room and that being the first time they've checked out my resume. I realize it's not personal and I don't expect it but if a company/interviewer puts in the extra effort it's a definite positive.
I am going to have to disagree with you. The whiteboard question does not always work. In fact, while I am generally able to do whiteboard questions without a problem, I had an experience 2 years ago where I completely blanked and was absolutely unable to write any code on a whiteboard. If they had handed me a computer (or if I had not been too flabbergasted to ask for one), I would have been fine. In the end, the company in question lost out on my skills and I ended up at a different company. Not really so bad for me, as I have never been short of job offers, but it made the problem very obvious to me.
It is easy to dismiss something that you have never experienced yourself. Some people are good at whiteboard interviews and some are not. In my experience, the ability to be good at whiteboard interviews is mostly independent of how good the person will be at their job. The questions are either too difficult to avoid the stress-blanking issue, or too easy to weed out the barely-capable-but-not-someone-you-would-ideally-like-to-hire candidates.
We currently do pair-programming interviews with ping-pong. I write a test, you write the production code, then you write a test and I write the production code. We discuss programming issues as we go. This seems to work well for people who are familiar with TDD and real pair programming (as opposed to: one guy codes and the other guy watches them). It does not work so well for people who have never tried it, though. We actually do most of our work this way, so finding people with this experience is good. Unfortunately, I think we've almost certainly missed good talent just because they don't immediately grasp how this process works. We do some hand-holding at the interview, but then you are always wondering about the person's actual ability.
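To make the ping-pong format concrete, here's a tiny, made-up example of one round (names and problem invented purely for illustration):

    # One hypothetical ping-pong round: one person writes a failing test, the
    # other writes just enough production code to pass it, then the roles swap.
    import unittest

    def word_count(text):
        # Production code written by the candidate in response to test 1.
        return len(text.split())

    class WordCountTest(unittest.TestCase):
        def test_counts_space_separated_words(self):
            # Round 1: interviewer's test; candidate implements word_count.
            self.assertEqual(word_count("pair programming interview"), 3)

        def test_empty_string_has_zero_words(self):
            # Round 2: candidate's test; interviewer extends word_count if needed.
            self.assertEqual(word_count(""), 0)

    if __name__ == "__main__":
        unittest.main()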
@jimbobimbo, if you don't mind some unsolicited advice, I would recommend that you revisit your stance on maintaining a few non-job projects. A portfolio that shows what you can do on your own is unbelievably valuable (in terms of real dollars) when you are looking for the next job. I have gotten good jobs in the past simply because someone stumbled across my portfolio and decided to hire me. The time required is not all that bad -- say an hour or so 3 times a week for the next year or two would give you a really excellent portfolio. It also allows you to experiment with techniques and tools that would be too risky to introduce into a project without some experience. Building that experience outside of work also makes you much more valuable even if you don't switch jobs. I think you will find that while it is difficult to find the time, the payoff is excellent. Avoiding low-priority overtime at work and replacing it by investing in yourself is a good way to start.
Having said that, my portfolio is currently a complete mess and I really need to spend some time on it ;-)
Mike, actually I totally empathize with your feeling towards the whiteboard questions. I would probably be flabbergasted myself if offered a live coding session. Whiteboard is much more forgiving in the interview setting, in my opinion. When I'm interviewing, I keep stressing the fact that I don't care about correct syntax, coding style and whatever have you when you are doing whiteboard - only idea counts. I also leave them for about 20 minutes to give them space to think. Not sure if that helps, but I've not seen people having problems after such intro.
"Non-job projects" != "test projects". I do have non-job projects and showcase them when interviewing, but I would not take a test project as part of the interview process, unless I have really good reasons to do so.
I remember with horror one technical interview I had. I was asked to complete a fairly basic problem, which I had actually just practiced a few hours before (the fact you can practice for these interviews indicates how questionable they are). As soon as the question was asked, I proceeded to start typing the answer and explaining what I was doing when suddenly my mind went blank, I struggled for nearly ten minutes to regain my confidence but by that point it was too late. The interviewer quickly ended the interview and told me to apply again after I had more programming experience, since apparently he was able to conclude the extent of my knowledge based upon my answer to a single question made during stressful circumstances.
Um, you can practice for anything that involves communication or demonstrating your skills. It's just arrogant to think that you can go into an interview cold and rely on the interviewer to tease out your positive qualities. If they're a good interviewer they'll do their best to do that, but it's also on the interviewee to convince the interviewer that they're a good fit for the job.
I wouldn't present on a technical topic without preparation, because I'm not going to do a good job of explaining it or communicating the ideas - I'd be wasting the audience's time. Same with interviews - you need to be prepared to explain your work and put your experience in as favorable a light as possible.
I'm going to go against the popular opinion here: I don't believe that technical interviews are broken and need fixing. Of course, as engineers who have deployed code filled with hacks and known bugs to production, we know everything is broken; the difference is only how broken it is. Let's first fix our approach to quality and security, our ability to estimate, and other things that are much more broken.
> I don't believe that technical interviews are broken and need fixing
I think one of the reasons so many developers feel the process is "broken" is because it's all they know (i.e. most developers didn't have a previous profession).
For most jobs it's not practical or possible to get any insight into how a person will do that job prior to hiring them. Whiteboarding isn't intended to perfectly mirror "real life coding" - it's intended to give some insight into one's ability to write software to solve a problem. It's not perfect (and it can certainly be done extremely poorly!), but it shouldn't be dismissed as broken or useless any more than one should suggest actors shouldn't have to audition for roles.
I was a bit surprised that the article first talks about how bad it is to pass on good candidates that don't interview well on whiteboards and then suggests to pass on candidates that don't have side projects. Are all candidates without side projects not good?
But I think you advocate doing a project with the candidate, which is also very time consuming. Doesn't that also filter out some good candidates who simply don't have the time? I assume you pay them, but still.
After a couple of job interviews in recent times my personal inclination to invest a lot of time has gone down by a huge amount. For my first application I even took the time to contribute to one of their open source projects (as they asked on their job application page). Didn't get a job - how many times am I supposed to invest that much time for a job application?
Is determining technical skill even the biggest problem? I think my GitHub account shows I can code, even if it mostly contains small projects - but vastly more complex than FizzBuzz. I'm not even afraid of whiteboard interviews (if I had a dollar for every time I was asked to implement Quicksort or compute Fibonacci numbers recursively, I'd probably have ten dollars by now).
Yet I don't get hired. So my conclusion is that there really isn't that much of a talent shortage. Not enough to let companies look beyond my age or my lack of passion, anyway (my answer to the trunk vs branch first development question would be "I don't care much", although I could probably blab about presumed pro and contra arguments).
Edit: just looked up Starfighter again. In a way I am excited, as I have recently decided that only online games that have an API really interest me. However, it sounds like it would require a huge time commitment, too. Wouldn't the time be better invested in side projects for GitHub?
No, we did not ask candidates to do a side project for us. All we did was move time they would have spent either on the phone or in a face-to-face interview to something they could instead do at home.
I completely get that technical interviews aren't great at accurately predicting success on the job.
What I don't get is all the whining that seems to happen from interview candidates who seem to think they are hot stuff and deserve to get hired but blame it on a bad interview process. It's not like the interviewers aren't aware of the shortcomings of the technical interview.
I have yet to learn of another industry where people routinely blame interviewers for being rejected. I mean, it seems like in other industries people are hired almost solely based on resumes, and while everyone also realizes that sucks, they don't seem to think the interviewer is an idiot for not hiring them.
I live in Silicon Valley, so I have tons of software engineer friends. I also have a friend who's a heart surgery researcher at Stanford. When we talk about our hiring process he just laughs and thinks it's impossible to do something remotely close to what we do in software in their field. It seems software is the only field where people have to do serious work for the interviews.
Maybe they should. Studies have found that ratings of a surgeon's skill, based on video recordings of surgery, are predictive of patient outcomes[0]. The problem with associating prestige and respect with not stooping to being tested is that testing can be useful and necessary.
I bet I can put together dozens of people who think OOP and MVC are bad programming paradigms, and another dozen who will wage war over the amount of documentation. We already know the solution to this problem: we need to hire people who have the skills to hire people and the willingness to do it, too. Just being a senior doesn't cut it. Too costly, I guess, but is it costlier than making bad hires?
IMO, successful startups are the ones where founders had the skill to identify and hire the right talent. What I don't understand is why big companies are not creating a separate division of engineers whose primary task is hiring. Is it because, if such a division existed, the people there would eventually lose their skills, being cut off from mainstream engineering tasks? I am not sure. But it seems to me that this can be solved by treating hiring as an engineering task. What's wrong with seeing the hiring of engineers as another piece of code that needs to be carefully thought through and polished?
If your only goal is to measure the ability to code, then doing a whiteboard interview isn't the best way to do that IMO.
But what about positions where you will be expected to give presentations, do pair-programming, or mentor junior developers? I think whiteboard interviews can be a measure of your ability to take technical concepts and illustrate/explain them clearly to a team.
I used to really hate whiteboards, but as I grew into positions that required more leadership, I realized that I personally needed to develop better presentation skills. If a potential employer were to test me on that, a whiteboard test wouldn't seem unreasonable to me.
So I have mixed feelings on this. Not every hire needs to be capable of being a teacher or delivering a solid keynote speech.. some positions would be best filled by someone with those skills.
If you want to measure giving a public presentation....have them give a public presentation.
If you want to measure their ability to pair program ... pair program with them.
If you want to measure their ability to mentor ... have them teach you something they know.
"I want to know how you do X, and measuring X directly is easy, but I'll ask you to do Y as a proxy" is never a good approach.
I have never, in my 25+ year career, had to whiteboard a problem where the other person knows the answer (which is laughable anyway - I've given correct answers and been told they were wrong), I don't know it, and there are hundreds of thousands of dollars at stake. It just has nothing to do with the job or with on-the-job performance.
If you're whiteboarding with a simple pass/fail mentality, I'd say you're doing it wrong. A good whiteboarding exercise lets you see how a candidate explores a problem. Do they ask insightful questions? How do they break a problem down? Do they bluff when faced with something they don't know?
These are behavioral attributes that are important - much more important than a simple binary test.
Finding out how someone reacts to being told they are wrong when they know that they are right is a totally valid interview technique.
Furthermore, I think that beyond basic FizzBuzz questions, complex interview questions are good, but a candidate failing to get an objectively correct solution to an objectively difficult problem is just one signal among many. The goal of asking the questions should be to gather lots of other signals about how the candidate approaches writing software, not to pronounce them right or wrong.
I had an interviewer tell me that I was wrong that DNS ran over UDP. Not sure if the interviewer was trying to run some psychological test or if he was just wrong.
I was polite and stayed until the end. It was through a headhunter, so I decided to be polite. I knew that was a no-go interview after 1 minute, when I saw that everyone working there was Indian.
Excellent answer, Roger. The whiteboard is not a silver bullet. It really depends on what the interview is actually trying to assess; use the better tool for that. I would personally prefer to write code in a development environment (an IDE etc.), as that is the most natural place for me to write quality code.
White-boarding is good for high level design and architecture but not for actual code.
You're right, I've never whiteboarded a problem where the other person knows the answer. A thousand times have I sat down in front of a whiteboard with coworkers looking to solve a problem. That's how I interview too.
Articles demonizing coding interviews always attack strawmen that are the worst kind of interview questions:
1) Trivia questions (e.g. "what is the name of the method to do X in Java")
2) Questions with precisely 1 solution (you either get the one solution or you don't).
These are indeed useless technical interview questions.
But that doesn't mean there aren't good technical coding interview questions. I tend to favor the kind that present a candidate with some example data and a set of constraints or patterns, and ask them to write code that analyses the data for those patterns or constraints and reports them. This sort of problem is ubiquitous in my domain. Importantly, I always choose problems for which there are multiple possible attack angles, not just one. And I don't give a hoot about syntax errors, or what language they use.
This sort of question gives you a good sense about the candidate's analytical capability (breaking down the "word problem"), and their ability to translate their problem-solving thought process into code. Because there are always multiple angles of attack, candidates have some leeway to exercise creativity.
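To make the kind of question described above concrete, here is a small, entirely hypothetical example in the same spirit (the data, the constraint, and the function name are invented for illustration; they are not the commenter's actual questions):

    from collections import defaultdict

    def flag_suspicious_logins(events, max_locations=2):
        # Report every (user, day) that violates the constraint:
        # no user should log in from more than max_locations distinct
        # locations on the same day.
        seen = defaultdict(set)  # (user, day) -> set of locations
        for e in events:
            seen[(e["user"], e["day"])].add(e["location"])
        return [(user, day, sorted(locs))
                for (user, day), locs in seen.items()
                if len(locs) > max_locations]

    events = [
        {"user": "alice", "day": "2015-03-01", "location": "NYC"},
        {"user": "alice", "day": "2015-03-01", "location": "SFO"},
        {"user": "alice", "day": "2015-03-01", "location": "LHR"},
        {"user": "bob",   "day": "2015-03-01", "location": "NYC"},
    ]
    print(flag_suspicious_logins(events))  # [('alice', '2015-03-01', ['LHR', 'NYC', 'SFO'])]

A candidate could reasonably attack this by sorting, by grouping, or by a single pass with a dictionary, which is exactly the "multiple attack angles" property described above.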
In the end, it's not terribly important to me that they get the optimal solution. I do care whether they demonstrate strong analytical capability in the literal sense, meaning they can decompose the problem and the associated programming exercise into their logical parts and implement them. I also look for good communication skills in the questions they ask when reasoning through the problem - this is something that only an in-person technical interview can reveal, AFAICT.
There are probably many smart candidates that don't do well on these questions because they're just having an off day, or nervous, or don't perform well under pressure. I sympathize with them, because I've been there and felt all of the above.
But if the goal of a technical interview is to assess a candidate's analytical and coding abilities, and their ability to do both simultaneously, there is no shortcut I know of to just giving them a role-relevant problem to work on.
I usually countered FizzBuzz questions with an offer to show off the source code of a small project I wrote while studying. While some didn't care, most interviewers were actually happy about it and let me do it. I'd tell them about the problems I'd had and how I approached them, usually leading to a question from me about whether the company had similar problems and how they solved them. In my experience this "opens up" the interviewer(s) to talk a bit more honestly about development practices in the company.
On test projects: I generally expect that companies (after a pre-screening) put a similar effort into me as I put into them. If you have me do a two-day project, I'd expect one person (preferably my future boss or a future colleague) to show me a "company-fitting" solution and take the time to discuss my work and theirs. If you're not willing to spend that time, I'll most likely conclude you don't respect my time, which is a factor in choosing a company. I understand you are busy; I hope you understand that I am too.
> It is time for engineers–especially excellent engineers for whom demand is high–to start to flatly refuse to do whiteboard interviews.
This might be a viable strategy for people who have a well-established career/credentials/references (etc), but for junior-level candidates still trying to prove themselves (such as myself) I can't see this working out too well.
That's one of the harder parts of the process, and it's partially because of poor expectations on the part of those doing hiring.
To me, it's absolutely ridiculous to expect to hire a junior developer and have them come with a fully-developed set of skills. If they do, that's great. But if you're hiring someone fresh out of school, you've got to be approaching it as hiring someone that you're going to train and mentor. For me, the number one thing I look for in a junior developer is the ability to learn.
Here's an example from early in my own career. I was just finishing up my 3rd year of a combined CS/EE program, and looking for a summer job. I got in touch with a biology lab that needed a developer for the summer to build some (very cool) software to support the neurophysiology experiments they were doing. I looked at the job requirements and thought "well, I don't know most of this stuff, but I'm sure I could pick it up."
The interview progressed like this:
> Do you know Python?
I'd heard about it, but have no real experience with it. I downloaded it last week and started playing with it though, and it doesn't seem too different than other languages I've used.
> How about VisionEgg (neurophysiology module for Python)?
Well, I downloaded it at the same time I downloaded Python. I've managed to get a window to open up, and I'm displaying a square that's got a cool animated habituation pattern on it. (Note: 1 week prior, I had no idea what a habituation pattern was) I do have a bit of OpenGL experience from a class I took, and that's the underlying library that VisionEgg uses.
> Well, so far, you're the only applicant who has actually made an effort to look at the specific tools we're using here. I've got one other applicant coming this afternoon, but unless they somehow have more experience with these tools than you do, the job is yours.
It turned out to be a great experience, and they hired me back on the following summer. I went from being a total Python noob to contributing patches back into VisionEgg. I think most junior positions should probably progress like that. Give me a keen junior developer, and let me shape and mould them into a not-junior developer.
The flip side to this: if you're expecting the person to be productive on day 1 or 2, you'd better be hiring someone with experience. Whether or not they have code they can show you, they should be more than capable of going into serious detail about past projects they've worked on (within the confines of NDAs and such, of course).
I've found this technical interview process to work very well. This is done after the initial phone screen.
1. Show them a problem with your product along with the code. For front-end developers it could be how form validation errors are being presented to users.
2. Ask them to figure out why the code is doing this and observe them troubleshoot it.
3. Tell them to fix the problem in the code and observe them apply a fix, test, and debug it.
4. Ask them to architect a better solution to the problem and to explain what makes it better. What would be the drawbacks of their solution?
The benefit of this approach is that you can directly evaluate how someone solves problems, not just how well they communicate or how knowledgeable they sound. It also helps me judge how fast they are; sometimes I ask people to estimate how long the fix will take and then time them. The last step helps establish how they think through their solutions and how well they can communicate their ideas.
Additionally, I get to see how quickly they might get up to speed with our codebase and I can hear other solutions to some of the technical problems we are facing which is incredibly beneficial as a small startup.
In my experience, a lot of companies are combining _all_ these things. So you're expected to do a phone interview, a test project/coding test, a whiteboard test, and the management brown-nosing at the end of it where you get to pitch that you've studied a hard technical skill your whole career just because you're so passionate about getting woken up at 3AM on PagerDuty to build _their_ vision, and definitely not because you expect to be paid for your time and skill. The fact that you realize money exists and can be used to pay for things like housing, education, and healthcare means you're not truly passionate about technology.
The worst is the NYC tech scene where they have the exact same standards (which they cargo-cult from the west coast), but they decided not to bring over that aspect where engineers are respected and valued. Instead they borrowed the west coast interview and combined it with the east-coast finance style where programmers are considered clerical workers and cost centers.
NYC is actually a fun city to interview in: since it combines so many different cultures, you can't even study for an interview, because five different companies will give you five different interviews. Finance companies still love brainteasers and they _love_ mutexes; seriously, if you're interviewing at a finance shop, just memorize Java Concurrency in Practice and the producer/consumer Wikipedia article, because they are reading questions straight from there, even though when you show up on your first day you'd have been better off having read Spring in Action and Headfirst Enterprisey Design Patterns or whatever.
In general I love how these companies have elite hiring standards but incredibly mediocre interviews. You get asked the same questions over and over again. Find the biggest sum in a list, find two items in a list that sum to a given number, sort a list of integers/strings but keep integers where integers were and strings where strings were, copy files with ids to all servers with ids, etc. What's the difference between an abstract base class and an interface (asked by a Python/Node shop), what is a closure, etc., etc. I've heard so many of the same questions repeated over and over. They have "high bars" for their candidates, but apparently their interviewers just use the first link off Google or reuse whatever Facebook was asking in 2010 when it rejected them.
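For reference, the "two items in a list that sum to a given number" question mentioned above is usually expected to be answered with a single pass and a hash set, along the lines of this generic textbook sketch (not any particular company's rubric):

    def two_sum(nums, target):
        # Return a pair of values from nums that add up to target, or None.
        seen = set()
        for n in nums:
            complement = target - n
            if complement in seen:  # a previously seen value pairs with n
                return (complement, n)
            seen.add(n)
        return None

    print(two_sum([2, 7, 11, 15], 9))  # (2, 7)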
And then at the end we have a "tech talent shortage". Whatever happened to that part where we claimed a false negative wasn't a big deal? (glad the article calls this attitude out).
Tech hiring is totally broken, and then they claim there's a shortage. There is definitely a shortage of qualified interviewers, not so much a shortage of qualified candidates. I have a friend whom I mentored into the industry; she's very smart but absolutely a junior engineer, and yet she started a new job and was conducting interviews with _no_ training three months later. They just threw her into the lion's den and expected her to figure it out.
Coding tests are another great one because the exact same people who talk about the importance and value of data throw all of that out the window when it comes to evaluating them. There is no calibration or standards, generally. One person is offended by a hardcoded file path but doesn't care about whether you have tests, and another person is the direct opposite. Many people expect you to write extensive optimizations for the 100Kb input file you were given, another person sees that as absurd premature optimization. Whether you make it through a code screen is entirely tied to whether your coding style happens to jibe with whoever is reviewing you.
Again, the West Coast is better because at least compensation packages reflect the hoops you have to jump through and the monkey dances you have to perform on cue. On the East Coast, anybody paying attention is desperately trying to get into management by the time they're 30, because to do otherwise is to be humiliated and infantilized during the interview process _and_ during your tenure working there. My simple solution to all this nonsense is to make sure your CTO and head tech managers are made to jump through all the same hoops with all the same standards. Drop this whole "Oh, the CTO is a _manager_ role, he shouldn't have to worry about all that." (again, more of an East Coast attitude).
Another incorrect claim that benefits tech companies at the expense of labor, and that engineers brainlessly parrot: "false positives are expensive because firing is hard". No, it isn't; I've seen many people fired very easily the second they're not up to standard. At worst they're instantly let go, because it's at-will employment. At best, they're given some absurd Performance Improvement Plan that establishes a paper trail so the company can fire them without severance. And if your employer tries to tell you people have come off of those things, I have news for you: people lie, and liars are good at reaching senior management positions.
Something else is how you can put in hours and hours of your life, and get _no_ feedback, because telling you how you scored on an interview might expose them to legal liability. Let's be real, the legal liability is when they reject qualified people for things like "culture fit", and if your interview process exposes you to legal liability, maybe it's because it's illegal and unethical.
Another thing tech companies should consider: instead of paying insane money to recruiters to act as pushy salespeople who try to dupe engineers into low-paying positions, maybe just redirect that money to the engineer instead. It's 2015; the days when sleazy recruiter types were needed to fast-talk an engineer into a position that's not good for him are over, because we have the internet and we can read your terrible Glassdoor reviews.
I'm convinced one huge reason all this happens is to discourage job-hopping, because in this market, liquidity would probably help salaries move up faster.
I know I sound bitter, but again, these are the exact same companies running to the taxpayers to spend hundreds of millions of dollars of middle-class Americans' money to solve their tech-shortage crisis. Yet they absolutely refuse to evaluate their own hiring processes. And a huge chunk of engineers just eat up all this dogma about how hiring is hard and these processes are necessary, and don't think for one second that all these processes are designed to please the employer at the expense of engineers being treated well. In a few years there will be another recession and we're all going to be "rightsized" away, so demand to be treated well while you still can.
I will express agreement with pretty much all of this. A major issue, though, and one that lingers, is referred to in the article:
"Now, this does require one huge prerequisite: every candidate must have a side project that they wrote, all by themselves, to serve as their calling card.
I don’t think that’s unreasonable. In fact, I think you can very happily filter out anyone who doesn’t have such a calling card. (And lest I be accused of talking the talk without walking the walk: I am very happily employed as a full-time software engineer; I travel a lot, and I write books, along with this here weekly TechCrunch column; and I still find the time to work on my own software side projects. Here’s my latest, open-sourced.)"
The oft-stated requirement of having side projects in 2015 is the job posting's equivalent of the bachelor's degree becoming the new high school diploma/GED. If every programmer did it, then every programmer would in theory be far more qualified for the interesting jobs where more difficult things happen, but they would ultimately fall into the same trap of still not having done enough compared to the people with the side-project equivalent of a master's.
This will become a vicious cycle until companies with more experienced programmers realize that life is not about programming, and most of the stuff you're working on is not all that important, with even your average (or slightly below average) programmer being capable of doing the work. In most cases, it's a job like any other. I fear that this may never happen.
Sure. In general, I am not against tough interviews. I just think that if you have elite hiring standards, you better be an elite company making elite offers. But you can't say "we only hire the best of the best!" and then comp negotiation rolls around and you say "well, this is market salary, but since we're a startup....".
I think some of these average companies should recognize themselves for what they are and be more accepting of average candidates. Give some new people in the industry a chance, train some entry-level people, especially if the work you're doing is not really cutting edge tech but just web apps or data analysis stuff that bright but not world-class people can learn with practice.
Also, companies should just focus more on candidate experience. I have interviewed and been rejected by Facebook a few years back. They did expect me to jump through hoops, but in general, they had a few original questions, I felt like they had a great candidate experience with polite recruiters, they put me up in a nice hotel and expensed it instantly, and I knew that if I did get through those hoops they were going to pay me a lot of money. I didn't get it, but I felt fine afterwards. I'm angry at all the average companies that don't do any of that but think they're entitled to put you through the same grinder. My only big criticism of companies like Facebook is they should give more feedback so you feel like you got something to improve upon. Also, I know Zuckerberg campaigns hard for H1Bs, but when Facebook is paying people what they're paying, I assume he genuinely does want to find the world's best and is not just trying to undercut labor. Although most H1Bs are in fact about undercutting labor, and the simple solution is to change it to an auction rather than a lottery and give them more time to look for new jobs before they have to leave. (note, I do not work for Facebook, I interviewed there once and got rejected but had a massively more positive experience with them than most companies. Google is also an excellent company to interview for, I'm sure there are some others but not many).
> I'm convinced one huge reason all this happens is to discourage job-hopping, because in this market, liquidity would probably help salaries move up faster.
Walk yourself through that for a second. You're saying a company that's hiring would intentionally make the process difficult for applicants in order to help their competitors retain their employees? That doesn't make sense. Even if they conspired with said competitors for this purpose, they would simply be ceding the hiring advantage to anyone who wasn't conspiring with them.
Occam's razor suggests a simpler answer: it turns out that identifying good software workers is a hard problem.
Do you realize that there was a class-action lawsuit where the "competitors" Google, Apple, Adobe etc were working together to not poach each other's employees? You're acting like I'm a conspiracy nutjob when there were emails from Eric Schmidt and Steve Jobs where they admitted they were illegally colluding.
Do you realize that many "competing" startups have the same VC firms investing in them who don't want bidding wars between their engineers?
They want new talent but they don't want their current talent bouncing around for more money.
> They want new talent but they don't want their current talent bouncing around for more money.
Of course everyone wants to retain employees, and I'm sure that many will stoop to less-than-ethical means to do so. I never suggested that the possibility of a conspiracy between companies was crazy, just pointed out why it's ineffectual. Google and Apple can maybe make it work, because there's no substitute for those names on a resume. But once you start going down the brand name ladder a bit, it is (a) infeasible for the large number of companies to conspire together effectively and (b) far more likely that some companies will refuse to conspire, and those companies will get the upper hand in the hiring market.
You missed the point of the comment: yes, they colluded not to steal each other's engineers, but they didn't change their interview processes to facilitate it. If they had done that, as the previous commenter said, anyone not colluding with them would have been getting all their good engineers, most likely at a discount from their true value because of the other failed interviews. Not exactly the best business strategy.
Raising salaries increases the cost of doing business for everyone in the sector--increased competition may only result in decreased realized revenue, and often just results in further investment in the sector.
To use a crude analogy:
Burger King and Starbucks don't mind if a KFC or whatever opens across the street, because while the newcomer is "competition", it isn't a big deal, and may even bring in more customers overall (because now that area is a "food zone").
They will, however, all fight tooth and nail against raising the minimum wage, because that results in higher costs for everyone.
I never mentioned a conspiracy around salaries. That would be as ineffectual as the scenario described, but at least it would be cheaper than investing in a hiring process and then breaking it intentionally.
Ok, I agree with most of what you said, but you must admit that there is a large chunk of "software engineers" out there who are just terrible, right? Companies have to do something to try to filter good from bad. So what's the answer? Or maybe you don't agree with this?
Incidentally, I think you're wrong about "one huge reason all this happens is to discourage job-hopping" ... I think it's actually just that people don't know what else to do, and this is what they've always done. It's how they were interviewed, and they got the job.
Almost everywhere I interviewed, the questions were about the very basic things: the things you'd know if you actually did what you claimed on your resume. E.g., if you are a 3D graphics programmer with 10 years of experience, you know what a cross product is. If you have shipped multiple 1M+ LOC projects in C++, you know what the word "virtual" means, along with other common syntax.
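As an illustration of how basic that level really is, the cross product mentioned above is the kind of thing an experienced graphics programmer can write from memory; a minimal sketch (generic, not the commenter's actual interview question):

    def cross(a, b):
        # Cross product of two 3D vectors given as (x, y, z) tuples.
        ax, ay, az = a
        bx, by, bz = b
        return (ay * bz - az * by,
                az * bx - ax * bz,
                ax * by - ay * bx)

    print(cross((1, 0, 0), (0, 1, 0)))  # (0, 0, 1), i.e. the z axis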
The places that asked me things I did not know probably had been looking for somebody with different experience than I had so it's fine with me too, even though it would be better if they evaluated my experience from my resume.
So I don't see the author's problem. If the interview questions are too hard, you are probably applying above your level or in the wrong field. I will most definitely not spend my time working on some programming test or doing some side project for "show-and-tell" to please you.
Heck, he appeals to other professions allegedly not doing interviews, yet it's even harder to imagine them doing the things he suggests. What kind of side project would a doctor have? A civil engineer? A manager? A pilot? A chef? A lawyer?
I get to interview candidate team members for the malware research team I'm on. We write a lot of code, but we're not really software engineers. Accordingly, I only ask people very basic things in an interview to see if they can use the tools they list on their resume. Oftentimes it's something that can be handled with a single list comprehension (though no one ever does that). It is astounding how few people can do what their resumes say they can.
I'm sure it's stressful if you don't know what you claim to know. Then again, if you list six programming languages on your resume and can't XOR-decode a string in the one I let you choose from that list, you deserve to sweat a little.
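For concreteness, the exact exercise isn't spelled out here, but a single-byte-key XOR decode, which one list comprehension handles, might look like this (the key and the sample string are invented for illustration):

    def xor_decode(data, key):
        # Decode bytes that were XOR-encoded with a single-byte key.
        return "".join([chr(b ^ key) for b in data])

    # Hypothetical example: "malware" encoded with key 0x2A, then decoded back.
    encoded = bytes(ord(c) ^ 0x2A for c in "malware")
    print(xor_decode(encoded, 0x2A))  # malware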
IMO, having interviewed hundreds of candidates, whiteboard coding works very well for hiring a very specific type of fresh-out-of-CS-undergrad student, but it breaks down in almost every other scenario.
For more experienced candidates, we've swapped out most of the whiteboard questions for several other types of interviews, including in-depth technical discussion of one of their past projects, a discussion of how to architect a non-trivial system, and a more soft-skills interview on how they work with others, break down projects, and so on. This gives us a much rounder picture of a candidate, and allows candidates who are very good at something to shine a bit more.
The best interview I've had was one where I was given a small project with a tough deadline and asked to come back with working code and a demonstration. The rest of the interview was a code-review session, and we talked about design, choice of tools, libraries, and other general stuff.
I was stunned by the end of the interview, because they had drawn out everything one would want to know about how good a person is at their everyday job.
The worst interviews I face are when people try to test my algorithm skills, which is generally a test of how much I've memorized from careercup.com.
I conducted quite a few interviews at my previous job, and my solution to the whiteboard-anxiety problem was to give the candidate a set of coding exercises/questions on paper and as much time as necessary to answer them in private in a separate office.
When the candidate is ready he/she announces it and we review the solutions together.
It is not perfect, but I find it much less taxing than being asked to solve a technical problem on a whiteboard in front of an interviewer.
I just went through one of these terrible interviews at a local game company, and I have to say I agree with this article 100%. I can code, have been doing it for 30 years, and I can handle myself in challenging technical situations - and I have the resume to prove it.
But put me in front of 3 guys I've never met whose purpose for being in the meeting is to expose every single one of my weaknesses, push me up in front of a whiteboard and ask me to explain the best way to solve a maze puzzle, and I'm going to freeze up and fail the interview. The reason is that this is simply not how I work. I work by sitting at my desk, thinking about the problem presented, and writing code to have the computer do all the work. It may very well be organizationally convenient to have these psychological lynchings occur, but I highly doubt it gets the absolute best candidate in the door.
I look forward to new solutions to the problem of finding qualified people; in my case, I feel I unjustly failed an interview at a company where I could have been quite productive. Such is life in the modern software world, alas...
I follow this process for most tech/product/analyst-type roles. In the case of tech specifically, it really depends on the level of coding exposure a candidate has. If he/she is a fresher, I prefer some real-time exercise (whiteboard/phone/Skype) - it helps me get an idea of the person's thought process, and I don't look for right answers in that exercise. If the candidate has some coding experience, I prefer to dive deep into their previous work and figure out how they executed their projects. In both cases I am trying to assess a candidate's approach to executing something. I also try to gauge the candidate's ability to pick up new things and start executing on them quickly; this is a measure of a candidate's potential. I assign similar weight to 'Potential' as to 'Past Work History'.
I think people overstate how much interviews are the problem. Managers/HR are the problem. In most companies the team manager has little to do with finding and initially screening candidates and they (or HR) are too afraid of firing people. I think managers alone should be handling this.
I also don't think hiring by committee decision makes much sense. It reduces the one-on-one evaluation time by the hiring manager. If you have 6 people doing 30-minute interviews of a candidate, everyone gets a few good questions in, and then the decision comes down to a consensus of gut feelings. Plus the manager can deflect blame for bad hires.
But changing this requires reorganizing people and job duties in a company, and they'd rather just look for the next hot interviewing trend. (Edit) But as far as trends go, looking at real work is at least better than contrived exercises.
I agree that technical interviews are broken, but not in the way the author describes. I think interviews should retain the algorithmic questions, but do away with language gotchas, design patterns, etc. More emphasis should be placed on the following topics: distributed systems issues and traits of modern hardware (cache efficiency, concurrency issues). A mathematical competency component should be introduced as well: knowledge of basic probability, statistics, and linear algebra should become part and parcel of the engineering interview process.
> This is why personal references and recommendations remain everyone’s favorite hiring technique…
> …which in turn is a major reason why the tech industry’s diversity numbers are so disastrous.
I'm not buying that argument. I would, however, agree that recommendations and references create a lack of diverse ideas/approaches. As for the lack of racial/gender diversity: that would indicate that people in the minority just aren't reaching out or participating in the community.
You make it sound so easy. Perhaps they've tried, but it's harder for them to connect? Without actual data, I think I'll go with that; it fits better with what people are saying.
IMO: it's more that people are shy and/or should drop the expectation that they'll be accepted by the community on the first go.
They should come out to the non-discriminatory user groups (i.e. groups that don't ask for a particular gender, e.g. xNUG/xJUG, etc.).
I've found, in others and in myself, that I've held back from going to meetups because I didn't feel immediately welcome the first time, or because I didn't think I'd know anyone. However, you can't expect everyone to welcome you with open arms on the first go. It's easier to blame the issue on racism/sexism than to admit that you may just be shy. (That being said, I try to introduce new people at my meetups to the group and to others.)
It could also be distance or lack of time or avoiding creepers or having more fun things to do or not knowing about the event in the first place or some hidden factor we didn't think of. The point is that there's a filter and its effects aren't necessarily known to us, so it's hard to deduce what's wrong when some people don't show up.
It seems like figuring out problems like this is what marketing is all about.
> It could also be distance or lack of time or avoiding creepers or having more fun things to do or not knowing about the event in the first place or some hidden factor we didn't think of. The point is that there's a filter and its effects aren't necessarily known to us, so it's hard to deduce what's wrong when some people don't show up.
Except for the creepers issue (which can only be solved by bringing it up with the organizers), that's the fault of the attendee. You can't make connections if you're not there. Don't blame the organizers for that.
An issue with technical interviews is the shortness of the measurement. Real-life programming occurs over days, weeks, and months. You think about a problem and come up with progressively better solutions while thinking through and understanding the tradeoffs. It's more a marathon than a sprint. Technical interviews are like sprints. So it's as if you hired runners based on how well they can sprint, but then raced them in marathons.
Why not require programmers to submit samples of work (most likely personal side projects, due to legal issues) done over a period of time, the way graphic artists do when they apply for positions? I would think that would be a much more accurate assessment of the programmer.
Interviews should really be limited to checking personality, and not much more, IMO.
This seems like what the article suggested. The issue then just becomes that not everyone has the time to do side projects and some of us are lucky enough that we really enjoy what we do for our job, so even when we have spare time we still work on the company project.
While I agree with you, I've learned no company will stay as loyal to you as you are to them. You always need to plan ahead to fend for yourself. And having SOME kind of side project (no matter how small) is always advised, imho, for future employment.
I have found the most reliable measure for a candidate is looking at their public interactions/contributions. Seeing what projects they choose to work on in their spare time, how they speak at user groups, how they interact on an "issue" online. Blog posts are also great. For me, ability to communicate is such a huge part of being a successful developer, that being able to see how they communicate is always a very strong signal of intelligence, mastery, and interpersonal style.
So this works great for people who partake in such things, but there are clearly still many great developers (perhaps the vast majority) who don't have this type of extensive public profile. These are the applicants for whom I most fear false negatives, and the candidates for whom I am not confident in my interview techniques.
Could it be that the proper way to technically interview such a candidate is to offer them a "fellowship" to work on an open-source project? Point them at any project (used by the company or a favorite project of their own) that they'd like to contribute to, and offer them a small stipend and enough time to make a legitimate contribution to open source? And use that as the technical portion of the interview?
Is it maybe time to start allowing software professionals to have a portfolio? I would guess that companies are the main driving force behind not having portfolios that can be shown, but at the same time they are thereby hurting themselves because they can't properly evaluate employees.
This article seems to be written inside a different bubble than mine. The author lives somewhere where all technical interviews are conducted in a similar way, and assumes that is the universal truth everywhere.
In my bubble things aren't perfect, but they're quite different.
I think this would be a fair and effective interview system:
1) College graduate: Keep the traditional algorithm programming tech interview. But allow the use of a laptop (no whiteboard)
2) 2 - 7 years engineer: Require a github side project or give a home coding test. Interview will focus on discussing the project implementation. Also ask them to add a simple feature.
3) 7 - 15 years engineer: Ask them to come to the office and show them some bad code from the company source control. Ask them to explain why it's bad and to refactor it to something better.
4) Express route: Instant offer, no interview. Only "interview" is by a VP and it's more about trying to convince the candidate to join the company instead of the other way around.
The first 3 scale according to typical life situation and industry experience. The amount of free time tends to decrease from college graduation to the 30s and beyond. For example, a college graduate has lots of free time. Someone in their 20s has less, but still a significant amount. People in their 30s and beyond tend to have little free time due to family responsibilities.
But the interview styles also match what they should know by that point. A college graduate doesn't know anything about real industry coding so the typical algorithm coding interview is ok. Someone with 2 - 7 years experience should be good at writing lots of code. But they don't yet have the experience to know that sometimes deleting code is better than adding more. They also don't have enough experience yet to read code well and refactor, their "code smell" sense is not yet developed and they think adding more code is the solution to everything. An industry veteran of 7 - 15 years should be able to read code well, spot all the issues, and be able to refactor into something better. These skills can only be gained after years of experience.
So the 3 interview styles scale according to the free time candidates are likely to have, while testing whether they've really grown as engineers. The last one, the "express route", is reserved for referrals from the company's best engineers. For example, if the company has this tech ladder:
junior engineer, engineer I, engineer II, senior engineer, senior engineer II, principal engineer
Then only senior engineer II or above AND being at the company for at least 3 years would be allowed to make ONE express route referral per year. The reason is that an engineer has to be in the industry for a while to meet other good engineers. And 3 years is enough to learn the company culture. This express route system effectively would give a competitive advantage over other companies. For example, imagine a very good engineer who is already employed. They are busy and don't want to go through any kind of interview process. But if they are given an instant offer, that greatly reduces the barriers for them and they are much more likely to seriously consider switching to the company. The main idea here is that good engineers have a lot of options but not much free time to interview, your good engineers should know a few in other companies, so the express route is a way to get more good engineers who would usually not consider moving because they don't want to spend time interviewing.
I've seen two kinds of motivations: that for career climbing, and that of an engineer who enjoys making things. I personally prefer my engineers to have the second.
I tried hiring the normal way (CV, white board) and had one candidate worth speaking to in over 200 applicants.
We decided to try something else. We stopped reading any cover letter or CV and wrote that in the job ad. We asked candidates to build a simplified version of what the job was about, taking about 4 hours, then to send us the code which we'd discuss with them over the phone at their convenience. Anybody whose code passed would be hired, and if people couldn't be bothered because they were already famous they could send us their libraries. Nobody took the Starcraft best of 7 option :(
Why we did this:
- having to write code eliminates those who can't, or who can't be bothered; not having complex unrelated hoops to jump through eliminates those who are merely good at career climbing;
- we replaced the hours candidates would normally spend dealing with HR or travelling to a site for something ("cultural fit") that overselects the polished anyway, with something people we want would enjoy doing (solving a problem, writing code);
- no pressure, access to your "external brain" (Google, your own libraries) unlike with a white board; allows for the anxious to pass, and those who outsource a lot of their knowledge to the machine (like me);
- you can tell a lot about how people will approach their job from the way they approach the small version of it; documentation, structure, abstraction levels, choice of libraries, and if you're wrong about the candidate's reasons you can always clear it up in the chat;
- it's fair because everybody has to tackle the same problem;
- no discrimination since it's all ability based: I took to Skype-text-chatting with candidates instead of voice calls mostly because I don't like talking on the phone. It was only when she sent us her passport scan for the contract that we realised one of our candidates was female. Another team member didn't have a CV; it's because he was just finishing high school, as we discovered when we talked to him, but his code was good enough, so he got the offer and skipped university.
I've never had to put out another job ad (not for myself anyway) because I have so many qualified candidates left over from that round, and from the team network. Surprisingly, we didn't get spammed; the requirement to post code seems to have been mostly understood and we only had a half dozen "dear Sir/Madam"s.
So I disagree with you, but YMMV. We were hiring for a small, distributed team solving relatively well known problems. Requirements are probably different at Google or Facebook.
150 people did... and most had pretty good jobs already. 40 on the first day!
Think about it another way. How much time do you spend/waste when applying to a standard corporate job?
I graduated at the height of the Lehman fallout. I really wanted to work in finance. I applied to over 200 companies and did a seemingly endless string of interviews. In one case, a fund interviewed me an incredible 17 times for an analyst position (about half of which were "technical" i.e. "here's a bunch of accounts, tell us what you think of the company"). In every occasion that led to an offer, I spent way more than 4 hours travelling, talking with HR, doing various rounds of interviewing with various team members, having "social" coffees with other same-level folks... hell just the automated screening tests, essays, forms for somewhere like Goldman Sachs would take a good couple of hours if you did them properly. I still know many experienced people for whom a job search during an economic downturn means sitting for months at home doing these stupid screening tests and perfecting their cover letter (after all, putting the wrong company name in your letter is enough to get you rejected, because of course you really, really want to work at this particular bank and none of the competitors even though the reputation, work and compensation are identical).
I think a lot of more experienced people realised the trade-off and thought this was quite a positive signal on our part - at least based on the number of people who were willing to quit incredibly well paid jobs for our pretty average Asian retailer with no equity and a third of the pay. I had one prominent member of the open source community slam us publicly for "wasting people's time with free work" whilst one of his work colleagues was gleefully submitting his solution to the task...
The guy who wrote this is not even a real professional software developer[1], but it seems he has strong opinions on technical interviews anyway. In fact, I looked up a few people he mentioned in the article and none of them seem to be real full-time software developers either. Nevertheless, whoever has the microphone gets heard, I guess.
I have seen a lot of PM-ish/weak programmers cry about technical interviews. Yes, whiteboard interviews are not perfect and the false-negative rate is typically high, but the companies who want nothing but the best care less about false negatives and more about false positives. Anyone who has worked long enough in professional software engineering knows the enormous cost of false positives [2]. These sorts of companies also typically care less about which languages and frameworks a candidate knows at the moment. They instead put huge weight on evidence of whether a candidate can analyze a challenging problem, apply computer science, and obtain a solution. Why? Because these companies are actually working on problems that require deep, hard computer science.
Most - perhaps 80% - of software development houses are not like that. Programmers in those companies have never bothered to think about collisions in hash tables, and they often wonder why anyone would care when libraries take care of everything. They never need to create data structures for trees or graphs or even linked lists, because simpler things have always gotten the job done. They wonder why they are being asked questions about those "arcane" things when nobody really uses the computer science they were taught at school.
That's the culture problem. Those 80% of companies tend to copy the interview process of the other 20%, even though it is meaningless for what they do. Interviewers typically choose their questions from algorithms textbooks or even internet searches, then get stuck on them for a decade and proudly mention them as their "favorite question". That's completely wrong. I almost always ask CS questions that I myself needed to solve to get my job done. I typically retire my questions within a month, because by then I will probably have found another problem where applying the right algorithms and data structures was the key. When I ask candidates these questions: (1) there's little chance they're covered in books like Cracking the Coding Interview; (2) there's no risk of the candidate simply doing an interview brain dump; (3) I know the candidate is smarter than me if s/he can solve them within an hour under pressure; (4) if they fail miserably, I know this person could have screwed up our project just last month had they been hired.
Bottom line: ask questions from your actual day job. It takes a lot of skill and effort to abstract away the incidental complexity and form a good interview question, and that's why being an interviewer is hard. Whiteboarding works if you do that. It doesn't work if you are trying to copy companies doing a different kind of work than you do. It most certainly doesn't work if you never actually needed to solve the problem you are asking in order to get your own job done.
The more I interview, the less weight I tend to give to the technical part and the more I focus on evaluating how well I get along with the person, how easily and naturally we can discuss programming topics, and whether we happen to build a rapport.
People can learn technical stuff and grow professionally, but chemistry is harder to fix. And, contrary to how some programmers might see it, even code is communication and, a bit surprisingly, is subject to chemistry. You can talk to some people via code. Not all people.
I consider the technical part merely as a way to filter out people who seem to be missing something blatantly obvious. I'm fully aware that my system will sometimes generate false negatives but there's no such thing as a fully objective interview.
I have a few technical things or topics that I usually ask about and I expect every candidate to know at least a few of them but certainly not all. I generally go forward topic by topic until the candidate has at least N things s/he knows about. Of those that the candidate does know about, I ask more and more details until I run out of questions or the candidate runs out of answers. If I'm getting strong signals early that the candidate is likely to be ok, I'll just try to finish up the technical part quickly.
Then I proceed to the most important and revealing part, which seems to be asking about a project the candidate is really proud of, work or hobby. In the best case I get a detailed lecture on something the candidate built and is still excited to explain to someone, on the coolest things s/he managed to build into that project. A good programmer pours so much passion into some, possibly minuscule, part of what s/he's building that you should surely be able to draw some of it back out.
Another important part is asking about hobby projects: if one particular candidate doesn't program in his/her spare time, I'll probably go with any other candidate who does, unless s/he's exceptionally strong otherwise.
I try to remove pressure from the interview by acknowledging that given some rough preliminary filtering, I could go wrong either way. Maybe I reject a good candidate because I don't know as much myself. Maybe I accept someone who's really nice and who seems to know about things, passing my filters, but turns out s/he just can't produce much code hands down. All that will happen some day.
The trial period is there so that both parties can revert an obviously wrong decision. We haven't needed it yet, but I greatly prefer to postpone some of the responsibility until later and relax the actual interviews as much as possible. People don't like to be grilled and I don't like to grill people, not only because it's very draining but because it doesn't seem to be a particularly effective indicator.
There will never be a silver bullet to interviewing because there will never be a silver bullet to meeting people the right way, but if I were to suggest one thing it might be to listen more and ask fewer questions. By listening, I don't mean letting a babbling candidate take over the interview. By listening, I mean to figure out who is this person, where s/he's coming from and where s/he seems to be going.
Being the fastest and most accurate shooter isn't much of a value unless you know what you want to shoot, what needs to be shot, and why.