1.) Everyone studies these problems all the time and eventually they disappear from interviews.
2.) The other outcome is a dystopian field fueled by a race to the bottom, where everyone is practicing algorithm problems all the time. If you read the Blind forums, some people are completing 500-1000 leetcode problems before heading into interviews.
I'm putting my money on number 2, which is where we already are. Can only imagine what this is doing to code quality...
I started leetcoding again this year because I want to jump ship, and holy crap! I solved problems on leetcode 5 years ago (the last time I switched jobs) and it was pretty laid back. Nowadays I'm seeing dynamic programming with 3D memoization arrays treated like it's something normal. It all started as a way to check whether someone can write code and knows data structures and basic algorithms, but now it's at competitive-programming level.
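To give a flavor of what I mean by 3D memoization, here's a sketch (my own toy pick of problem, the classic knight-probability-on-a-chessboard question): a DP memoized over the three-dimensional state (moves left, row, column).

```python
from functools import lru_cache

# The 8 possible knight moves.
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
         (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_probability(n: int, k: int, row: int, col: int) -> float:
    """Probability that a knight starting at (row, col) on an n x n
    board is still on the board after k uniformly random moves."""
    @lru_cache(maxsize=None)  # memoizes the 3D state (moves_left, r, c)
    def prob(moves_left: int, r: int, c: int) -> float:
        if not (0 <= r < n and 0 <= c < n):
            return 0.0  # fell off the board
        if moves_left == 0:
            return 1.0  # survived all k moves
        # Average over the 8 equally likely moves.
        return sum(prob(moves_left - 1, r + dr, c + dc)
                   for dr, dc in MOVES) / 8
    return prob(k, row, col)

print(knight_probability(3, 2, 0, 0))  # 0.0625
```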
That's what happens whenever there is a competitive exam. Look at math olympiad papers from the 1970s and compare them to recent olympiad tests: today's are tougher. It's the same with many entrance exams in China, India, etc.: the older exams are easier than the recent ones.
People simply master the foundations behind the old test material, and that material becomes today's trivia. So examiners need more advanced material to separate the test-takers.
> some people are completing 500-1000 leetcode problems before heading into interviews.
Over the course of one year, I completed, classified, and commented on 200 leetcode problems. I also taught algorithms to third- and fourth-year university students not too long ago. I believe I write readable code, I know the language I'm using perfectly (at least for this purpose), I'm totally fine with complexity analysis, and I know most of the methods involved in these problems, including more advanced algorithms such as KMP.
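To be concrete, this is the kind of thing I can write from memory: KMP's prefix (failure) function, sketched in Python.

```python
def kmp_prefix(pattern: str) -> list[int]:
    """fail[i] = length of the longest proper prefix of pattern[:i+1]
    that is also a suffix of it (the KMP failure function)."""
    fail = [0] * len(pattern)
    k = 0  # length of the current matched border
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]  # fall back to the next-longest border
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

print(kmp_prefix("ababaca"))  # [0, 0, 1, 2, 3, 0, 1]
```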
Yet... I failed my round of interviews at Google. After all this preparation, I'm still not able to quickly solve an arbitrary leetcode problem in the context of an interview: on a whiteboard, with an interviewer at my back, in a stressful situation. I need to practice more if I want consistent results.
So I agree with your conclusion. We are competing with a lot of people who train on the same resources, including young graduates with a lot of free time on their hands.
On the positive side, I'm thankful to Google for giving me a shot. Based on my resume (40+, with little experience in the software industry), I'm not sure I would have gotten an interview at a more traditional company, say a bank.
To come back to my interviews: system design went very well. Algorithms went quite well too, but not well enough; I couldn't solve one problem and was a bit slow on another.
The recruiter first told me I had passed, and that they were going to find a team for me and make me an offer. But in the end they asked me to re-take the algorithm interviews a few months later, to "make a stronger point to the hiring committee".
That was about 8 months ago, and I haven't heard from them since. The recruiter doesn't answer my emails; I think she moved to a different position. Not sure what to do now.
There is nothing you can do. It's like being new to the dating world, except you are new to the recruitment world: when a company wants you, they will send you emails, call you, etc. When you don't hear from them, don't even bother sending emails. That's why you should interview with multiple companies, in the hope that one of them will offer you a job.
> Can only imagine what this is doing to code quality...
I don't get why no one seems to consider the possibility that these sorts of interviews actually do get high quality engineers in the door.
I get downvoted for raising the question every time. But isn't it possible this interview style actually works, even though it doesn't resemble real coding and even though many of us hate it?
I have yet to see any compelling argument for why I should believe these interview practices don't work. And the fact that so many companies, with ample resources to change things up if they felt it was in their best interest, keep interviewing this way must at least suggest the possibility that it works?
> I don't get why no one seems to consider the possibility that these sorts of interviews actually do get high quality engineers in the door.
That is debatable, to say the least. I'm both a hiring manager and on the market for a new job (so I am still solving leetcode problems in my spare time). After 3 years of hiring based on leetcode for technical competency, I can say that the quality of engineers is hit-or-miss. I've had people who wrote brilliant solutions to hard leetcode problems crash and burn when writing production code. On my current team, the most technical debt was written by someone who completely aced the leetcode stage and was PIP'd out a couple of months back. We even have an inside joke about looking both ways before changing X's code. He easily landed a job at a unicorn and I'm really glad he's their problem now.
I really don't believe there is a strong correlation between competitive programming chops and being a competent engineer in a team environment.
We are currently changing our interview practices to ask questions that touch on more practical issues (multi-threading, reviewing a piece of code, changing a piece of code, making a unit test pass, instrumenting code with metrics, etc.), because what I personally identified as a better signal was competency in specific types of leetcode problems such as LRU caches, O(1) data structures, and iterators for common data structures.
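For context, by an LRU cache problem I mean something like the following (a minimal Python sketch using OrderedDict; in a real interview the candidate would typically build the hash map plus doubly linked list by hand):

```python
from collections import OrderedDict

class LRUCache:
    """O(1) get/put cache that evicts the least recently used key."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
print(cache.get(1))  # 1
cache.put(3, 3)      # evicts key 2
print(cache.get(2))  # -1
```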
That's an interesting theory. But this problem isn't confined to industry; it affects academia too. And even treated as a purely statistical problem, there is enough data, with easy enough access to it, that we should already have research shedding some light on the hiring question.
You have everyone's interview scores, performance feedback, and promotion histories in a database at a company with tens of thousands of employees. You also have the interview scores for everyone who failed the interview process. Put a statistician on that for a day and you will get a lot of significant findings about your hiring pipeline.
It's not hard to do; the data just isn't public, and it never will be. Public researchers will therefore always lag behind private ones, since the private ones have access to the interesting data.
Edit: Also, it's not just a theory; I have seen internal studies on this myself.
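To sketch what a day of that work might look like (a toy example; the file and column names are all made up):

```python
import pandas as pd

# Hypothetical internal export: one row per candidate, with interview
# scores for everyone and performance ratings only for those hired.
df = pd.read_csv("hiring_pipeline.csv")

hired = df[df["hired"]]

# How well do interview scores predict later performance ratings?
print(hired["interview_score"].corr(hired["perf_rating"]))

# Compare score distributions of hires vs. rejects; hiring only from
# the top of the range restricts it and biases the correlation down.
print(df.groupby("hired")["interview_score"].describe())
```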
I didn't want to call it bullshit, but you are not the first to try to walk me down the bullshit lane.
Here is the thing: this discussion is not about scores or candidate performance. It's about whether the same candidate would be better assessed by a technical interview than by a non-technical one. Since you are not even addressing the question at hand, I call it bullshit.
Also, something for which there is no written proof and no consensus among at least some respectable people and/or institutions is just a theory.
And third, you assume the public sector is somehow an inferior player to the market, one that might not even have the relevant data. That again is a theory, and not necessarily a correct one; a few examples of publicly funded employers of software engineers: CERN, NASA, and the US Army.
> It's about whether the same candidate would be better assessed by a technical interview than by a non-technical one.
I don't understand the problem here: every technical interview is also a non-technical interview, since the candidate is still communicating with a human and not just doing problems on a computer. If we didn't care about the human-interaction part, we'd just put them in a room alone with a set of problems.
Also, I don't think you understand how much recruiters hate this process; they'd do anything to remove it, since they have no way to game technical interviews. They work hard to change the interview process at Google to something softer, like we have in other fields, but the evidence points to soft interviews being worse.
And you might not believe me, but I believe me, and the people deciding how these companies hire certainly believe the studies they run, so there is no way they will change the process. These interviews are here to stay; until someone comes up with a new method nobody has tried yet, they won't change. You can complain all you want, but the best companies will keep using this process as they scale up, since nothing else works at the moment. Other approaches might work for small companies, but they break down as soon as the founder can't interview everyone personally.
> Can only imagine what this is doing to code quality...
Why would studying algorithms and data structures hurt code quality? The people studying them could just as well learn to write quality code once they're inside the company, no?
This race to the bottom, as the parent poster said, comes from a generation of developers hyper-specializing in interview-style problems. These problems are tiny and self-contained and have a slick solution that can be regurgitated onto a whiteboard in about 30 minutes, give or take.
While that is not a negative skill to have, it's also not a skill I'd list anywhere in the top 25 most valuable skills for a productive developer.
Because these problems encourage you to write unreadable code. It makes sense when you write it, because you can fit it all in your head, but you never have to revisit it after solving the problem. That encourages one-letter variable names and other quick hacks in the name of speed.
They work against creating readable, understandable, and debuggable code, which in general is much more important than being able to solve algorithmic problems you'll almost never see in real life.
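To illustrate with a made-up toy problem, here's the same function in contest style and then written for the next person who has to read it:

```python
# Contest style: fast to type, fits in your head for 30 minutes.
def f(a):
    m, b = {}, 0
    for i, x in enumerate(a):
        if x in m:
            b = max(b, i - m[x])
        else:
            m[x] = i
    return b

# The same logic, written to be read.
def max_gap_between_duplicates(values):
    """Return the largest index distance between two equal values."""
    first_seen_at = {}
    best_gap = 0
    for index, value in enumerate(values):
        if value in first_seen_at:
            best_gap = max(best_gap, index - first_seen_at[value])
        else:
            first_seen_at[value] = index
    return best_gap

print(f([1, 2, 1, 3, 2]))                           # 3
print(max_gap_between_duplicates([1, 2, 1, 3, 2]))  # 3
```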
I've seen this firsthand: some people were brilliant at these problems but wrote the worst code imaginable. I would rather hire someone who can write clean and simple code and teach them how to solve these problems than the reverse.
Because producing simple and good code is much more important.
It's quite condescending and narrow-minded to say that solving algorithmic problems is thinking, but writing clean and simple code isn't.
Writing readable code is much more than formatting and decent variable names; it's about simplifying your design just enough. Code review is not enough to teach someone this.
> Because these problems encourage you to write unreadable code.
First off, no: writing good code in an interview wins you additional points. Second, why do you think candidates can (or will) keep writing unreadable code on the job? New employees aren't given free rein to check in code from day 1 at most places - trust has to be earned. And I don't think any decent company allows check-ins without review.
Code reviews are good at catching oversights and at giving design & implementation pointers to people acting in good faith. Senior talent doesn't have the time or energy to push back on all of a systematically incompetent person's code until it's good. See the bullshit asymmetry principle. And once you hire two of these people, they'll just review each other.
> giving design & implementation pointers to people acting in good faith
Why automatically ascribe bad faith to people who study for coding interviews? They went to all that trouble to get better at something, so they're obviously diligent and seek self-improvement.
> Senior talent doesn't have the time or energy to push back on all of a systematically incompetent person's code until it's good.
That sounds like a problem with the company's timelines or priorities. If senior talent is so strapped for time that they can't insist on decent designs upfront, then they likely can't hire the right people either, because even that takes time and energy. Mentoring juniors is part of the job for senior engineers.
The time to review a diff is proportional to how much work it needs. Code reviews that are within the normal bounds of “needs mentorship” don’t cost that much. Productivity is noticeably down across the board during intern season, though.
It’s more than a full-time job to push back on all of a bad hire’s output, and people are accountable for their own projects as well. Things inevitably get to “fuck it, good enough.”
Because you get the college-applications problem: you stop getting the people who have a real passion for programming and coincidentally have problem-solving and algorithms skills, and start getting people who are really good at problem solving and algorithms and may or may not have a passion for programming.
> start getting people who are really good at problem solving and algorithms and may or may not have a passion for programming.
Why is "passion" so important? And what even is passion? For professionals in every other field, competence, ability to deliver results, and getting along with people, are what matter. Many pros are passionate, in the sense of loving their work, but passion isn't a prerequisite for being a pro.
Oy vey. I didn't mean passion in the "I'll work 80 hours a week for little pay, mister!" kind of way. I misspoke. I meant that before, if you based your interview process on data structures and algorithms, you'd get competent professionals who just happened to be good at algorithms, whereas now you'll get a bunch of people who have specifically trained to be good at algorithms, which has no bearing on competence.
Why can't people who study to get good at algorithms also study to write better code? They've already demonstrated their aptitude for learning difficult things.
For one, interviewing requires extremely short-term knowledge. Almost everybody I know who plays the interviewing game learns just enough to pass the interviews, then immediately forgets it until the next time. So it's not really comparable to learning and refining a difficult topic over the course of a few years, as with software development practices.
Also, it's two completely different skillsets! There's something very weird about interviewing for X but then demanding Y. I don't ask my primary care physician how good he is at surgery and then conclude, "he knows how to learn difficult tasks, so we're good".
> Almost everybody I know who plays the interviewing game learns just enough to pass the interviews, then immediately forgets it until the next time
Presumably the first time they learned computer science fundamentals, it took them some time, right? People usually go to college for several years to learn this stuff - or do 6-month bootcamps at 40-60 hours/week. Subsequently, of course, it takes less time to refresh that knowledge.
> Also, it's two completely different skillsets! There's something very weird about interviewing for X but then demanding Y
I'm not disagreeing here. I just think most companies that use this interviewing style figure that X (easy to measure in 45-minute interviews) is a reasonable proxy for Y (much harder to measure). And Y (writing good code) is easier to train new employees to do as long as current employees maintain standards in code reviews.
> Algorithms interviews are the only thing with an actual bearing on their future earnings.
Do they not want promotions or pay increases at their current jobs before leaving? Usually you have to do good quality work to get promoted.
Please read my original comment: https://news.ycombinator.com/item?id=20727948. People who lack these skills can learn after getting hired. Any software company worth working at has a code review and design review process.