> can quickly learn new skills. That's exactly what leetcode tests for
I don’t know where people get this idea. Leetcode used to test programming proficiency
I feel the only reason companies continue to take leetcode to absurd levels is that (a) they think harder questions will get them better engineers and (b) other companies are doing it
The discussion is exactly this: leetcode is used to test programming proficiency, but it's an inadequate bar.
The parent is postulating that it's a lie; that actually leetcode tests for malleability, and we're getting bent out of shape because we think it's a poor test of coding ability.
It might, however, be a good test of malleability, and that might actually be its intended purpose, but that's not communicated.
I don't know about malleability, but I know Big Tech uses it to filter down the number of applicants. They get 200 applications in an hour so HR needs a way to pick a few that won't get them sued.
Not quite, but almost: leetcode doesn't evaluate programming proficiency but the ability to solve complex and abstract problems using programming. That doesn't require being good at programming, only being good at solving problems and expressing the solution in a programming language. It doesn't require knowing a language well, just well enough to solve the problem.
This is fine, except when the candidate knows substantially more than the interviewer about how to solve the problem. I was once asked to implement an SPSC (single-producer, single-consumer) circular buffer, and I used C++11 atomics rather than a mutex. I wrote up a working solution, but most of the interviewer's follow-up questions fundamentally didn't map to my wait-free solution. I spent most of the time explaining to the interviewer how atomics and non-blocking algorithms work. The FAANG rejection email came a few days later.
So I don't think these interviews evaluate the candidate's ability to solve complex and abstract problems. They seem to evaluate the candidate's ability to grind contrived sophomore-level computer science homework. Experience solving the real-life version of the problem rather than the academic one is unwanted.
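For reference, the wait-free approach described above can be sketched roughly as follows. This is a minimal illustration under common assumptions (one producer thread, one consumer thread, one slot left empty to distinguish full from empty), not the commenter's actual interview solution; production versions add cache-line padding, move semantics, and so on:

```cpp
#include <atomic>
#include <cstddef>

// Minimal wait-free single-producer/single-consumer ring buffer.
// The producer only writes tail_; the consumer only writes head_.
// Acquire/release pairs on those indices order access to buf_.
template <typename T, size_t N>
class SpscRing {
    T buf_[N];
    std::atomic<size_t> head_{0};  // next slot to read  (consumer-owned)
    std::atomic<size_t> tail_{0};  // next slot to write (producer-owned)
public:
    bool push(const T& v) {        // call from the producer thread only
        size_t t = tail_.load(std::memory_order_relaxed);
        size_t next = (t + 1) % N;
        if (next == head_.load(std::memory_order_acquire))
            return false;          // full (holds at most N-1 elements)
        buf_[t] = v;
        tail_.store(next, std::memory_order_release);
        return true;
    }
    bool pop(T& out) {             // call from the consumer thread only
        size_t h = head_.load(std::memory_order_relaxed);
        if (h == tail_.load(std::memory_order_acquire))
            return false;          // empty
        out = buf_[h];
        head_.store((h + 1) % N, std::memory_order_release);
        return true;
    }
};
```

Neither operation ever blocks or retries, which is exactly why mutex-oriented follow-up questions ("what if the lock is contended?") stop mapping onto it.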
this illustrates the problem very well. your lock-free solution is superior, yet the interviewer had no clue, and you only know about this solution due to experience, which is not what leetcode optimizes for.
String problems are rare, though they may come up more at search engine companies.
But manipulating sequences comes up a lot in many contexts. Usually it's sequences of more complex things: instructions, packets, files, polygons. It's hard to pose test questions directly on those, because you have to explain the objects themselves first. So it's a reasonable shortcut to test the ability to manipulate sequences of characters.
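The proxy argument can be made concrete with a toy example of my own choosing (not from the thread): the same sequence operation, here collapsing adjacent duplicates, works unchanged whether the elements are characters or heavier domain objects, so the character version exercises the same skill:

```cpp
#include <string>
#include <vector>

// Collapse runs of adjacent equal elements into a single element.
// Works for std::string, std::vector<int>, or a vector of packets/
// polygons -- anything with push_back, back, empty, and operator==.
template <typename Seq>
Seq dedup_adjacent(const Seq& s) {
    Seq out;
    for (const auto& e : s)
        if (out.empty() || !(out.back() == e))
            out.push_back(e);
    return out;
}
```

The character case (`dedup_adjacent(std::string("aabbbc"))`) and the packet case differ only in the element type, which is the shortcut being defended above.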
I am on the interview team for the company I work for and I wrote what some consider to be our canonical python solution (fastest and simplest) for one of the problems we use. I had to look up the syntax for a "for loop" because I hadn't written python since my first year at university. To me, being able to stitch together simple concepts present in almost all mainstream programming languages doesn't mean I know python, it means I know how dynamic programming works. I don't know the python ecosystem (history, present, or future), know the standard library, know what idiomatic code looks like, deeply understand what makes the language unique, have the mindset of a python programmer, etc. Maybe I just have a high threshold for feeling like I "know" a language.
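As a stand-in for the kind of problem being described (the company's actual question and its Python solution aren't shown in the thread), a classic bottom-up dynamic program needs little more than loops, an array, and a min(), concepts present in almost every mainstream language:

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// Hypothetical example problem: minimum number of coins from `coins`
// summing to `amount`, or -1 if impossible. dp[a] holds the best
// count for sub-amount a, built up from dp[0] = 0.
int min_coins(const std::vector<int>& coins, int amount) {
    std::vector<int> dp(amount + 1, INT_MAX);
    dp[0] = 0;                              // zero coins make zero
    for (int a = 1; a <= amount; ++a)
        for (int c : coins)
            if (c <= a && dp[a - c] != INT_MAX)
                dp[a] = std::min(dp[a], dp[a - c] + 1);
    return dp[amount] == INT_MAX ? -1 : dp[amount];
}
```

Nothing here is language-specific, which is the commenter's point: knowing how to fill a DP table transfers across languages even if the local `for` loop syntax has to be looked up.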
i did a leetcode interview in c++ and didn’t remember some syntactic details of lambdas and that was the end of it. please stop with this nonsense that interviewers are understanding and that you can look things up or that you will be able to take your time to produce the optimal solution to a problem.
Well I can't speak for other interviewers especially at other organizations, but I try very hard to make it a positive experience. In fact I'll give as much help as necessary to get to a working solution, within reason. I would much rather fix the lambda syntax myself or help the candidate find an example on stackoverflow so we can go back to the interesting part. If the interview ends with a working solution but needed a ton of help, the candidate probably did not pass but maybe they learned something. And maybe they'll tell their peers that it seemed like a place where they could fail without embarrassment, collaborate, and learn.
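For context, the syntactic details being argued about are small. A generic sketch (my example, not the interview question) of the capture list, parameter list, and optional trailing return type of a C++ lambda:

```cpp
#include <algorithm>
#include <vector>

// [capture](parameters) -> return_type { body }
int demo() {
    int offset = 10;
    auto add = [offset](int x) -> int { return x + offset; };  // capture by value
    std::vector<int> v{3, 1, 2};
    std::sort(v.begin(), v.end(),
              [](int a, int b) { return a > b; });             // descending order
    return add(v.front());  // v.front() == 3 after the sort, so 13
}
```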
This doesn't seem right at all. I can kludge together a solution to an arbitrarily complex problem in any language, but that doesn't mean I'm writing it in a way that can be extended and built upon by other people. And the latter is what takes up the majority of development time.