The only problem that I have found is that in some situations it doesn't report errors correctly when adding new dependencies. Refreshing the window or rebuilding the project usually helps.
On the other hand, the compiler feels very solid. It can be annoying in the beginning because of how disciplined the code needs to be, and the error messages can be hard to understand.
After getting used to that, the developer experience is really smooth, especially during complicated refactoring; the compiler and the IDE make things almost boring.
I credit any of my "success" to the struggles I had as a child. Our world isn't perfect, and some of us experience different & harsher aspects of its flaws. I think the earlier someone experiences these flaws, the more they learn to adapt and realize truths of human nature. We learn that many things aren't going to be handed to us, and that to get what we want we will have to work a bit harder, or struggle a bit more (in the military we called this "Embrace the Suck").
We learn that our families aren't the same as the ones we see on our favorite TV shows. It hurts & it's sad for young children to experience a struggle, but I think struggles can ultimately strengthen them -- unless the difficulties are so severe, or misguiding, that they lead children down the wrong path in life (e.g. severe psychological disorders, crime).
Again though, the world/nature is a tough place, and it's nice to want to go easier on children and conceal harsh realities from them, but I think there is something of value in experiencing "realness"/struggle -- a simple & less-harsh example is the losing team getting no trophy.
I was deeply depressed and anxiety-stricken during my entire adolescence. After dropping out of university at 20 I was diagnosed with ADHD and bipolar disorder. While sometimes I wish I'd had a normal life with parents who taught me good life lessons, in hindsight I feel like I've learnt even more fundamental lessons and have experienced some "truths" that I would not have experienced otherwise.
Depends on the approach; you can use mutable or immutable containers. In fact, the OpenVZ VPSs that were at one time reasonably popular were just containers.
I hate the Cracking the Coding Interview/Leetcode style... studying for these types of interviews is annoying. Trying to find a good video on YouTube, where they aren't just naively coding up the brute-force->optimal possible solutions, is especially irritating. It is literally a landscape of college kids with thousands of viewers who treat these interviews like a standardized test (SAT, GMAT). Even the author of the book produces videos with very little insight or meaningful content.
"Find all the subsets in a set that add up to sum" -- "Okay for this we will use the sliding window technique and here is how it is done" -- WTF is this. I get that they want to see problem-solving skills, but this is on a different level requiring the interviewee to have studied and knowledge of the technique, otherwise we are basically trying to develop efficient algorithms from scratch and in little time. --This makes sense for college interviewees who have only studied the past 4 years, but for a professional with experience, why is this adequate??
Does algorithmic programming matter? -- still yes. But the way it is interviewed is absurd and inadequate. I had a production service centered around the stable roommates problem. It took me a week or two (mostly research) to develop something and fit it into our codebase. It then took 1-3 more weeks to actually make it work for us and cover edge cases (e.g. Irving's algo quits on instability -- this isn't an option in the real world). I read a lot of material on the subject and others' code, had many deep-thinking sessions where I was mostly in my head, wrote unreadable scratch on paper, collab-whiteboarded (sometimes arguing), tested & failed PoCs, and took many breaks in between it all. How successful was that project? Very. Did I need to know and study techniques with a lacking/meaningless basis to do it? No.
Recall when you first learned how to program. Something as trivial as iterating through a list requires some thought. What is the syntax for a "for" loop? Do you start with i = 0 or i = 1? Should you end at i < n or i <= n? At some point you stop thinking about it and write "for (int i = 0; i < n; i++)" instinctively. You can now solve harder problems that require iterating through a list without thinking about it.
All of algorithms and programming is like this. You unlock harder problems as you learn more problem solving building blocks (whether it's an algorithm, a software design pattern, or some API call). A programmer is someone who can learn these on the fly for any problem of any domain. An effective programmer is someone who has a large cache preloaded with these building blocks already.
In that sense I find leetcode style problems to be very fair. They are meant to be solvable in under an hour, almost without thinking, once cached into muscle memory. All it is testing is whether you're capable of becoming an effective programmer in any domain. All you need to do is warm your cache with a small number of standard patterns (which might even be useful for real work). It does suck that even the good programmers need a few weeks to warm their cache. But it weeds out the fakers who can't do it given any amount of time.
> In that sense I find leetcode style problems to be very fair. [...] It does suck that even the good programmers need a few weeks to warm their cache. But it weeds out the fakers who can't do it given any amount of time.
It also weeds out people who have better things to do than cram two weeks for your pretend-meritocratic little exam.
How about requiring that candidates comment their code using quotes from Classical Chinese poetry? They are proven timeless classics that an intelligent person can apply to any situation. This test would weed out the fakers who can't refresh their caches while also honoring an ancient tradition of stupid job interviews, the Chinese imperial examination.
Algorithms and coding are a lot more relevant than Chinese poetry.
Yes, you might need to prep for two weeks in order to get a job that pays very well for 2-15 years. And since this test is so common, you don't have to dedicate those 2 weeks to just one potential employer.
If you can't freshen up on standard basic algorithms and coding in 2 weeks for an interview, how can you do it for whatever complex problems you'll face at work?
You can't hire someone solely based on what they claim they built.
I see a lot of people complaining that they can't take their senior engineer rank from company A and jump into company B at the same level without even justifying themselves, let alone working their way back up. To me that's utterly disrespectful to the domain-specific knowledge and experience that is needed to be a good senior engineer. Someone thinks they are obviously good enough to be a senior employee somewhere, but for some reason isn't good enough to build something valuable on their own, and isn't able to demonstrate skills in a face-to-face meeting.
> It also weeds out people who have better things to do than cram two weeks for your pretend-meritocratic little exam.
All interviewing techniques have to make precision/recall-style trade-offs. The mere fact that an interview method has false negatives surely doesn't disqualify it. It has to be compared against the available alternatives. What are the alternatives?
- Whiteboarding? Algorithmic knowledge is often tangential to the actual job.
- Take home assignments/mini projects? High relevance to job, but in my experience takes the most time for the candidate.
- Trial period? Most people can't just drop everything they are doing to come hang out at your company.
- Conversational interview. Like whiteboarding, tangential to the actual job. My experience on the interviewing side is that it is often hard to learn much about the candidate.
- Read their code on github / blog. Lots of candidates don't have the time or inclination to code outside of work.
- Something else?
So what's your preference? I've done them all and find them all to be lacking in different ways.
> How about requiring that candidates comment their code using quotes from Classical Chinese poetry? They are proven timeless classics that an intelligent person can apply to any situation. This test would weed out the fakers who can't refresh their caches while also honoring an ancient tradition of stupid job interviews, the Chinese imperial examination.
This seems like the fallacy of grey to me [1]. When hiring, for example, a web developer, yes, algorithmic knowledge is a somewhat arbitrary indicator to use, but it is not completely arbitrary. Not all things are equally unlike.
If I were hiring for a basketball team, and had to choose between two candidates neither of whom had experience playing basketball and who were alike in all ways except that one was an avid soccer player and the other was equally fervent about pottery, I would choose the soccer player. The logic, of course, being that basketball and soccer have more in common (athleticism at the least) than basketball and pottery.
Likewise, algorithmic thinking shares some common points with almost any kind of engineering task.
Interviewing is just a hard problem where you are trying to predict future performance based on a few hours' worth of data. I don't think most of the popular techniques we have are obviously stupid. Companies have strong incentives to make hiring efficient, but there just isn't a lot of low-hanging fruit. Of course there are the occasional egomaniac interviewers, but an egomaniac is going to be able to ruin any type of interview. Let's not throw out the baby with the bathwater.
> If I were hiring for a basketball team, and had to choose between two candidates neither of whom had experience playing basketball and who were alike in all ways except that one was an avid soccer player and the other was equally fervent about pottery, I would choose the soccer player.
That’s not at all what happens in programming interviews that use algorithmic puzzles.
You have candidates who already have a professional track record in basketball, and instead of focusing on that profile and whether it’s a good fit for your team, you give them a timed soccer workout because it’s somehow a more objective measure of athletic ability.
Any basketball team that hires like that wouldn't survive for long. The quiz interview format in the tech industry is a form of "anti-Moneyball". It works for the SV giants because they have an enormous supply of candidates and they need generic competence that can be shuffled around. Smaller companies would do much better to hire for the actual role, not for "Cracking the Coding Interview" memorization performance.
The basketball was just a metaphor to explain relative similarity. Obviously hiring for an actual basketball team is a very different set of challenges, as they literally have hundreds of hours of videotaped performance history to evaluate before the candidate even walks in the door, and at lower levels, asking candidates to spend several days "trying out" is not considered onerous. But I would still argue that performance on programming puzzles correlates much more closely with programming job performance than knowledge of Chinese poetry does.
As for hiring people based on their experience profile, it's great of course in the case of candidates with lots of open source contributions and such, but this has the issue of ignoring the majority of candidates who don't contribute to open source. Should being an open-source contributor be a hard requirement?
But if you are suggesting that a resume with the words "5 years experience web development at company x" means anything, I'm a little incredulous. I worked with people who claimed to have far more experience than that and struggled horribly with even the most basic tasks.
Finally, a little tangential, but memorization gets a lot of flak for being a "stupid" skill. My experience is that it is nearly impossible for adults to memorize something like Chinese poetry "by rote." Indeed, if you try memorizing some poetry I think you'll find that it really is a very fulfilling and creative process.
1. Filter candidates based on fairly simplistic early models for personality profile, motivational bias, and metacognitive disposition cues.
2. On a subset, refine the above filtering further w/ another 1 or 2 interactions looking for inconsistencies and/or stressing facets of the model that seem contraindicating or hard to suss out.
3. Prep the team on what & how to assess and bring a small number of candidates on-site for a few hours, to work directly with the members of the team they'd be joining, and have them work together on exactly the work they'd be doing.
So I put in a lot of work ahead of time as a hiring manager to understand what kind of role I need to hire for, what kind of person would likely be successful in that role, and what kind of person would likely be successful working with the team that exists (or will be built). Then I completely avoid some contrived pile of quizzes and weak competence signals by instead directly using an actual work environment w/ the same people, the same meetings (stand-up, design review, etc.), and the same tasks that both we & they would be cooperating on together.
Seems like a nice approach. I've been through similar setups and I think there are still tradeoffs though. Step #3 is tricky as you need a task complex enough for the candidate to show their skills, but it has to fit within a few hours. Unrelated to interviewing, I routinely under or overestimate task size, so carving out just the right task can be difficult. New features and bugs often have unknown unknowns.
The "contrived" puzzles approach has the advantage that each candidate can be given (and thus evaluated on) the same task. The size and perquisite knowledge for the task can be well controlled and since the problem is not new to the interviewers, we know how to present it in an easily understandable way and help them if they get stuck.
I think another reason why the "general cognitive ability" approaches are popular is because employees (especially at small companies) need to be good at such a wide range of tasks that it is not realistic to evaluate even a fraction of them in the span of a few hours.
I don't have any expectation they complete the task. That's not really the point, so time-bounding it isn't something I worry about. The point is that they're able to engage and contribute in some way: insightful feedback, mentoring, reintegrative learning, productive collaboration, etc. depending on the shape of role that's being hired for.
The tasks can be writing docs, writing requirements, writing property-based tests, writing CLI tools for developer ergonomics, formally modelling and verifying a scheduler, designing a DSL for safety constraints, or characterizing the electrical interfaces of a car's steering system. It's not entirely material what the tasks are. There will be opportunities in any of those to get good indicators of the important factors.
FWIW, evaluating people using the same contrived task is fairly unlikely to be as consistent as purported, and even if it were, the value of what you can actually meaningfully derive from it is still deeply questionable. As a result, the reality is more likely that those situations produce negative net benefit.
In my experience, this is where the house of cards collapses. For most programming jobs, none of that will ever be useful. And on the rare occasion that it is, you'll have gotten by without needing it for so long that you'll have to look it up and more or less learn it again from scratch anyway.
I find these sorts of things to give far too little useful information during a developer interview.
It's an aggressive filter. We sometimes give a post-interview take-home programming problem that involves no trickery or algorithm discovery/invention. It's just to verify they can code and to see their style. You should be able to determine whether a person can code from resume and interview alone, but sometimes I can't tell until the exercise. People suck at interviews, so interviewers fall back on hard problems. I suppose that works for big companies with lots of applicants.
If you have an interview with complicated, tangled questions with too many things to handle, or questions that require knowledge of a special algorithm, then either you fail because you haven't come across it, or you pass because you know the answer and pretend that you don't. Those interviews literally become the SAT. I think it is a good strategy when it comes to giant companies (applicant's job demand>>supply), just as a high SAT score is relevant when it comes to the top schools. It signals to employers that either you're a genius who has done a lot, or you want the position badly enough to go through the pain of studying and memorizing them. Either is a good thing to have.
I think for smaller companies, posing those crazy hard questions and expecting the best answer is a bad idea.
I have seen companies that execute the tech interview very well, though. The questions don't require specific knowledge of an algorithm. The interviewer gave hints when the interviewee needed help. I got asked questions that I had never come across but managed to come up with a great answer for by the end of the session. That's a sign of companies that know what they are doing.
After interviewing with several companies of different sizes with a mixture of good/bad interviewing processes, I realized the YouTube tutorials and the books weren't wrong. But not everyone has a bad interviewing process. And you probably shouldn't expect a non-tricky one from a giant tech company that everyone wants to get into.
It's like dating the hottest girl in high school -- you know the name of the game. But for a real happy relationship, maybe you don't need to chase the hottest girl. Maybe you should look for smaller companies that like you as much as you like them.
> I think it is a good strategy when it comes to giant companies (applicant's job demand>>supply).
Is it? I can't help but feel that if, for instance, Google had used a sensible interview process and hired Max Howell, they might have gotten their act together on golang's package management a little earlier.
I started interviewing people this spring, and since I hate algorithmic questions, I didn't look up any on the internet; I just wrote my own questions based on some data manipulation I've done in real work. They are basically data-normalization questions (given a 2d array of data from the db, populate this class). But I found these to be a very good filter. You only need to be comfortable with the Java Map and List interfaces, which are the bread and butter of a Java dev in my experience. Yet still 75% of people couldn't solve these questions. However, the other 25% nailed it, and we made a couple of good hires.
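To give a flavor of what I mean, here's a toy reconstruction (not our actual question; the row layout and names are made up): given rows of (orderId, customer, item) from the db, group the items per customer.

```java
import java.util.*;

public class NormalizeRows {
    // Group items per customer from flat DB rows of (orderId, customer, item).
    static Map<String, List<String>> itemsByCustomer(Object[][] rows) {
        Map<String, List<String>> result = new LinkedHashMap<>();
        for (Object[] row : rows) {
            String customer = (String) row[1];
            String item = (String) row[2];
            // computeIfAbsent creates the list the first time a customer appears
            result.computeIfAbsent(customer, k -> new ArrayList<>()).add(item);
        }
        return result;
    }

    public static void main(String[] args) {
        Object[][] rows = {
            {1, "alice", "keyboard"},
            {2, "bob",   "mouse"},
            {3, "alice", "monitor"},
        };
        // prints {alice=[keyboard, monitor], bob=[mouse]}
        System.out.println(itemsByCustomer(rows));
    }
}
```

Anyone genuinely comfortable with Map and List should get to computeIfAbsent (or an explicit containsKey check) within minutes.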
All this to say, if interviewers actually spend half a day and write practical questions that test the skills used in real work, it IS possible for both the interviewer and interviewee to be happy.
Also for the in-person, I just delete an existing integration-test we have and give the candidate our laptop and we pair program on rewriting it. This has worked well.
No memorization or trivia or tricks, just standard development exercises.
Side note/rant: I hate the Cracking the Coding Interview style... studying for these types of interviews is annoying. Trying to find a good video on YouTube, where they aren't just naively coding up the brute-force->optimal possible solutions, is especially irritating. It is literally a landscape of college kids with thousands of viewers who treat these interviews like the SAT. Even the author of the book produces videos with very little insight or meaningful content.
"Find all the subsets in a set that add up to sum" -- "Okay for this we will use the sliding window technique and here is how it is done" -- WTF is this. I get that they want to see problem-solving skills, but this is on a different level requiring the interviewee to have studied and knowledge of the technique, otherwise we are basically trying to develop efficient algorithms from scratch and in little time. --This makes sense for college interviewees who have only studied the past 4 years, but for a professional with experience why is this adequate??
They are not just testing your analytical skills but also, I believe, your ability to self-study for something, even something as "annoying" as algorithmic coding problems.
I kinda agree with you that it doesn't make much sense if you have to specifically prepare for the coding interview with stuff you may never use in your job. But it's not a lot of stuff: I bit the bullet and spent some time solving those questions, and now I can make it past almost any screen.
It's really not that hard, especially if you have a CS degree. It would probably take 1 week of dedicated effort to get better at it.
How is proving self study ability relevant to a job? Doesn't my resume of wildly varying projects and my ability to competently talk about them prove that?
It sounds to me like a way to weed out pesky applicants who have families or who are simply older.
> How is proving self study ability relevant to a job? Doesn't my resume of wildly varying projects and my ability to competently talk about them prove that?
People usually trust their own assessment of a candidate much more than that of others. While your projects might help generate interest in you and get you an interview, the actual interview process is meant to be an assessment by the company conducting the interview. So you shouldn't automatically assume you have the job simply based on your past projects alone. I'm not saying I necessarily agree with how this works; I am simply pointing out why it works that way.
> It sounds to me like a way to weed out pesky applicants who have families or who are simply older.
Perhaps. It seems unlikely since many of the senior developers/hiring managers at most mature companies are older and have families.
> While your projects might help generate interest in you and get you an interview, the actual interview process is meant to be an assessment by the company conducting the interview. So you shouldn't automatically assume you have the job simply based on your past projects alone.
That's insane. If you brought me in because you liked what's on my resume, your number one priority should be determining if I really did what I said I did. Your priority shouldn't be arbitrary, unrelated questions or coding challenges pulled off some website.
I'm not aware of any other industry that behaves like this.
> That's insane. If you brought me in because you liked what's on my resume, your number one priority should be determining if I really did what I said I did. Your priority shouldn't be arbitrary, unrelated questions or coding challenges pulled off some website.
There generally is a component where you're asked about your past projects in detail. It's just not the only component.
I don't have a week to dedicate to crap like that. I do have 15 years of experience building stuff people actually need, and I am reasonably good at it.
Exactly. 95%+ of the work out there is more about plumbing, and a decent design to plumb things together. On the rare occasion when I have had anything algorithmic to do, I Google for similar problems.
The last time I had a proper algorithmic problem, I was in an academic institute and we had a guy researching algorithms, so I asked him (though he didn't come up with much). It was basically a variation on the stable roommates problem. I came up with a solution -- brute force plus a load of optimization -- and it worked well enough for the purpose.
When I get that crap in interviews, some of the time I get the answer, some of the time I don't. It's pot luck. It says very little about my skill for the jobs I am applying for. Ironically, I was way better at answering those things 15 years ago, when I was fairly crap at building reliable, robust systems.
OP of the comment here... Actually, I had a production service centered around the stable roommates problem. It took me a week or two to develop something and fit it into our codebase. It then took 1-3 more weeks to actually make it work for us and cover edge cases (Irving's algo quits on instability -- this isn't an option in the real world). I had many deep-thinking sessions where I was mostly in my head, writing scratch on paper, collab-whiteboarding (sometimes arguing), or testing PoCs.
The success resulted from deep research and much trial & error. It was no magical "algorithmic skillset" of the kind they expect in those types of interviews (I wonder if those are even a good filter for actual production algos).
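To make the edge-case point concrete: the shape that survives in production is roughly "exact algorithm, with a fallback for when it gives up". A hypothetical Java sketch of that shape (the solver interface and the greedy fallback are illustrative, not the actual service code):

```java
import java.util.*;

public class RoommateMatcher {
    /** An exact solver (e.g. Irving's algorithm) returns empty when no stable matching exists. */
    interface ExactSolver {
        Optional<Map<Integer, Integer>> solve(int[][] preferences);
    }

    // Production shape: try the exact algorithm; if it reports "no stable
    // matching", fall back to an approximation instead of giving up.
    static Map<Integer, Integer> match(int[][] preferences, ExactSolver exact) {
        return exact.solve(preferences)
                    .orElseGet(() -> greedyFallback(preferences));
    }

    // Fallback: pair each still-unmatched person with their most preferred
    // still-unmatched partner. Not guaranteed stable, but everyone gets a roommate.
    static Map<Integer, Integer> greedyFallback(int[][] preferences) {
        Map<Integer, Integer> pairs = new TreeMap<>();
        Set<Integer> unmatched = new TreeSet<>();
        for (int p = 0; p < preferences.length; p++) unmatched.add(p);
        while (unmatched.size() >= 2) {
            int p = unmatched.iterator().next();
            unmatched.remove(p);
            for (int candidate : preferences[p]) {
                if (unmatched.remove(candidate)) { // best still-available choice
                    pairs.put(p, candidate);
                    pairs.put(candidate, p);
                    break;
                }
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        // Four people; preferences[i] lists the others in descending preference.
        int[][] prefs = { {1, 2, 3}, {2, 0, 3}, {0, 1, 3}, {0, 1, 2} };
        // Stand-in exact solver that always reports "no stable matching exists":
        ExactSolver alwaysUnstable = p -> Optional.empty();
        System.out.println(match(prefs, alwaysUnstable)); // prints {0=1, 1=0, 2=3, 3=2}
    }
}
```

The exact algorithm is the part you research for a week; the wrapper around its failure mode is the part the business actually cares about.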