The key is having interviewers who know what they are talking about, so you can have in-depth, meandering discussions about personal and work projects, which usually makes it clear whether the applicant knows what they are talking about. Leetcode was only ever a temporary interview technique, and this 'AI' prominence in the public domain has simply sped up its demise.
You ask a rote question and you'll get a rote answer while the interviewee is busy looking at a fixed point on the screen.
You then ask a pointed question about something they know or care about, and suddenly their face lights up, they're animated, and they are looking around.
You know, this makes me wonder if a viable remote interview technique, at least until real-time deepfaking gets better, would be to have people close their eyes while talking to them. For somebody who knows their stuff it'll have zero impact; for someone relying entirely on GPT, it will completely derail them.
A filter could probably defeat this already. There are already filters that make you appear to be looking at the camera no matter where your eyes are actually pointing.
That’s an interesting idea. Sadly I think the next AI interviewing tool to be developed in response would make you look like your eyes are closed. But in the interim period it could be an interesting way to interview. Doesn’t really help for technical interviews where they kinda need to have their eyes open, but for pre-screens maybe…
This is the way. We do an intro call, an engineering chat (exactly as you describe), a coding challenge, and 2 team chat sessions in person. At the end of that, we usually have a good feeling about how sharp the candidate is, whether they like to learn and discover new things, and what their work ethic is. It's not bulletproof, but it removes a lot of noise from the signal.
The coding challenge is supposed to be solved with AI. We can no longer afford not to use LLMs for engineering, as it's that much of a productivity boost when used right, so candidates should show how they use LLMs. They need to be able to explain the code, of course, and answer questions about it, but for us it's a negative mark if a candidate proclaims that they don't use LLMs.
> The coding challenge is supposed to be solved with AI. We can no longer afford not to use LLMs for engineering, as it's that much of a productivity boost when used right, so candidates should show how they use LLMs. They need to be able to explain the code, of course, and answer questions about it, but for us it's a negative mark if a candidate proclaims that they don't use LLMs.
Do you state this upfront or is it some hidden requirement? Generally I'd expect an interview coding exercise to not be done with AI, but if it's a hidden requirement that the interviewer does not disclose, it is unfair to be penalized for not reading their minds.
I would say as long as it is stated that you can complete the coding exercise using any tool available, it is fine. I do agree, no task should be a trick.
I am personally of the view you should be able to use search engines, AI, anything you want, as the task should be representative of doing the task in person. The key focus has to be the programmer's knowledge and why they did what they did.
One client of mine has a couple of repositories for non-mission-critical things like their fork of an open source project, decommissioned microservices, an SVG generator for their web front-end, etc.
They also take this approach of "whatever tool works," but their coding test is "here are some symptoms of the SVG generator misbehaving; figure out what happened and fix it," which requires digging into the commit history, issues, actually looking at the SVG output, etc.
Once you've figured out how the system architecture works, and the most likely component to be causing the problem, you have to convert part of the code to use a newer, undocumented API exposed by an RPC server that speaks a serialization format that no LLM has ever seen before. Doing this is actually way faster and more accurate using an AI, if you know how to centaur with it and make sure the output is tested to be correct.
This is a much more representative test of how someone's going to handle actual work, knocking issues out.
Well, the challenge involves using a Python LLM framework to build a simple RAG system for recipes.
It's not a hidden requirement per se to use LLM assistance, but the candidate should have a good answer ready for why they didn't use an LLM to solve the challenge.
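A toy version, just to give a flavor of the shape of a solution (we use a different framework and dataset; the OpenAI client, model names, and recipes below are illustrative stand-ins):

    # Recipe RAG in miniature: embed the recipes, retrieve by similarity,
    # then have the model answer from the retrieved context.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY (we hand candidates a key for inference)

    RECIPES = [
        "Pancakes: flour, milk, eggs. Whisk into a batter and fry in butter.",
        "Tomato soup: tomatoes, onion, stock. Simmer, then blend smooth.",
        "Guacamole: avocado, lime, cilantro. Mash and season with salt.",
    ]

    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    doc_vecs = embed(RECIPES)  # tiny in-memory vector "index"

    def answer(question, k=2):
        q = embed([question])[0]
        # OpenAI embeddings are unit-length, so a dot product is cosine similarity
        top = np.argsort(doc_vecs @ q)[::-1][:k]
        context = "\n".join(RECIPES[i] for i in top)
        chat = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Answer using only these recipes:\n" + context},
                {"role": "user", "content": question},
            ],
        )
        return chat.choices[0].message.content

    print(answer("What do I need for pancakes?"))

The real challenge is bigger than this, of course, but the moving parts are the same, and an LLM can scaffold most of this boilerplate in minutes.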
Why is it a negative that the candidate can solve the challenge without using an LLM? I don’t really understand this.
Also, what is a good answer for not using one? Will you provide access to one during the course of the interview? Or am I just expected to be paying for one?
It's not a negative that the candidate can solve it without an LLM, but it is a positive if the candidate can use the LLM to speed up the solution. The code challenge is timeboxed.
We are providing an API key for LLM inference, as implementing the challenge requires this as well.
And I haven't heard a good answer yet for not using one, ideally the candidate knows how to mitigate the drawbacks of LLMs while benefiting from their utility regardless.
A good answer in this situation would focus on demonstrating that you made a conscious decision based on the problem requirements and the approach that best suited the task. Here’s an example of a thoughtful response:
"I considered various approaches for solving this problem. Initially, I thought about using an LLM, as it's great for natural language processing and generating text-based solutions. However, for this particular challenge, I felt that a more algorithmic or structured approach was more appropriate, given the problem's nature (e.g., the need for performance optimization, a specific coding pattern, or better control over the output). While LLMs are powerful tools, they may not always provide the precision and control required for highly specific, performance-critical tasks, so I chose to solve the problem through a more traditional method. That said, if the problem had been more open-ended or involved unstructured data like text generation, I would definitely consider leveraging an LLM."
This answer reflects the candidate's ability to critically assess the problem and use the right tools for the job, showing maturity and sound judgment.
Ah, so you expect mind readers who can divine something from your brain that goes against 99.99% of interviewers' practices and would get them instantly disqualified from an overwhelming majority of interviews. Nice work, good luck finding candidates.
Indeed, looks like it is just an unspoken rule and an interview trick after all. I would not want to interview with this person, much less work with them.
> as it's that much of a productivity boost when used right
Frankly, if an interviewer told me this, I would genuinely wonder why what they're building is such a simple toy product that an LLM can understand it well enough to be productive.