
My worry is that we become too reliant on these tools and outsource our thinking to them before they are ready to take on that task.

Personally I have never tried any of the AI assistants, but I have noticed a large uptick in developers attempting to secretly use them in remote coding interviews. I'm curious how the larger companies are dealing with this.



I see it fairly often when doing code review, because sometimes a line of code or a function stands out that just doesn’t seem in line with the rest of the PR. So I add a comment like “what is this doing exactly?” because it’s usually something that’s difficult to understand, and the answer is usually “It’s what GPT/Copilot suggested shrug”. It’s not really something I approve of because it’s actively defying codebase standards that are intended to help the team. At least make the effort to clean it up so it meets basic expectations.

I imagine it’s quite easy to ask the same question during a code test because you shouldn’t have to stop and think about code you consciously wrote, and you wouldn’t have to wait for GPT to feed you an answer.


Hopefully by less lazy interviewing tactics and trying to hire via nuanced understanding of candidates instead of hackable metrics like memorizing leet code.

The traditional engineering interview is fuzzier: basically an engineer asking how you'd solve a problem they are currently working on or recently did. The point is to see how you think and solve problems. It's inherently unmeasurable, but I think it's better than relying on a metric that ends up not meaning much. Given a choice between explicit fuzziness and implicit fuzziness, I'll take explicit every time, because it's far harder to trick myself into thinking I'm doing the right thing when I'm not.


> attempting to secretly use them in remote coding interviews.

We have from time to time simply asked people to write pseudo-code in something like Etherpad or a Google Doc. I'm sure you could get an AI to type in your answer, but I feel it's going to be pretty obvious what's happening.


I ask them to share their whole screen.


Yeah I thought about that too. I suppose it could still be a problem if they have a second monitor.

I guess there's the opposite perspective. By not actively trying to prevent it, we can weed out people who would choose to cheat in a remote coding interview. Those same candidates would likely do fine if they were physically unable to cheat, but may ultimately have been a net negative for the team.


If you run an open book exam you have to ask much harder questions, and the actual exercise becomes fishing for ChatGPT's mistakes. This lacks repeatability because you don't know if and how it's going to hallucinate on any given day. And the level of questions you'd need to ask would be beyond many candidates. In a closed book setting I can ask them to implement a basic dynamic data structure and get all the signal I need.
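To make that concrete, here's a minimal sketch (in Python) of the sort of exercise I have in mind: a resizable array with amortized O(1) appends. The class name, the doubling factor, and the ctypes backing store are just illustrative choices, not anything a candidate would be required to use.

    import ctypes

    class DynamicArray:
        """Resizable array backed by a fixed-size block of references."""

        def __init__(self):
            self._capacity = 1
            self._size = 0
            self._data = self._make_array(self._capacity)

        def _make_array(self, capacity):
            # Fixed-size array of object references; stands in for raw memory.
            return (capacity * ctypes.py_object)()

        def append(self, value):
            # Double the capacity when full, giving amortized O(1) appends.
            if self._size == self._capacity:
                self._resize(2 * self._capacity)
            self._data[self._size] = value
            self._size += 1

        def _resize(self, new_capacity):
            new_data = self._make_array(new_capacity)
            for i in range(self._size):
                new_data[i] = self._data[i]
            self._data = new_data
            self._capacity = new_capacity

        def __len__(self):
            return self._size

        def __getitem__(self, index):
            if not 0 <= index < self._size:
                raise IndexError("index out of range")
            return self._data[index]

    if __name__ == "__main__":
        arr = DynamicArray()
        for n in range(10):
            arr.append(n)
        print(len(arr), arr[9])  # 10 9

Even something this small gives plenty of signal: does the candidate handle the resize correctly, do they understand why doubling makes appends amortized O(1), do they bounds-check indexing.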


I think the signal there is going to be how developers perform with assistance. The goal of the software is to solve the problem, after all. If they do it faster and better than everyone not using it, well, I guess we’ve figured out who to hire.



