
I’ve been doing interviews like this for a few years now. Leetcode sucks.

Bonus points the next time a candidate (correctly) uses ChatGPT or Copilot. Let the machines do what the machines do well (grinding leetcode); good riddance.



Yeah exactly. Since these tools commoditize repetitive skills, the goal should be to see how creative and well-built the project is within the parameters given. Creativity is what you should be testing for, since the rote skills are now commodities. It's the one thing we have that AI can't replicate (yet?).


Often one of the parameters given is a timeframe that no sane person would accept in a real-world setting. The justification I've heard for that is "we don't want people spending too long on it."


Discussion: what does correctly programming with chatbots look like, specifically?

Probably easier for most to list what it doesn't look like.

I'm interested in both.


Asking a chatbot a question rather than Google. I never minded if candidates said they’d use Google for looking up something in the docs. But now you’ve got something a heckuva lot more efficient.

For example: What’s the method to select a random item from a range in Ruby? (ChatGPT used to get this wrong.) I don’t mean to say that I give trivia questions in interviews, but if the need to know this came up during an interview or code pairing, a candidate who can tell when a chatbot response is invalid (and knows where to look for a correct answer) is a good sign.
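
For what it's worth, a minimal Ruby sketch of the kind of answer I'd expect a candidate to be able to verify themselves (the 1..10 range is just an example):

    # Kernel#rand accepts a Range (Ruby 1.9.3+), returning a random
    # integer from the range, endpoints included.
    rand(1..10)

    # Or materialize the range and use Array#sample.
    (1..10).to_a.sample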

I’m also open to a candidate stubbing out an idea with a chatbot/copilot and then checking the solution and adapting it to fit a given context.



