We haven't seen major issues with AI use from candidates on camera. The few who have tried to cheat have done so rather obviously, and the problem we use is more about problem-solving than reverse-a-linked-list.
This is borne out by results downstream with clients. No client we've sent more than a couple of people to has ever had concerns about quality, so we're fairly confident that we are in fact detecting the cheating that is happening with reasonable consistency.
I actually just looked at our data a few days ago to see how candidates who listed LLMs or related terms on their resume did on our interview. On average, they did much worse (about half the pass rate, and double the hard-fail rate). I suspect this is a general "corporate BS factor" and not anything about LLMs specifically, but it's certainly relevant.