I expect any candidate we'd hire to be able to produce code for the brute-force cases and do the runtime analysis on them.
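(The original prompt isn't quoted in this thread, so purely as a made-up stand-in, not the actual question, this is roughly the brute-force-plus-analysis floor I mean:)

    # Hypothetical stand-in problem (not the actual interview question):
    # return the first character that occurs exactly once in s, or None.
    def first_unique_char(s):
        for ch in s:              # outer pass over s: O(n)
            if s.count(ch) == 1:  # str.count() is itself an O(n) scan
                return ch
        return None
    # Overall: O(n^2) time, O(1) extra space; a dict of counts gets it to O(n).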
It's a whole other debate that most software engineers, even at BigCo, end up writing CSS code just to move pixels around and lose their DSA knowledge over time...
It's fair to expect any candidate you'd hire to produce the brute-force code and do some rudimentary/rough analysis. That's not why it's a bad question.
It's a bad question because _how well_ a candidate does past that minimum threshold has _no correlation_ with their ability to do the job.
You can treat it as a binary question for that minimum floor. If you use it for rating, you're doing your org and the candidate a disservice, and you're wasting interviewing time that could be spent more productively and humanely.
That’s probably also what happened with your ‘disappointing’ PhD student.
They probably did know how to code, but they’ve spent the last 5 years working on something quite a bit different from string wrangling in Python. Add in the stress of a (first-time?) interview and a reasonable candidate can look like a total fool…
> even at BigCo, end up writing CSS code just to move pixels around
Is this common at Google?
If by DSA knowledge you mean leetcode puzzle knowledge, yes, quite certainly. But in my experience, fresh grads are less effective engineers before they mature by working with industry-scale systems for a few years. And at that point they would be less able to do your problem but far better engineers.
Which means your interview question is aimed backwards: maybe if you only used it on fresh college grads the gain function would be positive, but you applied it to everyone.