Unfortunately, I just checked and ChatGPT gives the correct (slow) answer to your question:
def is_concatenation_of_two_dict_words(word, dict_words):
    """
    Returns True if the input word is a concatenation of two words in the
    input list of dictionary words, and False otherwise.

    Args:
    - word (str): The input string to check.
    - dict_words (list[str]): A list of dictionary words to check against.

    Returns:
    - (bool): True if the input word is a concatenation of two words in the
      input list of dictionary words, and False otherwise.
    """
    for i in range(1, len(word)):
        prefix = word[:i]
        suffix = word[i:]
        if prefix in dict_words and suffix in dict_words:
            return True
    return False
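For what it's worth, a quick sanity check of its behavior (the tiny dictionary here is made up, just for illustration):

words = ["cat", "dog", "dogs"]
print(is_concatenation_of_two_dict_words("catdog", words))   # True: "cat" + "dog"
print(is_concatenation_of_two_dict_words("catsdog", words))  # False: no valid two-word split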
In general, I don't think that LeetCode questions have any particular advantage over work sample tests when LLMs are involved. Your questions will end up on LeetCode, where LLMs will index them and will be able to recite the answers.
def is_concatenation_of_dictionary_words(s, words):
    if not s or not words:
        return False
    n = len(s)
    dp = [False] * (n + 1)
    dp[0] = True
    for i in range(1, n + 1):
        for j in range(i):
            if dp[j] and s[j:i] in words:
                dp[i] = True
                break
    return dp[n]
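A quick sanity check with a made-up dictionary; note that this version answers the more general question (a concatenation of any number of dictionary words, including a single one), i.e. the classic word-break DP, which I assume is what part 2 asks for:

words = {"cat", "dog", "dogs"}
print(is_concatenation_of_dictionary_words("catdogdogs", words))  # True: "cat" + "dog" + "dogs"
print(is_concatenation_of_dictionary_words("catsdog", words))     # False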
It doesn't find the trie (or regex) solutions to parts 1 and 2, though. It does state the right complexity for its part 1 solution, but when asked for an O(n) solution, it first repeats an equivalent algorithm and then claims it is O(n) because the hash lookup is O(1).
That said, I believe an engineer with the answers it gave would easily figure out from its output what the right complexity is. (Figuring out the trie may not be so easy.)
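For the curious, here is a rough sketch of how a trie-based version of part 1 can get to O(n) per checked word once the tries are built: walk the word through a trie of dictionary words to mark every prefix that is a word, walk the reversed word through a trie of reversed words to mark every suffix that is a word, then look for a split where both sides are marked. The dictionary below is made up for illustration, and this may or may not be exactly the reversed-words trick referred to elsewhere in the thread.

class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.is_word = False

def build_trie(words):
    root = TrieNode()
    for w in words:
        node = root
        for ch in w:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True
    return root

def is_concatenation_of_two(word, forward_trie, reversed_trie):
    n = len(word)

    # prefix_is_word[i] is True when word[:i] is a dictionary word.
    prefix_is_word = [False] * (n + 1)
    node = forward_trie
    for i, ch in enumerate(word, 1):
        node = node.children.get(ch)
        if node is None:
            break
        prefix_is_word[i] = node.is_word

    # suffix_is_word[i] is True when word[i:] is a dictionary word,
    # found by walking the reversed word through the trie of reversed words.
    suffix_is_word = [False] * (n + 1)
    node = reversed_trie
    for i, ch in enumerate(reversed(word), 1):
        node = node.children.get(ch)
        if node is None:
            break
        suffix_is_word[n - i] = node.is_word

    # Both halves must be non-empty, hence range(1, n).
    return any(prefix_is_word[i] and suffix_is_word[i] for i in range(1, n))

# Made-up dictionary, purely for illustration.
dict_words = ["cat", "dog", "dogs"]
forward = build_trie(dict_words)
backward = build_trie(w[::-1] for w in dict_words)
print(is_concatenation_of_two("catdog", forward, backward))   # True
print(is_concatenation_of_two("catsdog", forward, backward))  # False

Building the tries is a one-time O(total dictionary size) cost, and each walk is O(len(word)), so the per-word check is linear rather than the roughly O(n^2) you get from hashing every prefix/suffix substring.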
That said, ChatGPT is not yet at a point where it can write out a full working repository solving a nontrivial problem. It can help, but it is not autonomous; and realistically, if someone can get that help to reach a good solution for a test, they can do the same for a salary.
> I don't think that LeetCode questions have any particular advantage over work sample tests when LLMs are involved.
I agree. We do work sample tests, and in addition to the code and docs the candidates hand in, what really matters is the walkthroughs we do with them. Why did they do it that way? What alternatives did they consider? What are the pros and cons? What past projects did they draw on? How did they research this?
Candidates usually enjoy this - most programmers enjoy talking about a just-finished project, especially when they feel good about the result - and you get to learn a lot about them.
If someone turned in LLM-assisted work and lied about it, I doubt they'd fare well. And if they did use LLM assistance, that could be an interesting conversation all the same. What did you reject? What did you correct? Why?
You're writing "work sample tests," and I'm hearing "take home assignments." Is that right?
If so, how do you solve the problem of the assignment just taking way too much time compared to a more typical process, where you do a phone screen and 3-5 hours of technical interviews with a behavioral or two thrown in there? I know that if you send me a take-home that's anything like most of the ones I've gotten in the past, and your competitor says I can do a recruiter chat and technical phone screen to qualify for a 4-5 hour onsite, I'm going with them just on the sheer amount of time and effort needed from me for each process.
How do you get around the fact that if you give such a test to enough people, some of them will eventually put it up on Github for the world to see and crib from?

Even for those willing to accept the time commitment, how do you account for the bias involved in giving the same assignment to working candidates and people just out of school? There's a good bit of implicit age discrimination going on there, since folks who have been working a while are far more likely to have families than those just out of school.

And then there's the old "fraudulent candidate hires someone else who's actually good to do the assignment." Granted, that person won't succeed at the job, but it'll take at least some weeks before most companies will give up on someone they hired, even if they're completely hopeless at the actual job.
I could go on and on, but I think you get the idea.
A) You have developers you’ve already hired take the tests regularly, and make sure they are properly calibrated for time.
B) You replace interview time with the test; you don’t just add it on as another thing.
C) You change the test regularly and implement basic plagiarism filters on the tests you do have.
As a person who has been in the industry a fair bit, I think the flexibility of take-home work samples gives me a leg up over day-long in-person interviews (and the last time I interviewed, it was multi-day, not 4 or 5 hours).
I don’t have to take time off work or skip picking up my kid from school or anything; I can just do it in my free time.
Yeahhh, I'm gonna need you to fuck right off with all this reasonable attitude you've got about this... lol, (kidding of course.)
Seriously though, you've basically outlined the only reasonable way to go about it. The problem is really that that's a lot of fucking work you're opting into with 3 little sentences, and after all that, you actually end up with a non-repeatable process, which is not really ideal. I'm still highly skeptical that it's not biased toward people without families, either.
But my real issue here is that at least 75-80% of companies that do this crap obviously don't implement any of that stuff, and most of the rest don't do all 3. You really need all 3 to maintain any integrity in the process, and that's, unfortunately, one of the strengths of whiteboard hazing from the employer's side. I know I'm never doing another 2+ hour take-home again, and the fact that employers generally don't do any of that stuff is more or less the reason.
I agree that most companies don’t do work sample hiring well. But most companies don’t do any technical filters well. They’ll waste your time with dumb whiteboarding, or LeetCode, or, god forbid, personality tests.
To me it’s a strong signal if a company has a work sample test because it shows they have at least one part of their process that has a chance to be a good filter. It’s an even stronger signal if they’ve replaced the majority of their pipeline with it.
I won’t say I’d refuse a job whose process didn’t include a work sample component, but it would definitely jade my thinking against the firm. And I’ve certainly told companies I wouldn’t continue with their hiring process after hearing what it entails.
When I asked, it first gave the for-loop answer. I asked it the complexity, it answered, and then I asked whether it could find an algorithm with lower complexity; it came up with an answer using a trie, although not the reversed-words trie trick.
Better than the candidates I interview would do, and probably better than I would do: I would probably need a hint to use a trie, and I haven't analyzed algorithmic complexity in a quarter of a century.