Hacker News | new | past | comments | ask | show | jobs | submit | Stoids's comments

I use coc-nvim [0] for my development environment and love it. The VS Code plugin is a great start for getting used to vim movement, but eventually you'll run into some limitations that end up being slightly annoying (though this was a few years back, and those might not exist anymore!).

Most of my development is in TypeScript, which has a pretty good language server, so I don't feel like I'm really missing out on any of the "IDE" features that I get in VSCode. I am not a "pure vim" zealot, so I have a decent amount of plugins to bring my environment closer to an "IDE" than "just vim" (e.g. NerdTree, fugitive, surround, sneak).

I haven't really touched my dotfiles in a year or two outside a few custom keybinds for new workflows I find myself repeating.

Getting good at whatever editor you choose to use for your daily development is a good investment. If you're going to choose one, vim is probably a decent choice for how long it's stayed around.

[0] https://github.com/neoclide/coc.nvim


I miss reading the 100 page threads on What.CD about which of the 40 different rips of the White Album was best. Alternatives have popped up, but it saddens me that we lost all of that collective comment history.


I was part of What.CD (and its successors), but if I'm wondering which edition to snatch, I usually search the Steve Hoffman forums [1] and look up the Dynamic Range DB [2].

[1] https://forums.stevehoffman.tv/forums/music-corner.2/

[2] http://dr.loudness-war.info/


Jonathan Blow and no one else clearly.


Yes, almost any routing library will expose a Link component wrapper that properly sets those attributes.


I like the idea that the 200 line event dispatching state atom is more over-engineered than a framework that works off wrapped "observable" objects and uses ES6 proxy magic (MobX). Do proxies work in IE11? Didn't think you could just polyfill them.


I'd love to read something about your experience. We are currently using Apollo for one of our newer projects. We've written Redux apps in the past and understand the tradeoffs and limitations that come with that approach. We really love the component-level declarative data-fetching of Apollo client, but were pretty hesitant around the apollo-link-state stuff.

How are you integrating GQL in your Redux code? Do you still use Apollo Query/Mutation components? Do you use Redux in lieu of the Apollo cache? How do you deal with the slight mismatch of denormalized GQL data with a normalized Redux store?

I understand the problem that the Apollo cache is solving, and I feel like I do a lot of low value handwritten code when I use Redux instead--but the visibility and transparency of Redux still feels worth it?

I think leaning too heavily into apollo-link-state is going to draw the same boilerplate complaints of Redux. The amount of client code generation needed, plus all the schema details bleeding into your code (__typename)... doesn't feel like we are at the "solution" yet.

Just a few of my rambling thoughts. Kudos to the Apollo team for continuing to push the community forward. These are hard problems and we won't solve them without people trying to innovate.


Perhaps I'm just a bit dumb here, but a few of these comments confused me.

> However, this is a common source of mistakes if you’re not very familiar with JavaScript closures.

> The problem is that useEffect captures the count from the first render. It is equal to 0. We never re-apply the effect so the closure in setInterval always references the count from the first render, and count + 1 is always 1. Oops!

Perhaps I'm not understanding the second comment fully... This behavior is completely counter to how JavaScript closures work in my mental model. I'm trying to figure out how a state hook differs in execution from a normal variable in module scope.

How does it differ from this [1] example?

[1] https://jsfiddle.net/xuz09cow/1/


The difference is that closedVariable would be created inside intervalFn (the component). In addition, the intervalFn may run many times, but the interval will only be set up once. This example illustrates the problem: https://jsfiddle.net/ov2msa4e/

Your code avoids the closure problem, but closedVariable would be more like a static variable — it'd be shared by every instance of the hook. Probably not what you intended!


Thank you for the example, Jake! It was very helpful. You and Jason's answer cleared it up for me.


The difference is subtle (and took me a bit to figure out!). In the article's example component, the component's "main" function gets re-run on every render and a new `count` variable is created each time. The effect function is only run once at the very first render, and only captures the value from this first render.

I'm having a little trouble explaining the difference well, but I think this jsfiddle gives a representative illustration of the incorrect case the article is talking about: https://jsfiddle.net/7fLnvz5c/
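To make it more concrete, the stale-closure case can be boiled down to plain JS with no React at all -- a sketch with made-up names (`render`, `effectRan`) that simulates a component function re-running on every render while its one-time effect keeps the first render's value:

```javascript
// Plain-JS sketch of the stale closure: the "component" function re-runs
// with a fresh `count` each render, but the "effect" runs only once and
// closes over the first render's `count` forever.
const capturedIncrements = [];
let effectRan = false;

function render(count) {
  // Simulates useEffect(..., []) -- runs only on the first render.
  if (!effectRan) {
    effectRan = true;
    // This callback closes over THIS call's `count` (0), permanently.
    capturedIncrements.push(() => count + 1);
  }
}

render(0); // first render: the effect captures count = 0
render(1); // later renders create new `count`s, but the effect never re-runs
render(2);

const tick = capturedIncrements[0];
console.log(tick()); // always 1, no matter how many renders happened
```

The module-scope-variable example doesn't show this because there's only one `closedVariable` shared across calls; here each `render` call gets its own `count`, and the closure pins the first one.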


Thanks Jason, this was very helpful for me. I think I was having trouble making the mental jump from reading the code as a normal function with state to how a component instance is called and handled by React at runtime--your example helped me clear that hurdle.


Yikes... Interviewing sucks but half the reason interviewers ask these types of questions is to see your attitude and how you respond. Writing an Instagram clone in Angular doesn't really tell me much about your problem solving skills when faced with a unique problem.

> How many people can actually write BFS on the spot, without preparing for it in advance?

Ughhh, should I tell him?

> why would they ask me this question, what does breadth-first search has to do with front-end development

Tree data structures are really common in front-end... like the DOM or JSON.


This feels condescending.

I've been developing for well over a decade as a full-stack engineer. I've worked as a successful software dev at some big corporations (like Intel, and yes it was fulltime for several years) and almost always outperformed my peers. I work in finance now where everything is about managing portfolios for people - lots of numbers and heavy calculations.

I have no fucking clue how to write a BFS. I've never needed to know how to write a BFS. I will probably never need to know how to write a BFS. Coders like me write business applications where these things are literally never an issue. If there's something I need to know and don't, I simply research it. That's the point he's trying to make. Don't interview people on algorithms they will never ever use. It's as simple as that.

There are more productive ways to interview someone; for example take a bug in your product and see if the candidate can fix it. Or if you're worried about sharing proprietary info, then make a sample program that simulates a bug or feature that your application will use and observe the candidate working on that.


A breadth-first search is a pretty basic algorithm, but even if you don't have it "memorized", the question is simply: here's a data structure, we need to traverse it in a certain order. Traversing data structures is so common that, even without a formal CS education, I'd expect an engineer with years of experience to have the strategies in a sort of unconscious memory, like a rock guitarist who plays the right chord progressions without knowing what they're called.

A candidate who freaks out upon being asked that question is not a candidate whom I'd trust to be a good problem solver. Solving a problem with simple constraints is something that happens all the time. If you don't know how to approach that, you could just "look it up", but how would you even know what to look up?


> Solving a problem with simple constraints is something that happens all the time.

I disagree. My experience has been that these problems are exceptionally rare.

They're so uncommon that programmers get excited when presented with such a problem. It feels like "real programming" as opposed to fixing bugs, writing prototypes, or writing business logic.

> I'd expect an engineer with years of experience to have the strategies in a sort of unconscious memory, like a rock guitarist who plays the right chord progressions without knowing what they're called

This is fantasy. You're comparing the average programmer to musical savants? What?

You happen to have memorized a bunch of algorithms for traversing data structures. That's great, but nothing about that is "unconscious memory". The fallacy is expecting everyone else to have memorized the same thing you have memorized.

> If you don't know how to approach that, you could just "look it up", but how would you even know what to look up?

Google for "tree traversal" and spend 10 minutes researching common algorithms. Pick one that fits. Spend 20-30 minutes writing it (hopefully with some tests). Worst case you've spent an hour.

Someone who memorized the algorithm could probably write it (with tests) in half the time. And that's okay because how often are you doing tree traversal? And even if you're doing tree traversal all the time then you'll have memorized all these techniques in 1 month anyway.


> This is fantasy. You're comparing the average programmer to musical savants? What?

They didn't describe a musical savant. They described someone who learned to play the guitar without any background in music theory. The analogy was not someone playing a chord without knowing the name of the chord, it was someone playing a chord progression without knowing the formal musical basis for that progression.

That kind of intuition is very common in guitarists, and I thought the analogy was rather apt.


> They described someone who learned to play the guitar without any background in music theory.

I have no idea what guitarists and other musicians you know. I know quite a few and they've had to put in serious work on music theory (most from a young age). Many also started out with instruments like early piano lessons and transitioned to guitar.

It's quite literally a language you need to learn (memorize); kinda like maths, actually. Some things may seem intuitive, but only savants can compose actual music without training in music theory.


I would guess that the _vast_ majority of guitarists learn that "G D Em C" sounds good _long_ before they understand that it's an I V vi IV progression. I started off playing guitar with three open string power chords, and I didn't have any understanding of theory for years of playing.


No one is talking about composing music without a classical background. We're talking about playing music without the background.

Composing music is to playing a chord progression as writing a naive algorithm implementation is to designing an optimal algorithm.


> The fallacy is expecting everyone else to have memorized the same thing you have memorized.

I don't see why any memorization should be necessary. I have never written a BFS algorithm. Yet I can immediately think of two different ways of implementing it. In fact, when reading the article I googled BFS to make sure it wasn't referring to something else, because BFS seemed so simple to implement. All you have to do is place every node's children in a FIFO queue and process the nodes in the order they are added. You could also use two arrays, one for the current level of nodes and one for the next. It is only slightly more complicated if you are traversing a graph rather than a tree, as you need to track visited nodes to avoid repeats.
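The FIFO-queue approach really is that short. A sketch (the `{ value, children }` node shape is just an assumption for illustration):

```javascript
// Queue-based BFS: visit nodes level by level by pushing each node's
// children onto the back of a FIFO queue.
function bfs(root) {
  const visited = [];
  const queue = [root];          // seed the queue with the root
  while (queue.length > 0) {
    const node = queue.shift();  // take the oldest entry first (FIFO)
    visited.push(node.value);
    for (const child of node.children || []) {
      queue.push(child);         // children wait behind the current level
    }
  }
  return visited;
}

const tree = {
  value: 1,
  children: [
    { value: 2, children: [{ value: 4, children: [] }] },
    { value: 3, children: [] },
  ],
};
console.log(bfs(tree)); // [1, 2, 3, 4] -- level by level
```

For a graph instead of a tree, you'd add a `visited` set check before each `push`, as noted above.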

Also, the utility of BFS and DFS in frontend seems hard to ignore: the majority of the data structures are trees (HTML, XML, and JSON), and these are the two ways to traverse them.

> They're so uncommon that programmers get excited when presented with such a problem.

What about "process these things in this order" is uncommon? If anything, the constraints are far simpler and easier to understand than most of the business logic I implement.


> I don't see why any memorization should be necessary. I have never written a BFS algorithm.

Neither have I, but we both have prior knowledge that accounts for solving most of the problem. We know how trees are implemented. We clearly know what BFS is and what's "tricky" about it.

Someone who hasn't dabbled with tree traversal has to grok a lot of information in the heat of an interview.

> Also, the utility of BFS and DFS in frontend seems hard to ignore: the majority of the data structures are trees (HTML, XML, and JSON), and these are the two ways to traverse them.

I'm not 100% sure about frontend, but I haven't heard of anyone writing tree traversal algorithms for a DOM tree. That's all handled by either browser APIs (document.querySelectorAll?) or lower level libraries (ReactDOM?).

> What about "process these things in this order" is uncommon? If anything, the constraints are far simpler and easier to understand than most of the business logic I implement.

That's exactly the point. Think about it... there are literally dozens of ways to implement a BFS. The currently accepted standard way of writing a BFS is the most concise and efficient.

A minuscule amount of your business logic is as concise/efficient. Most of the time your business logic will balance finding the best solution with meeting milestones (milestones usually win).

So let's say someone doesn't have any experience traversing trees (why would they? you rarely have to, if ever). Let's also say that they haven't memorized what BFS is and what the constraints are. You're presenting them with a new problem and expecting them to come up with the most efficient/concise solution to that problem. The only people who get rewarded here are people that simply memorized the solution or the solution space. In effect you've validated nothing.


> So let's say someone doesn't have any experience traversing trees (why would they? you rarely have to, if ever). [...] You're presenting them with a new problem and expecting them to come up with the most efficient/concise solution to that problem.

I've done some of those interviews (not specifically with BFS, but with similar problems), and my answer to that is: Nope, I do not care about the "most efficient/concise solution to that problem". I just need some evidence that you are trying to solve the problem. I am here to help -- after all, for the interview to be useful, I need to see your work.

So I'd formulate the problem (if the person forgot what those 3 letters mean), draw a diagram if the candidate is having difficulty, and give increasing hints for the solution ('How would you do it manually? Which node would you visit first? What about the next one?' and so on...)

Even then, a surprising number of people would just give up. It is kinda weird -- it looks like when they hear some words they just think "OH NO THIS IS DIFFICULT" and stop even trying.

Those get rejected. We often have difficult problems. If you stop at the first thought of "algorithms", you get 0%.

And if you had not memorized what BFS was, and I had to draw a diagram of the tree and trace the path, and give you a hint about maybe using some sort of queue for unprocessed nodes, and you did not even finish the solution at the end -- you'd still get 80% and might very well get an offer.


> So let's say someone doesn't have any experience traversing trees (why would they? you rarely have to, if ever).

Ummm... what? Basic DFS of trees with nested foreach loops is incredibly common. It is not a generic, depth blind traversal, but it most certainly is tree traversal.
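That "nested foreach" pattern, written out, is just a recursive DFS -- arguably the tree traversal most devs write without ever naming it. A sketch (node shape assumed for illustration):

```javascript
// Recursive depth-first traversal: fully explore each child's subtree
// before moving on to the next sibling.
function collectValues(node, out = []) {
  out.push(node.value);
  for (const child of node.children || []) {
    collectValues(child, out); // recurse depth-first into each subtree
  }
  return out;
}

const tree = {
  value: "root",
  children: [
    { value: "a", children: [{ value: "a1", children: [] }] },
    { value: "b", children: [] },
  ],
};
console.log(collectValues(tree)); // ["root", "a", "a1", "b"]
```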

> You're presenting them with a new problem and expecting them to come up with the most efficient/concise solution to that problem. The only people who get rewarded here are people that simply memorized the solution or the solution space. In effect you've validated nothing.

If somebody gives up on solving such a simple problem, why would I ever want to hire them?

> The only people who get rewarded here are people that simply memorized the solution or the solution space

Says who? You are making a bunch of assumptions about why the question is being asked.


I think the point was, the algorithm is so basic that a moderately experienced software engineer could invent it - it doesn't need to be researched.

> as opposed to fixing bugs

bug: "takes too long to find a <file / json field / html node>"

It's a real problem.


> I think the point was, the algorithm is so basic that a moderately experienced software engineer could invent it - it doesn't need to be researched.

Right. My point was the problem is perceived as "basic" because the commenter memorized solutions in the problem space. If it's not something you deal with regularly (or have interest in) then it isn't as basic as it seems.

People that haven't memorized the most efficient/concise solution also freak out because they know that their whiteboard solution is going to be judged on the merits of the industry standard solution.

> bug: "takes too long to find a <file / json field / html node>"

This is almost always an issue with an underlying library. The better question is why your programmers are wasting time writing slow BFS code when they should be using well tested solutions.

Example: I recently needed to write some AST transformation code. Did I write my own AST node traversal code? Hell no! I picked a library that did the job. I spent about 20 minutes catching up on state of the art AST traversal (so I'm not blindly importing a bad solution) and let the library do the work.


> Google for "tree traversal" and spend 10 minutes researching common algorithms.

The point is you wouldn't know what to search for. If you don't know anything about cars, you wouldn't be able to search for "alternator fault". The best you could do is "engine problems" which won't help you solve the actual problem.


You're implying that someone who can't invent a BFS on the spot also doesn't know what tree traversal is. That's not true at all.


If you can program [1], you can do BFS - no need to "invent" anything. Given a tree like this [2], asking you to "print the numbers in order" is not a difficult question, it's a trivial one.

I would expect any interviewee that calls themselves a programmer to talk me through a reasonable answer to that question. Forget about what BFS means - you don't even need to know what a tree is. Looking at the image is all you need.

[1] https://blog.codinghorror.com/why-cant-programmers-program/

[2] https://upload.wikimedia.org/wikipedia/commons/3/33/Breadth-...


What drives me a little nuts is all the people on this thread saying BFS is simple, as simple as counting, as if that somehow strengthened their argument. It does the opposite: BFS is indeed simple, so simple that you can simply explain it to a candidate and ask them to implement it, just like you would do with an actual developer (or, more likely, that they'd just do for themselves based solely on the Wikipedia page).

I'll go a little further and say that if you've qualified a candidate as capable of busting out the routine Javascript required to wire up typical front-end code, if you can't teach that developer how to implement BFS within the confines of a single interview, you the interviewer share some of the incompetence. Certainly that would be the case for a manager or team lead!

The reality is that BFS is for most jobs (frontend AND backend) a trivia question, and a status-seeking mechanism for interviewers.


THANK YOU. 16 years I've been writing code and I can count on one hand the people I've met who realize this. I get a little emotional about it because I've seen so many exceptional programmers crumble in interviews with trivia questions.

If an interviewer wants to test problem solving skills with a BFS, no problem:

- Draw a tree on the whiteboard (interviewers writing on the whiteboard is highly undervalued; it calms the candidate)

- Explain what a BFS is. It's easy to draw out each step in a BFS algorithm.

- Ask the candidate to write some pseudo code that implements it.


I do, every time! I even drew a little diagram a couple of times and traced the path!

And they still failed it! It is not about trivia or what BFS is, it is all about can you reason about algorithms.

(Disclaimer: our actual interview problem is slightly different, but it is still a very basic CS one)


> And they still failed it! It is not about trivia or what BFS is, it is all about can you reason about algorithms.

Okay.. so now this is about you and a bad candidate rather than the person pointing out that interviewing is broken. Got it.


Well, if at least one person reads the discussion, and then starts inventing the algorithms instead of giving up right away, I'd be happy :)

Because someone saying "How would I know? If, and when, I need to know how tree-shaking is implemented, I will go look it up." makes me sad. The tree-shaking algorithm was not given to us by ancient gods. As a programmer, your job is to solve problems no one has solved before. Interviewing may indeed be broken, but the original article was not showing any evidence of this.


> the question is simply: here's a data structure, we need to traverse it in a certain order

Because BFS/DFS are so easy to interchange, I'd strip ordering out entirely: here's a collection of stuff where each element may also point to other stuff in the collection, starting with this element search through the stuff to see if you can find a certain element.

If I were going to ask this problem, I'd help a nervous-looking candidate by contrasting the problem with (or leading with) a still simpler one: here's a collection of stuff where each element can point to at most one other thing; starting with this element, see if you can find a certain one in the collection.

Linear search is, technically, an algorithm. Graph search just extends it. All these people railing against algorithms in interview problems as non-applicable surely have at least written a for loop over an array and had an if statement somewhere inside, right? Linear search. When I've given interviews I don't try to make them based on "algorithmic knowledge"[0] but at the same time I really don't like the trend of thinking of algorithms in general as "oh, that's someone else's job, I never use those" or "I can only implement algorithms by memorizing them."

[0] When I was just starting to get asked to give some I lazily asked a colleague for a reference problem, they sent me https://leetcode.com/problems/jump-game/description/ and I read the problem, immediately recognized that the general approach of graph searching would solve it[1], and coded up the 12-15 lines or so it took, made some tests, fixed some minor mistakes... Total start-finish time was 10-15 minutes, not the fastest. There's also a solution to that problem that doesn't require thinking of it as a graph, too, but I think it's harder to get the insight. Anyway, nice problem, I thought, I'll only give a problem if the candidate has 2x the time it takes me to solve it as a buffer for interview nervousness/whatever, but after giving it to a few people only one of them managed to solve it in like 50 minutes after creating a pretty verbose and pseudo-coded BFS system. (My initial approach used DFS, but it doesn't matter here -- though for the jump game 2 sequel problem which asks to find the shortest number of jumps, BFS makes sense, and it's easy to take the solution for the first problem and make minor alterations. General algorithms like "search" are great and widely applicable.)

[1] In retrospect, before I saw the problem I was writing a simple game of go client as a side project, and had recently implemented a function to, given a point on the board, determine if there's a stone there and if so determine if it's part of a larger group of stones and return their coordinates. So in some sense the problem was already 'primed', and I've written a lot of graph search skeletons for both fun and profit. It's interesting to hear that some people go decades without doing so...


This problem can be solved with my favorite algorithm in the whole world, union-find [1], yaaay! :-p

It is hard to say if U/F would be more efficient for this problem. U/F can perform its two operations, `union(a, b)` and `find(a)`, in O(lg n) runtime complexity, or, with some more effort (path compression), in nearly constant time (a and b representing node indexes). But for this problem, a first pass over all indexes would be required to build the disjoint set from the input.

I think for small inputs DFS or BFS would be more efficient, and they have the plus of not needing extra storage (U/F would require a second array the same size as the input). For large arrays, U/F would probably win, since the input array may encode potentially a lot of edges (say, if each index contains a large enough number) and DFS runtime complexity is O(Edges + Nodes).

Anyway, I think U/F can be really useful in the context of job interviews, there are many problems that can be reduced to connected components, after some massaging.

Here's an implementation I did a while ago [2]. Even though I love the algo, I don't really remember the details about path compression... time to refresh my memory I guess :-D

[1]: https://algs4.cs.princeton.edu/15uf/

[2]: https://gist.github.com/EmmanuelOga/bcafad14715a3f584e97


Neat structure! I consulted my copy of Skiena's Algorithm Design Manual. He says: "Union-find is a fast, simple data structure that every programmer should know about." Well... Now I do. :) It is simple, a 'backwards' tree with pointers from a node to its parent, which lets you union two separate trees together by just taking the shorter one's root and pointing it at the taller one's (or vice versa, but this way preserves log behavior). The path compression optimization seems to just be an extra pass in find(), after you have the result, to re-parent each item along the path to the found root parent so that any future finds() of any of those items will only have one lookup to reach the component root.
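For anyone else meeting the structure for the first time, here's a minimal sketch of the idea described above -- union by size plus path compression (the halving variant), with an array-of-parents representation; the API names are my own, not from either comment:

```javascript
// Minimal union-find: parent[i] points up the "backwards" tree;
// roots point at themselves.
function makeUF(n) {
  const parent = Array.from({ length: n }, (_, i) => i);
  const size = new Array(n).fill(1);

  function find(x) {
    while (parent[x] !== x) {
      parent[x] = parent[parent[x]]; // path compression (halving)
      x = parent[x];
    }
    return x;
  }

  function union(a, b) {
    const ra = find(a), rb = find(b);
    if (ra === rb) return;
    // Attach the smaller tree under the larger to keep depth logarithmic.
    if (size[ra] < size[rb]) { parent[ra] = rb; size[rb] += size[ra]; }
    else { parent[rb] = ra; size[ra] += size[rb]; }
  }

  const connected = (a, b) => find(a) === find(b);
  return { union, find, connected };
}

const uf = makeUF(5);
uf.union(0, 1);
uf.union(1, 2);
console.log(uf.connected(0, 2)); // true
console.log(uf.connected(0, 4)); // false
```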

For the jump game problem, I'd expect DFS would still win almost all the time even when there are many branches, because it doesn't have to explore every element of the input array except in the worst case (and you can greedily explore a neighbor without having to discover your other neighbors first), whereas building a complete union-find structure would have to. Still, the fastest solution in general is probably just a single pass: if you have the insight that you can keep track of a "max reachable index" and update it at each step to max(current-max, i + A[i]), bailing with failure if you hit an index > the current max with no more jumps, or bailing with success if your max becomes >= the final index. (I never had this insight myself.) It could be slower, of course, e.g. if the array was something like [bigNum/2, lots of 0s, bigNum/2 again in the middle, lots of 0s, end at index bigNum].
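That single-pass insight fits in a few lines. A sketch, assuming A[i] is the max jump length from index i and the question is whether the last index is reachable:

```javascript
// Greedy "max reachable index" scan: track the farthest index reachable
// so far; fail if we ever stand past it, succeed once it covers the end.
function canJump(A) {
  let maxReach = 0;
  for (let i = 0; i < A.length; i++) {
    if (i > maxReach) return false;           // stuck before reaching i
    maxReach = Math.max(maxReach, i + A[i]);  // extend the frontier
    if (maxReach >= A.length - 1) return true;
  }
  return true;
}

console.log(canJump([2, 3, 1, 1, 4])); // true  (0 -> 1 -> 4)
console.log(canJump([3, 2, 1, 0, 4])); // false (always stuck at index 3)
```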


> I have no fucking clue how to write a BFS. I've never needed to know how to write a BFS. I will probably never need to know how to write a BFS.

If I give you a description of what a BFS is, you as a programmer should be able to write one. A BFS is rather trivial. You shouldn't have to know how to write it, you should be able to figure it out. The same goes for implementing a linked list or the FizzBuzz problem.

As a programmer, you need to make up algorithms on the spot to solve business problems. You don't know the solution beforehand, you figure it out, that's your job. It's the key skill that basically separates you from a typist.

> There are more productive ways to interview someone; for example take a bug in your product and see if the candidate can fix it.

This is a different skill. It's an important skill, but finding errors in an existing structure and fixing it is different from being able to create that structure in the first place.


Depends on the algorithms. Some people can brute force something basic, and the brute force answer will usually be slow and stupid at scale.

I'm working on something at the moment, and one of the key features is that while there's a fair amount of data, none of the structures are particularly big - and are very unlikely to ever become big.

So my thought process is that stupid brute force algos are absolutely fine for this domain.

If I have performance problems I can look into optimising them.

If I had hundreds of terabytes of data to start with, I'd take a different approach, and I'd research - not invent, because I don't care to reinvent a wheel if someone has already produced a much better solution - more efficient algos.

If none of the above work, then I'd consider improvising something and testing how it performs.

Would this pass your interview process or not? Do you want someone who's going to brute force an answer to your toy problem and think they've solved it with something that works but is inefficient, or someone who has memorised a collection of standard answers but maybe can't improvise something new, or someone who is going to check what's already available to save time, or someone who can improvise a super-efficient answer and do it even if it's not needed?

Who would you pick?

Do you really think this question has a simple answer?


> Would this pass your interview process or not?

Yes, that sounds very reasonable.

> Do you want someone who's going to brute force an answer to your toy problem and think they've solved it with something that works but is inefficient, or someone who has memorised a collection of standard answers but maybe can't improvise something new, or someone who is going to check what's already available to save time, or someone who can improvise a super-efficient answer and do it even if it's not needed?

First of all, let's talk about what I don't want. I don't want someone who can't solve the most basic problems. I'm not interested in all these details at this point in the process. Don't get stuck in analysis paralysis.

> Do you really think this question has a simple answer?

No, but it's not the question I am asking. My question would be, can you solve basic problems? I'm not looking for 100% accuracy in testing, that's impossible. Surely some kid fresh out of college will get an "unfair advantage" with something that's still on their mind, while some genius may have a bad day and fumble the test. I wouldn't personally pick BFS as a test either, but the fact that you should be able to solve it remains. The fact that "you may never need it" is irrelevant.


You're asking the wrong question.

For all software businesses there is only one question to ask candidates: "Can you help us ship working software that solves the problem of our customer"

What I can tell you is that there are a number of books written on this subject if you need help identifying the correct interview questions to ask.


But this question does not really tell me anything. It only has one possible answer, which is "Yes!"

If I had a magic truth-detecting wand, it might be a good one, but in practice, candidates may be lying or honestly overestimating their ability.


That's just the type of answer I look for.

You're not someone who would look at the screen and just say "I don't know" (or thank me for my time and storm out).


The problem is that here you are optimising your test for https://en.wikipedia.org/wiki/Chutzpah.

Which is great and all, but it's not a quality which correlates particularly well with the stated problem you are hiring for.

Now if you were hiring for the marketing department…

Repeat after me:

A good hiring process is one that will not be affected by the luck of the draw nor the personality of the candidate.

I've been doing this for a quarter of a century and I will tell you right now that the best engineers of my generation would indeed thank you for your time, never come back and quietly advise the extensive network of young people they mentor to avoid your company like the plague.


Yes, I'm starting to realise that finding a candidate who's able to problem-solve something basic really is "luck of the draw".


I feel like you misunderstand me entirely.

You're going to have a really hard time finding good hires if you restrict yourself to the pool of people who can "problem-solve something basic" according to your definition of "something basic".

Maybe ask: "How can this person help the team ship working software that solves the problem of our customer?"

It's less adversarial and opens your recruitment up to that pool of folk who have literally decades experience answering that question.


It's more like: There are so many candidates with impostor syndrome. I don't want to hire another turkey.

So, I will come up with a little problem which is something related to the work they'd be doing - like, a real thing they would have done if they were hired a month ago. (hmm, I wonder how they would have fixed jira-123)

I'm not looking for "the answer" - I'm actually hoping they don't know it (if they do, great: here's another...), but.. do they have what it takes to solve a problem? How much hand-holding would they need? Am I able to have a technical discussion with them? Even, where are they on the passive<->arrogant scale?

This, among other questions, will answer "How can this person help the team ship working software that solves the problem of our customer?"

Job history and qualifications don't tell me this.

If a candidate feels they're too important or experienced to do this, then by all means walk away.


> I have no fucking clue how to write a BFS. I've never needed to know how to write a BFS. [...] If there's something I need to know and don't, I simply research it.

The problem with that is when the best solution to a problem is a BFS, you may never recognize or realize it, because you know nothing about BFS. You won't know what to research for.

I see it when people who don't know what calculus is go to enormous efforts to develop workarounds that sort of half-assed work.

I see it in my own work when I didn't know about a class of techniques, so I invented some crappy solution. For example, reinventing bubble sort when I could have used quicksort.

BFS is awfully basic knowledge. How do you know you've never needed a BFS? Maybe you never needed a BFS because a linked list is your go-to data structure? Maybe you've been reinventing bubble sort. (I'm not the only programmer who incompetently reinvented bubble sort, not even close. I just didn't know any better.)


> I see it when people who don't know what calculus is go to enormous efforts to develop workarounds that sort of half-assed work.

cf. https://escapethetower.files.wordpress.com/2010/12/tais-mode...


At least in my case I'd like to think I absorbed the habit of recognizing sub-optimal traversals of data (for example, yesterday I was writing a parallel iterator and made the intentional choice to "waste" effort duplicating the map result rather than synchronizing threads on mutable boundaries) but I don't remember all the buzz word jenga that was in my data structures class in uni.


I've often been able to dramatically improve other peoples' code by selecting the right data structure (array, list, tree, hash, whatever) where the original programmer clearly did not know about them. They were able to make the code work, but not very well.

It's hard to do research when one doesn't recognize there's a problem, nor know what question to ask.


> I have no fucking clue how to write a BFS. I've never needed to know how to write a BFS. [...] If there's something I need to know and don't, I simply research it.

>The problem with that is when the best solution to a problem is a BFS, you may never recognize or realize it, because you know nothing about BFS. You won't know what to research for.

Just because someone doesn't know how to write BFS code doesn't mean they don't know what it is.

> BFS is awfully basic knowledge. How do you know you've never needed a BFS? Maybe you never needed a BFS because a linked list is your go-to data structure? Maybe you've been reinventing bubble sort. (I'm not the only programmer who incompetently reinvented bubble sort, not even close. I just didn't know any better.)

Never had to write a BFS in all my years of programming. I am one of those programmers that write glue code and are not "real" programmers by the definition of some here in HN.


> Just because someone doesn't know how to write BFS code doesn't mean they don't know what it is.

Actually, it does mean they don't know what it is. BFS stands for "Breadth First Search". If a node in a data structure has two links, one going down in the data structure, and one going sideways, breadth first means going sideways first. "Depth First" means going down first.

That's the algorithm. It ain't rocket science. There's no trick involved.
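In code the only difference between the two is the container you pull from: a FIFO queue gives breadth-first, a LIFO stack gives depth-first. A minimal sketch in Python (the tree shape and node names are invented for illustration):

```python
from collections import deque

# a small tree as an adjacency map: node -> children
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
        "D": [], "E": [], "F": []}

def bfs_order(root):
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()          # FIFO: oldest first -> go sideways
        order.append(node)
        queue.extend(tree[node])
    return order

def dfs_order(root):
    order, stack = [], [root]
    while stack:
        node = stack.pop()              # LIFO: newest first -> go down
        order.append(node)
        stack.extend(reversed(tree[node]))
    return order

print(bfs_order("A"))   # ['A', 'B', 'C', 'D', 'E', 'F'] -- level by level
print(dfs_order("A"))   # ['A', 'B', 'D', 'E', 'C', 'F'] -- branch by branch
```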


You think every programmer has a CS degree?


That raises the question: if they don't, shouldn't they still get familiar with CS? Honestly, how hard is it to read Grokking Algorithms?

Look, I think the dev hazing ritual that is current hiring processes sucks, but Algos and Data Structs are the language that we speak. We have to be familiar with them.


I don't have a CS degree. But I know the freshman algorithms. It's the basic tools of the trade.


I don't. The ability to figure out how to write BFS still seem pretty basic to me.


The theoretical interview was never about writing a BFS. It's about how you approach answering the question.


The fallacy here is that you are actually getting to observe people's problem solving approach when asking them questions like this (vs their ability to talk) and that your subjective perception of someone's problem solving ability in those situations maps to actual job performance at all (vs your own biases).


The entire point is that I _am_ gauging their ability to talk with me through something they may or may not be uncomfortable with. This has worked well in the past and most people deal with this favorably.

If their response is "this is stupid, no one has to know how to do this" and storm out with the same indignation that is present in half of these comments then they are not the ones who dodged a bullet.


Is that a fallacy with basic algorithm questions, or with interviewing in general? How does another category of question provide objective feedback about problem solving ability instead?


I hear this a lot, but I've been in interviews where, yes, it really is about writing the BFS (or whatever algorithm they were asking for). I can remember one, where I was asked to write an algorithm that, given a vector of points, calculates the euclidean distance between every pair of points. I wrote out a one-liner in MATLAB that accomplished the task in about 2 minutes, but then they wanted to see it in C++. Then I wrote it out in C++, which I don't know so well, so it ended up being in pseudo-C++. Then we spent the remainder of the time quibbling about syntax errors and missing import statements. It was very clear to me the interviewer only wanted to see perfect, compiling C++ on the white board, and had no interest in my problem solving or thought process.
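For the curious, the task described really is tiny; here's a pure-Python sketch of the naive version (the sample points are invented, and the MATLAB one-liner would express the same double loop as a single vectorized operation):

```python
import math

# naive O(n^2) pairwise euclidean distances over a list of points
points = [(0.0, 0.0), (3.0, 4.0), (0.0, 4.0)]

def pairwise_distances(pts):
    n = len(pts)
    return [[math.dist(pts[i], pts[j]) for j in range(n)] for i in range(n)]

d = pairwise_distances(points)
print(d[0][1])  # 5.0 -- the 3-4-5 triangle
```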


> It was very clear to me the interviewer only wanted to see perfect, compiling C++ on the white board, and had no interest in my problem solving or thought process.

It depends on the position you apply for, I'd say.

Some jobs require C++ and algorithmic skills. If you don't know how to write a basic algorithm in C++, then you might not be the best fit - and you might not want to join the company as a junior developer.


FWIW, this example was for a company that refused to tell me what the job description was and what the position was. They didn't tell me ahead of time which languages they preferred or what kind of work I would be doing. It's a company everyone knows and is renowned for their secrecy. While they refused to tell me what they were looking for, it became clear during the interview they were looking for something very specific.


I'm fairly certain I know which company you're talking about - I'd recommend you redact the specific question they asked, because I'm aware that they do check online (including HN) for interview questions that were asked. You have plausible deniability as long as you don't name the company, but given the NDA they throw at you it's probably better to be safe than sorry :)

That being said - sorry to hear that was your experience. That shouldn't have happened, but it does.


Well, at least you knew upfront there was a high chance of it being a waste of time.


> Some jobs require C++ and algorithmic skills. If you don't know how to write a basic algorithm in C++, then you might not be the best fit - and you might not want to join the company as a junior developer.

If it's down to syntax errors, and the logic is correct, what the heck does it matter? Almost everyone writes code in an IDE that will deal with those syntax errors.


Let's say you have 2 people in front of you and they both design the same algorithm and come up with the same solution (sometimes that's really the case).

One writes code that cannot be compiled while the other writes code that compiles. Who scores higher?

Now, let's say you are a big company and have 100 applications for that position and do the math...


No I got that point. You missed my point; it's completely unnecessary when there's a more productive way to observe someone approach a problem.

EDIT: I noticed you edited your post so it looks less inflammatory. Not cool.


Not cool that he made it less inflammatory? I think you're missing the goal of HN here.


Disagree. Editing a post like that without leaving a message about why it's edited appears disingenuous.


It does make the reply look more brash, though. Good that [s]he made it less inflammatory, but it would have been courteous to add "edit: made the tone a bit nicer" or something.


That all depends on the intention. If the intention is to make my response look more hostile, then I would say it goes against the conduct of HN. They could have replied or added a remark in the edit. The poster could have said "editing, sorry for getting too heated" or something if the intention was to deescalate, but this was a ninja edit.


The edit button exists for a reason. Your regard for what is and is not cool is of little consequence. No one owes you an explanation.


And no one owes you a white boarded BFS implementation which bears no resemblance to how they would approach a problem in the real world.


Let us all be grateful that you were never asked for one then.


Oh I know you weren't asking, I was merely responding to the incongruence apparent in your own expectations of the world.


If your first contact with a potential hire is asking them asinine pointless riddles meant to try to imbalance them you are going into the process with an adversarial relationship.

Which is probably the root problem. Almost all tech hiring is carried out like a war between companies that want results and candidates who don't want to be treated like shit.


I never understood this. What does that even mean, and what is a good vs. a bad way to approach the question?

Because BFS has two approaches - 1.) I know BFS or a similar example and will write it with no thinking 2.) I am figuring it out in my head while saying things to keep you pleased.

You don't need to know an employee's internal thought process. Practically, BFS tests whether you are able to write a BFS, which is cool by me. It is simple enough that people who really couldn't learn it should be filtered out.

(It is ok to not know the name, obviously. Just the ability to find a thing in a data structure should be the question.)


I'd like to say that I appreciate this comment even though I disagree. It's true that you can be a perfectly good or even above average developer without ever using BFS, and interviewers shouldn't pretend this is not so. It's also true that software development is a problem solving job. You are paid for your ability to think for yourself and solve a problem. BFS is artificial but also a nice self-contained exercise that any developer worth his salt should be able to figure out.


Yeah, I've seen some real horror stories about bad interview processes in the past, but this seems more like the guy outing himself as a defensive and impulsive person who immediately shuts down when he doesn't "know" something.

I can't remember Dijkstra's algorithm either, but I would happily try and write a 15 line brute force recursive maze solver in an interview.

Of course it's not something I expect anyone does day to day, but it's also the kind of simple problem that I'd expect almost anyone with a CS 102 level of knowledge to be able to reason out by just taking a few minutes with a pencil and paper. As an interviewer, even a brute force solution would demonstrate a good willingness to look at problems using your fundamental skills and reason about something you don't have a ready made solution to.


> I can't remember Dijkstra's algorithm either, but I would happily try and write a 15 line brute force recursive maze solver in an interview.

You'd fail the interview. The type of interviewers that ask this style of question are never interested in seeing the naive brute force approach.


My experience has been different. Usually the best thing to do is to write the naive approach first and then let the interviewer guide you towards what they think you could improve.


This.

Implement -> Evaluate -> Refactor

I'd rather have someone who could caveman the first approach, recognize the inefficiencies, and improve the execution than someone who rattles off a memorized algorithm to a common problem.

If the interviewer is wanting the eloquent solution right out of the gate, then the manager might be hiring for the wrong position.


This works best when you've formed a rapport with the interviewer. If you haven't then you've already bombed.


Have you ever tried to implement the KMP (https://en.wikipedia.org/wiki/Knuth%E2%80%93Morris%E2%80%93P...) pattern matching algorithm yourself?

I tried doing it -- even with the knowledge of how it works, it took me at least a day to get to an acceptable implementation.
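To illustrate how fiddly this is compared to a whiteboard BFS, here is a Python sketch of the textbook KMP formulation (prefix function plus a single scan). This is just the standard algorithm from the literature, not any particular interviewer's expected answer, and getting the `while` fallback loops right from memory is exactly the hard part:

```python
def prefix_function(pattern):
    # pi[i] = length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it
    pi = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = pi[k - 1]                # fall back to the next-longest border
        if pattern[i] == pattern[k]:
            k += 1
        pi[i] = k
    return pi

def kmp_search(text, pattern):
    """Return the index of the first occurrence of pattern in text, or -1."""
    if not pattern:
        return 0
    pi = prefix_function(pattern)
    k = 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = pi[k - 1]                # reuse the matched prefix; never re-scan text
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            return i - k + 1
    return -1

print(kmp_search("ababcabcabababd", "ababd"))  # 10
```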


OK, so you are just going to assume that any interview you can't pass is broken?

There have been interviews I have failed and instead of blaming the interviewers I took the time to actually study the problem I was given. This has helped me become a better programmer.


No, I was just presenting a counter-argument. I never had to actually implement KMP outside of uni though.

Interviews that require extra knowledge outside of what the job actually asks for are definitely broken.


That has not been my experience as an interviewee. Implementing the naive solution for small n has been well received in my interviews, and from there (imperfectly) optimizing it is a collaborative experience.

I don't doubt you've experienced differently. But if you have, I think it's likely the interviewer, not the question. Lots of interview questions can be abused, not just basic algorithms and data structures questions.


It's incredibly frustrating to be in a room with someone who will pop whatever random trivia they want at you and watch you struggle against artificial constraints to appease them.

I'm dreading ever having to interview for a tech position again, because even on trivial things like Error trait impls in Rust I end up opening 3+ pages of documentation to verify I knew what I was doing, or often figure out better solutions with methods not on my brain's hot path of solving the problem.

There's nothing pleasant about having your tools taken away from you and then being looked down on when you are like a fish out of water. The OP even talks about it when describing the drain of technical interviews - nobody wants to be judged at their worst by strangers. It's humiliating.


> > How many people can actually write BFS on the spot, without preparing for it in advance?

> Ughhh, should I tell him?

Yeah, that's where I stopped reading. This isn't specialized knowledge. If you know how to program, you can write a BFS. The entire algorithm structure is in the name. The only extra details you need to remember or figure out are "use a FIFO for tracking what to check in the future" and "make sure you don't go backwards". Neither is some great secret. They're obvious if you sit down and think through (or talk about with the interviewer) the structure specified in the name of the algorithm.

This person is either not a good programmer or has a very poor attitude about solving new problems.

Hiring isn't broken. I'd reject this guy too. Knowledge about a specific technology is a lot less valuable than general knowledge and willingness to explore.


My sentiment would be: "hiring is broken, but I'd reject this guy too".


You know what is broken? Empathy. Every individual has certain skills and strong and weak points. To that effect, have empathy and understand that the person being interviewed is not in as comfortable a position as the interviewer. An uncomfortable situation, where everyone is peering at what you do, makes some people's thinking fall apart.

Programmers need a zen place to concentrate and focus. "Homework" assignments would be a good measure of skills, and questions about the decisions the candidate made and how they would improve the code given new use cases would be more comfortable - now we have a level playing field. My two cents.


Unfortunately, homework assignments are being abused to the point of being absurd.

Around here, the trend is requiring a 16-20 hour assignment just after a brief 15 min phone call, almost before starting the process at all.

Not only is it a ridiculous amount of effort to ask of someone who may already be doing 8 hours a day of programming, but it's also a problem for the company. Either they have to spend significant effort evaluating each candidate's submission or -more likely, sadly- they only give it a perfunctory look and discard many on first glance. And considering that often the person doing the evaluation has their own tasks to do and is doing it in some spare time, they will tend not to make much of that effort.

The result is the company still needs to do significant effort and the candidate gets frustrated because they have had to spend a significant amount of time and, after having to wait for a couple of weeks -at the very least-, they get generic and useless feedback saying simply that they did not meet the expectations or whatever.

I would not mind at all doing a two-three hour on-site assignment. If you're there, they will at least have to make a similar effort as you are. I find this much more fair both as a candidate and as an employer.


> I would not mind at all doing a two-three hour on-site assignment. If you're there, they will at least have to make a similar effort as you are. I find this much more fair both as a candidate and as an employer.

I've had this exact thought. I got interview homework last year, was told to spend "no more than 4 hours" on it, etc, was mildly interesting. The next interview we'd have a discussion about the code and I'd present my rationale for various design decisions etc. But then I got the email that the interviewer had reviewed the code with an engineer and they decided not to move forward because there was a mistake and they "thought there would be more testing"! I felt that it was quite imbalanced to have spent 4 hours on this task, only to have it rejected without discussion after a short review.

So in the future I'm going to suggest that I'll be happy to do whatever homework assignment as part of a pairing exercise. That way I can see what it's like to work with them too.


If a recruiter/hiring manager sends one of these nonsense assignments to me, I refuse them outright and thank them for their time. It only took me one frustrated night trying to bang one of these out after a long day of work to learn my lesson. This method of testing a potential hire is lazy, ineffective, and frankly rude IMO.


The strategy here is to set aside one hour to code, write the code, send what you have after an hour and say what remaining work needs to be done, along with reasonable estimates of remaining time (noting that you're too busy to spend actually doing it).

Your success rate will be no less than someone who spends 15 hours doing whatever bullshit assignment there is.


I did exactly what you just suggested. I never heard back from that particular recruiter. Bummed me out at the time, but now I just laugh at it.


I find empathy hard here. If the author had said they froze or panicked and blanked on BFS, then that's understandable. But that's not what happened here.


> I find empathy hard

"If only the author did these very precise things that they just so happened not to do, THEN I would find empathy". I'm pretty sure that's not how empathy works.


> This person is either not a good programmer or has a very poor attitude about solving new problems.

He probably isn't a good "programmer." He is a web developer, and has projects that are basically CRUD input and output.

He can stitch together various different components to build a functioning end product. This is a fine skill for most agency type jobs, and he will likely have a long prosperous career as a contractor.


You don't have time to design an algorithm on-site; you MUST write it down exactly, right now, without errors. That's the problem.


That's not how interviews work. You are not alone in the room being judged by an all-seeing eye. You are talking to a human, and each of you is evaluating whether you want to work with the other.

As an interviewer, I'm always happy to talk to the candidate about the question I ask. I'm perfectly happy to answer "What is a breadth-first search? Why does it solve this problem?". I'm perfectly happy to talk about algorithms with them, and ask leading questions if the candidate explains where they're having difficulty.

As a candidate, I'm not interested in working with an interviewer who doesn't approach the interview the same way. Remember, all interviews are bidirectional.

This person's stated reaction to the question was anger that it was asked. Most likely masked in person, but even masked tells us a lot. It tells us he didn't actually talk to the interviewer. If he had talked to the interviewer and got garbage back, he would have been angry about how awful the people were, not that some technical question was asked. If he'd talked to the interviewer and got useful information back, he wouldn't have thought it was a bad question. The only options left are trying to stumble through it silently and getting it wrong, or sullenly saying "I don't know" and moving on.

And that's the problem. A good programmer isn't measured by being able to make things. A good programmer is measured by being able to work with others and make a positive contribution in a group. Everything this person is describing is how to make a bad first impression during an interview. Maybe he's a great programmer, but he's utterly failing to actually convey that in the important aspects during the interview.

I wouldn't hire him, and this writeup is exactly why. He needs to demonstrate that he's someone people want to work with. This article does the opposite.


>As an interviewer, I'm always happy to talk to the candidate about the question I ask. I'm perfectly happy to answer "What is a breadth-first search? Why does it solve this problem?". I'm perfectly happy to talk about algorithms with them, and ask leading questions if the candidate explains where they're having difficulty.

This is something I feel a lot of interviewees aren't aware of, you DON'T have to know EVERYTHING.

I didn't know what a BFS was (I am primarily a front end web developer), so in an interview, I'd simply ask "Could you explain what that is? I'm sure I can find some way to do it."

As an instance, I looked up "Breadth First Search" on google just now, and saw that it's just a way to search a tree one generation/level at a time. Once I knew that, the naive approach is simple (I'm sure there is a better way). Start a queue with node(0,0), and loop until the end of the queue; if you don't find the correct value, add the node's children to the queue, and keep going.

I totally feel for the guy, I've been rejected more times than hired, that's for damn sure, and each time is a HUGE blow to the ego, but you have to pick yourself up. I feel he is a great contract developer (not a dig, I am also a contract developer), but not necessarily someone I would hire full time.
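That naive approach really is the whole algorithm. A Python sketch of the grid search described above (the grid contents are arbitrary; the only essential parts are the FIFO queue and a visited set so you never re-check a cell):

```python
from collections import deque

def bfs_grid(grid, target):
    """Search a 2D grid for target, expanding outward from (0, 0)."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(0, 0)])
    seen = {(0, 0)}
    while queue:
        r, c = queue.popleft()
        if grid[r][c] == target:
            return (r, c)
        # enqueue the unvisited in-bounds neighbors
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return None

grid = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
print(bfs_grid(grid, 6))  # (1, 2)
```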


> Once I knew that, the naive approach is simple (I'm sure there is a better way).

No, that's it. That's why so many are ragging on him; it really is that simple.


I have asked HR people (in large corps) and got a clear, direct answer: you MUST solve two tasks in the interview; it's a necessary condition. If you don't solve them within the hour you are rejected. In small companies conditions are much more relaxed, but in large corps, where the interviewer screens hundreds of applicants, he/she just doesn't want to waste time.

This system is corrupted, but it looks like it suits big corps just fine. I have never ever been asked about my real experience in a large corp interview. They just don't care about your github. Small companies ask about experience, but unfortunately some small companies copy the FANG approach to interviews; cargo cult is magnetic.


I've had a successful programming career since the 80s. Including a fair share of browser front end programming.

As far as I remember, I have never written any BFS code, and I would also find it absurd to be asked to do so in an interview. There is great library code that does this in any reasonable situation.

If you're the kind of shop that would rather write your own code than use third party libraries, then that's a really great sign that I shouldn't work there.


If you're the kind of shop that would rather write your own code than use third party libraries, then that's a really great sign that I shouldn't work there.

Do keep in mind that you're eliminating the places that wrote that "third party library" in the first place. Libraries don't just grow on trees (though they can help you navigate a tree).

Just a few weeks ago I had to knock out a test framework, creating an API from scratch (basically an object model on top of some websockets messages), because there's no library that's going to do what we need. Oh, I friggin' abused Python's import keyword to hell and back, but there was a lot of stuff that just needed to be written from scratch. Given there are a few tree-like structures, I'm sure I briefly weighed depth-first vs. breadth-first before hammering out the code. And this is nothing exotic, just a small company working in the industrial controls industry. And such a task is nothing exotic, it's why they hired me. But I think to effectively create something like this in an efficient manner, having at least a rough idea of something like BFS is table stakes.

One is not being asked to implement a red-black tree on a whiteboard, just something that I think a competent software developer could come up with from first principles. A company asks you, "which direction will you be searching, and how would you do that?", and you'll reject them. I mean this with due respect: you're probably right, neither party will want you to work there, and the filter worked. Hurray?

I'll tell you what grates me, though: the companies that insist they need someone like that, and then the new employee finds out that the culture is so borked, that said employee will never get to actually do that stuff. "We need test infrastructure." Turns out the reason they don't have any isn't for lack of someone to write it, it's lack of culture to do anything with it. As just one example.


If I had to write a fresh impl of BFS I for sure wouldn't just go off what my algorithms book said a decade ago. I'd reference how other languages and frameworks have implemented and matured the problem in actual deployment and use their versions as a basis of mine (assuming license compatibility).

Writing code in a vacuum is humongously dangerous even for simple things like piping strings. Its why so much C software ends up chock full of overflows, segfaults, and myriad security vulnerabilities.


All good reasons for not getting stuck in front of a whiteboard redoing the unprotected unoptimized core of just the basic algo.

Wouldn't it be nice to be able to use references / existing implementations while at the board being watched and timed.


At least then it wouldn't be purely artificial. I'm not surprised at all that there are so many people "bad" at technical interviews when they are fundamentally designed to leave you a fish out of water. Nobody likes to flounder, but some people it shakes much more than others.

But I wouldn't think for a second those people would be incapable of being good team members or productive coders. You just let them stay in their comfort zone and approach uncertainty in the ways they are comfortable with. They will know the best way they can approach solving a problem, and if you were testing for that instead in these interviews, they wouldn't be so harmful to participation in tech by "minorities" such as women or people of color.


>> I have never written any BFS code,

I'm primarily a backend dev who did a little js a couple of weeks ago that required me to populate a dynamic tree with the results of an api call.

Although I think I went depth first, so I guess technically you're right....


I can't write BFS off the top of my head, but I can probably do it after reading the Wikipedia page on it. The last time I interviewed (for the company I'm working for now), I faced a similar question, and said up-front, "It's been a decade since I've had to implement that, so let me Google for a quick sec." I proceeded to do so, skimmed the Wikipedia article, and wrote the code in the shared editor we were using. I got the job, so the interviewer must have seen my candor and quick comprehension as positives. But I know (based on the parent comment here, as well as discussion with colleagues) that this sentiment is not universal.

If I had claimed a lot of algorithm-heavy experience on my resume, I would have expected my response to be met much more harshly. But, as my experience was focused more on API design and interactions with business stakeholders, it wasn't a useful question to gauge my competence. However, it was useful for gauging my personality. Like everything, context is vital.


Absolutely this. The better question is, given several data sets, how would you approach traversing them. What is your intuition, and what are the limits of your awareness of how to approach the problem.

That is actually informative on your programming ability, not regurgitating buzzwords (albeit BFS is a light one) or rote memorization.

Like just hearing these kinds of questions is infuriating because I often immediately ask things like "is the data processing complex enough to justify threading it? what are the synchronization points? if the data processing is variable, we probably want a job pool, etc". The performance of code is almost always nonintuitive until you have an implementation done in order to optimize it, and questions about searching graphs are almost always these "optimize light" problems where they want you to really know how to do the navigation right because of my precious 10 cycles per branch but don't want to even consider the operating environment that could influence the decision in anything but a purely academic setting.


I was taken aback by the BFS thing at first as well, but after thinking about it, I'm not sure I would be so confident about writing one if I didn't already know the basic structure due to the 50 or so times I had to implement some version of it in college. I don't feel like I can judge because I was forced to implement most of the basic data structures and traversals so many times that I can usually re-create them based on knowing 3 or 4 steps and filling in the gaps myself. I think a lot of software engineers also had this experience, and so they expect it of others, but if nobody forced you to write a path finding algorithm inside a composite PR quadtree in college then you are at a huge disadvantage for interviews, but probably less of a disadvantage for actual practical programming.

Be honest, how many of us implement our own data structures these days? I sure don't. I just build on or use whatever version of map comes with the language 98% of the time.


> Tree data structures are really common in front-end.. like the DOM....

99.999% of front end developers do not solve those kind of problems.


Also, there are front-end languages (XPath, CSS selectors, etc.) specifically for these types of structures, so you don't have to traverse them.


Yeah, just pile on another layer of code you don't understand. And then wonder why it's slow or doesn't work the way you expect.


I know, so crazy, that's why Facebook was written in x86!

Fun fact: I'm lying.


Software developers always have to work on top of some sort of abstractions throughout our career, correct?


That doesn't mean you shouldn't have a reasonable understanding of what those abstractions are doing, though.


For the purpose of selecting a DOM node in a performant manner, you really don't need to know how CSS/XPath traverse the DOM. You just need to know that querySelector exists.

In general, I think it's enough to know what your abstraction does, rather than how it does it.


There's a huge difference between using an abstraction with vs. without understanding what it does. Often the latter leads to more layers of abstraction piled on top to "fix" the problem, until everything sorta works and is barely workable at the same time.


"All non-trivial abstractions, to some degree, are leaky."

https://en.wikipedia.org/wiki/Leaky_abstraction#The_Law_of_L...


It's building on top of the abstractions, not depending on someone else's implementation of all the abstractions...


And if the candidate will be writing a new DOM traversal API, this is a valid exercise.

As it stands, any of the libraries that do DOM manipulation for me also traverse the tree for me ... so, again, why do I need to implement a toy BFS in an interview?


To debug the traversal algorithm when it breaks in production?

Also, if you know that all the companies do this, why don’t you just study beforehand? Tree traversals come up multiple times in the article. I understand the author’s frustration, but not their (early) insistence on not studying for an important interview. When they did study later, I wonder how seriously they took it, based on the tone.


>To debug the traversal algorithm when it breaks in production?

And what's the likelihood of that happening with a well-used framework? Maybe the interviewer wants to see my ability to debug problems - great let's do that instead. You want to see how I work? OK, let's pair on something. Pinning this to "implement BFS at the drop of a hat on a whiteboard or in this project" is bullshit.

>Also, if you know that all the companies do this, why don’t you just study before hand?

Fine. I'll implement a few solutions to the problem in the languages I'm familiar with and put it on GitHub. Now the interviewer can check it out and we don't have to waste time at a whiteboard.


> what's the likelihood of that happening with a well-used framework?

In my experience, higher than you'd hope.


You have heard of the monkey-ladder-banana experiments?

Just because the interview culture has evolved to this awkward unrepresentative generally unhelpful filtering method, is no justification for "well just do it that way and you'll be fine".

If I'm the new monkey, and someone smacks me for touching the ladder, I'm going to ask why and push back if there's no good answer.


Because one day you might need to, and you'll write 100 lines of code to do it. If you had known what a BFS is and how to write it, you'd be able to accomplish the same task in 10 lines of code.


Counter argument: I could google it, read about it, and implement it, all in half an hour with a computer and internet access. Not on a whiteboard. Which is a fucking stupid interview process.


Very unlikely in my opinion. Either you’re told about it beforehand, inherit a project that’s using it, or introduced to it during your PR. Especially in places that ask about them during an interview...

I don’t think it’s about the BFS, it’s about attitude.


Mostly because it is simple enough and does not require special knowledge beyond knowing what a tree is. Coming up with an original problem is harder and would put the candidate in a more random position.

Maze solving could be done breadth-first too. It is an algorithm that can be used in many different contexts.


Yikes... Interviewing sucks, but half the reason interviewers ask these types of questions is to see your attitude and how you respond. Writing an Instagram clone in Angular doesn't really tell me much about your problem solving skills when faced with a unique problem.

How many software developers work on "unique problems" that involve hard computer science problems compared to the number that are just using an existing framework to solve business problems?

Tree data structures are really common in front-end.. like the DOM....

And most of the time you're going to end up using a pre-existing implementation....


Most projects require you to solve a small, simple, unique problem once in a while. And any long-term position requires you to regularly learn not-so-simple new things. Depending on your existing knowledge, the BFS question tests either whether you were able to learn BFS in the past or whether you can solve a small unique problem.

There are positions that never involve anything harder than BFS, but I would not say they are the majority.


In 20+ years of development, including 12 doing cross platform C work, two maintaining a bespoke development environment for Windows Mobile (compiler, VM, IDE, etc.), I've only once had what I consider a problem that needed any type of complex CS algorithm. That was to evaluate an algebraic expression in a string in C using the "Shunting Yard Algorithm".

And the 20 years of professional development was after I had been a hobbyist writing 65C02 and x86 assembly language....


Do you really consider breadth-first search a complex algorithm? It literally is "you have a structure of related data; find the object with a given id by systematically going through the structure." That is all it does. Finding whether a directory contains a file bigger than x is an example of breadth-first search. That sort of thing.

It makes perfect sense not to remember the name, and that should be ok. But it really is not something complex: it is pretty much the simplest algorithm, the one used to explain the concept of an algorithm to students.
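For what it's worth, the whole thing fits in a dozen lines. Here is a sketch in TypeScript of that directory example; the `Dir` shape and field names are made up for illustration, not any real filesystem API:

```typescript
// Hypothetical tree node: a directory entry with child entries.
interface Dir {
  name: string;
  sizeBytes: number; // size of the entry if it is a file, 0 for directories
  children: Dir[];
}

// Classic BFS: visit nodes level by level using a FIFO queue, and stop at
// the first entry that satisfies the predicate.
function bfsFind(root: Dir, match: (d: Dir) => boolean): Dir | undefined {
  const queue: Dir[] = [root];
  while (queue.length > 0) {
    const node = queue.shift()!; // dequeue the oldest node
    if (match(node)) return node;
    queue.push(...node.children); // enqueue children for the next level
  }
  return undefined;
}
```

`bfsFind(tree, d => d.sizeBytes > x)` answers the "file bigger than x" question; swapping the queue for a stack turns the same code into DFS.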


I’m not saying that, given a scenario, I couldn’t figure out a way to do it. But at the level of job I would be applying to after all of these years, would a company really be more concerned about whether I could write a breadth-first algorithm, or about how well I could design a highly available, fault tolerant, redundant system, write maintainable code, my mentoring and leadership capabilities, my experience with Domain Driven Design, my years of experience designing and developing “cloud first solutions” (yeah, I groan a little bit when I write that), and certifications?

My current employer asked me no programming questions even though I supposedly came in as a “Senior developer” in my small company.

He was more concerned about everything I mentioned and how could I help mature the organization.

Heck, he didn’t even care that I didn’t know any $cool_kids front end framework.

Whenever my last day on my current job comes, and I don’t expect that to be for a few years, I’ll probably end up working for a consulting company as an overpriced “digital transformation consultant”, “implementation consultant”, or “solutions architect”. Do you really think they are going to ask me about my LeetCode capabilities?

The same should be true for anyone at a certain stage of their career.


I would kind of expect that if you can design a highly available, fault tolerant, redundant system, then you would figure it out. I can see how you could find the question too easy to let you show what you can do, but that is the opposite situation. A lot also depends on the position. If the position is not a leadership position and does not require the specialized knowledge you have, I don't think the company does wrong by not asking about it; then those easier questions have a place. I assume that an overpriced "digital transformation consultant", "implementation consultant", or "solutions architect" does not write code, or only a little of it. As such, a code-focused interview makes no sense. It is different, however, for a position that involves writing a lot of code.

I don't get insulted by basic questions, mostly because I worked in a company that did not ask programming questions. As a result, I had to work with a few people who could talk design and maintainability and such, and seemed great socially, but turned out to be unable to write code except in the simplest situations. It was not good, and the harm was long term, largely because of the resentment that built up in a team that had to do someone else's work while that person was treated as a superstar. Such a person needs a strategy to mask their inability, and those are all toxic: masking their own inability by blaming others, etc.

What are the alternatives out there? Take-home assignments, fizzbuzz, simple algorithms, trivia questions, requiring you to already know the exact technology they use. Someone complains about every one of these. There is no hiring process that makes everyone happy and fits everyone, but imo, as long as the company does not go to some crazy extreme somewhere, it should be fine. If the candidate has to balance red-black trees, then it is very clearly too much; figuring out whether a string is a palindrome is not too much.

There is also something good to be said about a repetitive hiring process, where the company can compare how people did in the interview with how they did in real life. As such, it will contain some generic or easy questions.


If you are getting paid to solve business problems, then why not have them solve simple versions of business problems?

My interviewing process for developers is giving them a skeleton of a class with corresponding unit tests that they have to make pass.

Then they get another set of unit tests that they have to make pass without breaking the first set.

The code models a simplified version of real world types of problems we had.


That sounds good, really. It is better than asking about breadth/depth-first search. An added bonus is that the potential employee gets some idea of what they will do daily before making a decision.

Another way to screw up hiring is to select for an algorithm-loving geek who seeks algorithmic challenges and can recite obscure edge cases, and then put him in a position where he has to solve normal business problems or maintain unit tests, because that is what the job is. And then wonder why the genius is demotivated and does not seem to produce.

But I still don't think either of these search questions is outrageous or a deal-breaking question. It is within the acceptable range of questions, partly because it is so common that I would expect a person to quickly google it if they heard it twice already and did not figure it out under the stress of an interview.


After 20+ years I'd expect you to be able to make it up on-the-fly.


I've only ever implemented BFS for coding challenges. Do you find yourself reaching for it on a regular basis in front-end development?


I've used a version of it before, to traverse a cached client-side hierarchy of domain objects.

The inevitable response to that is "well that's an exception, it's only occasionally needed." True, as far as it goes. But if you exempt all of these "rare special cases," you've suddenly exempted 10% of the work. That 10% is among the trickier bits (though the hardest is always organizational and product).

I also hate to say this, because it's snotty, but as far as obscure algorithms or tricky, complicated algorithms go... BFS and DFS don't really fall into those categories.


>But if you exempt all of these "rare special cases," you've suddenly exempted 10% of the work.

I don't think that's the issue though.

The issue, as I see it, is not that you had to figure out how to implement BFS. I think many of us are confident we could do that under professional working conditions. If I had a day and some data structures to poke around with, I'd write a killer BFS supported by tests if I needed to (I've had to do similar things in the past).

The issue is that interviews expect us to recall this kind of specialized and rare problem solving like it's a day-to-day thing. We as candidates are being judged on how well we can come up with a novel solution on the spot to something that many of us will need to do once or twice in a career, and will be able to solve under completely different conditions.

In short, for most candidates, these kinds of questions don't test anything realistic, and a lot of people have issue with that. It's like judging whether you want to go to a chef's restaurant based on how well they did on Chopped! They might do well under pressure because they have practice with it because they're always in the weeds because their restaurant is poorly run. A terrible dining experience might translate to a win on a cooking competition show, because the show isn't testing the chef on the experience they provide you. Just as these kinds of interview questions don't test you on how you'll actually be interacting with code and solving problems day to day.


I don't want to hold up whiteboard interviews focused on algorithms as the be-all and end-all. They're a poor proxy for day-to-day skills: the hard stuff is organizational, knowing all the things that can go wrong, and technical design that's both flexible to changing conditions and easy to work with. None of those are really amenable to probing in a 45-minute interview, or even several hours of pair programming.

But it's about friction: I want my coworkers to be focused on the genuinely hard problems, not spending a day writing a BFS. The current interview process does manage to probe that.

Going a bit deeper, the whiteboard interview process is a good proxy for ability to prepare over the medium term (a month or three of consistent studying should give you as good a chance as anyone to get into a generalist position at a prestige company) and of IQ. The latter is controversial and most organizations can't test for it directly (owing to legal concerns), but a relatively high IQ is a core requirement of technical roles, and whiteboards provide a solid proxy for that when coupled with the opportunity to prepare for them beforehand.

That said, I'd always go for someone who has the ability to deliver on complicated, large projects over someone great at whiteboards or who has a high paper IQ. It's just that it's pretty much impossible to evaluate for that in a way that works on a general application pool.


This is a very astute answer. I actually do think well-structured algorithm questions that are variants of common ones might be strongly g-loaded, i.e. they test knowledge + IQ.

I would say big companies that hire new people or generalist programmers benefit greatly from them. These types of questions test for intelligence at scale without outright being an IQ test. In fact, I think I read somewhere that Google engineers' performance is strongly correlated with how well they do on their interviews.

Sure, these questions might not test for conscientiousness, curiosity and general agreeableness, but that's why you have the other portions of the interview.

All this being said, I don't necessarily think these questions work at most companies where the engineering teams are significantly smaller and you can really spend a lot of 1-1 time probing knowledge and experience and reviewing coding exercises. However, I would still say basic DFS/BFS and using a hash to solve a problem is still a must. I use them on occasion, even as a generalist.


I think the goal of hiring is always: "We have a particular business problem, and need to determine: if we pay this person, will they be able to help us solve it?"

As much as I think certain companies shoot themselves in the foot by spending their interviews asking trivia questions about e.g. Rails (because that's what they use), it's not unreasonable that they might just more highly value having someone who can be productive in their Rails world immediately against someone who, while excellent, nevertheless needs a week or several to onboard the ecosystem. Or in other words, they think the goal question is answered in the affirmative more strongly for someone already in the ecosystem. A dangerous assumption, but possibly valid. There are always tradeoffs to be made in the specificity and category of your questioning, but interviewers need to keep in mind how they contribute to the overall goal's question.

For larger companies, it's more likely that there are more problems to solve, and a greater need to get people working on them sooner rather than later. So the incentives start pushing toward addressing the goal of hiring (if not for all positions, at least for many positions) by answering a simpler question, "is this person smart and gets things done?" (which is just high IQ + conscientiousness), and hiring en masse. If you put such people to work on any of the problems, you can rightly expect they will advance them some amount, even if they have to ramp up on a programming language, framework, or other feature of the problem's environment first; and as time goes on, they can shift around the company to where they're even more effective.

In addition to being fast, it's also very fair with respect to people's backgrounds. Suddenly it doesn't really matter if you have a thousand widely used GitHub repos or not, have a 10-year experience head start or just graduated a bootcamp, have a degree or not, or know the framework already: people from the "wrong background" can still be hired if they can demonstrate they are sufficiently smart and conscientious to start working on one of the various problems you needed work on yesterday.

The fact that we have to proxy this with whiteboard hazing sucks; it's expensive for everyone involved compared to just asking for ACT/SAT scores or the results of a previously taken IQ test when available, and giving an IQ test when not. In my own interviewing I try to proxy for "smart and gets things done" (because that's all my teams at my current company have needed) in a less asymmetric/aggressive way, while answering other questions, since we can't hire everyone. But even aggressive whiteboarding is better than every job interview screening for highly specific backgrounds and/or trivia knowledge...

And I agree that BFS/DFS is appropriate, though needlessly so (i.e., there are even simpler questions that take less time), as a first step in answering the business goal, which is simply verifying: can this person who says they can program actually program? Unless you're willing to train people to program, you have to start the cutoff somewhere. It's modestly appropriate to answer the question: does this person know their craft's basic tools?


> Going a bit deeper, the whiteboard interview process is a good proxy for ability to prepare over the medium term (a month or three of consistent studying should give you as good a chance as anyone to get into a generalist position at a prestige company) and of IQ.

This is where I have a problem. Why are we the only industry that has this expectation of months of preparation? Engineers don't do it. Doctors don't do it. Professors don't do it. Why?


I did use BFS/DFS or things like find/union in my job. But I'd say it's mostly about how quickly you can wrap your head around abstract concepts.

An analogy from finance: if you take a pen and a paper, and think about it for a few minutes it'll be clear that buying a stock and a put option is equivalent to buying a call option (and a bond, since you're also locking in some capital). This is fine, and most people can do that. But the brilliance happens only when such things are immediately obvious to you, in the same manner as you don't need to consciously decode English while reading this sentence. It's another question whether this or that particular company really needs brilliance.
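To make the analogy concrete, here is the payoff arithmetic at expiry sketched in TypeScript, under the simplifying assumption of a zero interest rate (so the bond simply pays its face value, the strike K):

```typescript
// Portfolio 1: own the stock plus a put struck at K.
// At expiry the put pays max(K - S, 0), so the total is max(S, K).
function stockPlusPut(spotAtExpiry: number, strike: number): number {
  return spotAtExpiry + Math.max(strike - spotAtExpiry, 0);
}

// Portfolio 2: own a call struck at K plus a bond paying K.
// At expiry the call pays max(S - K, 0), so the total is also max(S, K).
function callPlusBond(spotAtExpiry: number, strike: number): number {
  return Math.max(spotAtExpiry - strike, 0) + strike;
}
```

Both sides reduce to max(S, K), which is the pen-and-paper equivalence described above; the real parity relation also discounts the bond at the risk-free rate.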


No, but it's a really easy algorithm; it really shouldn't be too hard to implement given a description of it.

(not accounting for the people who are just bad at writing anything on the spot in a high pressure interview situation, of course)


It doesn't matter if it's a really easy algorithm if it's trivia or unrelated to the job.


How do you know that implementing a feature very close to how BFS works won't be necessary in the job? You can't. At this point asking if a candidate can make BFS is akin to asking if they can count. Same thing for inverting a binary tree or doing fizzbuzz. They're general and easy enough that anyone with a programming background should be able to figure them out.


Given that the hiring process has time constraints (however you do hiring, you can’t use infinite time, so you need to pick what you’ll include in the process), why use time testing the candidate’s ability at something that might be necessary in the job? Wouldn’t that time be better spent testing something you know will be necessary?


You ask about things you know will be necessary as well. In an interview the time shouldn't be spent entirely on academic programming questions. They're there only to gauge problem solving skills in real time in a stressful environment, which work sometimes is. Who says you can't fit any of the above mentioned programming problems in 10 to 15 minutes?


Wouldn’t asking real-world questions that the person will definitely run into during the job also gauge problem solving skills? Unless problem-solving isn’t part of the real job, in which case it’s not worth testing for.

If your point is that you do a mix of questions that are definitely relevant and some that are academic and might be relevant, why not just do 100% known-relevant questions? What’s the value add of asking questions that are less than 100% relevant?


Here's a simple way to quantify the argument: go find the most popular Node or React front-end projects on Github, and then find out how many of them --- in their own source code, not in the vast, unending dependency tree NPM generates --- contain a breadth-first search.

My guess is: not very many.


I’ve had to write tree searching algorithms at multiple companies, although I usually find myself reaching for depth-first search instead. It comes up often when dealing with hierarchies (think things like company org charts for HR software, for example).


I stumbled a bit on "write BFS" at a FAANG interview. The specific question was to write it for the Facebook friend graph. I just blurted out my reaction, which is that this was a terrible idea (given the space requirements). His response, "Well, pretend it's not.". Ugh.


Huh, what's terrible about it? BFS is exactly what I'd want.

Let's make up a synthetic problem, like "the closest friend which has property X". Then you'd first check all of your direct friends for X, then check friends-of-friends for X, then check friends-of-friends-of-friends for X, and so on. You'd go on forever until you either find a friend with property X, or run out of time/space.

This is the classic BFS problem, and I can see how it would make a good interview question.
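As a rough sketch of how that looks in code (TypeScript, with a made-up in-memory adjacency map standing in for the real friend graph; counting the start node at distance 0 is an arbitrary choice here):

```typescript
type UserId = string;
type FriendGraph = Map<UserId, UserId[]>;

// Ring-by-ring BFS: check the current frontier for property X, then expand
// to the next ring of unvisited friends. The visited set prevents cycles.
function closestWithX(
  graph: FriendGraph,
  start: UserId,
  hasX: (u: UserId) => boolean
): { user: UserId; distance: number } | undefined {
  const visited = new Set<UserId>([start]);
  let frontier: UserId[] = [start];
  for (let distance = 0; frontier.length > 0; distance++) {
    for (const u of frontier) {
      if (hasX(u)) return { user: u, distance };
    }
    const next: UserId[] = [];
    for (const u of frontier) {
      for (const f of graph.get(u) ?? []) {
        if (!visited.has(f)) {
          visited.add(f);
          next.push(f);
        }
      }
    }
    frontier = next; // the next ring: friends of the current ring
  }
  return undefined;
}
```

The space complaint elsewhere in this thread is about `visited` and `frontier`: on a graph the size of Facebook's, those rings explode after two or three hops.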


It depends a lot on the specific question, and the connectivity of the graph, but in general, BFS can use space proportional to the size of the entire graph, which for the FB friend graph is huge. Even on a machine with a lot of RAM, you shouldn't assume this will work.

DFS or IDFS can generally use space proportional to the diameter of the graph, which is far smaller.

That caveat with BFS turns out to be so bad that I've never seen the algorithm used in practice outside of a classroom. And indeed, I first thought the point of the question was to elicit this complaint. The interviewer wasn't on that page, though.

The problem being asked was considerably more complex than "closest friend with property X". I don't recall the details, but perhaps it was something more like "find the ten shortest friend paths to (a unique but unknown) someone with property X, where those paths share no nodes".


Agree re "depends a lot on specific question", but the problem you specified still sounds very much like BFS, especially "shortest path" part.

My assumption for the Facebook graph would be that there is basically no way we can traverse it all, so your only hope is to find the path without expanding all the nodes. DFS will not work for that at all, but both BFS and IDFS may give you practical results.

This leaves the question of BFS vs IDFS, and that depends heavily on the details of the problem. For example, if the graph is already in RAM, then IDFS would be the best. But if the graph is not already in RAM and you have to fetch it (from a database or remote API), you'd definitely want the caching between successive IDFS rounds. And if you do that, then you might as well do BFS -- approximately the same memory performance, and much easier code.

As for usage, while BFS itself is not used all that much, its more advanced versions, Dijkstra and A*, are used all the time in graph traversals. For example, in many computer games, navigation apps, and robotics planners.

(And back to the original topic: if we had a conversation like this during the interview, then you would likely get a good score from me, even if I was fully convinced that BFS is the only way to go. After all, I am not testing for one specific bit of trivia -- I am testing for the ability to reason about algorithms.)


> DFS or IDFS can generally use space proportional to the diameter of the graph, which is far smaller.

Maybe, but for "find the closest friend with property X" basic DFS is completely useless. It's likely to give you some distant rando with property X. Fast, sure, but not useful if you're looking for a friend.

Besides, if you have cycles in your graph, DFS also needs to keep track of which nodes you've already visited, or it's never going to end. Or use iterative deepening, which you probably want to use anyway to prevent ending up with some distant rando. Slower than BFS but consumes less memory.

> "find the ten shortest friend paths to (a unique but unknown) someone with property X, where those paths share no nodes".

Ah, but that's a completely different problem.


As above, "IDFS" == "iterative deepening DFS".


Yeah, I figured that out some time after I succeeded in digging through some old memory for the phrase "iterative deepening". It's been a while since I've had those classes. I do remember that at the time I didn't see the point in iterative deepening. Surely breadth first was faster than redoing the same work every time? But if you see it as a more memory-efficient way to simulate breadth-first through depth-first search, it makes sense.

Still, intuitively I'd expect that it depends a lot on the shape of your search space, the cost of your memory and the cost of traversing your network, which one is actually more efficient.


If you're generating nodes in an abstract graph (e.g., moves in chess), iterative deepening can be incredibly space-efficient, whereas BFS can rapidly consume an exponential amount of space.

As you point out, IDFS (or IDA*, etc) work far better if you have a means to avoid re-exploring the same nodes repeatedly.

Nonetheless, run-time can theoretically be greater, since you're iterating. Theoretically, though, each iteration will take N times longer than the prior, which means that the running time up to the final iteration doesn't matter that much (because it's such a small fraction of the total). The extra bookkeeping required by BFS can easily outmatch that in running time.

Definitely, it all depends.
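For readers following along, here is a minimal iterative-deepening sketch in TypeScript (the toy adjacency map and the current-path set for cycle avoidance are illustrative assumptions, not anyone's production code):

```typescript
type Graph = Map<string, string[]>;

// Depth-limited DFS: recurse at most `limit` edges deep. The `path` set
// tracks only the current recursion path, so memory is O(depth), not O(graph).
function depthLimited(
  g: Graph,
  node: string,
  goal: string,
  limit: number,
  path: Set<string>
): boolean {
  if (node === goal) return true;
  if (limit === 0) return false;
  for (const next of g.get(node) ?? []) {
    if (path.has(next)) continue; // avoid cycles along the current path
    path.add(next);
    if (depthLimited(g, next, goal, limit - 1, path)) return true;
    path.delete(next); // backtrack
  }
  return false;
}

// IDDFS: run depth-limited DFS with limits 0, 1, 2, ... so the first hit is
// also a shortest path, as with BFS, but without BFS's frontier memory cost.
function iddfs(g: Graph, start: string, goal: string, maxDepth: number): number | undefined {
  for (let depth = 0; depth <= maxDepth; depth++) {
    if (depthLimited(g, start, goal, depth, new Set([start]))) return depth;
  }
  return undefined;
}
```

This shows the tradeoff described above: each deeper round redoes the shallower work, but on exponentially branching graphs the last round dominates the total cost.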


Getting around the DOM is really simple: querySelector and querySelectorAll do the work for you.

Accessing properties of a known object is better handled with the selector pattern. I guess you might use BFS to implement a search feature for user-generated content on the fly? I don't think it's relevant to most FED roles making CRUD apps with backend search APIs.


Ughhh, should I tell him?

I agree with the sibling commenter: responses like these are very condescending -- and basically prove the main point of the original article.

And BTW:

> Writing an Instagram clone in Angular doesn't really tell me much about your problem solving skills when faced with a unique problem.

Neither do your made-up puzzle problems.


There's always the interviewing apologists out in force whenever someone brings up how broken the process is; this post is no exception.


Have you been on the other side -- the too-optimistic interview? You know: you had a great-sounding candidate, had a nice, non-technical interview, and hired them.

Then the person could not really pull their share. They did a few simple PRs, and it all looked good. You gave them a more complex task, and they just could not make it work. After teaching them basic CS concepts for a while, you give up and try to move them to backend -- and they do not do better there. Then to data analysis -- no luck. You really do not want to fire them, but this seems the only way forward.

The resulting experience is painful and time-consuming for the team. You wasted many weeks trying to teach that person, and nothing good came out of it. You promise that in the future your interviews will always contain technical questions, and no one who does not know about big-O complexity will be hired.


And asking people tricky questions doesn't tell you about their problem solving skills either; it just tells you that person is a good talker. Justifying tricky tree traversal interview questions by comparing them to the DOM??? That is a stretch. In any case, beyond _solid_ coding skills, the thing that makes someone a great member of a dev team isn't coding skills; it is a whole pile of professional best practices, social skills, and general passion for continuous improvement. It is simply not possible to test for those things in an interview.

The reality is that interviews are broken. Not because of this guy's subjective perception that it is so, but because, for employers, you just aren't getting the SNR to justify typical interviews. People who cling to that approach anyway are largely, IMO, motivated by ego and cargo cult (lack of understanding and creativity). In addition, in that context, interviews are also broken because, in being worthless, they are demeaning to the candidate who isn't great at tap dancing on your command.

What does work is past performance and actually working together. Both are problematic data to get at so I don't think there are easy answers here. We do a resume review to see if they even claim to have the expertise we're interested in, a very short and simple "gut check" coding exercise (not tricky, just checking they can actually write decent code and tests), a 30min phone conversations where we check that both parties are aligned on what we're looking for, contract to hire, then exercise extreme discipline in parting ways with people that aren't great before converting to W2. Our SNR is pretty good. A lot of people don't want to contract to hire so this system has cons. YMMV of course.


I think my current company does a decent job of this. Yes, we'll ask you design questions on a whiteboard, but any algorithms or code implementations are done on a computer, and Google is encouraged.


To add onto this, using lodash + TS is not the most pleasant experience. A lot of that has to do with current limitations of the type system (mostly variadic types [0]), but I find myself having to provide lots of generics rather than relying on inference. The overloads are not great.

I say this all with recognition that lodash greatly predates TS, and the maintainers have done an absolutely wonderful job in keeping up with the overall ecosystem (ESM, typings files, etc). I can't even begin to comprehend the amount of work that has gone in to keeping lodash so modern.

That being said, I wish the interaction was slightly better, since a lot of people just immediately bring in Lodash to any JS project.

[0] https://github.com/Microsoft/TypeScript/issues/5453
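To make the pain concrete, here's a sketch of the pattern lodash's typings are forced into (hand-rolled here, not lodash's actual definitions): without variadic types, a `flow`/`pipe`-style helper needs one overload per arity.

```typescript
// One overload per number of composed functions -- the boilerplate
// that variadic types would eliminate.
function pipe<A, B>(f: (a: A) => B): (a: A) => B;
function pipe<A, B, C>(f: (a: A) => B, g: (b: B) => C): (a: A) => C;
function pipe(...fns: Array<(x: any) => any>) {
  return (x: any) => fns.reduce((acc, fn) => fn(acc), x);
}

// Inference works up to the arity you wrote an overload for;
// compose three functions and you need a third overload.
const toLen = pipe((s: string) => s.trim(), (s) => s.length);
```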


I have the same experience; I wonder if anyone is working on a TS-friendly utility library, or if our best hope is the work on TS itself [0]?


I wonder if ramda is any better in that regard?


I love Ramda and use it everywhere, but sadly it's somewhat lacking when it comes to type definitions. Everything in Ramda is curried, and the types just don't reflect that very well, so you get quite a few incorrect compiler errors.
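A hand-rolled sketch (not Ramda's actual definitions) of why currying is awkward to type: every partial-application shape needs its own overload, which is why curried libraries end up with huge, occasionally incorrect `.d.ts` files.

```typescript
// A two-argument curried function already needs two overloads;
// Ramda-style functions with optional placeholders need many more.
function add(a: number): (b: number) => number;
function add(a: number, b: number): number;
function add(a: number, b?: number) {
  return b === undefined ? (b2: number) => a + b2 : a + b;
}

const fully = add(1, 2);   // number
const partial = add(1)(2); // number, via the curried overload
```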


Yeah, I remember liking the auto-currying, that makes sense it has a downside here.


Shouldn't tuples in rest parameters from TypeScript 3.0 [0] fix that?

[0] https://www.typescriptlang.org/docs/handbook/release-notes/t...


Not entirely.

For example, a function which takes a property path (in the form of an array of strings) and an object as parameters and returns the value at that path inside the object still needs overloads for every single tuple length.

  type Key = string | number | symbol;
  
  // One overload per supported path length -- the boilerplate that
  // tuples in rest parameters alone don't eliminate.
  function get<T, K extends keyof T>(path: [K], obj: T): T[K];
  function get<T, K1 extends keyof T, K2 extends keyof T[K1]>(path: [K1, K2], obj: T): T[K1][K2];
  function get<T, K1 extends keyof T, K2 extends keyof T[K1], K3 extends keyof T[K1][K2]>(path: [K1, K2, K3], obj: T): T[K1][K2][K3];
  function get<T, K1 extends keyof T, K2 extends keyof T[K1], K3 extends keyof T[K1][K2], K4 extends keyof T[K1][K2][K3]>(path: [K1, K2, K3, K4], obj: T): T[K1][K2][K3][K4];
  function get(path: Key[], obj: any) {
      if (path.length === 0) {
          return obj;
      }
      const key = path[0];
      // The cast is needed so the recursive call matches an overload.
      const rest = path.slice(1) as [Key];
      return get(rest, obj[key]);
  }
  
  
  interface Foo {
      foo: {
          bar: {
              baz: {
                  val: number;
              }
          }
      }
  }
  
  declare const foo: Foo;
  const num = get(["foo", "bar", "baz", "val"], foo);
In this case, num is implicitly typed as number, but if the path were longer, you'd need to add more overloads.


GQL and gRPC are absolutely complementary. Having resolvers fan out to gRPC-backed microservices has been great at our company. We initially used protos for all of our service contracts (including server <-> web UI communication). While this was nice for all the reasons other people have stated, protos kinda ended up sucking to work with on the front-end.

Protos are a serialization contract and should remain such. Too much proto-specific logic ended up bleeding into our web codebase (dealing with oneofs, enums, etc). GQL's IDL on the other hand ended up being a perfect middle-ground. It gave us a nice layer to deal with that serialization specific stuff, while letting the front-end work with better data models (interfaces, unions, string enums, etc.). GQL's IDL and TypeScript are a great match, since GQL types are ultimately just discriminated unions, which TS handles like a charm.
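The fan-out shape described above can be sketched as GraphQL resolvers delegating to gRPC-backed services. The client shapes below are stand-ins for illustration (here stubbed in-process), not a real proto API:

```typescript
interface User { id: string; name: string }
interface Post { id: string; title: string }

// Stand-ins for gRPC client stubs; in practice these would be
// generated clients calling out to separate microservices.
const userClient = {
  getUser: async (req: { id: string }): Promise<User> =>
    ({ id: req.id, name: "Ada" }),
};
const postClient = {
  listPostsByAuthor: async (req: { authorId: string }): Promise<Post[]> =>
    [{ id: "p1", title: "Hello" }],
};

// Resolvers fan out: top-level query hits one service, nested
// fields hit others, and GQL stitches the result for the UI.
const resolvers = {
  Query: {
    user: (_: unknown, args: { id: string }) => userClient.getUser(args),
  },
  User: {
    posts: (user: User) => postClient.listPostsByAuthor({ authorId: user.id }),
  },
};
```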


Can you comment more about your comparative experiences with "logic bleed"?

I find this fascinating, because it seems that a lot of the bleed should be the same (isn't 'oneof' roughly equivalent to 'union'?)... but it sounds like something is different in practice, and I'd really like to understand what the root cause of the difference is.


I appreciate you calling me out on my wording, because you are spot on. If anything, GraphQL “bleeds” even more into my front-end code. I think the appropriate way to frame it is that GQL is a more targeted, holistic solution to the problem of fan-out data aggregation from the perspective of a UI client. Every UI codebase I had that depended on protos had significant chunks of code transforming the data into something more appropriate to our UI domain objects (code that lived either in the front-end codebase or one abstraction higher in a "BFF" [1]). Our UIs usually wanted to work with denormalized data structures, which was obviously in conflict with the proto models owned by small individual microservices.

GQL simultaneously addresses those two specific problems: resolution of normalized data, and giving UI consumers the power to declaratively fetch their desired data shape. It also has first-class TypeScript support through Apollo and the open-source community built around that.

I think it’s important to stress the tooling support, because you are correct… oneofs and union types are conceptually the same thing. A lot of it comes down to ergonomics in how you consume those types. In code generation GQL unions represent themselves as actual TypeScript union types, which means I can write type guards or switch on their discriminant to narrow to their correct member, whereas proto oneofs use a string value to access the correctly populated property. Small things in the day-to-day, but in how it manifested itself in code, it definitely felt like an improvement.
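A sketch of that ergonomic difference (the types here are illustrative, not actual codegen output): the GQL side narrows automatically on `__typename`, while the proto-style shape makes every member optional and leaves narrowing to you.

```typescript
// GQL union as a TypeScript discriminated union: switching on the
// discriminant narrows each branch to the correct member.
type SearchResult =
  | { __typename: "User"; name: string }
  | { __typename: "Post"; title: string };

function label(r: SearchResult): string {
  switch (r.__typename) {
    case "User":
      return r.name;   // narrowed to the User member
    case "Post":
      return r.title;  // narrowed to the Post member
  }
}

// Proto-style oneof: a case string plus optional fields, so the
// compiler can't connect the case to the populated property.
interface ResultProto {
  resultCase: "user" | "post";
  user?: { name: string };
  post?: { title: string };
}
```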

GQL unions also give you the power to do some really cool projection declaratively in queries [2]. Once again because of the nice compatibility of TS’ type system and GQL, the types returned from those queries code generate into really nice structures to work with.

I’m getting a bit rambly and don’t feel like I adequately answered your question, but it’s a bit late and I wanted to give you some response off the top of my head. I don’t want to knock grpc-web or anything. A good deal of it has to do with code ownership and team communication structures, and GQL ultimately felt like a better seam for our UI team to interact with our services.

I probably should write a blog post, because I have a lot of disconnected thoughts and need to have a more coherent narrative here. I think some code examples would better illustrate what seems like non-problems from how I've described them here. I’ll follow up once I’ve let it settle in my head.

[1] https://samnewman.io/patterns/architectural/bff/

[2] https://graphql.org/learn/schema/#union-types


I didn't mean it as a call-out at all! :) I've had similar intuitions but been struggling to put a finger on it and wordsmith well on the topic. Thank you for writing!

And yes please do continue to write more, will eagerly read :)


Damn, thanks for that write up.


This deserves a blog post :)

