It's an interview question. If you draw a blank on that, rephrase the question for yourself as something like this: "When you think of bugs you've recently fixed in your code, what comes to mind? Which bug had the most interesting story?" If that feels illegal to you, you can check with your interviewer: "I can't remember many weird old bugs, but can I tell you about a bug I fixed the other week?"
Interviews aren't like tests in school. The point isn't to demonstrate knowledge. The point is to demonstrate to the interviewer that you're someone they'd want to work with. What are they looking for? They want someone they can trust technically, and someone they would enjoy working alongside.
This question is great because it's an opportunity to demonstrate both of those things, by asking you to tell a story (social skills) about an experience you had programming. Use it!
Can you think of any bug from the last five months that you could talk about for five minutes in a way that makes you look smart for solving it? That's all you need to do here.
No, my mind goes completely blank. There's nothing wrong with my memory - I can tell you how I patched the timed Grand Theft Auto 1 demo over 25 years ago with SoftICE as a kid, but I assume my brain finds no value in tracking bugs.
You don’t even need a bug where you look smart for solving it. E.g., as an interviewer I also love stories that go like this:
“There was this memory leak in some C++ code that took us weeks to track down that I’m still kicking myself for not finding earlier. It was totally my fault. Let me tell you what happened … … So the moral of the story is -fsanitize=address is smarter than me. But wow did I learn a lot about my tools from that.”
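(If you've never seen it in action: AddressSanitizer's leak checker flags even a trivially leaked allocation at process exit. A made-up minimal example, not the code from the story above:

    // leak.cpp: build with g++ -fsanitize=address -g leak.cpp && ./a.out
    #include <cstring>

    char* copy_name(const char* name) {
        char* buf = new char[64];   // heap allocation
        std::strcpy(buf, name);
        return buf;                 // caller is supposed to delete[] this...
    }

    int main() {
        copy_name("hello");         // ...and never does: 64 bytes leak
        return 0;                   // LeakSanitizer reports the leak at exit
    }

On Linux, leak detection is on by default with -fsanitize=address; elsewhere you may need ASAN_OPTIONS=detect_leaks=1.)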
Exactly. I've been programming for a long time, almost 13 years since I started working on complex software, and that question just makes me draw a blank. There have been bugs in my career, but none that have "traumatized" me or been all that complicated to get through. Maybe I just don't put that much emphasis on negative experiences. I don't remember losing poker hands all that accurately either; I just don't care.
Also, if you've only ever worked in web dev or something easy like that, are there even that many really "weird" bugs anymore? Maybe if you work on embedded or low-level code bases with lots of race conditions those could qualify, but if you're doing high-level work, that's rare.
Web dev still has tons of weird bugs; they tend to be in the interactions between multiple frameworks and the edge cases in the assumptions those frameworks make. Like, I had one issue with intermittent test failures in Ruby on Rails, not reproducible with any fixed seed, and not affecting production. The overall source of it was:
1. We had a table from a third-party API that used their ID as a primary key, I believe stored as a string.
2. Ruby on Rails believed that it should make sure that primary keys aren't a too-large-to-store integer before attempting to save a row.
3. Our test suite would generate fake hexadecimal strings as primary keys for that table.
4. Sometimes the hexadecimal string would have enough leading 0-9 digits that, when Rails coerced it to an integer for its own internal check, the result would exceed the integer limit it thought the table ought to have (sketched below).
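A sketch of the mechanism (not the actual Rails code, which is Ruby; this is a hypothetical C++ rendering of the check described above, with made-up names):

    #include <cerrno>
    #include <cstdlib>
    #include <iostream>
    #include <string>

    // Stand-in for the framework's internal "will this primary key fit in an
    // integer column?" check, which coerces the key to an integer first.
    bool fits_in_integer_column(const std::string& key) {
        errno = 0;
        char* end = nullptr;
        // Parses leading decimal digits, stops at the first non-digit.
        long long prefix = std::strtoll(key.c_str(), &end, 10);
        (void)prefix;
        return errno != ERANGE;  // overflow => key "too large" => save rejected
    }

    int main() {
        // Hex ID with a short run of leading decimal digits: coerces to 12, passes.
        std::cout << fits_in_integer_column("12ab34cd") << '\n';                // 1

        // Hex ID that happens to start with 20 decimal digits: the coerced
        // prefix overflows 64 bits, so a perfectly valid *string* key is
        // rejected. Random fixtures only sometimes look like this, hence
        // the intermittent failures.
        std::cout << fits_in_integer_column("99999999999999999999ff") << '\n';  // 0
    }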
Part of the process is going over your own history and making a collection of stories about what you've done and having that ready. I've shipped over a dozen products in my 25+ years. I can't recall much of any of it off-hand.
If you don’t have at least a couple of weird enough debugging war stories to still remember years later, you probably don’t have very broad debugging experience.
(Not to say you aren’t a good developer, but you haven’t seen a lot of weird edge cases.)
You're generalising how your brain works to how everybody's brain works. My brain doesn't respond well to questions out of nowhere. I don't maintain a "weird bugs" index. If we are just talking and something comes up, there is a good chance I will have a related story, though.
Me to ChatGPT: reply to this interview question "What's the weirdest bug you debugged? What made it weird?" in first-person, make it a personal story.
ChatGPT: It was early on in my career as a software developer, and I was working on a web application that was designed to allow users to upload images and apply filters to them.
One day, a user reported a strange issue where some of the filters were producing unexpected results. Specifically, they were seeing images that appeared to be completely random and unrelated to the original image. At first, I assumed it was a problem with the filter algorithms, but as I dug deeper, I realized that the issue was actually caused by a very strange bug.
It turned out that the user's browser was caching a previously uploaded image and reusing it for subsequent filter operations, even if the user had selected a completely different image. This led to some very bizarre and unexpected results, as the filter algorithms were being applied to an entirely different image than what the user intended.
To fix the bug, I had to implement a workaround that forced the browser to always load the most recent version of the image from the server, rather than relying on its cache. It was definitely one of the weirdest bugs I've ever encountered, and it taught me the importance of always being vigilant when it comes to caching and browser behavior.
Me to ChatGPT: reply to this interview question "What's the weirdest bug you debugged? What made it weird?" in first-person, make it a personal story, make it using C++.
ChatGPT: I was working on a project that involved developing a custom compiler. One day, a user reported a very strange issue where the compiled program would crash whenever it encountered certain input files. At first, I assumed it was a problem with the input files themselves, but as I dug deeper, I realized that the issue was actually caused by a very strange bug in the compiler code.
It turned out that there was a subtle error in one of the class definitions that was causing memory corruption in certain circumstances. Specifically, the class had a virtual destructor, but the derived class failed to declare its own destructor as virtual. This led to undefined behavior when deleting objects of the derived class, which in turn caused the program to crash.
To fix the bug, I had to carefully analyze the affected class definitions and make sure that all derived classes properly declared their destructors as virtual. It was definitely one of the weirdest bugs I've ever encountered, and it taught me the importance of paying close attention to class hierarchies and virtual functions in C++.
Me to ChatGPT (since I haven't worked on a compiler): reply to this interview question "What's the weirdest bug you debugged? What made it weird?" in first-person, make it a personal story, make it using C++ about the time when I ported PyTorch from Python to R.
ChatGPT: One of the weirdest bugs I encountered was during my work on porting PyTorch from Python to R. One of the major challenges was ensuring that the R implementation matched the performance and functionality of the Python version. As I was going through the codebase, I noticed a particularly strange issue that was causing crashes in the R implementation.
The issue turned out to be related to differences in how Python and R handle memory allocation and deallocation. PyTorch relied heavily on dynamically allocated memory, but the R implementation was not properly managing this memory, leading to crashes and other unexpected behavior.
To fix the issue, I had to go through the entire codebase and carefully analyze all memory-related operations. This involved a lot of debugging and experimentation, as well as close collaboration with the rest of the development team.
Ultimately, we were able to resolve the issue and deliver a stable and performant port of PyTorch to R. It was definitely one of the weirdest bugs I've ever encountered, and it taught me the importance of paying close attention to memory management in C++ and other low-level languages.
Note: Likely ChatGPT can also generate a fake resume, etc. so yeah.
The C++ experts here will be able to confirm, but if my memory serves, I think the story about virtual destructors doesn't make sense:
> Specifically, the class had a virtual destructor, but the derived class failed to declare its own destructor as virtual.
The derived class doesn't have to declare its destructor as virtual. What matters is that the base class does so.
> This led to undefined behavior when deleting objects of the derived class, which in turn caused the program to crash.
No; the undefined-behavior case arises if the base class destructor is not declared virtual and you then try to delete a derived-class object via a pointer-to-base.[1]
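A minimal repro of the actual UB case, to make the distinction concrete (hypothetical code, not from any real project):

    #include <iostream>

    struct Base {
        ~Base() { std::cout << "~Base\n"; }  // non-virtual: this is the real bug
        // virtual ~Base() {}                // this would be the actual fix
    };

    struct Derived : Base {
        int* buffer = new int[64];
        ~Derived() { delete[] buffer; }
    };

    int main() {
        Base* p = new Derived;
        delete p;  // undefined behavior: with a non-virtual ~Base, deleting a
                   // Derived through a Base* need not run ~Derived, so the
                   // buffer leaks (or worse)
        return 0;
    }

Marking the derived destructor virtual, as ChatGPT suggests, changes nothing: a derived class's destructor is implicitly virtual whenever the base's is.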
So this is a nice example of ChatGPT producing something that looks plausible, yet on examination, betrays the fact that it has no actual "understanding" of the topic.
I know this is well known, and that it's because it's a statistical model, but I like how ChatGPT basically fills in a Mad Lib with these replies.
> ChatGPT: I was working on a [project] that [description].
> One day, a user reported a strange issue where [issue description]. At first, I assumed it was a problem with the [feature], but as I dug deeper, I realized that the issue was actually caused by a very strange bug.
> It turned out that [bug cause]. This led to [problem].
> To fix the bug, I had to [solution]. It was definitely one of the weirdest bugs I've ever encountered, and it taught me the importance of [lesson].
The third example deviates from this template a bit but still has the key components ("strange issue", "To fix the issue, I had to", "It was definitely one of the weirdest bugs I've ever encountered, and it taught me the importance of").
I would 100% accept that as an answer since it tells me the candidate is not good with subtext, and that's really needed to solve a lot of interesting problems.
After all, if I ask "what's your favorite food you've ever eaten", there's an unspoken implication that it's a food you remember eating. I am not in fact asking you to recall every single food you've ever eaten and choose one...
-
From my comment below since every reply seems to be bent on ignoring the subtext even in a theoretical discussion about picking up subtext...:
Again, the subtext is "interesting example we're going to discuss".
If there's one you can't discuss for any reason (doesn't even have to be you forgot: could be an NDA) then it's already excused from the discussion.
An even half-decent interview is not adversarial: just like day-to-day work, it requires interpreting some level of useful subtext and some level of open communication.
-
I mean, you forgot the details, so it's not like you're going to start monologuing if you just touch on it: "You know, there's a real doozy from X years ago where Y, but the details escape me; more recently Z happened"
If there are none that you remember that are interesting: "There aren't many interesting bugs, but there was this really interesting product requirement, could we go over that?"
> I would 100% accept that as an answer since it tells me the candidate is not good with subtext, [...] if I ask "what's your favorite food you've ever eaten", there's an unspoken implication that it's a food you remember eating.
Will this filter cut many of the best engineers?
Our field is full of people who pull 'engineering' out of our behinds, to various degrees. I'd assert that the engineer who doesn't assume an unspoken implication, but instead qualifies their answer, or tells you when they cannot answer, or asks for clarification... is more likely to be the one who can make a system that works, and tell you when a system will not work.
> Will this filter cut many of the best engineers?
It won't cut out a single good engineer, let alone the best.
> I'd assert that the engineer who doesn't assume an unspoken implication, but instead qualifies their answer, or tells you when they cannot answer, or asks for clarification... is more likely to be the one who can make a system that works, and tell you when a system will not work.
You grouped the one option that a bad engineer would take, with several that a good engineer would take. "Tells you when they cannot answer" is not what a good engineer does.
They may say "I cannot answer the question as-is" as a jumping-off point for clarification.
In fact, in my response to your sibling comment I explain that even if there were no interesting bugs, you can give an answer that isn't a lie, isn't pulling engineering out of your ass, and isn't dishonest.
-
But flat-out refusing, or immediately jumping to "well but I can't remember everything!!!!", is still you interpreting subtext... except you've now interpreted the most negative possible subtext. You've assumed your interviewer is asking you to recall things you can't recall and that there is no further room for discussion.
A poor engineer is one that shuts down completely at the first hint of a broken invariant, rather than trying to surface that there is an invalid invariant, or to learn more about it.
That kind of curiosity to go further than shutting down is what the question is meant to tease out; you're not beating the system by deciding not to engage, you're sending the exact signal being looked for as something to avoid bringing into your organization.
If you need to settle for technical correctness that's fine.
Fortunately we have the resources to hire for technical correctness and a bit more than the minimum when it comes to being well rounded with your ability to understand problems, communicate, etc.
We don't want people who jump to conclusions like "the interviewer is asking me to recall things I don't remember" under the guise of "precision" instead of just asking.
It takes commensurate pay/interesting work/an attractive workplace/etc., which are out of a single interviewer's control, so I never hold it against those who don't filter for any of that.
Ironically the "shutting down" part isn't the real test because it's not a trick question: Discussing an interesting bug really is a great way to dive into technicals and learn about the person's soft skills and learn about their background amongst a bunch of other stuff.
But specific to the "shutting down because the requirements weren't 100% totally perfect" I see it all the time, and it's even what we're seeing people attribute to Google's slow decline
On one hand, many hardcore engineers think we're seeing the slow and steady decline of software because of bootcamp kiddies ready to hack together any mess with a ball of Leftpad-inspired libraries.
But on the other, so, so many engineers struggle to see past the tip of their nose in larger organizations. There's this antagonistic co-existence with those outside of engineering, where little effort is put into disseminating requirements if they don't agree with them to start.
Which, ironically, we're watching unfold here! People jumped to the conclusion that the interviewer is in fact asking you to select from "every bug ever", but in doing so refused to consider that the interviewer might be asking about "things you remember"... because that would be jumping to conclusions?
-
For example: when estimating how long tasks take and finding that there's a disconnect between what the larger org expected and what an engineer produced, there's rarely any deep inclination among otherwise brilliant engineers to find out why, because it's assumed "non-engineers just don't know."
They might try to shave some time here or there, they might try to bake in some crunch time because they see themselves as being so brilliant and dedicated that they can make it work.
But rarely will they try discarding the notion that there was a disconnect on the non-engineering side and self-directedly throw out their entire proposed solution to try something that fits, on the assumption that their solution was what was wrong in the equation.
Because when they made the design, they designed it with all of their intelligence and skill and experience. And that's what they were hired for: to make brilliant things. So why should they cheapen all that? If that's what management wants, they should go hire some junior devs or something.
And unfortunately, if the reality really is that the majority of the business value could be produced with orders of magnitude less effort, it's the engineering side that has to enable that kind of discovery. The engineering side is the source of the plays in the playbook.
-
The reality is not every engineer can ever reach that. There are brilliant brilliant people who will never have the communication skills or the inclination, or the patience for any of this, and a good interview process doesn't require 1 person to ace every single signal.
Also, some people will jump at me for implying engineers should need to zoom out, because in their minds management should be enabling them to stay completely heads-down writing code... but to me that mentality is not generally compatible with being a top-of-field company for the long haul.
Yes, you might catch lightning in a bottle by just enabling very smart people to build marvels in their silos, but business is more than having marvels to stare at.
I personally worked at a company that essentially succumbed to exactly this. A culture of exceptional engineering, hiring technically brilliant people at all costs... and dying a slow death because the engineers wouldn't leave room for business in their engineering.
-
I guess the tl;dr of all this is: A CEO will say "It's no use if we take 10 years to make a perfect product, if our competitor makes it to market with a decent product next year". And engineers will expect as much from business types.
But what they often forget is that the same is true for customers. No one benefits from your engineering if it never reaches the field. No one benefits from your answer if you willingly get stuck on every single speed bump.
Being a good engineer is being able to efficiently categorize which speed bumps are "just" bumps, and which ones are chasms that will swallow the ship whole if you don't change direction.
If the engineers at Boeing had the mentality that I see often in our field, each 727 would have cost a billion dollars, and no one would fly today.
I've just been assuming that this kind of product/customer-driven engineering in a business environment can be learned, if it's not already known. And the only questions are whether the org can teach it (with culture, onboarding, consistent messaging) and whether the candidate would be happy with that.
If a candidate came to me with no product/commercial experience (e.g., recent grad, or from a research environment), I'd try to characterize the nature of the work, and see whether I could get an honest discussion with them about how they'd feel about that (and whether they really understood what that means). I'm not wise enough to have figured out tests that will tell me.
And I'd have to hit some team-oriented discussion, too, since that's my biggest concern lately, even more than product-oriented. And it's something a lot of companies seem to do badly (e.g., people focused on their own appearance in sprint tasks or metrics or promotions, rather than the whole of the team's work coming together).
Again, the subtext is "interesting example we're going to discuss".
If there's one you can't discuss for any reason (doesn't even have to be you forgot: could be an NDA) then it's already excused from the discussion.
An even half-decent interview is not adversarial: just like day-to-day work, it requires interpreting some level of useful subtext and some level of open communication.
-
I mean, you forgot the details, so it's not like you're going to start monologuing if you just touch on it: "You know, there's a real doozy from X years ago where Y, but the details escape me; more recently Z happened"
If there are none that you remember that are interesting: "There aren't many interesting bugs, but there was this really interesting product requirement, could we go over that?"
If you can remember, 5-10 years after you solved it.