Why don't schools teach debugging? (2014) (danluu.com)
276 points by joeyespo on April 25, 2017 | 168 comments



It is a task that shouldn't be learned in university but in middle school. And it's not just useful for STEM, it's useful for many things in life. E.g. a disagreement between two educated, smart people usually can be resolved by debugging how everybody got to their point of view and then fleshing out the details. It's also how you figure out why the toaster isn't working any more, or why your wife left you.

It's also actually two skills:

Analysing a state. What do you see, what is there, or isn't. What was expected.

Backtracing. Getting from point y to point x by figuring out how y happened.

I'd say at least in my country we also actually learn these skills. But when we learn the methods it's in a completely different context, totally unrelated to life. In real life, you hit a problem, you get overwhelmed by the complexity, and then your brain needs to start applying the method. In school you learn the method, no idea why. Then you apply it to totally theoretical problems, no idea why. Then you get tested on how well you execute the method, no idea why. Then nobody ever talks about it again. Now, how should your brain, in the moment of being overwhelmed, remember that method A it learned 20 years ago may apply here? There's no way.

Teaching shouldn't be: theory->method->application->test. It should be theory->enforce_problem->wait_for_questions->explain_how_theory_can_be_applied->review_of_application->goto:enforce_problem.


Your comment couldn't be more on point. I viscerally disagree with the direction that a lot of "coding" initiatives are taking in schools nowadays. The claim is that the intentional move away from actual computer science is a way to attract more people.

Fine. But when the course is just about making "whatever you want," then that really valuable time of learning these debugging skills gets missed, 'cause you can always pivot and do something else. The most valuable lessons I've learned about coding -- and the ones which, like you wisely stated, are applicable elsewhere in life -- had everything to do with needing to work through a tough computational challenge.

That taught me how to know I don't know something, how to poke at an issue in such a way that it reveals something about the system (or my own understanding), the oh-so-currently-hyped skill called "grit", etc.

Every thinker or scientist I deeply respect has a treasure trove of examples of being stuck and working through hard and challenging problems even as little kids. That somehow we think an avenue for a more educated populous consists of presenting fewer experiences like that to youngsters is beyond me.


"populace", not "populous" (an adjective meaning "highly populated").


Won't let me edit, but thank you - I had no idea!


An important part of analyzing state, and specifically a mismatch between the state and your expectations, is to ask yourself whether your expectations are valid. Not every bug is an error in coding what you intended, and we naturally tend to be somewhat blind to our own incorrect assumptions.


I agree that these are the same skills. Outside of the programming context I would not call it debugging but rather just rationality or problem-solving. Some call it the scientific method, though that's not exactly the same thing.

In the debugging context there are some more specific applications of the general principles, which should also be taught.


Agree in principle. Another way of putting it is, debugging is another word for applying the "scientific method." E.g., experimental vs. control group is a useful concept in debugging.


Yes! I've been saying this forever -- debugging is a science.

The best way to find and fix the toughest heisenbugs is to a) form a hypothesis as to what might be happening, b) figure out what would falsify your hypothesis, c) run an experiment that must make the bug happen reliably iff your hypothesis is correct, and finally d) figure out the solution.

Steps a-c are precisely the scientific method.
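To make that concrete, here's a toy illustration in Python (everything below is made up, not from any real codebase): the "observation" is a shared counter that sometimes comes up short, the hypothesis is a read-modify-write race, and the experiment deliberately widens the race window so the bug fires reliably iff the hypothesis is right.

    import sys
    import threading

    sys.setswitchinterval(1e-6)  # experiment: force frequent thread switches to widen the race window

    counter = 0

    def increment(n):
        global counter
        for _ in range(n):
            tmp = counter   # read
            tmp += 1        # modify
            counter = tmp   # write -- non-atomic, so two threads can overwrite each other

    threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # If the race hypothesis is correct, this prints far less than 400000;
    # if it printed 400000 reliably, the hypothesis would be falsified.
    print(counter, "expected", 4 * 100_000)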


Exactly. Debugging is just the application of the scientific method (or rationality) to a specific domain. I have been beating this drum for years now...


I also think that the scientific method and debugging have a lot in common. One could say scientists debug what god coded a few thousand years ago.


Whatever "coded" the universe did it more than a few thousand years ago.


I never got a military education, but I have to say I admire how well military personnel can respond to problems. Does anybody have insights into how they get educated and whether that could be applied to programming/debugging as well?


I have some insight, but not as much as someone who's actually gone through formal training.

It's a mindset issue. In the military, a soldier is a tool to get a specific job done. The soldier must understand this and shed his ego and any ideas he has about his "self."

He must understand that he is a cog and his only job is to be the perfect cog.

This could be applied to programming to increase efficiency, but I'm certain most programmers (and the HN crowd in general) won't be happy with what's to be expected of them.

The success of the mission rests on appointing a competent commander (CEO/Manager/Etc.) that'll make all the final decisions and having soldiers (employees) that can zero-in on doing exactly what the commander tells them to, even at the sacrifice of their comfort.

There's a lot of ego and self-identity in the programming world which collides with the very essence of the "military way of doing things."


Once an officer hits their unit, they have someone with 5-10x more experience to supervise them for their first 5-10 years. It would be like a new manager getting a senior engineer to work with 7 days a week to help guide their bad ideas.

Additionally, your software engineers aren't indentured servants. In the military, you often can't go home until a problem is fixed and if you quit, you go to jail.

The military burns through a lot of good people, most get out after their initial term.


That doesn't matter, though. What I'm talking about is that you can throw a grenade next to a soldier, and he'll still be able to think and make decisions. That means he got trained not to let his lizard brain take over when under stress. That's in some regards what you need if you want to learn debugging. Don't let the stress of the too-complex problem stop you from thinking. Go step by step.

I bet you can teach that without all the bad associations people have about the military.


One of the first computer class lessons that I remember--in maybe 4th grade?--taught us to use Yahooligans and Altavista and such to research a topic. Incredibly valuable to ingrain at that age the idea that the best way to figure something out is to start running vague queries until you understand enough to synthesize an answer or a theory.


fantastic pov.. thanks for sharing


> why schools don’t teach systematic debugging. It’s one of the most fundamental skills

And herein lies Dan Luu's fundamental misconception: that universities see themselves as places where students come to learn skills, i.e. as places of certification producing skilled workers for industry. Most universities see themselves as academic places of learning for the pursuit of knowledge for knowledge's sake, which only incidentally graduate people whose knowledge industry happens to find at least somewhat useful. If the university professor never needed to debug, use version control, or write clean designs in order to succeed in academia, why should he consider it worthy of being taught? If the university professor never benefitted from remedial review on his academic path towards becoming a professor, why should he consider it worth his time? Of course he's going to dismiss the notion with a tone of "some people just can't hack it in engineering." It's academic survival bias.

People talk about the STEM pipeline like some kind of demigod figure designed it from on high specifically with the intent of producing skilled technical workers, albeit with various flaws related to sexism and skill relevance etc. Of course, this is a ridiculous way of looking at the STEM pipeline, so it shouldn't surprise anyone that different actors in the pipeline have contrary motivations.


I respectfully disagree with your argument in relation to this article.

Dan Luu is describing a method that would help people achieve better success in their exams and classes; he's not even pointing out its usefulness in the workplace. Therefore, your point about him thinking that universities are here to produce skilled workers is a strawman.


His point is that university is here to produce/teach knowledge. Skills should be learned elsewhere, e.g. in schools prior to that. The university's official language is also a skill that is very important to the student's success, but the university is not responsible for teaching this language skill. You must have it to enter university.


Then I believe the argument still stands. Universities would teach knowledge better if they spent a comparatively tiny amount of time teaching appropriate skills, as it would get students ready to learn.

The author demonstrated this point by showing how his classes could help students learn better.

We can then debate about whether having successful students is a positive thing to have for universities, but Dan Luu clearly seems to think it is.


That's a valid additional point. I'd also say that they should at least offer optional courses with skills. That also applies to language. While universities don't offer a major in English speaking/writing, many have opportunities for second language learners to improve their English skills.


> places of certification producing skilled workers for industry

In Portugal that place belongs to Polytechnics.

Universities are for learning how to learn.


But wouldn't the debugging process give a better understanding of the whole learning process?


On my CS degree we learned to use debuggers, and how they are written as well.


Debugging is useful even from a merely academic perspective if engineering is an academic discipline (I think it is). Reality pushes up against you, so you figure out what it's doing by looking at what happens and guessing at what might have caused it. You aren't circumscribed by your model (as you are in a purely mathematical simplification; consider the inutility of Navier-Stokes for many engineering applications) when the 'bug' is smacking you in the face. This isn't just Popperian falsifiability; it's also just (more fundamentally) abduction.

The article doesn't press this point very hard, but I take the central argument to be in favor of 'systematically approach[ing] problems [of the sort that debugging instantiates]' which is just as much 'academy' as 'industry'.


In school, you're always told to show your work. I'm sure a lot of us here tended to be among the smartest in our classes, at least before university, and we often thought having to show our work was silly. That's why a lot of us get tripped up when it comes to the type of debugging the author is talking about (it certainly still happens to me sometimes).

Showing our work reminds us that solving problems is iterative work, and when we get the wrong answer, it lets us (with or without the help of the teacher) step through and figure out what we did wrong. For teachers and professors, it probably becomes second nature, and maybe they forget who explained this to them, or when they realized it themselves.

Even once we're out of school, I think we need to remind ourselves that this is the same reason we don't write a 1000 line function, rather we break it up into smaller parts, each designed to solve a specific part of the overall problem.

Now, what I really wish we had had in university was a CS course on using gdb and/or WinDbg - not just to understand how to analyze a core/dump file or step through running code, but also to help drill it into our head what compilers do to our code, and how processors really work. --It's amazing how often I have to remind people that debuggers will often lie to you, and you often have to look into the registers for the real data!


I just think debugging is part and parcel of programming. Debugging should come up in the first practical exercise in a Graphics 101 course. It should come up the first time you do an ML assignment in Programming Languages.

Its absence in any course is conspicuous.

But at the same time more than half of the professional web developers I know don't really use a debugger. Equally problematic.


Is an ML course now a common part of an undergraduate degree program, though? --I genuinely don't know, which is why I'm asking.

These days it would certainly be much easier to teach students how to use a debugger - most students probably aren't doing their work on mainframes anymore where resources might be limited (anyone else remember getting emails asking you to release your semaphores because the server is running low?), but professors still have to decide which topics are the most important to cover in a limited amount of time.


Edit: skim-read GP, got the wrong ML.

I touched on the PL ML in a type systems course. I can't speak for how common it is in general, but it's certainly much more likely for someone who takes a course on type systems or PL theory; given that choice, it's presumably pretty common.

--

> Is an ML course now a common part of an undergraduate degree programme, though?

There were several available to me. It's not really a particular interest of mine, but I took an introductory course anyway, since it would have seemed a bit remiss in today's world to have graduated without knowing an NN from an SVM.

Unfortunately for my grades, performance usually reflects interest... I still don't think I actually regret it though.


Yeah, I was referring to MetaLanguage, not Machine Learning.


It was for me, as an "Artificial Intelligence II" optional lecture series, in the mid-90's.

I rather focused on compiler design and graphical programming options instead, but I remember that the whole basis for neural network learning algorithms was already part of the content.


But at the same time more than half of the professional web developers I know don't really use a debugger. Equally problematic.

I wonder how much that has to do with tooling? When programming in python I run everything in a debugger basically all the time. When doing JavaScript front-end work it's back to print debugging.
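For those who haven't tried the Python workflow, a minimal sketch (the function and values are made up) of dropping into pdb at the point of interest instead of sprinkling prints:

    def parse_price(raw):           # hypothetical helper, purely for illustration
        value = raw.strip().lstrip("$")
        breakpoint()                # Python 3.7+: pauses here in pdb; inspect raw/value, step, continue
        return float(value)

    if __name__ == "__main__":
        print(parse_price(" $19.99 "))

Running the file normally drops you into pdb at the breakpoint; `python -m pdb -c continue file.py` additionally gives you a post-mortem prompt at any uncaught exception.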


I almost always start with thinking about where the bug is likely to be, then doing some print/log debugging to confirm or reject my suspicions. I find it pretty effective and simple to do.

Debuggers are complicated and create dependency. How many developers do you see every day just sitting at their computer tapping "step" ... "step" ... "step" for hours.

I'll break out a debugger if all else fails, but it's not my first choice.


> How many developers do you see every day just sitting at their computer tapping "step" ... "step" ... "step" for hours.

Er...very few? A bad programmer will be a bad programmer with a debugger and that bad programmer will be a bad programmer who prints out state after every line of code.

Breakpoints and watches are strictly superior to print statements for understanding the system you're working with. You drop a breakpoint where, as you yourself put it, "the bug is likely to be," you look around, you kill the process and you make a change. It doesn't "create dependency", it's just better at solving the problem. Anyone can fall back to println debugging if necessary. It's not some difficult thing. But it's wasting your time.


I find a good logging library will help me narrow down the issue before I enter the debugger to watch what the system is doing.

If I'm 90% sure I know what the bug is and it's a system where it's easier to add a print than to enter the debugger (yes, those types of systems exist), I'll do that first.
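As a concrete (entirely made-up) sketch of what I mean by the logging approach, using Python's stdlib logging: keep the subsystem quiet by default and turn just that one logger up to DEBUG when chasing a bug.

    import logging

    logging.basicConfig(format="%(asctime)s %(name)s %(levelname)s %(message)s")
    log = logging.getLogger("billing")   # hypothetical subsystem name

    def apply_discount(total, code):
        log.debug("apply_discount(total=%r, code=%r)", total, code)
        discount = 0.10 if code == "SAVE10" else 0.0
        if not discount:
            log.warning("unknown discount code %r, charging full price", code)
        return total * (1 - discount)

    # Normally only warnings show up; while hunting a bug, crank this one logger up:
    log.setLevel(logging.DEBUG)
    print(apply_discount(100.0, "SAVE15"))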


Systems where debuggers are difficult to access totally exist, of course! But, IME, they're pretty rare by comparison (unless you've footgunned elsewhere). Whether it's something like `pry` in Ruby where you can open a REPL on command or a JVM where you connect with an external debugger, most systems are flexible enough to work with you.

If you've written something in a language where you don't have a debugger...that's your own damned fault. :-P


> Systems where debuggers are difficult to access totally exist, of course! But, IME, they're pretty rare by comparison

It's often hard to run a stepping debugger when you have an embedded hardware at hand. It's equally hard to run such a debugger when you have multi-threaded code. And the same stands for virtually any system that works with network directly, where timeouts are plentiful.

Also, good luck with a debugger in this modern architecture of microservices that happens to be hype nowadays.


> It's often hard to run a stepping debugger when you have an embedded hardware at hand.

Totally true. But that's a pretty rare situation.

> It's equally hard to run such a debugger when you have multi-threaded code.

Maybe your experience is different, but every piece of multi-threaded code I've ever written has fallen into one of two buckets: task-pooled parallelism or interlocking features that shouldn't block one another. The former is trivial; reduce the task pool size to a single thread. The latter is definitely harder, but modern tooling (I'm partial to VS2017's Threads window) makes this a much more achievable thing; you can break across all of them and step between them effectively.

> And the same stands for virtually any system that works with network directly, where timeouts are plentiful.

I find myself debugging in environments where I can control timeouts, but if you can't, yeah, this is totally a problem. Leaving a service paused for ~3-5 minutes while I poke at it doesn't drop connections with my dev configs, though.

> Also, good luck with a debugger in this modern architecture of microservices that happens to be hype nowadays.

"Doctor, it hurts when I do this..."

If you can't effectively stub out an environment around your microservice so that you can run it in a debugger-friendly environment, you don't have a microservice--you have a monolith that communicates over pipes rather than function calls, and that's strictly worse in the first place.


>> It's often hard to run a stepping debugger when you have an embedded hardware at hand.

> Totally true. But that's a pretty rare situation.

I know it's rare to work with embedded systems if you don't work with embedded systems. Though when you do work with them, this "rare situation" suddenly becomes prevalent.

>> It's equally hard to run such a debugger when you have multi-threaded code.

> Maybe your experience is different, but every piece of multi-threaded code I've ever written has fallen into one of two buckets: task-pooled parallelism or interlocking features that shouldn't block one another.

It all works well until it doesn't. That is, until you need two threads to communicate with each other (otherwise, why are they even running in the same process?).

>> And the same stands for virtually any system that works with network directly, where timeouts are plentiful.

> I find myself debugging in environments where I can control timeouts, [...]

All of them? My simplest cases have a dozen timeouts one behind the other, at different levels of the stack. Even though I potentially can set them all, it's all very tedious. Not to mention that the timeouts can interact in non-trivial ways, so it's often hard to track all of them.

>> Also, good luck with a debugger in this modern architecture of microservices that happens to be hype nowadays.

> If you can't effectively stub out an environment around your microservice so that you can run it in a debugger-friendly environment, you don't have a microservice

https://en.wikipedia.org/wiki/No_true_Scotsman

Also, I haven't said that "you can't effectively stub out an environment". There are other difficulties around debugging microservices, like the previously mentioned problem of controlling timeouts everywhere.


With frontend JS, I usually find the friction of using a debugger lower than the friction of adding logging since I don't have to reload to add/remove it. You can even add a conditional breakpoint that logs and always returns false to capture the effect of adding a log statement.

I rarely use the step functionality, though, because it just takes too long. More frequently when I hit a breakpoint, I examine the current state, figure out the next place that I am interested in stopping and then add a breakpoint there before resuming.


Exactly. Have fun debugging a microservice in production. All you can do is increase the log level, and even that is sometimes not possible because the program would spew out so many logs that it would bring down either some disk or the network.

In these cases, the Feynman method is sometimes all that you can apply. I have already spent plenty of hours with a stacktrace on one screen and the source code on the other screen, stepping through the source code in my mind (usually backwards from the end of the stacktrace).


In many ways, the tooling around web development is far superior to other languages (e.g. golang with delve). Most browsers ship with excellent debug tools.

The caveat obviously being that debugging code written using some JS frameworks, like Polymer, is a nightmare.


That's it. Declarative asset compilers like Webpack break debugging. Transpiling breaks debugging. Convention-over-configuration frameworks and tools like Ember break debugging. Other declarative APIs like promises and virtual DOMs break debugging.

PHP and JavaScript are great for debugging. Even Node is awesome. Rails and mainstream JavaScript frameworks... not so much. It's nothing about the languages... The people who design the frameworks and build tools just don't seem to value it.


And in many ways the tooling around web development is absolutely terrible (compared not to a brand-new language like Go, but to any established language).


Which is so odd to me because the JavaScript debug tools are some of the most complete and easiest to use. They're right there!


But I can't get them to integrate well with my dev tools. If, for example, I'm using something like babel/webpack/TypeScript, there is no easy way that I've found to, say, set a breakpoint on a line of code in my editor/IDE and have the web browser's debugger break when it gets there.


True, you probably can't set the breakpoint in your IDE, but thanks to source maps you can still view the original source in the debugger.

https://developer.mozilla.org/en-US/docs/Tools/Debugger/How_...


Not as convenient, but you can write the statement `debugger;` and the debugger will stop there.


I find it incredibly convenient because I don't need to go through the browser trying to find the same spot in the code.


VSCode provides this functionality (https://code.visualstudio.com/docs/editor/debugging).


When I was attending Harvey Mudd College I worked on an engineering clinic team with a significant software component and was shocked to find that none of my teammates knew how to use an interactive debugger. This really slowed the team down and made the work far harder than it needed to be. At the end of the project I wrote a letter to the department head suggesting that they add a unit on debugging to the core curriculum. But I don't know if they ever acted on it.


> debuggers will often lie to you, and you often have to look into the registers for the real data

What do you mean by this?


Well, I don't know what OP meant, but I work in games programming, and the PS4 debugger will frequently lie to you in Visual Studio. You hover over a variable and it just gives you a completely wrong value, which is correct if you look directly into the memory window or assembly view.

I wouldn't consider this to be a very common use case though, it's a niche within niche.


It's that way in all the niches, which makes it a set of all niches ==> mainstream thing.


Optimised C++, sure, but you don't see this issue with C#. If you are using a high-level language that does not have a reliable debugger, it's time to change language.


One of the things that happens here is that the more static and optimized a compiler's output is, the harder it is to do source-level debugging. Conversely, more dynamic, less optimized languages are easier to write and verify debuggers for. This isn't a strict ordering, of course, but a general trend. Unfortunately, the very aspects that make some languages easier to debug make them poor at some of the applications you might want the former for (cf. C# and C++ as you bring up).

Luckily debugging is not merely the act of using a debugger!


I imagine this situation might surface when looking at optimized native code, either from JIT or AOT, for example via WinDbg sos plugin for .NET.

Still not reason enough to avoid using a debugger.


Debuggers sometimes simply write wrong values into those nice GUI windows, don't update the values they show, or just don't inform the reader that the value has ceased to exist at all.


Oh, so a bug in the debugger basically. Speaking of the registers it sounded like he was talking about some weird cache incoherency issue that shouldn't conceivably exist (on x86 at least).


I think the Feynman Method is equally applicable to debugging; I've also done some teaching before, and seen far too many beginners open the debugger at the first bug they find and start stepping aimlessly through the code, seemingly unaware of what their code should be doing. They eventually reach the point where the program outputs the wrong result, and have no better understanding of what they did wrong, because they were too focused on what the code is doing to think about what it should be doing. I call it "debugger-induced tunnel vision".

My advice is, before even thinking about using a debugger, to think carefully about (and write down if necessary) the conditions and values expected at certain points in the code. Then, and only then, should you use the debugger to confirm or deny those hypotheses. In practice, when I guide those I've taught through this procedure, they often realise the source of the bug before ever opening the debugger.
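To make that concrete, "writing the expected conditions down" can literally be assertions at the points you'd otherwise eyeball in the debugger. A contrived Python sketch (the function is invented for illustration):

    def running_means(values):
        """Mean of each prefix of `values`."""
        total = 0.0
        means = []
        for i, v in enumerate(values):
            total += v
            mean = total / (i + 1)
            # Expected condition, written down *before* reaching for the debugger:
            # a mean must lie between the min and max of the values seen so far.
            assert min(values[:i + 1]) <= mean <= max(values[:i + 1]), (i, mean)
            means.append(mean)
        return means

    print(running_means([2.0, 4.0, 9.0]))   # [2.0, 3.0, 5.0]

If an assertion fires, you already know which expectation broke and where, which is most of the diagnosis.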


I completely agree. In fact, I think running a debugger should never be the first option. My process:

1. Read the code and think about what it's really doing.

2. If that leads nowhere, write (more?) unit/integration tests.

3. Only if the above two fail, run the debugger.

"Contemplating" code has another benefit: since you're looking at the whole picture rather than jumping line-by-line it develops your intuition about why things fail.


I think avoiding what you call "debugger-induced tunnel vision" can be done by using a debugger to step through the code just to read it. Debuggers can also be tools for better understanding what the code is really doing, because you see not only the code but also the state the system is in while debugging.

The added benefit is that when you actually break out the debugger to squash a bug you better understand what might be going wrong as you can try and pattern match from the time it worked perfectly.

I treat the debugger as a tool for understanding code, buggy or otherwise, no "debugger-induced tunnel vision" for me, it widens my field of view :)


I've heard of top-down and bottom-up design, and what you're describing sounds like top-down debugging, in contrast to "open the debugger at the first bug" -- bottom-up debugging?

They're both great, like having two sets of teeth to grind problems down to a fine digestible paste.

My software engineering prof encouraged us to use both approaches to programming. The analogy to teeth, however, is my own.


When I was half-way through my physics degree a few years back, they adjusted the curriculum to include a "Programming for Physicists" course [1] that taught a little C [2] and familiarized students with how computers compute, so that, most prominently, they would be able to recognize and guard against floating-point rounding errors.

I served as a TA on that course for the first few years, and after the first year, I lobbied for a few adjustments. The most important one was to include a specific assignment on debugging, where instead of asking the student to implement a new program, an existing program is presented that has both syntax and semantic errors. Students are expected to turn in a list of all the errors that they find.

[1] A crash course, actually, but still way better than nothing.

[2] Nowadays they do Python, I hear.


I imagine it would be quite hard to come up with good debugging examples for a class. Anything that has one right answer is going to smell like a made-up classroom assignment.

The thing that makes debugging hard is multiple hypotheses (or amazingly, exhausting all hypotheses). Your crash can be caused by errors on many different abstraction levels, so your debugging process has to be thought about on many levels. On one level, you have an illegal access via pointer. On another, you have a user whose information is incomplete.

Once you find a bug it's often hard to trace back exactly what the original state (including your state of mind) was, because you'll have taken the thing apart in various ways, added logging, looked at the call stack, memory, etc. And once you know the issue you shave down the explanation so that all those tedious steps that didn't lead to the answer seem to vanish.

One thing that got me thinking was this idea that people are expected to fail. To me, such a process is incredibly wasteful, and itself a failure. Are engineering students immune to ordinary psychology? If you tell people half of them will give up, you'll lose a lot of them, even ones who could pass.

You can make a course difficult, but don't tell people they will fail. I was never confronted with such statements, yet after my first year about 10% of the engineers had been booted due to failing exams.

His other point seems to be that courses are difficult because they aren't taught to people with the prerequisites. That should never be the case. Where I went the prereqs were a very standard set of math courses, and you could access most non-math courses knowing just those.


> I imagine it would be quite hard to come up with good debugging examples for a class.

Here's a high-level idea that might be doable in the right circumstances.

Get one or two hardware guys from an engineering company near the institution to come out with an old version of their silicon that has a fairly rapidly (but not necessarily trivially) triggerable bug. Set the system up, run over how it works, explain the ins and outs of the debug environment/setup, make the bug fire, and spend the next $class_length getting the students to try and figure the bug out.

If the guys can't come back for repeat sessions, they explain it at the end.

I suggest previous-gen silicon in the hope this would decrease the NDA requirement. Then again, even that may not be enough; obviously this would be CPU dependent. But even if the chip was otherwise NDA'd, if it uses a popular (or at least non-NDA'd) core and the bug is in there, that could work too - maybe you could do a partnership type thing with a custom build of the debugger that has NDA'd components hidden.

Maybe you can set up VNC or RDP from student laptops back to the test setup up the front, maybe the students can just backseat-drive. But just describing this, I'm excited, I'd totally want to go to a class like this! (Which was why I tried to think of ways to get around the NDA issue)


Sadly, late-breaking (post-silicon or just late in the development process) bugs aren't generally particularly more exciting in terms of debugging than earlier ones - other than the urgency/annoyance and rarity, of course.

They do tend to be 10x as annoying to debug though. Not more difficult - just tedious - especially because you often end up triaging what turns out to be somebody else's bug (i.e. a checker on your block fails, so you need to trace it back to somebody else, and often they also trace it outwards to somebody else).

And this is where using it as a class example gets really difficult - even people on the design team are specialists in their own blocks. Giving students all the knowledge they'd need to debug a problem would be incredibly difficult.


> I imagine it would be quite hard to come up with good debugging examples for a class. Anything that has one right answer is going to smell like a made-up classroom assignment.

> The thing that makes debugging hard is multiple hypotheses (or amazingly, exhausting all hypotheses).

You don't have to start out with an example that features all possible error conditions. It's totally fine to use a made-up classroom assignment for teaching. There is no need to put all the complexity in a single example, better to partition it into steps that can be attempted individually.

Moving around in a debugger, setting breakpoints and watches, knowing where to set breakpoints and watches, tracking a value on its way through the program, adding logging to get a view of varying state over time, localizing a problem within a large code base. Those are all skills that build on top of each other, and can and should be learned over a series of successively more complex examples, each focusing on a particular aspect, while reinforcing the previous lessons.

Then you can let the students loose on a wild goose chase through dozens of libraries calling into each other across language boundaries using asynchronous communication, to find the one instance where someone cast a void* to an int* to do arithmetic on it, when it should have been a float* .


These are unskilled programmers, so shouldn't it be ok to start out really simple?

Have index bugs like:

    for(var i = 1; i < len; i++)
    substr(name, indexof(name, ' ')) 
i.e. find out why it's not showing all the items (i = 1 instead of 0), and find out what's going on with the formatting of the last name (there's an extra space at the beginning of the name).

For SQL use a JOIN where it needs a LEFT JOIN.

I can think of others off the top of my head. Leave out an optional parameter that's been populated from the front-end, but don't pass it to the method call. In OO languages, have a constructor accidentally call itself instead of the alternative with more parameters (infinite loop). Nest two for loops and accidentally use i instead of j inside the inner loop. Have an if statement that looks sorta right but is logically impossible to satisfy.

For the "only sometimes crashes" bugs, div by zero is an easy example.

All the sort of accidents you instinctively correct now are probably good beginner debugger exercises.
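As a concrete sample of what such an exercise could look like (a made-up Python version of the fragments above), hand students something like this and ask why the first customer is missing and why the last names look odd:

    def last_names(customers):
        result = []
        for i in range(1, len(customers)):          # planted bug 1: starts at 1, so the first customer is skipped
            name = customers[i]
            result.append(name[name.index(" "):])   # planted bug 2: the slice keeps the space before the last name
        return result

    print(last_names(["Ada Lovelace", "Grace Hopper", "Alan Turing"]))
    # prints [' Hopper', ' Turing'] -- two symptoms, two separate causes to trace back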


I think the problem is more general than debugging and is really just about troubleshooting in general.

How do you abstract away a problem so you can individually identify and isolate the parts and test them individually? The same general techniques apply whether you're debugging a c program or repairing a car.

Just on the basis of interacting with coworkers, when I see people fumbling around in the dark, it's usually just because they're not using basic troubleshooting, they're just googling (at best!) and trying random fixes in a scattershot approach.

Quite a few times, I've helped people who were far more knowledgeable than me about, for example, Linux, by just asking them questions and trying different experiments to see if we could eliminate possibilities and so on. At the end of it, the problem is fixed and they thank me for the help, even though they knew everything they needed to know to fix the problem to begin with.


> Anything that has one right answer is going to smell like a made-up classroom assignment.

It should be alright in a classroom setting. I remember one particular piece of an assignment that I received a 0 on many years ago.

We were doing String manipulation, and the iterative pieces of the assignment culminated in us building a simplistic cipher. The part I did not receive credit for was recursively reversing the String. I just called the reverse method. Same result, so no harm no foul, right? Wrong.

The point wasn't to reverse the String so that the cipher could be created. It was to incorporate programming basics such as recursion into an assignment. It was a shoe-horned method to get us to use recursion, but it got the point across.


> Anything that has one right answer is going to smell like a made-up classroom assignment.

If you're doing C (or any other language where that particular problem exists), it's always a nice exercise to have students write a program (or a function) that takes the radius of a circle and outputs its volume. Most students will write:

  double volume(double radius) {
    return 4 / 3 * M_PI * radius * radius * radius;
  }
Can you see the problem? ;)


4 / 3 would be computed as an integer (i.e. 1) before the result is implicitly coerced to float for the rest of the expression calculation?

A nice example for debugging, and for understanding the value of languages with better typing discipline.


Circle has no volume


Although I laughed, I'm sure GP meant sphere. That is the correct formula for volume of a sphere, and a sphere is a circle in 3-space.


Aw snap. Yeah, of course I meant a sphere.

(And technically, it's the other way around. A circle is the 2-dimensional special case of a (hyper)sphere.)


Sure. But getting the spec right is more important than being smug about a silly mistake only novices make. And C has much more interesting caveats than this one.


Any school that teaches programming will take a stab at teaching debugging, although that is often limited to general familiarity with a debugger, say to the level of stepping through code and inspecting state.

The problem is that debugging is hard, and will tend to very quickly manifest any flaws in a novice's understanding. When I teach it, I describe it as a manifestation of the scientific method: observe the phenomenon (my program isn't doing what I expect), hypothesis (it might be this), experiment (if that's the problem then doing this should result in...). Repeat ad frustratum.

Unfortunately, doing that effectively requires some pre-requisites: (1) An accurate mental model of how the program should behave, with sufficient detail to hypothesize about why it's not behaving that way, (2) Available tools and instrumentation that allow you to inspect a running program (I'm old enough to remember debugging by adding commands to print state to the console), (3) sufficient experience and sophisticated understanding of the system to be able to make valid hypotheses and design tests.

Only the second item easily lends itself to the classroom, and the third item is critical: After I help a student identify and isolate a bug, they often ask me how I knew what to try. The answer is, almost always, that I've seen that before. Hell, there used to be a type of memory fault in Borland C++ that I could diagnose from across the room just by the icon and size of the error dialog. (You get a lot of experience seeing error messages by teaching Freshman programming).

So, short answer, the reason we don't do a good job teaching debugging to novices is at least partly because they're not ready to learn it yet.


I absolutely agree with the pre-requisites --

"(1) An accurate mental model of how the program should behave, with sufficient detail to hypothesize about why it's not behaving that way" -- While working as a firmware engineer, I once got a bug from a customer, he reported that they were in Argentina and the device with the firmware would lose an hour in the time displayed every time they had a power loss and that the problem goes away when they put the time-zone to Brazil, this much was enough to know that the problem was with the day light saving time part of code. Some engineer hypothesized that problem might be with the NTP server but his hypothesis was wrong because NTP server was dealing in UTC.

"(2) Available tools and instrumentation that allow you to inspect a running program (I'm old enough to remember debugging by adding commands to print state to the console)" -- I dealt with code which installs the firmware, frequent power cycles made keeping a debugger attached an exercise in futility so I just continuously logged the top two frames of the stack by using a system call. Crude way to trace the control flow but helped me in understanding the installation process a lot.

"(3) sufficient experience and sophisticated understanding of the system to be able to make valid hypotheses and design tests." -- This was even more important while dealing with firmware hangs, dealing with hangs made dealing with crashes look easy. Crashes were the messy criminals leaving a lot of clues behind. Hangs are the type of criminals who would have to be caught while committing the crime.

I would also like to add one more --

"(4) When debuggers fail you, git-bisect is your only true friend." -- I once was dealing with a memory corruption bug, in a piece of memory which was shared by two systems, the number of processes with access to that piece of memory was bonkers. I decided to brute-force my way out, it worked in the last release, it stopped working this release, I just kept doing a git-bisect between clean and dirty states and found the code which caused the bug. This way was definitely not the clever way to do it, but brute-force is sometimes the only option left.


At UC Berkeley, we are planning a lab devoted to practical debugging for next year's Data Structures course. Your ideas/suggestions are appreciated!


Please teach unit testing. I'm constantly amazed at how many students can leave school without ever even hearing about this thing called "unit testing." They'll have to look it up when they get home. Those that have heard of it still don't get how to write code in a form that can be unit tested.

The number one criterion we have when hiring a fresh graduate is an estimate of how much time we will have to devote to training them in their first year. Demonstration of unit tests in sample code signals lessened requirements on us. We use it as one of our proxies (a real internship is an example of another).
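Even a single worked example goes a long way. A minimal sketch with Python's stdlib unittest (the function under test is invented for illustration):

    import unittest

    def median(values):
        ordered = sorted(values)
        mid = len(ordered) // 2
        if len(ordered) % 2:
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2

    class MedianTest(unittest.TestCase):
        def test_odd_length(self):
            self.assertEqual(median([3, 1, 2]), 2)

        def test_even_length(self):
            self.assertEqual(median([4, 1, 3, 2]), 2.5)

        def test_single_element(self):
            self.assertEqual(median([7]), 7)

    if __name__ == "__main__":
        unittest.main()

The point isn't the framework; it's getting students used to writing code in small, independently checkable pieces.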


Based on a job at my workplace that could be available for your students, this is practical:

Make them use a JTAG debugger such as the Ronetix PEEDI. Make them debug timing-sensitive boot code, such as SDRAM detection routines that use GPIO pins to bit-bang SMBus protocol to the SPD EEPROM. Have a hardware watchdog enabled when this code is hit; it should reset the CPU after a few seconds.

Another suggestion is to use the debugger that is included with IDA Pro. Make them connect IDA Pro to a remote system, then debug a kernel driver without source code. The "trace replay" capability may come in handy.


Make sure it comes after a few rigorous lectures and exercises on requirements and specifications. I presume from the name of the course that there is already at least some emphasis on that.

If not, students will conclude that writing the spec and debugging the program are simply equally valued steps in a stepwise refinement process. Of course it sometimes is that way and can be no other, but generally it is better to think first and act afterwards.


Please make your lectures public.


:(


There are many methods of debugging as you can see sprinkled through these comments:

Using the interactive debugger

Reading log files and adding more logging/trace

Writing unit tests

Making some sort of test program or script to exercise the problem area so it's easier to reproduce the problem

Depending on your environment, tools and problem, different combinations of the above will make more sense.


> those who were underperforming weren’t struggling with the fundamental concepts in the class, but with algebra: the problems were caused by not having an intuitive understanding of, for example, the difference between f(x+a) and f(x)+a.

Those taking that class have spent years trying to learn algebra and still don't get it, so we are already trying this.

> I’m no great teacher, but I was able to get all but one of the office hour regulars up to speed over the course of the semester.

Most likely they just memorized the new algebra rules as well. There are lots of studies showing how hard it is to teach students anything tangible; instead, most will just try to memorize everything you say.

See for example: https://www.amazon.com/Academically-Adrift-Limited-Learning-...


The linked book was published in 2011 -- lacking the US context, has the situation changed in the past 6 years?


I've thought for some time that the idea of understanding is bass-ackwards - understanding comes after learning how to do something mechanically, particularly in maths. I don't know if that's a learning theory though.


This can really be extended to basic troubleshooting skills, and goes beyond software development -- beyond technology, even.

Troubleshooting is really just a special application of general 'problem solving', which is something schools aim to teach. I'm not sure why troubleshooting skills are not taught, especially something like "divide and conquer" [1]. Even in very complex systems, so long as you have the right access/visibility/logging to see what's going on at various places, it usually doesn't take very long to isolate a problem down to a particular component.

[1] https://en.wikipedia.org/wiki/Troubleshooting#Half-splitting


I wanted to write something similar, so I just upvoted you and piggybacked here. "Debugging" is really a subset of "Problem Solving". My grandfather had (by today's standards) very poor schooling, but he taught me some PHYSICAL WORLD DEBUGGING techniques which proved useful later in life. Just one simple example (I suppose this would be "Tracing" in IT) - once he was working on a cabinet that had somehow warped and wouldn't close properly. His solution: he cleaned the door and the frame, dipped his finger in oil and smeared it along the side of the door that would touch the frame. Then closed the door and opened it again - the point where some trace of oil was visible on the frame was the one that needed to be pared down to allow the door to close.

He also intuitively used divide & conquer to find faults or leaks in wiring, pipes and so on.

It is really something that should get more study and practice outside of IT or science, and I am sure that mechanics and other craftsmen have their own set of algorithms that can be used in various situations.


Yep, my dad was a mechanic when he was younger and worked on cars for fun, and he taught me troubleshooting before we even had a computer.



It is interesting to think that there have been those [1] who thought about the converse proposition, and saw early on the potential for the debugging of programs to become a widespread activity which would promote general habits of introspection in children (Seymour Papert's "thinking about thinking").

But that's not how things happened, and today people pretty much do not think about writing (and thus do not end up debugging) the programs they use.

Instead we are all users now, and for many the troubleshooting algorithm is to:

1. google it, or

2. reboot it, or, failing that too,

3. throw it away, or, failing even that,

4. complain and curse.

It's amusing to think that this model is now sometimes used by programmers as well (though hopefully mostly just #1, and sometimes #2).

[1] https://youtu.be/Pvgef9ABDUc?t=43m10s


That wiki link is a fascinating rabbit hole of debugging-related advice/pages. Also, I'd never heard the term half-splitting. Cheers!


Half-splitting is a binary search for the root cause.
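In code terms it's the same loop `git bisect` runs for you. A toy sketch in Python (assuming you have some way to test any point in the "chain", and that everything before the fault tests good and everything after it tests bad):

    def half_split(stages, is_broken):
        """Return the index of the first broken stage, given the bisect invariant:
        stages before the fault test good, stages from the fault onward test bad."""
        lo, hi = 0, len(stages) - 1        # hi is known (or assumed) broken
        while lo < hi:
            mid = (lo + hi) // 2
            if is_broken(stages[mid]):
                hi = mid                   # fault is at mid or earlier
            else:
                lo = mid + 1               # fault is after mid
        return lo

    # Toy usage: ten "revisions", the regression landed at index 6.
    revisions = list(range(10))
    print(half_split(revisions, lambda r: r >= 6))   # -> 6, found in about log2(10) checks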


Debugging is a valuable skill that some of us were lucky enough to pick up by chance. A friend recommended an excellent book that very nicely explains the process: "Debugging: The 9 Indispensable Rules for Finding Even the Most Elusive Software and Hardware Problems"[1]. I found myself nodding along while reading. I think this would have been a hugely valuable primer if I had been exposed to it before I figured out how to do it on my own.

[1]: https://www.amazon.com/dp/B00PDDKQV2


Beyond a few simple techniques (printing values, writing and examining logs, practicing the scientific method) and defensive strategies (asserts, writing good tests, parameter checking, etc.), debugging gets environment-specific pretty quickly. Gronking through a bug in an embedded system is way different from figuring out why your rack of servers has gone dark, though the root cause might be the same.

I've found the best way to learn debugging is to shoulder-surf someone who is good at debugging. For instance, I'd spent six months at Apple before I watched a resident wizard for an hour in the low-level debugger, and what I learned in that hour improved my skills dramatically. Repeat for two other companies: encounter a tough problem (for you), find a wizard to help out, and pay attention.

I think the sentiment in schools is that sufficiently challenging coursework will force you to learn debugging techniques on your own, as a natural outcome of you writing code :-)


They don't teach you unit testing either. Or how to read a doc efficiently. Or how to make a technical choice. Or how to conduct a meeting. Or how to direct a project.

At best they'll talk about best practices and give you an exercise that implies you should use one of those skills.

The amount of useful stuff that is not taught in school is enormous.


Thoroughly agree with the author. As someone who switched into CS from the humanities, one of the biggest challenges for me was less about syntax and more about figuring out the errors in my logic. For some reason, though, the former seems to get more focus from professors.

"We excitedly explained our newly discovered technique to those around us, walking them through a couple steps. No one had trouble; not even people who’d struggled with every previous assignment. Within an hour, the group of folks within earshot of us had finished, and we went home."

This is great, I wish more people at my school did this, learning should be collaborative (as long as you don't copy paste code). There seems to be an obsession with secrecy and not sharing any ideas on a project as though there is only one way to solve a Java homework problem.


  as long as you don't copy paste code
That's the million dollar question: how do you know whether a classmate is being earnest or just copying you? Especially in highly competitive environments, cheaters are extremely adept at leveraging your work without substantially contributing anything back.

The way it works in law school is that students (at least the social ones) break up into study groups early on. They may or may not be friends outside the study group. The key is that the study groups span multiple classes, and sometimes last the entire three years. In other words, it's an iterated game that discourages cheating from the outset. Being ostracized from a group is a huge liability, especially if you're a cheater.

This strategy doesn't work as well in undergraduate programs. There's too much churn and variability (in the student population, in their intelligence, in the number of subject classes, in the number of professors teaching the same subject), which means it's easier for cheaters to come and go without having to suffer repercussions.

I'm not a collaborator. To learn (and retain) I need to work through things alone, especially on difficult problems. Talking with other people about a problem is usually a distraction, and almost guarantees I won't internalize and retain the material. The above description of group dynamics is my outsider's perspective.


That sounds like awful pressure to not rock the boat of your study group but maybe it's suitable for lawyers.

Cheating was solved in an engineering class I took that had us write a program in our own time, but the assessment was a test where we were asked to modify it and produce the correct output. If you had copied the code without understanding it, you wouldn't be able to pass the test.


A rigorous honor code also solves the cheating problem, as long as the administration actually enforces it by expelling violators.


The problem is actually getting administration to enforce their zero tolerance policies: expel cheaters despite the fact that they pay a hefty tuition.


>That's the million dollar question: how do you know whether a classmate is being earnest or just copying you? Especially in highly competitive environments, cheaters are extremely adept at leveraging your work without substantially contributing anything back.

What if I'm very good at one thing and very bad at another? That would probably work with the study group just fine, assuming they know it and adjust accordingly, but it's still 'cheating' in the sense that I can get away with only a partial understanding.


> A number of people resolved to restart from scratch; they decided to work in pairs to check each other’s work.

Everywhere I've studied, that alone would have counted as academic dishonesty. If it wasn't an explicit group project, there was clear language about working independently; you couldn't just decide to pair up. Almost everyone, even my graduating class's valedictorian, did it anyway. I was the honest chump and my GPA suffered in comparison.

Discussion of the problem definition was condoned, but any talk about solutions was off-limits. Many professors would include homework sets with a final question asking you to confirm that you worked alone and received no help outside the professor or TA. Some allowed you to admit to receiving help, with no punishment beyond 0 points for the assignment.

I shall assume this guy had an atypically lenient professor for his first engineering class.


> Why do we leave material out of classes and then fail students who can’t figure out that material for themselves? Why do we make the first couple years of an engineering major some kind of hazing ritual, instead of simply teaching people what they need to know to be good engineers?

Yeah, this is horrible and irresponsible on the university and professors' part. But that's the way it seems to be in most large US universities (anecdotally based on talking to friends in grad schools).

However, decent small liberal arts colleges often do exactly what you prescribe, they correctly try to teach it to everyone and generally succeed. Big box schools are just exploiting undergrads as cash cows, don't attend them or send your children to them.


I have always felt that I was better at debugging than programming :-) I have often been able to debug really complex problems but afterwards I am not able to retrace the steps that I took to come up with the diagnosis - so I'm not sure if this skill can be taught.


I KNOW I'm better at debugging than at programming. My own working code is terribly simple and straightforward but I've fixed kernel sound drivers, buggy firmwares and complex interrupt-driven music players. Perhaps I find it easier to construct a mental model when presented with a finished system rather than when starting from scratch.


Neal Stephenson writes about this in my favorite of his books, "The Diamond Age"; he calls them honers and forgers. The type of engineer that is better at improving an existing thing vs. the type that creates something new.

There's a need for all types.


Aaaah, this makes so much sense. Thanks for the reminder. Here's the para in question:

"Hackworth was a forger, Dr. X was a honer. The distinction was at least as old as the digital computer. Forgers created a new technology and then forged on to the next project, having explored only the outlines of its potential. Honers got less respect because they appeared to sit still technologically, playing around with systems that were no longer start, hacking them for all they were worth, getting them to do things the forgers had never envisioned."


This book is probably my favorite of his, and definitely in my personal top-10 for fiction. Enjoyed it immensely on the first read, and get even more out of it on subsequent ones.


I hope your employer values you. Good debuggers (the people) seem to be rare, especially people who also like to debug stuff. And you are so needed.


I think you have touched on something important - you need a good mental model of the system being debugged.


I guess it's my personality type too, which is INTP, the classic deconstructionist: What does this do? How does it do it? Why isn't it working?


> My own working code is terribly simple and straightforward

That makes you a good programmer, too. ;)


When I was in school I showed my algorithm (on paper) to my teacher and he said: "It's wrong." I insisted: "It works!" He froze for a few seconds and then said: "Use these data as inputs," and I immediately saw the bug. Of course the entire class laughed.

I know the story isn't really about debugging, but sometimes showing the "right" inputs can be a very efficient way to explain a bug to another person.


"froze" implied he was scared. try "paused" or "ruminated" instead.


I dunno, when I see code that works and I don't know why, I get plenty scared.


One of my clearest and earliest memories of programming as a child is of having a working program, making a few changes, and suddenly nothing working; I was totally lost in the woods as I randomly tried fixing stuff and got absolutely nowhere. I feel like it's had a big impact on my approach to programming ever since: trying to break things down into manageable pieces rather than one big ball of mud that's fragile to errors.


Fantastic essay, and I think the same approach to teaching could be applied to so many other skills needed to succeed as well. When people talk about academia as an ivory tower, this is exactly what they mean. Academia seems utterly focused on "cool ideas" and "intellectual exploration", and it really short-changes the nitty-gritty skills that may not be as intellectually stimulating but are still absolutely needed to get stuff done.

To give another example, anyone working in a programming context can boost their productivity by knowing how to set up their shell aliases, learning the basics of Unix tools such as grep, and mastering a good text editor/IDE such as vim/emacs/IntelliJ. And yet, after 4 years in college, I didn't know a single thing about any of the above, and slowly picked up bits and pieces at random over the course of my career. If a professor had taught the above in an intro-to-engineering course, it would have saved us countless hours and allowed us to hit the ground running.


Interesting. I just finished reading John Allspaw's 2015 paper on debugging/problem solving at Etsy: http://lup.lub.lu.se/luur/download?func=downloadFile&recordO... (really slow... sorry, Sweden!)

While it focuses on time tradeoffs and the like during outage resolution, one of the major themes is that engineers/developers tend to use heuristics and past experience to drive problem solving. I think as developers we tend to do the same: we look for common patterns in the types of problems we've encountered. The debugger is just a representation of log analysis.

So how do you teach those experiences? I think a lab-based approach with examples is probably the right course.

Sun Microsystems used to have a Fault Analysis class they taught for Solaris. It was essentially just a lab of 50 broken systems/scenarios that you had to diagnose and fix. I think more of that sort of thing earlier on would be helpful.


I haven't seen anyone mention the following yet, and it's INCREDIBLY important!

When something is not working right and you have to debug it, you need to re-evaluate your abstractions and drop down a level in your technology stack.

In Dan's case, he had to look at, and think about how the logic gates worked.

In a more modern case, when your REST calls aren't working right, you may have to debug HTTPS stuff.

If your HTTPS stuff isn't working you may have to look at key negotiation and exchange, etc etc.

If your Entity Framework isn't working, you may have to (horrors!) look at what's going on with the database.

If the database isn't working, you may have to (gasp!) look at the hardware it's running on.

Too often people look at surrounding code or state, and do not verify the abstractions of the lower-level code or hardware. Those things have "always worked" - it's never the compiler's fault, until it is (or until it's a bad assumption about how the compiler works).
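
To make the "drop down a level" move concrete, here's a minimal Python sketch of the REST case (the host and endpoint are hypothetical, and this is just one way to do it): instead of rereading the code that builds the call, turn on wire-level tracing in the standard library's http.client and look at what actually crosses the socket.

    import http.client

    # One level down from "my REST call fails": watch the raw HTTP
    # exchange instead of rereading the code that builds the call.
    conn = http.client.HTTPSConnection("api.example.com")  # hypothetical host
    conn.set_debuglevel(1)  # prints the request line, sent headers, reply headers
    conn.request("GET", "/v1/widgets", headers={"Accept": "application/json"})
    resp = conn.getresponse()
    print(resp.status, resp.reason)
    print(resp.read()[:200])

If that layer looks fine but the TLS handshake is suspect, you drop again (an openssl s_client session, a packet capture, etc.) rather than keep staring at application code.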

I've seen people here talk about iterating, which is fine and good for many problems. For many debugging problems, though, iterating will take you until the heat death of the universe.

I don't really think "full-stack" devs are always the best solution, or a scalable approach to development; but a full-stack dev with a debugging background is hard to beat when you have problems.

Edit: To put it another way: If your technology isn't working, you have to examine the engineering under it. If the engineering isn't working, you have to examine the science. If the science isn't working, you have bad assumptions, or you've found new science! (you probably have bad assumptions)


IMO debugging is actually the most practical definition of the buzzword-y "full stack" developer.

> There's no authoritative definition, but I see it as "I can trace a visual problem down through CSS styles, debug the JavaScript code that sets them, intercept the AJAX call with the problem-data, debug the backend language that is run, diagnose the SQL that finds the malformed row... And apply a fix on any of those levels where it is needed."

> You might not like the code you wade through, but...


"90% of programming is debugging. The other 10% is writing bugs" - Bram Cohen


> the problems were caused by not having an intuitive understanding of, for example, the difference between f(x+a) and f(x)+a

Does the author really advocate that the instructor of an advanced college course needs to spend his time explaining the difference between f(x+a) and f(x)+a to his students?

I don't have any idea how someone with such an understanding of algebra could even get into a college, even if he wanted to study humanities. If a college student doesn't get it, it's the college's mission to fail him as fast as possible, not to spend time trying to teach him something middle school should have covered.
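
For anyone wondering how much is really at stake here, a quick worked example (taking f = sin; the choice of function is mine, purely for illustration):

    \[
      f(x + a) = \sin(x + a) = \sin x \cos a + \cos x \sin a
      \qquad \text{(a horizontal shift of the graph),}
    \]
    \[
      f(x) + a = \sin(x) + a
      \qquad \text{(a vertical shift of the graph).}
    \]
    \[
      \text{E.g. } x = 0,\ a = \tfrac{\pi}{2}: \quad
      \sin\!\bigl(0 + \tfrac{\pi}{2}\bigr) = 1,
      \qquad \sin(0) + \tfrac{\pi}{2} \approx 1.571 .
    \]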


>I don't have any idea how someone with such an understanding of algebra could even get into a college even if he wanted to learn humanities.

Schools' income depends on the number of students (either a budget allocated according to student population, or students paying directly). Therefore, the incentive is to delay failing people as long as possible, as long as it doesn't affect the quality of the graduates.


But the quality of a school depends on the quality of its students.

In two ways, even: first, if you select good students, they show good test results and land good jobs anyway, because they're smart. But also (and this is most important for me as a potential student) your school's _product_ is not only interaction with instructors, but also interaction with other students. The better your peers, the better the education you get - even if you disregard the "school network" factor.


What kind of middle school did you go to????


I have to admit, I moved between several math-oriented schools - but trigonometry is supposed to be a standard middle school subject, no? If you don't understand the difference between sin(x) + y and sin(x + y), I can't imagine you could pass the simplest high school entrance exam.

But I guess it depends on the country. US math education, from what I hear, never ceases to amaze me - the contrast between the abysmal level of your schools and the best CS departments in the world is just astonishing.


People abuse debuggers (I see this especially with Windows programmers). It's better to write test drivers and simulations to force the breakage. If this is how debugging is done, you'll keep enhancing your test drivers and simulations; just running interactive debuggers on the main application, you lose much of this. Honestly, I spend less than 1% of my time in a debugger, and all of that time is post-mortem crash analysis. I also work with heavily threaded code, where running a debugger tends to make the code run without exhibiting the bug.


> It's better to write test drivers and simulations to force the breakage.

Those aren't mutually exclusive. It's much easier to write a test for an issue if you know what the issue is.
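
For instance, a minimal Python sketch of that order of operations (parse_port is a hypothetical function standing in for whatever the debugging session pointed you at): once debugging has identified the failing input, freeze it as a test driver so the breakage is forced on every future run.

    import unittest

    def parse_port(value):
        # Hypothetical code under test; debugging showed it blows up on
        # empty input instead of reporting a useful configuration error.
        return int(value.strip())

    class PortParsingRegression(unittest.TestCase):
        def test_empty_string_raises_value_error(self):
            # The exact input uncovered while debugging, pinned forever.
            with self.assertRaises(ValueError):
                parse_port("")

    if __name__ == "__main__":
        unittest.main()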


The more egregious thing schools don't teach is working in large existing codebases, i.e., the skill of reading and figuring out code.

Instead they focus on /writing/ code. In the process of writing code, at least some incidental debugging experience is gained (this article is actually an example of that -- the debugging wasn't formally taught, but the assignment clearly forced the author to learn it a bit), but almost no /reading/ code experience. It took me a while in industry to make up for this deficit.


Some schools are not just indifferent to teaching debugging, they are actively hostile to it, arguing that teaching debugging interferes with the "proper" approach of programming through correctness proofs.

This pernicious attitude, at least in my alma mater, could be traced back to E.W. Dijkstra, e.g.

http://www.cs.utexas.edu/users/EWD/transcriptions/EWD02xx/EW... http://www.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/EW...

And he may not have been entirely wrong; it's not uncommon to see people spend insufficient time reasoning about the correctness of their code from first principles and jump directly into testing/debugging.

On the other hand, using correctness proofs as the sole and sufficient means of writing code is just not a workable approach for most non-trivial problems.


We should also probably start teaching how to effectively unit test.


There are a number of ways one could teach debugging. There are, for example, a number of reversing CTF challenges that can be used to train people in using debuggers to find and fix issues. The good thing about CTF challenges is that they have clear goals (find the flag), so you know when you are done.

Another way which we have used is to engage students in open source projects. Look at bug trackers to find a few issues and then try to solve them. This also teaches students a lot of other things, like:

- How to work in existing projects
- How to ignore large parts of a code base to focus on a given issue
- How to write test cases
- How to create patches that match project code style
- How to communicate and participate in projects


The problem I see with teaching debugging is that I am not aware of any systematic course on debugging, with suitable exercises, a systematic theory, etc.

If there were, I can believe some universities might at least be interested in offering an optional course.


Debugging is a composition of simulating the software in your mind, verifying what you wrote down, and observing what the computer does. The reason I think there is no course on debugging is that it is equivalent to learning how a computer works.


I was once on a project with a tech lead who was really good at finding bugs by inspection. In the "How to Reproduce" section of bugs he'd file, he'd always simply write "Eyeball the code."


All of this greatly depends on the teacher. While I have never specifically seen a debugging class offered at university, some professors do a good/decent job ensuring that you have debugged/tested your code.


What schools DON'T teach debugging? Simple trace statements were one of the first things we learned in 101.
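
For anyone who hasn't seen the technique, a trace statement can be as small as this sketch (a toy recursive function of my own, not something from the article):

    def fib(n):
        print(f"fib({n})")  # trace statement: makes the call pattern visible
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    fib(5)
    # The flood of repeated fib(2)/fib(1) lines shows the redundant
    # recomputation long before you'd think to reach for a profiler.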

Maybe I was just lucky to have a good teacher. I still remember him not liking my solution to a problem when I was extremely proud of how smart I'd made an object, so that it reacted properly based on whatever was passed in. Instead of arguing with me he offered an extra credit assignment to take all of the logic out of the object and see if I could get it working that way...I did it and realized how much simpler his way was.

Hindsight makes you appreciate the patience that took.


It's the first time I've heard of anybody being taught debugging, honestly.


That's a little bit horrifying.


Can someone point to an online resource that teaches debugging?


My first encounter with programming for engineers was in college, on already antiquated machines running Unix (in 2001-2002), with a professor requiring us to use a basic text editor and debug by printf. It put me off programming for several years, before I came back to it with higher-level languages (VBA, then .NET) and a rich IDE (the excellent Visual Studio).

The IDE (and the debugging experience) is as important as the language, if not more so, in my opinion.


I wonder why, whenever such a discussion pops up, almost nobody mentions the stack trace. I've seen so many CS students who fail at reading that first representation of a problem; I really wonder why that is.

Also I like debugging via print much better than with a debugger, because it's easier for me to spot the problem. Breakpoints are great though.
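
To make the stack-trace point concrete, here's a toy Python example of my own (the file name and line numbers are purely illustrative):

    # trace_demo.py: deliberately broken so it dies with a stack trace.
    def parse_age(text):
        return int(text)  # blows up on non-numeric input

    def load_user(record):
        return {"name": record[0], "age": parse_age(record[1])}

    if __name__ == "__main__":
        load_user(["alice", "n/a"])

    # Running it prints, roughly:
    #   Traceback (most recent call last):
    #     File "trace_demo.py", line 9, in <module>
    #       load_user(["alice", "n/a"])
    #     File "trace_demo.py", line 6, in load_user
    #       return {"name": record[0], "age": parse_age(record[1])}
    #     File "trace_demo.py", line 3, in parse_age
    #       return int(text)
    #   ValueError: invalid literal for int() with base 10: 'n/a'

Read it bottom-up: the last line names the error, the frame above it names where it happened, and the frames above that show how the bad value got there - all before a single print statement or breakpoint.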


That's not quite the sort of debugging being discussed here, though - they're talking about the general theory of how to think through a problem to locate the issue, which applies to more things than just software, including computer hardware, cars, circuit boards, just about any large system really - you do common things like replacing or disabling parts of the system, probing values at certain points, logging values, etc. For the most part, these things have a lot in common, but few people know how to approach such problems.

It's interesting that I see them saying "the more talented the engineer, the more likely they are to hit a debugging wall outside of school". I was only ever a good engineer in school because I was able to debug. You don't understand something til you tear it apart.


I went to university when few people had their own PC. It's true there was no catalog class in debugging but we all got very good instruction and learned many techniques in the lab, especially from the lab TAs. I would suggest the author seek out where the comp sci students congregate these days and talk through issues there.


Back when I was in college I had a bit of a reputation as a "hacker", and one of my friends wanted my help stealing somebody else's homework for CS101.

Turned out the program we stole didn't quite work and the process of debugging it was at least as educational as starting from scratch.


For the same reason kids come out of college woefully unprepared to build software in the real world.

College is an absolutely horrible environment in which to learn programming.

Programming should have a journeyman/apprentice program, like electricians have.

I learned more from a mentor than I ever did in school... it honestly was a waste of money.


Is this normal? That's pretty bad considering how much studying costs in America.


I nearly failed my intro Computer Science class because I didn't allow for 90% debugging time. Also, I wish I could save some of my errors: since they come from my own fingers, they are the most instructive.


When asked "How do you debug programs?" my Programming Languages professor responded, "I don't. I prove every line correct before I write it."


This can be done in the context of algorithms, but not in professional programming in general. Software depends on zillions of components. If you wanted to formally prove everything, the software field would move like a turtle: with wisdom, but very slowly.


I hear you. The general reaction of the class was, "Thank you for nothing."

The real world is different.


Good, now do that on a deadline, with Python.


Yep. Try that in the business world and your competitor will iterate five times while you're still working on your prototype, and your milkshake will have been drunk, digested, and excreted.


Other important skills not addressed in (my) schools: source control, IDEs, practical (not theoretical) encapsulation, design patterns


Schools should teach Project Management and Quality Assurance too. Why they have not adapted is beyond me.


I thought the whole point of lab classes (in contrast to lecture classes) was to learn debugging skills.


Rite of Passage, man, rite of passage. (Not that I endorse that.)


Because a lot of them hardly teach programming as it is.


Well, not only debugging but also performance profiling.


Debugging is much more important though.


If you had to choose between the two, sure.

But I think there might be some time to at least walk students through the basics of profiling; it can be very valuable to have some exposure.

Then when it comes to memory management, there's a fine line between debugging and profiling.

I was playing with left-leaning red-black trees yesterday as a refresher and got memory corruption due to a bug. With valgrind (a tool I generally use to catch memory leaks) I caught it immediately; very helpful tool.


TL;DR: "Writing down the solution" without "thinking real hard" is a much better method


Yeah. And then iterate toward a better solution. Sometimes, after a long time, the bugs bite back at you real hard. But at least you didn't miss the deadline.



