
Yes, that's the main culprit with traditional static analysis. No one wants to review the results, because the signal-to-noise ratio is far too low, and because it's an optional step rather than something enforced by the compiler.

I think this is where languages with stronger built-in analysis (e.g. Rust) win: the results are better, and since the analysis always runs as part of a compiler pass, you never face a huge backlog of reported bugs all at once (as you would if you ran Coverity on a legacy C++ codebase for the first time).
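As a minimal illustration of the difference, here is a C++ lifetime bug that both GCC and Clang accept silently even with -Wall -Wextra, so finding it is left to an optional external tool:

    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        int& first = v[0];   // reference into the vector's heap buffer
        v.push_back(4);      // may reallocate; 'first' can now dangle
        std::cout << first;  // undefined behavior, yet no warning
    }

The equivalent Rust (take a shared reference into a Vec, then push to it) is rejected on every build by the borrow checker, so this class of bug never accumulates into a big batch of findings.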



From experience on large codebases: get to -Wall -Wextra “clean” under the latest versions of both GCC and Clang, and then tools like Coverity produce much more useful results. At that point the signal is exactly what the tool is meant to provide: mostly improper error handling, plus defects N levels deep in a call chain, caused by an error or bad decision in another file that a human would never associate with the current code or think to look at. To be fair, the tools work much better when you know which complicated pieces you have and spend a little time writing correct models for them (e.g. custom assertion/error handling, runtime-supplied vtables, custom allocators, etc.).
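A minimal sketch of that last point (the names are hypothetical): marking a custom assertion handler [[noreturn]] tells both the compiler and the analyzer that control never continues past a failed check, which removes a whole class of false positives downstream. Coverity also lets you supply hand-written models for functions whose source it can't see, in the same spirit:

    // Hypothetical custom assert; [[noreturn]] tells compilers and
    // analyzers that control never returns from a failed check.
    [[noreturn]] void my_assert_fail(const char* expr,
                                     const char* file, int line);

    #define MY_ASSERT(cond) \
        ((cond) ? (void)0 : my_assert_fail(#cond, __FILE__, __LINE__))

    int deref(int* p) {
        MY_ASSERT(p != nullptr);
        return *p;  // analyzer now knows p is non-null here: no report
    }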


Yes, I think the 'done by the compiler' part is really important for uptake. You can see a good example of a variant of this in the article:

> diff time [ie in the standard code-review workflow] deployment saw a 70% fix rate, where a more traditional "offline" or "batch" deployment (where bug lists are presented to engineers, outside their workflow) saw a 0% fix rate

That's the difference between "static analysis presented as part of the workflow a developer goes through anyway" and "static analysis presented after the fact". If you're in a position to enforce a code-review workflow that tools can hook into, then "at code review time" works. But "at compile time" is better still: it shortens the feedback loop, and it ensures everybody sees the issues while they're thinking about the code, even in smaller setups where code review is ad hoc or nonexistent.
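One cheap way to push a check to compile time in C++, sketched with an invented function name: C++17's [[nodiscard]] turns "caller ignored an error code" into a diagnostic on every build, for every developer, with no separate tool or review step:

    // Hypothetical fallible function; [[nodiscard]] (C++17) makes
    // GCC and Clang warn whenever a caller drops the result.
    [[nodiscard]] int write_config(const char* path);

    void save() {
        write_config("/etc/app.conf");  // warning: ignoring return value
    }

Combine that with -Werror in CI and the finding can't be deferred to a batch report.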


This is less true of more advanced static analysis tools.

I mean, ultimately we agree. Most people don't trust static analysis tools because they have had bad experiences with them. I just suspect most people should try them again. The state of the art is quite good in that space.



