
A more recent paper that's related to this, "Coverage Is Not Strongly Correlated with Test Suite Effectiveness": http://www.linozemtseva.com/research/2014/icse/coverage/cove...

I wouldn't want people to conclude that coverage was a useless metric based on this, but it does seem to support the idea that chasing code coverage dogmatically is not necessarily the best use of your time.




I agree -- IMO being dogmatic about 100% or any other number can be a big waste of time.

But a coverage ratio and/or browsing annotated source is a good feedback mechanism for how thoroughly your tests actually exercise the code.
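The coverage ratio mentioned above can be sketched with nothing but the Python standard library: a `sys.settrace` hook records the lines a function actually executes, and the code object's line table (via `dis.findlinestarts`) gives the lines that could execute. This is a rough illustration of the idea, not a substitute for a real tool like coverage.py; the `classify` function and all names here are hypothetical.

```python
import dis
import sys

def covered_lines(func, *args):
    """Run func under sys.settrace and record the lines it executes."""
    executed = set()
    code = func.__code__

    def tracer(frame, event, arg):
        # Only record line events from the function we're measuring.
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def coverage_ratio(func, calls):
    """Fraction of func's executable lines hit across the given calls."""
    # The code object's line table lists every line that has bytecode --
    # a rough stand-in for the statement table a real tool maintains.
    executable = {line for _, line in dis.findlinestarts(func.__code__)
                  if line is not None}
    hit = set()
    for args in calls:
        hit |= covered_lines(func, *args)
    return len(hit & executable) / len(executable)

# Hypothetical function under test: two branches.
def classify(n):
    if n > 0:
        return "positive"
    return "non-positive"

# A test set that only ever hits the positive branch leaves coverage
# incomplete; adding a zero case raises the ratio.
ratio_partial = coverage_ratio(classify, [(1,), (2,)])
ratio_full = coverage_ratio(classify, [(1,), (0,)])
```

In practice you'd run something like `coverage run -m pytest` and browse the annotated report rather than hand-roll a tracer, but the feedback loop is the same: the ratio tells you a branch went untested, and the annotated source tells you which one.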



