
I don't think the issue with the algorithm is people deliberately trying to cheat it. The problem is that files that are, by any empirical measure, more bug-free can end up with higher scores.

You could have two very similar pieces of code, one of which lives in a single 1000-line file. Imagine this code gets 1 bug fix per day.

Now imagine the other, functionally identical code happens to be broken up into 10 files, and that 9 bug fixes per day land across this unit of code. The broken-up code is essentially the same yet 9x buggier, but it will be scored lower, because each file only averages 0.9 bug fixes per day.

Do you disagree that the code in the second example is actually far "trickier" and deserves a flag much more than the first? I understand that one requirement here is that the algorithm be clear to developers, but it seems like you could easily take the ratio of bug fixes to total commits on a file, or normalize by file length, and get a much more reasonable measure of how "tricky" or dangerous a file is to edit.
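To make the comparison concrete, here is a minimal sketch of the normalization being suggested. The function names and inputs are purely illustrative (this is not the site's actual scoring code); it just shows how scoring bug fixes per line, rather than per file, reverses the ranking in the example above:

```python
def raw_score(bug_fixes_per_day: float) -> float:
    """Naive per-file metric: raw bug fixes per day, regardless of size."""
    return bug_fixes_per_day

def normalized_score(bug_fixes_per_day: float, line_count: int) -> float:
    """Bug fixes per day per 1000 lines, so splitting one file into ten
    pieces does not make each piece look ten times safer.  (The same idea
    works with bug-fix commits / total commits as the numerator.)"""
    return 1000 * bug_fixes_per_day / max(line_count, 1)

# One 1000-line file averaging 1 bug fix per day:
big_file = normalized_score(1.0, 1000)    # 1.0 fix/day per 1000 lines

# Each of ten 100-line files averaging 0.9 bug fixes per day:
small_file = normalized_score(0.9, 100)   # 9.0 fixes/day per 1000 lines
```

Under the raw metric the big file looks worse (1.0 > 0.9 fixes per day), but normalized by length the split-up code is 9x buggier per line, which matches the intuition that it is the trickier code to edit.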


