> Lines of code written
Less is better, right?
Because it's really easy to throw code at a problem.
OTOH it's hard to come up with an optimal minimal solution.
> Lines of code rewritten (changed N days after)
Means: Your devs commit unfinished, not properly thought-through crap.
Or: Management is a failure because they change requirements all the time.
> Lines of code removed
More is better, right?
Because cleaning up your code constantly to keep it lean is key to future maintainability.
But it could also be a symptom of someone with an NIH attitude.
Or, management is a failure because they don't know what they want.
> Lines of code that contributed to linting problems
If it's more than zero, that's a symptom of slacking off and / or ignorance.
Your local tools should have shown you the linting problems already (a minimal sketch of such a local check follows below). Not handling them is lazy at best, and at worst someone is deliberately wasting time by committing things that will need another iteration later on.
Or, your linting rules are nonsense, and it's better to ignore them…
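To make the "local tools" point concrete, here's a minimal, hypothetical sketch of a git pre-commit hook that runs a linter before anything ever reaches CI. The choice of ruff and the `.py` filter are just assumptions for the example; any linter wired up the same way does the job.

```python
#!/usr/bin/env python3
# Hypothetical .git/hooks/pre-commit -- the kind of local check meant above.
# Assumes some linter CLI is installed (ruff is used here purely as an example).
import subprocess
import sys

# Collect only the files staged for this commit.
staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

py_files = [f for f in staged if f.endswith(".py")]
if py_files:
    result = subprocess.run(["ruff", "check", *py_files])
    if result.returncode != 0:
        print("Lint problems found -- fix them before committing.")
        sys.exit(1)  # non-zero exit aborts the commit
```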
> Lines of code that contributed to security issues (as reported by the static analyser)
Quite similar to the previous one.
Ignorance or cluelessness. Maybe even the worst form of "I don't care": just waiting to see whether your crap passes CI.
If someone doesn't know how to handle such warnings at all, you have a dangerously uneducated person on the team…
Or, what is at least equally likely: your snake-oil security scanners are trash, producing a lot of the usual false positives (while of course not "seeing" any real issues).
> Average complexity measures
Completely useless on its own.
Some code needs to be complex because the problem at hand is complex.
Also, most of these measures are more or less nonsense. The measured complexity usually goes down when you "smear" an implementation all over the place, but most of the time it's better to concentrate and encapsulate the complex parts of your code. A hundred one-line methods that call each other are more complex than one method with a hundred lines of code, yet the usual complexity measure would love the hundred one-liners and barf at the hundred-line method (see the sketch below).
If someone constantly writes "very complex" code (according to such a measure computed by some tool), this can mean that this person writes over-complicated code, but it could just as well mean that this person is responsible for some genuinely complex parts of the code-base, or is consolidating and refactoring complexity that was scattered all over the place. Or maybe even something else.
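A hypothetical, minimal illustration of that effect (the fee rule and function names are made up for the example): the same logic written once as a single function and once "smeared" over trivial helpers. The per-function cyclomatic complexity drops in the second variant, even though nothing actually got simpler.

```python
# Variant 1: one function, all branches visible in one place.
# Cyclomatic complexity is ~4 (one per decision point).
def shipping_fee(weight: float, express: bool) -> float:
    if weight <= 0:
        raise ValueError("weight must be positive")
    fee = 5.0 if weight < 1 else 5.0 + (weight - 1) * 2.0
    if express:
        fee *= 1.5
    return fee


# Variant 2: the "smeared" version. Each helper has complexity 1-2, so the
# *average* complexity reported by a tool drops, although the logic is now
# spread over four call sites and harder to follow.
def _check_weight(weight: float) -> None:
    if weight <= 0:
        raise ValueError("weight must be positive")

def _base_fee(weight: float) -> float:
    return 5.0 if weight < 1 else 5.0 + (weight - 1) * 2.0

def _express_surcharge(fee: float, express: bool) -> float:
    return fee * 1.5 if express else fee

def shipping_fee_smeared(weight: float, express: bool) -> float:
    _check_weight(weight)
    return _express_surcharge(_base_fee(weight), express)
```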
> CI/CD build failure rates
This means people can't build and test their code locally…
Or someone is lazy and / or ignorant.
Or, equally possible, your CI/CD infra is shit, or the people responsible for it are the under-performers.
Or your hardware is somehow broken…
> Lines of code reviewed
How do you even measure this?
Just stamping "LGTM" everywhere quickly would let this measure look good. But is anything won by that? I guess the contrary is more likely.
Also someone who's constantly complaining about others code would look good here…
OTOH valuable and insightful code review is slow, takes a lot of time and effort but does not produces a lot of visible output. That's why I think it's disputable whether this can be measured even in a meaningful way.
There is only one valid way to assess whether someone is productive: you need to answer the question whether what this person is doing makes sense in the light of the stated goals.
But to answer this you need to look at the actual work / things produced by the person in question, not at some straw-man proxy measures. All measures can be gamed. But faking results is much harder (even if still possible, of course).