
It doesn't have to be. Code can be measured against defined standards: documentation, tests, clear naming, complexity metrics, etc. It's not just a case of "half of everything is above average".

I used to compare my code to my previous code and to other code I saw, and most of the time it was 'good'. When I started comparing my code against external measures, most of it turned out to be 'poor', because of the external standards it didn't meet. Often there were valid reasons for cutting corners, but that doesn't make the code itself better; it just justifies the lower quality.
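As a concrete illustration of measuring against a defined standard rather than against other people's code, here's a minimal sketch (plain Python stdlib; the limit of 10 and the set of branch nodes are assumptions for illustration, not an established rule) that flags functions whose approximate cyclomatic complexity exceeds a fixed limit:

    import ast

    COMPLEXITY_LIMIT = 10  # assumed threshold; a team would pick its own

    # Constructs that add a decision point (rough cyclomatic complexity).
    BRANCH_NODES = (ast.If, ast.For, ast.While,
                    ast.ExceptHandler, ast.BoolOp, ast.IfExp)

    def complexity(func: ast.FunctionDef) -> int:
        # 1 for the function itself, +1 per branching construct inside it.
        return 1 + sum(isinstance(node, BRANCH_NODES)
                       for node in ast.walk(func))

    def too_complex(source: str) -> list[str]:
        # Names of functions over the fixed limit, regardless of
        # how complex anyone else's functions are.
        return [f.name for f in ast.walk(ast.parse(source))
                if isinstance(f, ast.FunctionDef)
                and complexity(f) > COMPLEXITY_LIMIT]

The point is that the check is absolute: a function either stays under the limit or it doesn't, no matter what the surrounding codebase looks like.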




I agree with everything you've said in your reply, but it doesn't contradict what I said.

To be more specific, the "defined standards" (docs, tests, etc.) are measures of what makes something better or worse. They aren't a measure of what is good. What counts as good is whether those docs, tests, etc. are superior to the docs and tests of other code.

And we are right back to the relative-measure problem.

One could come up with an objective measure of "good" by setting thresholds on these measures: a project is good if the docs meet such-and-such a standard, the test coverage is above 60%, the names can be understood by most people, and so forth.
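A minimal sketch of what such a threshold-based check could look like (the coverage cutoff comes from the >60% example above; the other metrics and their cutoffs are made-up assumptions purely for illustration):

    # Fixed cutoffs defining "good", independent of other projects.
    THRESHOLDS = {
        "test_coverage": 0.60,     # from the >60% example
        "documented_ratio": 0.80,  # assumed: share of public API with docs
        "name_clarity": 0.50,      # assumed: share of readers who grasp names
    }

    def is_good(metrics: dict[str, float]) -> bool:
        # "Good" means clearing every fixed threshold, not beating peers.
        return all(metrics.get(key, 0.0) >= minimum
                   for key, minimum in THRESHOLDS.items())

    print(is_good({"test_coverage": 0.72,
                   "documented_ratio": 0.90,
                   "name_clarity": 0.65}))  # True

Once the numbers are fixed, the check itself is objective.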

However, I think that standard would still be informed by what most people do.

To put it another way, if we waved a magic wand and everyone started documenting and testing tomorrow, then "good" code would require something else.



