Hacker News

> Nitpicking is definitely a real thing. Perhaps it's the feeling that they need to find something wrong with the code.

You can use code formatting and linting tools to automate the low-hanging fruit, but there are still opportunities to offer feedback that could be considered nitpicking but IMO isn't.

For example: variable names, breaking up long functions, not enough test coverage (which hints that maybe the author didn't consider those cases), or changing this type of code:

    if not something:
        # Unhappy case
    else:
        # Happy case
To:

    if something:
        # Happy case
    else:
        # Unhappy case
If you have 10 devs working on a project, you're going to get 10 different ways of thinking. If you're working on a 10-year-old code base with a million lines of code and you don't address things that could sometimes feel like nitpicking, you wind up with a lot of technical debt, and it makes the project harder to work on.

I like when people comment on my code and find ways to improve it in any way, shape or form. I don't care if they have 5, 10 or 15 years less experience. I treat it like a gift, because I'm getting to learn something new or think about something from a perspective I haven't thought of.



11 ways of thinking!

    if not something:
        # Unhappy case
        return
    # Happy case


Yep, personally I think that's ok too.

It's mainly about avoiding the "if not ... else" pattern for a single condition. I haven't met anyone who genuinely finds it easier to read than "if happy else unhappy", or than returning early with whatever condition makes sense for the function.


This one specifically has a name: that if statement (including the early return) is a guard clause.


On this example:

    if not something:
        # Unhappy case
    else:
        # Happy case
To:

    if something:
        # Happy case
    else:
        # Unhappy case
This is the exact kind of rule that some people feel needs to be applied globally, and it will run into conflicts with rules like "put the shorter case first", which exists so readers aren't having to keep multiple cases in their mind at once.

In short, it's a preference masked as a best practice, and putting it under the same "oh, we should change it for the boy scout rule" logic as things like "maybe let's not have 1000-line methods" is the exact kind of performative review being complained about.


And sometimes the unhappy case is simply more "important". It's nice when code reads like prose and matches how you'd describe something in natural language. Perl has the `unless` statement for exactly this purpose. This pattern isn't inherently bad; sometimes it's just what you wanted to say.


I don't see anything wrong with giving feedback such as "I would find it easier to understand if it was written like this...". I'm just stating my preference: if the author agrees (which sometimes happens), or if they don't care either way, they can change it; otherwise they can ignore it.


Check out this guy who thinks it's objectively bad to have 1000 line methods.


> test coverage which hints maybe the author didn't consider these cases

In my reviews, when I look at the tests, I put on a test engineer hat. Testers like to do things like give an empty value where one is supposed to be required by the business logic, a negative or other out-of-range value, an emoji or other wonky unicode in text. Just basic stuff to a test engineer, but lots of programmers only write tests for the expected happy path, and maybe one or two cases for common alternate paths. The most common failures I find are in handling dates and date comparisons.
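Those tester-style cases can be sketched as a small table-driven test. Everything here is hypothetical for illustration: `is_valid_signup`, its "username required, user must be 13+" rule, and the fixed "today" are all made up, not from any real code base:

```python
from datetime import date

def is_valid_signup(username, birthdate):
    # Hypothetical business rule: username is required, user must be 13+.
    if not username or not username.strip():
        return False
    today = date(2024, 6, 1)  # fixed "today" so the example is deterministic
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age >= 13

# Cases a test engineer would add beyond the happy path.
cases = [
    (("alice", date(2000, 1, 1)), True),   # happy path
    (("", date(2000, 1, 1)), False),       # empty required value
    (("   ", date(2000, 1, 1)), False),    # whitespace only
    (("😀", date(2000, 1, 1)), True),      # wonky unicode is still a name
    (("bob", date(2011, 6, 2)), False),    # one day under 13
    (("bob", date(2011, 6, 1)), True),     # turns exactly 13 today
]
for args, expected in cases:
    assert is_valid_signup(*args) == expected
```

The last two cases are the date-boundary checks the parent mentions as the most common source of failures.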

I've been wanting to add more fuzz testing to the tests I write, but I haven't quite gotten my head around how to do it effectively.
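One low-ceremony way to start is a stdlib-only fuzz loop: throw random strings at a function and assert invariants instead of exact outputs. This is only a sketch, and `normalize_username` is a made-up function under test:

```python
import random
import string

def normalize_username(raw):
    # Hypothetical function under test: trims whitespace and lowercases.
    if raw is None:
        raise ValueError("username required")
    cleaned = raw.strip().lower()
    if not cleaned:
        raise ValueError("username required")
    return cleaned

def fuzz_normalize_username(iterations=1000, seed=42):
    # Seeded RNG so failures are reproducible; alphabet includes
    # whitespace, punctuation, an accented char and an emoji.
    rng = random.Random(seed)
    alphabet = string.printable + "é😀"
    for _ in range(iterations):
        raw = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 20)))
        try:
            result = normalize_username(raw)
        except ValueError:
            continue  # rejecting an input is allowed; crashing isn't
        # Invariants: output is always stripped and lowercased.
        assert result == result.strip()
        assert result == result.lower()

fuzz_normalize_username()
```

Property-based testing libraries like Hypothesis take the same idea further: they generate the inputs for you and shrink failing cases to a minimal reproduction.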



