This is pretty compelling. Why isn't it in the "recommended" preset of lints?
In general, I find it frustrating that the eslint recommended presets don't document why they are recommended. I disable several of the rules in all my projects because they seem to be arbitrary stylistic choices (e.g. [0], [1], [2]).
[edit] In looking up no-empty-function, I saw a stack overflow post that provides a compelling alternative of using `() => undefined` instead of `() => {}`, which suppresses the error. That should be shown as an example on the eslint page!
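Roughly, the difference looks like this (a sketch; the default no-empty-function configuration is assumed):

```javascript
// Flagged by no-empty-function: an empty body may be an
// unfinished stub rather than a deliberate no-op.
const emptyHandler = () => {};

// Not flagged: the body is an expression, not an empty block,
// yet it returns undefined exactly like the empty version.
const explicitHandler = () => undefined;

console.log(emptyHandler() === explicitHandler()); // → true
```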
> I find it frustrating that the eslint recommended presets don't document why they are recommended.
That’s a good point. While this doesn’t address the root problem, I can share some context here as a former maintainer: browsing the core rules [1], you should see that recommended rules flag cases that are highly likely to be bugs or unnecessary constructs. Recommended rules should have false positives only in exceptional cases and be objective or near-universal consensus opinions.
Plugins, of course, are free to choose their own threshold for their recommended configs.
> Why isn't it in the "recommended" preset of lints?
It will be added to recommended in v9! [2] The rule was written during one of the v8 minor versions, and adding to the recommended config is always a breaking change.
0 is to prevent mistakes where you declare a function, but forget to implement it. If that's really intended I like to put a comment "do nothing" in empty functions, even in projects without the ESLint rule, so future readers know what's going on.
1 is for consistency & readability. Imagine you read someone else's code like <div children="foo" />, and you see that it's self-closing, so you expect it's an empty div. Even more confusing when you have more than just one attribute. Or say you want to modify the code to add a new child, so you remove the self-closing and add the child, breaking the old children in the process. What is the reason for doing children="foo" in the first place anyways?
2 is to prevent infinite loops, it's good practice to keep an upper bound (in a single-threaded world like JS, infinite loops can be very deadly and hard to diagnose). I like NASA's ten coding commandments (this is #2): https://devm.io/careers/power-ten-nasas-coding-commandments-...
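A minimal sketch of that bounded-loop style (the limit and the helper names are made up for illustration):

```javascript
// Every loop gets an explicit upper bound so a broken exit condition
// fails loudly instead of hanging the (single) JS thread.
const MAX_ITERATIONS = 10000; // arbitrary safety limit

function runUntilDone(step, isDone, state) {
  for (let i = 0; i < MAX_ITERATIONS; i++) {
    if (isDone(state)) return state;
    state = step(state);
  }
  // Hitting the bound means the loop's assumptions were violated.
  throw new Error(`loop exceeded ${MAX_ITERATIONS} iterations`);
}

console.log(runUntilDone((n) => n + 1, (n) => n >= 5, 0)); // → 5
```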
Yeah I love rust’s todo!() macro for this. Not only does it throw an error, but it also typechecks in any context. Normally a function needs to actually return whatever it says it will return. But throw a todo!() in there, and you don’t need to return anything at all. (I assume there’s some type magic going on behind the scenes. Whatever it is, I love it.)
But generally I agree that it’s overly persnickety to complain about empty functions. My use case for them is that sometimes you need to pass a function as an argument and the function may be null - to indicate you don’t want to do any work. It’s often cleaner to use an empty function as a default value rather than add null checks everywhere.
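A sketch of that default-value pattern (all names here are made up):

```javascript
// Instead of accepting a possibly-null callback and null-checking at
// every call site, default the parameter to a no-op.
function processItems(items, onProgress = () => undefined) {
  const results = [];
  for (const item of items) {
    results.push(item * 2); // stand-in for real work
    onProgress(item);       // always safe to call, no null check
  }
  return results;
}

processItems([1, 2, 3]);                        // no callback needed
processItems([1, 2, 3], (i) => console.log(i)); // logs 1, 2, 3
```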
I don’t use eslint at all because of defaults like this. Having my coding style negged by a tool feels awful. I hate gofmt. I ran it once and it deleted a bunch of empty lines in my code that I had put in on purpose to aid readability. And wow, that pissed me right off! I just know I’ll hate a lot of eslint’s rules just as much. 30 years of coding experience gives you opinions. Kids these days don’t even know how to use the ternary operator properly.
For being explicit that I’ve deliberately chosen to leave a function empty, I like to have a single noOp function defined that’s documented and use that to show I meant to do it. Only if you need it more than three times though of course!
This is correct. The post is from 2022. New default rules are a breaking change, and version 9, the first major release since the rule was added, is coming soon and will include it by default.
From what I remember, being able to pass children as a prop is considered a side-effect of an implementation detail, that breaks the expected abstraction. There really isn't any reason to use it, and I think there's a chance it may even limit what the virtual dom diffing can manage?
Also this would prevent you from accidentally doing both at once:
<MyComponent children={<div>Is it me?</div>}>
  <div>Or is it me?</div>
</MyComponent>
(I don't remember what React does in this situation)
I use it fairly extensively whenever I have code like <Cpt>{props.children}</Cpt> (i.e. whenever the only thing I'm doing with the children prop is passing it to another component). This comes up pretty often with context providers, where I'll write a provider wrapper that handles some stuff by itself, and then passes the value and children to a React context.
I like using the "children" prop explicitly because it's a lot more concise (with prettier, I can often go from five lines to one line with this style), and because it's more explicit. I'm not creating my own children, I'm not adding anything to the markup myself, I'm just passing the input children directly to another component.
This isn't really anything unusual, it's exactly what JSX compiles to under the hood. And passing JSX elements in props rather than as children is useful in other contexts as well - for example, a button that allows <Btn icon={<MyIcon />} ...>
You're right that this can cause odd situations where children are specified in two ways, but I'd rather have a lint rule explicitly handle this case than disallow using the "children" prop altogether. (In fact, I think Typescript does validate this case already? But I might be wrong there.)
I can see why this rule might be useful to have in general, but I agree with the previous poster that it's a semi-opinionated rule, and one of the sort of rules where I end up having to disable it and fiddle around with configs whenever it comes up, which in turn puts me off using ESLint in general because I know to get it to be useful I'm going to need to spend some time changing everything up.
> Eventually it clicked for me: developers don’t intend to write useless code, and code that does not match the developer’s intent is by definition a bug. Therefore, any useless code you can detect is a bug.
I'm not completely sure I would take it all the way to calling it a bug, but I do appreciate a rigorous way to simplify code, because the worst case is that you've made it easier to reason about and that's a win all by itself.
(EDIT: This post also makes me feel better about my personal coding style being paranoid and doing things like using parens to force order of operations and avoiding "advanced" constructs like ?? because I don't trust myself to not shoot myself in the foot. I'm not a professional dev, so I'm happy to write verbose, inelegant code in exchange for it being so simple that I'm less likely to screw it up)
I am a professional developer, and I'll take the verbose, ugly, but readable code every time. Clever code that makes me work harder to understand what it does has a greater chance of being missed, blindly trusted, or just misunderstood, and that causes us untold numbers of headaches.
Writing clear, concise code is *hard.* Writing just clear code, conciseness be damned, is a perfectly fine middle ground.
OK but if you have large amounts of unused code throughout the codebase, mistakes are more likely to be "missed, blindly trusted, or just misunderstood".
You have to find it, first. I have a PHP codebase where symfony and doctrine are using strings to connect distant parts of the code. Just trying to find what calls what is hard. I am in the process of replacing as many strings as possible with real calls and adding as much typing as possible, and I have thrown away 1/3, but the code base fights back.
I think when the OP says concise they mean using several lines to make some logic clear, rather than a tricky little one liners. (like many of the mistakes in the article)
yeah, it's reassuring that for most (all?) of these I'm like "I would never write that"... though I also lean towards very "simple" code that is not only blatantly clear but totally non-"clever". I'd rather the early-in-career dev we bring in 6-12mo later can read it and immediately parse it. Plus it's easier for ME 6-12mo later! hahah :)
Author of the rule and post here. I reread the post this morning, and I agree that I should have been less definitive in that sentence. But I stand by the broader point: Useless code is generally not something developers intend to write. When we do, it's generally something exceptional, so a lint suppression is a reasonable way to clarify "I meant to do that". And in the common case where it was an error, the lint rule proves quite helpful.
I hope the takeaway for the reader is: if you can think of other rules that will detect useless code, pursue them. They are likely more valuable than just enabling dead code elimination, because they have a high probability of uncovering interesting bugs and mistakes as well.
I often add an extra line or two of no-op code in order to improve readability. But in these examples, the code clearly wasn't written to communicate anything, but rather by mistake.
Rule author here. Would love to see some examples! I think the closest I’ve (knowingly) done to this is to add an empty else clause that contains a comment.
So one thing I try to be very good about is never doing more than one thing in a line, which means always using named conditions and named returns. So in other words, if statements and return statements should always contain a single variable that describes the condition or what's being returned, and should not contain function calls, boolean logic, etc. This typically involves giving things descriptive names and then not using them more than once, but it makes the code more readable.
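For instance, something like this (names invented for the sketch):

```javascript
// Dense version: the return statement hides three separate ideas.
function canShipDense(order) {
  return order.items.length > 0 && order.paid && !order.cancelled;
}

// Named version: each line does one thing, and the return statement
// contains only a single descriptive variable.
function canShip(order) {
  const hasItems = order.items.length > 0;
  const isPayable = order.paid && !order.cancelled;
  const shippable = hasItems && isPayable;
  return shippable;
}
```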
Interesting. I see what you're saying, and I don't know if I would characterize that as no-op code. All of that code will run. What I see there is redundant assignments, which I agree can act as a great form of code comment.
Often a code comment can be avoided (and clarity improved) by giving a name to an intermediate value rather than letting it be a nameless expression.
I like it! But I don't think of this as no-op code and would not imagine a lint rule objecting to it. That said, a code minifier (or compiler) would be well positioned to optimize that code, so you shouldn't have to worry about perf implications at all.
> The rule checks for comparisons (==, !==, etc) where the outcome cannot vary at runtime, and logical expressions (&&, ??, ||) which will either always or never short-circuit.
If you write if(baz == 6) where baz can never be 6 (say it was just assigned a different constant), then the if clause is useless and probably a bug.
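A minimal illustration of the kind of expression such a rule flags (values are made up):

```javascript
const baz = 5; // constant in this scope

// The comparison can never be true, so the branch is dead code,
// most likely a typo for a different variable or value.
if (baz == 6) {
  console.log('unreachable');
}

// Same family of bug with short-circuiting: the left side is always
// truthy, so the right side is never consulted.
const fallback = 'strict' || 'loose';
console.log(fallback); // → 'strict'
```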
> When trying to define default values, people get confused with expressions like a === b ?? c and assume it will be parsed as a === (b ?? c), when in actuality it is parsed as (a === b) ?? c.
I'll never understand why programmers don't simply put parens where they want the expression to be evaluated, rather than relying on their (sometimes incorrect) assumption about operator precedence. I want to chalk this up to hubris, but it's probably just laziness.
Author of the rule and blog post here. I agree; I appreciate the extra clarity of explicit parens myself. This led me to explore a VSCode extension which visually shows the implicit parens even when they are not present in the code: https://jordaneldredge.com/blog/a-vs-code-extension-to-comba...
I like the idea that different readers of the same code base could opt for differing levels of explicitness when it comes to operator precedence. One thing that working on the project helped demonstrate for me is that adding parens around _every_ subexpression is _way_ too noisy. So, you need to draw the line somewhere. But for me, I prefer drawing that line on the noisier side.
I've written a static code analyzer for C/C++ code. The javascript operator precedence table is almost the same as the C/C++ operator precedence table. So I was pleasantly surprised when I could also run the script on javascript code and have reasonable errors reported as well.
One bug that appeared is bitwise-AND followed by a comparison with no parentheses grouping the bitwise-AND operation. Obviously developers thought the bitwise-AND had higher precedence than the comparison. They were wrong. They obviously didn't test the code either.
Javascript code doesn't twiddle bits as often as C/C++ code so this bug doesn't appear as often.
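The same trap exists in JavaScript, where `&` also binds more loosely than `==` (values invented for illustration):

```javascript
const flags = 4;

// Intended: check whether the low bit is clear.
// Actual parse: flags & (1 == 0) → 4 & false → 4 & 0 → 0 (falsy!)
const wrong = flags & 1 == 0;

// With explicit parens the intent is unambiguous.
const right = (flags & 1) == 0;

console.log(wrong, right); // → 0 true
```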
For text editor support of this problem, what may work is to show that an expression with higher precedence IS evaluated first before the expression using an operator lower on the operator precedence table.
There was a suggestion in the Vim mailing list years ago to use background color to show the nesting depth of a block.
One could use background color or a font attribute, say underline or font-size, to indicate the evaluation order of a non-parenthesized multi-operator expression.
HN posts only support italics for font formatting so I'll use that in my example - hmmm, looks like I can't use block formatting in addition - just imagine the code indented in a fixed-width font
> One thing that working on the project helped demonstrate for me is that adding parens around _every_ subexpression is _way_ too noisy. But for me, I prefer drawing that line on the noisier side.
I prefer the noisier side as well. When adding parens to indicate/force precedence, I will sometimes split my expressions into multiple lines, with indentation, as an aid to legibility, e.g. instead of ( ( a + b ) / ( ( c - d ) / e ) ) I might write:
(
  ( a + b )
  /
  (
    ( c - d )
    /
    e
  )
)
I've found this not only quite legible, but also fairly easy to edit because you can easily match parens visually.
I'm curious, what is it about my multi-line expression is unclear? I find it easier to match parens when they line up vertically (similar to aligning braces in C functions) than when they're all on one line. True, most IDEs will happily show you a matching paren when you hover over one, but that still doesn't help you understand how a long, one-line expression really breaks down.
Another example: do you write { foo: bar, baz: bat, bing: bang } or:
{
  foo: bar,
  baz: bat,
  bing: bang
}
If you use the latter method, why wouldn't you write your expressions the same way? Obviously not the ultra simple ones like a + b, but the longer ones.
Incidentally, I used to write code with no spaces between parens, which is what you suggested: ((a + b) / ((c - d) / e)), but eventually found it much easier to always put spaces around parens: ( ( a + b ) / ( ( c - d ) / e ) ). I would say this is similar to the indent-with-spaces-vs-tabs argument, which will rage forever.
This. An extreme example, where I think my solution works best, is SQL statements that are hundreds of lines long. No sane developer would put that all on one line.
Ahh, I think your example would have communicated the benefit of what you were sharing more clearly with longer variable names. For single word variables, though, it feels needlessly verbose.
To answer your earlier question about object definitions, in cases where the objects are small, I do think more concise (single line) notation can be more readable. An example, though not a great one because it's data more than it's code:
Agreed, that would be one level too far, and thus unnecessary. I'm not arguing that every operator should be parenthesized, but I think parens should be used more than most programmers tend to do.
On a related note, I occasionally write expressions that both assign and test, e.g. if ( ( a = b ) == c ) and in those cases I parenthesize liberally and always write a comment that indicates yes, I intend to make an assignment in the middle of a test for equality.
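In JavaScript the pattern looks roughly like this (names and values are illustrative):

```javascript
let cached = null;

// Stand-in for an expensive computation.
function lookup() {
  return 42;
}

let value;
// Deliberate assign-and-test: store the result in `value` AND compare
// it against the cache in one expression. The inner parens (and this
// comment) signal that the assignment is intentional.
if ((value = lookup()) === cached) {
  console.log('cache hit');
} else {
  cached = value; // first call: 42 !== null, so we land here
}
```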
While your solution splits the whole thing into smaller and arguably more easily digestible steps, you have introduced two new variables, key1 and key2, which need to be carefully scoped (if they aren't already) so they don't clash with existing variables. If your project consists of many instances of code like this, you end up with many variables which can become harder to manage.
You're also splitting what is intended to be essentially an atomic operation into multiple steps, which can be good if you want to analyze and tweak them in the future, but it's now no longer clear where the process begins and ends in relation to the code that comes before and after: you have to add more comments, or split the whole thing out into its own function.
I'm not saying your code is bad or wrong, just that there are downsides to any solution (including mine), and ultimately everybody has to pick whichever has the fewest negatives for their particular project.
Just curious, do you also find the ternary operator hard to parse? This is a serious question: I've found some programmers tend to avoid it entirely, while I think there are some clear cases where it's advantageous to use.
What I'm getting at is one person's "easy to parse" is another person's "difficult to parse", and there may be no objective answer which makes one any better than the other.
Mission-critical automotive code under ISO/ASIL requirements usually follows MISRA, where assigning and comparing in the same clause is banned. If you're writing C code that lives don't depend on, do whatever you want, but I pity the people who have to own your code.
I hope this is satire. DRY is more about extraction of shared functions, not about individual expressions. You write code once; no need to save yourself typing time at the expense of your own and other people's ability to read and understand the code afterward.
> DRY is more about extraction of shared function not about individual expressions
I'm not sure where you get that idea. Repetition anywhere is usually a code smell.
> You write code once
And then you update it when you add new features or the requirements change or you fix bugs, etc. Having to change two symbols is more error prone than changing one. And having to parse more code is harder than parsing less code.
The assign-and-test pattern is common in several languages (e.g. C), and adding a comment that explains the logic should remove all doubt as to what is happening and why, so I see it as a win/win.
In any case, there is a trade-off between terseness and legibility, and while I usually favor more verbose code, I tend to draw the line at needless repetition. But that's my personal preference, and everybody draws that line in different places.
> And having to parse more code is harder than parsing less code.
I'd rather parse three straightforward copies with minor differences than one chunk of dry spaghetti. And in assign & test case, splitting them makes debugging much easier.
Also, it's easy to make assumptions and have them slip by even when you're more experienced. Like how the ?? operator works in C#, or even just bitshift and logical operator precedence, which is pretty much standardized across languages by now. Still, this can confuse readers, or even yourself on a tired day.
I personally make liberal use of parens to make the precedence of operators as clear as possible. Then I run formatting tools like Prettier or rustfmt on my code, which remove any parens that are not strictly required. This way, I end up with an expression that is clear as well as concise. Best of both worlds.
This is the approach I take as well. But it's not necessarily the best of both worlds. Removing (or not inserting) technically useless parens can obscure bugs if you are not perfectly fluent in the precedence rules of the language. They're doubly easy to miss if there's a lot going on on one line.
Ahh... there's a strict equality operator comparing undefined (both a value and a type) to something that's being coerced to a boolean... it will always return false. The first thing I look at when I see a strict equality comparison is "are the operands actually the same type" because that's a very common mistake (or so I've noticed)!
Huh, now I'm curious if my IDE would highlight this with a warning... will have to check!
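A classic adjacent instance of the same type-confusion (hypothetical variable):

```javascript
let config; // deliberately left undefined

// Bug: typeof always returns a *string* ('undefined', 'object', ...),
// so strict comparison against the *value* undefined is always false.
console.log(typeof config === undefined);   // → false, always

// Correct: compare against the string 'undefined'.
console.log(typeof config === 'undefined'); // → true here
```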
I was referring particularly to my parent's point about adding parentheses to remove ambiguity. That prettier oftentimes removes parentheses that I explicitly want. So rather than risk that, I remove the ambiguity by moving one of the comparisons off into a named variable.
I was just pointing out that this is a case where prettier wouldn't do anything. The expression parses differently, and adding or removing parentheses around any subexpression will change its meaning.
My point was that by manually doing the transform in your example, you would realize your code was a no-op, or that the intended version was what I posted.
I think one reason is that when I am refactoring I am thinking about other, larger logic, and as expressions get reduced, I become blind to the details that I had previously internalized as correct. Linters really help because I can’t always be thinking at both ends of the abstraction.
> I want to chalk this up to hubris, but it's probably just laziness.
You may be able to chalk it up to some “senior” reviewer who asks a junior developer to remove the parens because “they aren’t needed”, and the junior developer doesn’t know how to argue their case.
In fact, I’ve seen an extension several times: senior developers performing code review, and requesting that unnecessary, but clarifying, parens be removed. Not excessive parentheses, a single pair in a line or statement.
One thing that reading _The Little Prover_ did for me was correct a sort of lazy preconception I had that you cannot “reason about” code in dynamically typed languages. And it sort of crystallized my experience of large dynamically typed codebases not being as bad as one might expect. The patterns in the OP are one case where the boolean expressions are trivially analyzable, and there are other classes of such analysis: e.g. it’s possible to use the condition of an if statement to detect useless expressions in both the then and else clauses.
Benjamin Pierce has a famous quote: "Attempting to prove any nontrivial theorem about your program will expose lots of bugs: The particular choice of theorem makes little difference!" (This is also true of mathematical proofs - any attempt to formalize a nontrivial proof will usually expose several minor but fixable bugs in the proof.)
Clang and GCC have something like this but restricted to just comparisons, called -Wtautological-compare. I recommend turning it on for the same reasons. Although in practice it may need to be suppressed in macro-heavy or template-heavy code.
C compilers are funny on this one, because instead of warning you about code that doesn’t do anything, they take advantage of it, in the form of UB optimizations.
Like you say, macros often make this very hard to enforce in practice.
None of the examples in the article would cause UB in analogous C code. A C compiler could still optimize away the useless check itself, but that's true for every language.
> Comparisons which will always evaluate to true or false and logical expressions (||, &&, ??) which either always short-circuit or never short-circuit are both likely indications of programmer error.
I regularly do things like:
if (0 && expression) ...
and:
if (1 || expression) ...
to temporarily disable a conditional when I'm looking for a bug.
Well, then presumably it would be nice to have your linter point that out for you so that you don't commit it to the codebase by accident! I've definitely seen a few of those kinds of things accidentally left in production, which is definitely a code smell.
WalterBright is the maintainer of the D language, so I guess he sees everything from the point of view of the compiler.
I wrote a lot of code for the optimization step of the Racket compiler, in particular steps to eliminate similar code. I saw those expressions and immediately thought:
Macro expansions create a lot of similar expressions, and also more complex expressions that the compiler can reduce to trivial-looking expressions [1]. It's nice that the compiler can detect and eliminate them, but raising an error would break a lot of code.
My guess is that all three of us agree that the post is about linters, but it would be very bad to extend this rule to the compiler.
[1] For example, after an expansion of a macro or inlining a function you may get:
(define x (random 10))
(if (integer? x)
    (display x)
    (error "The number should be an integer"))
Despite being Turing complete[0], TypeScript itself is more like a linter with a special syntax than a complete programming language.
Its compiler can only transpile TypeScript into JavaScript code, and the language itself does not affect runtime behavior (with the exception of enums).
Having any such binary expression is considered a severe violation of coding rules for certified software (avionics, medical...). There's a good reason: it's almost always hiding a bug. And in gray cases where the redundancy is not obvious, it forces you to think hard about it.
> There's a good reason, because it's my almost always hiding a bug
There's another good reason -- by definition, it's impossible to write a test case with full decision coverage if some of the decisions are themselves impossible.
Post author here. That’s a great framing! That means that, in theory, an enforced 100% code coverage would catch these issues as well, which I’m inclined to believe!
I'm a huge proponent that branch coverage is way more important than just statement coverage, because as you mentioned, it's way easier to miss these types of bugs just by getting 100% statement coverage.
I think I am unclear what you mean by branch coverage. I assumed it means ensuring that every 'then' and 'else' branch is taken? But if so, that means every statement must get covered, right?
Put another way, you can only get 100% statement coverage if every 'then' and 'else' branch is taken, so 100% statement coverage and 100% branch coverage seem to be saying the same thing. I think I'm misunderstanding you; an explanation would be helpful, thanks.
Branch coverage is also known as decision coverage. Essentially, it means coverage of every machine code instruction in the generated binary. This includes every statement in every 'then' and 'else' (which would be statement coverage), but also includes decisions that are sub-expressions and not statements. Consider:
if (foo() || bar()) {
  a();
} else {
  b();
}
Statement coverage says that a test must cover the lines of code (or statements) `if (foo() || bar())`, `a()`, and `b()`. But if `foo()` is tautological then the || short circuits, and all of your tests can pass, with 100% statement coverage, even if `bar()` causes the sun to go nova. With 100% decision coverage, you must have a test case that causes `foo()` to return false, so that you can test the behavior when `bar()` both returns true and returns false.
Edit: The above isn't a correct example, because statement coverage will not hit `b()` if `foo()` is tautological. Still, you can see the point -- there's a difference between decision coverage and statement coverage.
I'm fascinated by how many examples of this kind of issue there seem to be. Seems like the kind of thing that should be fairly obvious in code review, or should be caught by unit tests.
No contributor writes consistently (truly) thorough unit tests and no reviewer performs consistently thorough line-by-line analysis.
It’s extremely valuable to have multiple layers of verification, and fast-cheap static analysis tools like linters have a tremendously high ROI as one of those layers, especially in languages with many subtle syntax surprises.
Hey, author of the rule/post here. I'd encourage you to click through to the actual examples linked from the post. Seeing the issues in context, as opposed to the minimal example, can help show how quickly these issues can get lost. It might also be interesting to click "blame" on the line and look at it in the context of the PR that added it.
Overall, my point with the examples was to highlight that these are mistakes that even make their way into high visibility projects built by highly competent engineering teams.
That said, looking at the issues few were in really critical paths of these projects. Often they cropped up in auxiliary areas like test harnesses or more off-the-beaten-path features. One can assume the same bugs may have existed at some point in the development cycle in other areas of the code base, but they got caught by more rigorous testing/review of those areas, or bug reports. But it's surely a time saver to identify them _as the developer saves the file_ rather than later in the process. The sooner you catch the bug, the more engineering energy you save.
I love that framing. I’m the author of the rule/post and I see writing rules like this as an opportunity to mentor at scale. Incredibly rewarding to think that, in a sense, I can be in so many engineer’s editors helpfully pointing out (and via documentation explaining) issues right at the moment that the developer needs it.
I thought TypeScript was able to analyze some static code, like below:
const alwaysTrue = true
if (alwaysTrue === false) { }
// This comparison appears to be unintentional because the types 'true' and 'false' have no overlap.(2367)
The back and forth on that thread seems to result in advocating for what we’ve got: compiler (TypeScript in your example) not caring, but linter caring.
The reason not to do this in the transpiler is that it can be very helpful during testing to block or force some paths.
TypeScript is a superset of JavaScript, so they can't just like.. stop you from writing valid JS. Generally speaking, any valid JS is valid TypeScript. It's not invalid to write that, so TS won't do anything to stop you from doing so. However, fortunately linters exist for the exact reason of catching these sort of "gotchas" or "probably not what you were intending" kind of situations.
TypeScript often seems to make arbitrary decisions about what it considers a type error and what it does not. The argument is often "TypeScript supports everything in JavaScript", but that is applied inconsistently.
A typical example: string + number produces no warning. I understand that some people actually want this for "convenience", but often it turns out to be a mistake (e.g. passing the wrong variable). However, if you have a Map<string, number> and try to do map.get(3), that's an error, even though you can absolutely do that in JavaScript -- it just always returns undefined, no error thrown.
You can find a stackoverflow thread about this. To me the current behavior does not make any sense -- I use TypeScript for static typing and avoid any implicit type conversion, and I want that to be consistently applied without exception, that's why I am using TypeScript. I am a bit disappointed that there are a number of such holes in TypeScript.
The type system in TypeScript can compose types from string literals using a similar syntax.
Obviously, this is runtime code, not a type declaration, but you can see how a new or tired/busy developer’s brain might spit this out in the wrong place.
> Genuinely wondering what language that would have been valid in.
Raku, but using “junctive or” | (which creates an any junction of its arguments, which “autothreads” on method calls producing a Junction of the results) instead of “tight or” ||.
One could abuse Kotlin to make almost this syntax valid. Something like: states.includes { +"VALID" || +"IN_PROGRESS" }
Where the states type's 'includes' function takes a lambda that's run in the states context and overrides the plus operator on strings to evaluate to states.contains(that string). One could, but should they?
> When trying to define default values, people get confused with expressions like a === b ?? c and assume it will be parsed as a === (b ?? c), when in actuality it is parsed as (a === b) ?? c.
Note that this is pointing out a bug in the language spec. There are two ways to assign the precedence:
a === b ?? c
One says "Compare a to a value, b, which might be null. If it is null, compare against the guard value, c, instead."
a === (b ?? c)
The other way says "Compare a to b. Then check whether the result of the comparison is null -- which it never can be -- so in all cases, return the comparison between a and b and ignore c entirely."
(a === b) ?? c
There's a good reason people make the erroneous assumption: they expect that the language itself isn't trying to sabotage them. The left operand of ?? should be a nullable value; for === to bind tighter than ?? implies that it can return null, which in fact it can't.
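A quick runtime check confirms the actual parse. Since `===` always yields a boolean, never null or undefined, the `?? c` branch can never fire:

```javascript
const a = 1, b = 2, c = "fallback";

// Parsed as (a === b) ?? c. The comparison yields false -- never null --
// so the ?? c branch is unreachable dead code.
console.log(a === b ?? c);   // false
console.log((a === b) ?? c); // false -- the same parse, made explicit

// What readers often expect instead: compare a against b-or-default.
console.log(a === (b ?? c)); // false here too, but via a different route
```

(Note that unlike `||` and `&&`, the grammar permits mixing `??` with `===` without parentheses, which is exactly why this footgun compiles at all.)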
It's surprising how many bugs can be caught by lints like this, which check for code which is obviously flawed in some way (other examples are unused variables, dead code, statements which will always crash, statements which will always throw an exception that aren't `throw` or something else where that would be intended)
My past experience with (maybe more limited versions of) this is that when you're developing something and you throw a `true ||` in there as part of the process, it can get annoying to have to outsmart your environment to get your code to compile. But I'm surprised at how many issues this can catch -- seems worth the occasional extra pain.
It seems the hypothetical developer in each example didn't know the language that well, and didn't use parens when in doubt. If auto-format with Prettier is set up in your editor, it will automatically remove unneeded parens on save.
> where developers misunderstood the precedence of operators, particularly
> unary operators like !, + and typeof.
if (!whitelist.has(specifier.imported.name) == null) {
return;
}
which is exactly why I always bracket stuff, almost to a fault. I've seen almost exactly this in a piece of shipped code which meant a crucial condition would never trigger. Cost of that? Possibly hundreds of millions or even higher. I'm not exaggerating.
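The parse that bites here: `!` binds before `==`, so the whole condition is constant. A minimal sketch (the `whitelist` set here is a hypothetical stand-in for the real one):

```javascript
const whitelist = new Set(["allowed"]); // hypothetical stand-in

// The actual parse is (!whitelist.has(name)) == null, and a boolean is
// never loosely equal to null, so the condition is always false.
function check(name) {
  if (!whitelist.has(name) == null) {
    return "never reached"; // this early return can never trigger
  }
  return "always reached";
}

console.log(check("allowed"));     // "always reached"
console.log(check("not-allowed")); // "always reached"
```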
This is why I totally hate the tower of precedence in C-style languages.
On the one hand, now I really want this for python. On the other, I've made most of these bugs with JS, and very few with python, despite having used it more. So many of them are consequences of JS wats...
Author of the post/rule here. I'd be very curious to see this rule ported/translated to Python! While the gotchas would probably be different, I suspect it would uncover real bugs. One interesting challenge with JavaScript is that it's so ubiquitous that _everyone_ ends up needing to write it at some point, even those for whom it is a second or third language. I suspect this makes its gotchas all the more common to encounter. Conversely, it makes tools that guide around those gotchas that much more valuable.
My friends are trying to get me to port it, I'll ping you if I do.
Being a common second language is relevant, but JS also has weird truthiness rules and very first-class-feeling lists and hashmaps without native deep comparisons, and many of the bugs you described stem from this. Those categories of bugs don't exist in Python, which has almost perfect truthiness (PSA: an epoch datetime is falsy! Wat) and deep comparisons by default.
> you just always need a braces around a single-line loop?
Yes. And ditto for `if (expression)` and a couple others. In developing D, I looked at what linters and coding standards did, and incorporated into the language rules against troublesome things for which there's just no excuse.
For another tidbit, the JSF coding standard specifies that using a lower case 'l' as an integer literal suffix is not allowed, because it is too easily confused with '1'. D just doesn't allow it; you gotta use 'L'. No need to write a coding standard about it.
Ideally this lint would ignore `if (false)` and `if (true)`; even though these are still constant expressions, they aren't vectors for bugs the way the constant binary expressions in the blog post are.
Although "real" production software would use feature flags of some kind instead of hardcoding the constant, sometimes you do just need to hide some code behind an if statement that is currently ineffectual, and linters that prevent that are extremely annoying and force me to write a confusing, convoluted expression that is too complicated for the linter to detect as constant true or constant false.
Can't you, and shouldn't you, just add appropriate comment / annotation to tell the linter to ignore the next line? That way you make it clear to others reading the code that you know what's going on.
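For instance, ESLint's standard inline directives can scope the exception to a single line (`no-constant-condition` is the core rule that flags `if (true)`; the surrounding names are hypothetical):

```javascript
let experimental = false;
const enableExperimentalPath = () => { experimental = true; }; // hypothetical

// Temporarily force the branch while developing; the directive silences the
// linter for this one line and tells reviewers it's deliberate.
// eslint-disable-next-line no-constant-condition
if (true) {
  enableExperimentalPath();
}
console.log(experimental); // true
```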
Just remove the conditional? If you ever need it back, git is your friend. Or comment it out or something if you really want to keep your code messy, I'm not your mom. But I don't see any reason why `if (true)` would be better than `// if (!isAdmin(user)) {`
It’s annoying that linters are responsible for enforcing code-style (indentation, line length, etc.) as well as code correctness (as discussed in the OP), and these sorts of intentional temporary code smells. There’s a huge amount of overlap, so I understand where it comes from. But it means that linters are too intrusive for most small or early-stage projects. (Obviously these lint rules can be turned on and off individually but that’s a hassle.)
Author of the rule and post here. ESLint version 9, which is going to enable this rule as part of the set of "recommended" rules, is also removing all of the formatting rules. I'm pleased to see that. In the era of pretty printers (which ESLint predates), I think it makes sense to encourage linters to focus on correctness and other conventions aimed at improving code quality, rather than just style.
In good setups, linters are only responsible for the latter. Autoformatters of the zero-config "any color you want as long as it's black" variety -- gofmt, black, etc. -- take care of the former.
This won't be a problem if you have 100% test coverage. Potentially useful to point something out as you type it, but I don't think it does anything profound.
You can't have 100% test coverage with these kinds of bugs, because these bugs create unreachable expressions.
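A sketch of the point: the constant part of the expression leaves a branch that no test input can ever execute, so 100% branch coverage is unattainable:

```javascript
function describe(user) {
  // Bug: (user.name === "admin") is a boolean, never null/undefined,
  // so the "guest" fallback is unreachable -- no test can cover it.
  return (user.name === "admin") ?? "guest";
}

console.log(describe({ name: "admin" })); // true
console.log(describe({ name: "bob" }));   // false -- never "guest"
```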
You can either find out you have these bugs from lint errors or you can find out by finding them in code coverage reports while chasing 100% code coverage. Most people would obviously prefer the former.
I considered mentioning it, but to my previous understanding SQLite measures coverage at the assembly level and not at the C level. Skimming the link, it seems I was wrong, so thanks, TIL. My previous conception doesn't make much sense, now that I think of it.
I think you’re technically correct. But if you are someone who writes code with 100% branch coverage, this rule could still be useful: it will quickly point out parts of your code base that are going to be impossible to test with 100% coverage.
100% coverage is very rare outside safety critical code. People are much more willing to invest time in adding features than writing (often boring, repetitive and tedious) tests just to get perfect coverage -- finding a bug later is a risk people are willing to take.
And even 100% doesn't mean much -- you can easily come up with examples where you have 100% coverage but there are still bugs.
100% branch coverage, which is even more rare. I've done it for a couple of projects, and it's really useful (there were bugs hidden even in the last few percent), but it is a decent amount of work.
In general, I find it frustrating that the eslint recommended presets don't document why they are recommended. I disable several of the rules in all my projects because they seem to be arbitrary stylistic choices (e.g. [0], [1], [2]).
[0] https://typescript-eslint.io/rules/no-empty-function/
[1] https://github.com/jsx-eslint/eslint-plugin-react/blob/maste...
[2] Specifically checkLoops of https://eslint.org/docs/latest/rules/no-constant-condition
[edit] In looking up no-empty-function, I saw a Stack Overflow post that provides a compelling alternative of using `() => undefined` instead of `() => {}`, which suppresses the error. That should be shown as an example on the eslint page!
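For reference, the two spellings behave identically at runtime; the explicit `undefined` just reads as deliberate, so `no-empty-function` doesn't fire on it:

```javascript
// Flagged by no-empty-function: is the missing body intentional or forgotten?
const noopEmpty = () => {};

// Not flagged: explicitly returning undefined signals "do nothing" on purpose.
const noop = () => undefined;

// Both return undefined when called.
console.log(noopEmpty() === noop()); // true
```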