
The real issue is the Balkanization of JavaScript programs. The `rate-map` package is essentially one line of code:

    start + val * (end - start);
https://github.com/shinnn/rate-map/blob/90c234c9/index.mjs#L...
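For comparison, writing it yourself is a single function (a sketch; `rateMap` is a stand-in name here, not the package's actual export):

```javascript
// Map val in the range 0-1 onto the range [start, end]
// by linear interpolation.
function rateMap(val, start, end) {
  return start + val * (end - start);
}

rateMap(0.5, 10, 20); // 15
```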


I honestly don't understand why people use packages like this. If I need this functionality, I will simply write my own. Plus, I will never be able to find this specific package. I guess PureScript uses this because its author is also the author of rate-map.


I’ll give you one: it’s code already written and tested by > 1 person, edge cases already figured out. Saves you time. The gains are small but quickly add up.

This is why lately I’ve been a fan of very extensive standard libraries (like Crystal has) - it’s like having a huge repository, but vetted by the same team and without any of the package-management drawbacks.


> it’s code already written and tested by > 1 person, edge cases already figured out.

Is it tested? Are the edge cases figured out? For this code, essentially?:

    return start + val * (end - start);
Sure, if it's running in a hostile environment, you might need those sanity checks on your parameters - but I have a hard time imagining such a situation. If you actually need to "Map a number in the range of 0-1 to a new value with a given range" in your own code, can't you guarantee that the variables are all numbers? It's your responsibility as a developer to know your code and the data your code is handling.

> Saves you time.

There is something really wrong if finding a package to do this niche thing is faster than just writing out that one line of code.


> The gains are small but quickly add up.

There are costs to these micro-libraries that outweigh the gains. This code is trivial; there aren’t really edge cases to be worked out.


Also: the way the library handles those edge cases isn't necessarily the way you want. Case in point: rate-map throws exceptions in situations where you might expect it to fail more gracefully.


> edge cases already figured out.

Nope, it doesn't guarantee its invariants. This would pass all the tests and yet returns a value out of bounds:

    > s = 1e12
    1000000000000
    > e = 1e-8
    1e-8
    > s + 1.0 * (e - s)
    0
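If the bounds matter, a clamped variant avoids this (a sketch, not rate-map's actual behavior; `clampedMap` is a hypothetical name):

```javascript
// Linear interpolation, then clamp so the result never leaves the
// range between start and end, even when floating-point cancellation
// (as in the 1e12 / 1e-8 example above) pushes it out of bounds.
function clampedMap(val, start, end) {
  const out = start + val * (end - start);
  const lo = Math.min(start, end);
  const hi = Math.max(start, end);
  return Math.min(hi, Math.max(lo, out));
}

clampedMap(1.0, 1e12, 1e-8); // 1e-8, not 0
```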


>The gains are small but quickly add up.

The same could also be said for build times, not to mention the security issues should one of your micro-dependencies be hijacked.


What gains? That line wouldn't take much resources or time to figure out and test. Now you have yet another dependency that could be injected with bad code in the future...


"[C]ode already written and tested by > 1 person, edge cases already figured out" still costs you time when that person was not you: finding and vetting it takes longer than writing it. The losses are small but they add up.


> This is why lately I’ve been a fan of very extensive standard libraries (like Crystal has) - it’s like having a huge repository, but vetted by the same team and without any of the package-management drawbacks.

Having had some involvement in the early days of node, I had imagined there being something like the Python stdlib. When the npm world grew, I thought "oh, that's pretty cool. It's neat how npm can handle multiple versions of the same package."

Now, I'm in absolute agreement with you. There are definitely downsides to a large standard library, but I think the upsides are worth it if that library is maintained.


As usual, everyone here will start their "I'm better than this" comments.

If you've ever used a code dependency, you are a target for malicious code. That's just how it is.

In this case, using small packages like this helps in...

1. Reliability - these packages are typically 100% unit tested

2. Convenience

3. Reduce codebase size. I can't imagine having to copy paste every little small function into a mega utils file.


You should take a look at the trim package and how often that is downloaded.


There is not even a link to the source code on the npm page for it. I installed it and inspected the source code, but I doubt everyone does this when installing a dependency.


Here's the complete source for anyone curious

```
exports = module.exports = trim;

function trim(str){ return str.replace(/^\s|\s$/g, ''); }

exports.left = function(str){ return str.replace(/^\s/, ''); };

exports.right = function(str){ return str.replace(/\s$/, ''); };
```


That code doesn't match what actually happens. It will only trim a single character. Have asterisks been trimmed in your copy-paste or something?


Ah you're right, the asterisks have been interpreted as italics. Can't edit my comment, sorry

Here's a paste: https://pastebin.com/kBHprdyj
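For the record, the eaten asterisks presumably belong after each `\s` (a reconstruction based on this thread, not verified against the published package):

```javascript
// With the asterisks restored, the regex trims runs of leading and
// trailing whitespace, not just a single character:
function trim(str) {
  return str.replace(/^\s*|\s*$/g, '');
}

trim('   hello   '); // 'hello'
```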


Funnily enough, in my opinion, the JS ecosystem fully embraced the Unix way: have small programs/libs that do only one thing and do it (somewhat) well.


I fear your analogy does not go far enough if you are comparing the JS ecosystem to the UNIX philosophy.

JavaScript would offer a library for every single option, variant, and logical operator of a UNIX command. Combinatorial explosion will devour the web. It already destroys dev laptops anyway when downloading something via npm.


No, they do not on the whole do anything well.


Wow—and spread into 11 files to facilitate CI, linting, etc—and with 1kb worth of error checking. That is nuts!


...wow. I literally did not believe that until I clicked the link. JavaScript has gone too far.


The problem is clearly due to vanity metrics like number of packages motivating people to publish an insane number of useless packages to fluff their contributions.


Agreed - I’ve found and been astonished by Github users who maintain hundreds of these little npm packages, all of which have usually under 10 lines of actual code, and sometimes even have chains of dependencies on the user’s other packages.

It seems like the only reason these packages get any significant downloads is when one of them gets depended on by a big package, causing the entire dependency chain of the user’s little packages to be downloaded.


That is not clear.


well, rate-map at least includes a function - this one is just a regex literal: https://github.com/sindresorhus/shebang-regex


A dependency for /^#!(.*)/;

With 8.6 mil downloads and 71 dependents...

https://www.npmjs.com/package/shebang-regex


I love the useless tests: https://github.com/sindresorhus/shebang-regex/blob/1cb5d4aee...

You could replace the regexp with something matching everything and they would still pass.
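To illustrate the point (a sketch of the same shape of check, assuming the tests only assert that the pattern matches a shebang string):

```javascript
const shebangRegex = /^#!(.*)/;  // the actual one-liner
const matchAnything = /(.*)/;    // a deliberately broken replacement

const input = '#!/usr/bin/env node';

// A test that only checks for a match "passes" for both:
shebangRegex.test(input);  // true
matchAnything.test(input); // true
```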


and if you look closely, 70 of those dependents are just small-scale/bs packages, and the only sensible real dependency is cross-spawn _via_ the shebang package (which applies this regex to a string). The whole ecosystem could use a purge of all those useless 1-line requires (which were all introduced by helpful commits from the "i has 1337 downloads" community); currently this is madness.


sindresorhus is famous for that stuff :D


This just made my day.


And this one-liner library even has a dependency (!) used by the surrounding error-checking code.


What's the difference between an NPM dependency chain and a black hole?

We can measure the depth of a black hole.


Don't know whether to laugh or cry at this. Truth is stranger than fiction in the land of JavaScript. Could any developer 20 years ago have predicted that this is what software engineering on the Web would devolve to?


But it's great for resume and portfolio.


This is the problem - how to disincentivise that behaviour?


Do not hire people who do this. Most of them will say "over 200m downloads of npm packages". Go and take a look. If you see things like 'is-not-foo, checks if a given string is not equal to the string "foo"', and 0 meaningful contributions, either pay no attention to these claims or pass.


It's an ecosystem full of reinventing the wheel. The most popular library, lodash, includes a reimplementation of a foreach loop, for Pete's sake, for reasons passing understanding, since one is already part of the ECMA spec.

JavaScript is just amateur hour, and these things are going to keep happening. It's pathological.


Which foreach are you talking about? Array.prototype.forEach, for...in loops, for...of loops?

The first only works with arrays and array-like objects.

The second works on objects and arrays, but it iterates over all enumerable properties, so you don't really want to use it for arrays. It's also made a lot less useful because it only iterates over keys, not values.

The third finally provides some sanity, but it's only been around since ES6. Before that, lodash's each method was the most reliable way to iterate over a collection, be it an object or an array.
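A quick sketch of the difference on an array (plain Node, no lodash):

```javascript
const arr = ['a', 'b'];

// for...in yields keys as strings (plus any inherited enumerable
// properties), which is rarely what you want for an array:
const keys = [];
for (const k in arr) keys.push(k); // ['0', '1']

// for...of (ES6+) yields the values directly:
const values = [];
for (const v of arr) values.push(v); // ['a', 'b']
```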

Just because you don't know the reason for something doesn't mean there isn't one.


`for ... of` only iterates over objects that implement `Symbol.iterator`. Plain objects don’t do that by default, so `_.forEach` is more useful than `for ... of` even if you are only targeting modern browsers and not compiling the code down to an earlier version of the spec.

That said, you can use `Object.keys`, `Object.values`, or `Object.entries` if you want to iterate over objects that don’t implement `Symbol.iterator`, so if you only need `_.forEach` there is no reason to pull in any libraries.

    Object.entries(object).forEach(([key, value]) => {
      // ...
    });
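For contrast, a plain object really isn't iterable with `for ... of` (a small sketch):

```javascript
const obj = { a: 1, b: 2 };

let threw = false;
try {
  for (const entry of obj) { /* never reached */ }
} catch (e) {
  // Plain objects have no Symbol.iterator, so for...of throws a TypeError.
  threw = e instanceof TypeError;
}

// Object.entries gives you an iterable view instead:
const entries = Object.entries(obj); // [['a', 1], ['b', 2]]
```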


> Array.prototype.forEach

Yes, this one. If the object is an Array. According to whatever test they are using for that.

Lodash includes a reimplementation of Array.prototype.forEach because mistakes were made. It also works on other objects because other mistakes were made.

We all make mistakes. But just because there is a reason for something does not mean there is a good reason.


[flagged]


Hey, can you please stop breaking the site guidelines? https://news.ycombinator.com/newsguidelines.html

I don't want to ban you, but when accounts keep doing this, we kind of have to.



