I'm the original co-author of the "Conventional Commits" spec, although I should give credit where credit is due and say that it evolved directly from the Angular commit conventions.
I started adopting these conventions with the goal of automating releases, both on my open-source projects and on the services I was working on at npm (I've since brought the practice to my team at Google).
I very much did not want to introduce roadblocks for folks committing to their own branches -- which is where the "rewrite the message when you squash" advice comes from.
Here's a post I wrote on how my team uses Conventional Commits in our release process: https://dev.to/bcoe/how-my-team-releases-libraries-23el
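As a minimal sketch of the release automation these conventions enable (not the actual tooling my team uses), commit headers can be mapped to a semver bump roughly like this:

```javascript
// Map Conventional Commit messages to a semver release type:
// "feat:" -> minor, "fix:" -> patch, and a "!" after the type or a
// "BREAKING CHANGE:" footer -> major. Real tools (standard-version,
// semantic-release) do much more, e.g. changelog generation.
function bumpFor(commits) {
  let bump = null;
  for (const msg of commits) {
    const header = msg.split('\n')[0];
    if (/^(\w+)(\([^)]*\))?!:/.test(header) || /BREAKING CHANGE:/.test(msg)) {
      return 'major'; // a breaking change trumps everything else
    }
    if (/^feat(\([^)]*\))?:/.test(header)) bump = 'minor';
    else if (/^fix(\([^)]*\))?:/.test(header) && bump !== 'minor') bump = 'patch';
  }
  return bump; // null -> nothing release-worthy (docs, chore, etc.)
}

console.log(bumpFor(['fix(parser): handle empty input']));      // 'patch'
console.log(bumpFor(['feat: add --json flag', 'fix: typo']));   // 'minor'
console.log(bumpFor(['refactor!: drop Node 4 support']));       // 'major'
```

Because the bump type is derivable from the commit log alone, a CI job can cut the release with no human in the loop -- which is why rewriting messages only at squash time is enough.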
man alive, an irresponsible approach to shutting down a SaaS like this is exactly why infrastructure startups face an uphill battle getting folks to adopt them.
The problem with measures such as 2FA is that they are voluntarily implemented only by users who are most concerned about security, whereas users setting their password to "password" are on the opposite end of the spectrum.
What we really need is (1) 2FA and other enhanced security measures, and (2) the ability to exclude from a project all packages -- whether imported directly or indirectly -- that don't meet a minimum security bar.
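To make idea (2) concrete, here's a hypothetical sketch of such a policy gate over a resolved dependency tree. Note that `fetchMaintainerSecurity` is entirely invented -- the registry would have to expose per-maintainer security data for anything like this to work:

```javascript
// Hypothetical policy check: walk every package in a resolved
// dependency tree (direct and transitive) and collect violations for
// any maintainer who doesn't meet the minimum bar. A build tool could
// refuse to install when the returned list is non-empty.
async function auditTree(tree, fetchMaintainerSecurity) {
  const violations = [];
  for (const [name, pkg] of Object.entries(tree)) {
    for (const maintainer of pkg.maintainers) {
      const sec = await fetchMaintainerSecurity(maintainer);
      if (!sec.twoFactorEnabled) {
        violations.push(`${name}: maintainer ${maintainer} has no 2FA`);
      }
    }
  }
  return violations; // empty array -> the tree satisfies the policy
}
```

The interesting part isn't the loop; it's standardizing the policy format and the registry endpoint so every package manager could implement the same check.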
I like the direction you’re heading with this, as it touches on supply-chain issues that have most folks just throwing their hands up. What would be most interesting is a standard framework for expressing a security policy combined with some hooks in the build tooling.
I also wonder whether it would be appropriate for the repositories themselves to hold maintainers to a minimum standard, beyond whatever claims maintainers make on their own. E.g., package maintainers must set a password longer than 12 characters and enable 2FA.
In reality, this isn't just an npm issue. I suspect similar issues plague just about every package manager, app store, or CDN out there. Having a couple of standardized approaches would let developers who care automate checks, and would start to generate new incentives for the folks publishing their work to follow some basic standards.
This is exactly what is needed. Publish on each user's profile whether their account is secure, and provide an option in the client to disallow upgrading package versions owned by users that don't comply.
You could even try to crack the password of any user with enough (by some threshold) downloads using known leaked passwords as seeds, and mark them as insecure and reset their password if successful.
You're talking about actions that security-minded parties could take already, if they cared to do so. Run your own registry, and audit everything that goes in, before it goes in. That would be a lot of work, but it would actually affect security to some degree. This idea that packages will be safe if only we inconvenience all package authors enough is just silly.
I'm curious what percentage of npm publishers have this toggled on, and I wish that data were available -- first in terms of all packages, and then the top 1,000 packages. I'll wildly guess <0.5% of all packages, and <1% of the top 1,000. You don't get 2FA without the beta client right now anyway.
It's no surprise to anyone following Node.js security at this stage that the third-party dependency chain is really its biggest weak link. Jordan Wright did some good research a couple of months ago on Node dependency trees and malicious packages that's worth a read:
> I'm curious what the percentage of npm publishers that have this toggled on is, and I wish that was available data.
I know we're tracking this data, and I bet a follow-up post will be written once some numbers are available. As you say, I expect 2FA will see wide adoption as soon as a stable version lands in upstream Node.
We use Atom's syntax highlighter for syntax highlighting on npmjs.com -- I originally wrote onigurumajs to see whether we could viably remove the website's only compiled dependency (oniguruma) ... I wrote it over a vacation and then had to put the work down.
I would love help getting the library over the finish line; on a grammar-by-grammar basis, it would be great to figure out what the JavaScript regex engine is missing and try to shim the logic.
why???
The great thing about using oniguruma is that it lets you leverage the huge collection of grammars available for TextMate -- unfortunately, JavaScript's regex engine doesn't support quite a few constructs that appear in TextMate grammars.
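For a small example of the shimming approach: Oniguruma's `\h` / `\H` hex-digit classes (which show up in TextMate grammars) aren't understood by JavaScript, but they can be rewritten into character classes before compiling; other Oniguruma features, like atomic groups `(?>...)` or possessive quantifiers, have no such simple translation. This is a sketch of the idea, not onigurumajs's actual implementation:

```javascript
// Rewrite Oniguruma-only escapes into equivalents JavaScript's RegExp
// understands: \h = hex digit, \H = non-hex-digit. (A naive string
// replace; a real shim would tokenize the pattern to avoid touching
// escaped backslashes and character-class internals.)
function shimOniguruma(source) {
  return source
    .replace(/\\h/g, '[0-9A-Fa-f]')
    .replace(/\\H/g, '[^0-9A-Fa-f]');
}

// A TextMate-grammar-style rule matching hex colors:
const jsSource = shimOniguruma('#\\h{6}\\b'); // '#[0-9A-Fa-f]{6}\b'
const re = new RegExp(jsSource);
console.log(re.test('#1a2b3c')); // true
```

Doing this grammar by grammar would reveal which patterns can be translated mechanically and which genuinely need engine support.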
Oniguruma was a big roadblock for me on Chromebooks too.
Ideally, I would have liked a version of VS Code that could be released to the Chrome Web Store, but regex is so labyrinthine in its edge cases that dropping in any other implementation just incited hatred from the editor.
Ultimately I looked at cross-compiling Oniguruma to WebAssembly to see how far I could get with this, and if that didn't work, NaCl.
I didn't get too far down this road as it was taking me away from the core goal of the project, and there's one of me.
As one of the folks on the front lines helping patch this, I certainly have no hard feelings, and I'm excited to be able to support this feature properly.
... also ... not going to lie, this was the first time we've gotten to test several of the checks and balances we have in the npm registry, which I was jazzed about :)
Thanks, Benjamin, Laurie, and everyone else for mitigating this; it feels great to see the community pull together in such unanticipated scenarios.
On that note, however, I respectfully believe that features with the potential to hit the registry this hard should first be beta-tested on a private registry before moving on to npm's high-traffic CDNs.
And 10% of the daily traffic is from India??? Whoa, every day is a school day.
We've had the opposite experience using Replicated for npm's on-prem npm Enterprise software.
I was originally trying to build our Enterprise solution using Ansible, targeting a few common OSes (Ubuntu, CentOS, RHEL); headaches gradually piled up around the "tiny" differences in each of these environments -- I'm VERY happy to offload this work.
It took me a little while to wrap my head around best practices for placing our services in Docker containers, but once I'd made that conceptual leap I was quite happy.
You realize that the majority of submitted IPs may not be from a known cloud provider or test suite, so for most requests you're still dealing with the rate limiting you hit when falling back?