Properly escape all relevant html entities
Avoid problems with files named things like '<img>' and so on.
- var name = f.replace(/"/g, '&quot;')
+ var name = f
+   .replace(/"/g, '&quot;')
+   .replace(/>/g, '&lt;')
+   .replace(/</g, '&gt;')
+   .replace(/'/g, '&#39;')
Jesus tapdancing christ that is seriously scary to see in something that's allegedly "all totally secure now, for really reals". More so the fact that such simple sanitization was missing for so long.
Well, I guess I'll put off learning node a bit longer then.
Not much more encouraging. It looks to me like patch work. I've had this in the past: I would give a PoC to a client along with a recommended design change to the questionable methods in the code. They would send back a new version with a patch much like all of those linked here. In the end those patches address the PoC but not the problem. Then I just rework the PoC to go around the patch. This cat-and-mouse game goes on until they go back, do the f'ing work, and implement the original design change suggested. I say all that just to point out that this looks like patch work, and that is scary behaviour. Then again, maybe this is the nature of nodejs (omg).
Also, as a general rule:
ANY SECURITY PATCH THAT IS A REGEX IS NOT A SECURITY PATCH
It's worth noting that the XSS vulnerability ("A user could inject scripts into the npm website via the README and license fields") assuredly exposed a whole slew of easy-to-exploit vulnerabilities, and the community should feel very lucky that such an obvious vulnerability was in the wild for so long without being exploited.
TL;DR always use a templating engine that makes you think about XSS and don't allow unsanitized user-provided HTML through raw.
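To make that concrete, here's a minimal sketch with Handlebars (any auto-escaping template engine works the same way; the `readme` field is just a made-up example of user-provided content):

    // minimal sketch, assuming the `handlebars` package; `readme` stands in
    // for any user-provided field (README, license, etc.)
    var Handlebars = require('handlebars');

    // {{readme}} HTML-escapes the value before it reaches the page
    var safe = Handlebars.compile('<div class="readme">{{readme}}</div>');

    // {{{readme}}} passes the value through raw -- only acceptable after
    // running it through a real HTML sanitizer
    var raw = Handlebars.compile('<div class="readme">{{{readme}}}</div>');

    var payload = { readme: '<img src=x onerror=alert(1)>' };
    console.log(safe(payload)); // entities escaped, rendered as inert text
    console.log(raw(payload));  // markup passed through verbatim

A templating engine that escapes by default forces you to opt in (the triple-stache here) before anything unsanitized can reach the page.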
> don't allow unsanitized user-provided HTML through raw.
It's sad that you even need to say this, both for the fact that Javascript sandboxing is so terrible and the fact that developers aren't aware of the hazards of just blindly taking user-provided HTML.
As a point of honor, I feel obligated to mention that @rockbot and I are very aware of these hazards, but our website was put together in somebody's spare time when Node had like 100 users in total :-)
If it's referring to what I think it is, part of this is my fault and I feel I should give a post-mortem as well. There was a problem with marked (the markdown parser npmjs.org uses for READMEs) which allowed users to provide `javascript:` pseudo-protocol links even when the `sanitize` option was enabled. It was fixed[1] with marked v0.3.1 on jan. 31st. It looks like npm-www started using marked v0.3.1 on feb. 17th[2].
edit: On closer inspection it looks like it may have been a problem with the html sanitizer[3] it used as opposed to the marked `sanitize` option (which is not used at all). I guess my conscience is clear here at least.
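For anyone curious, a rough sketch of the input class involved (assuming the `marked` package circa v0.3.x; exact behaviour varies by version and by whichever HTML sanitizer sits in front of it):

    // rough sketch, assuming the `marked` package circa v0.3.x; exact output
    // varies by version and configuration
    var marked = require('marked');

    // a README link using the javascript: pseudo-protocol
    var readme = '[click me](javascript:alert(document.cookie))';

    // before v0.3.1 a link like this could survive with sanitize enabled;
    // from v0.3.1 onward the pseudo-protocol link is dropped
    console.log(marked(readme, { sanitize: true }));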
Why do they have to apologize for that? Almost every piece of software has security vulnerabilities.
Do people now really believe that there are definitely no security vulnerabilities related to the npm registry? Or any other type of registry, or website or application for that matter?
People have completely unrealistic expectations about security. Every time you add a significant amount of new functionality, or even a very insignificant amount, you can introduce a new security vulnerability.
Basically, this other company, ^Lift or whatever, could have two full-time engineers doing security audits for npm from here until npm is done, and you could still have some other hacker who thinks differently come up with a vulnerability in some new feature that they missed.
It's great to have the attitude that you are going to make a serious effort, but totally unrealistic to think that you are going to do one security audit and then there will be no more vulnerabilities. It also really makes no sense to make a big deal about it, or give them grief, or for them to even feel bad. If you think that way then you don't understand security.
You see all of these large projects having security issues not because they are full of negligent or sloppy engineers, but because security takes a lot of resources and is very difficult. The security firms will certainly suggest that engineers are negligent, of course. That ensures that they will continue to get new clients. But the reality is even with a lot of resources dedicated to just security, projects can easily have new vulnerabilities.
So that's great that they are getting regular security audits now. They are ahead of the curve.
EDIT: I notice I have been buried without anyone bothering to even respond. If you disagree, say why you disagree.
"Almost every piece of software has security vulnerabilities"
While this is true, there are certain classes of software for which vulnerabilities are a bigger deal and thus should be treated much more seriously, which includes much better up-front analysis of potential attacks and decent disclosure systems for when an attack (yes, almost inevitably) occurs.
Software distribution systems are pretty high up on the list for which exploits are A Big Deal because of how an exploit within them can very quickly spiral out of control (eg. attacker injects malicious code into very widely used module that is distributed from your repository -- now you're responsible for a lot more security problems than just the original problem on your site).
IMO this response is a good one; not over the top at all but gets the point across that they take this situation very seriously, which they should.
(FWIW, I upvoted you, so just because I partly disagree don't lump me in with those 'burying' you; I think the question you raise is entirely valid, but I do think this was a serious situation and the response was warranted).
It's just the sad state of the 'industry'. As soon as some armchair warrior finds something remotely wrong with whatever, they'll go nuts on you and you need this sort of touchy-feely PR nonsense to placate the comic book shop types. Hats off to these guys for being level-headed enough to be able to play the game this way - I couldn't do it any more, I'd go bonkers over overhead like this.
If anyone needs placating, it's those developers, managers and executives who pushed hard for the use of Node.js in business settings, not expecting a serious security incident like this one to happen.
For those who are especially serious about their careers, reputations, budgets and power, incidents like this involving the technologies they hyped and pushed through can be disastrous. Now they're seen as being very wrong about something very important, and this in turn makes them extremely angry.
They know that their competitors within the business will use this incident against them. Their next initiative will surely face comment like, "Why should we listen to you after the npm disaster?", and they will face a much harder battle if their decision involves any controversy or doubt at all.
Every technology is going to have an "incident" like this. Experienced developers expect security "incidents". There are quite a few managers and executives who do not understand security and don't expect them. And many people will use this sort of "incident" politically. Politics is the problem there, not the technology.
Just because they found one significant security issue with npm does not mean they were "very wrong" about using Node. It's just a reality of security with any technology.
This isn't a disaster, this is a demonstration of maturity, responsibility and transparency.
PROTIP: if you're so serious about your 'career' and 'power', maybe you should stop 'hyping' and 'pushing through' this week's hot tech toy you read about on HN and Reddit, and start building something worthwhile yourself. That way, you won't have to bet your precious career on some dude on the internet you've never met not screwing up. What a concept!
There's a difference between, say, the local bank claiming to have a giant safe rated TL-30 that turns out to actually be rated TL-15, and the bank across the street also claiming a TL-30-rated safe when in actuality someone smashed a 4ft-by-4ft hole through it during installation and then covered it over with aluminum foil.
There are the inevitable, almost-impossible-to-fully-plan-for security vulnerabilities (e.g. in code outside your control), and then there are vulnerabilities due to extremely poor process at a fundamental level (like SQL and HTML injection).
Nice write-up and good on them for fixing that quickly, but it's a serious bummer they unnecessarily bring in the RubyGems incident as some sort of awkward "Well at least we didn't screw up that badly!" swipe. It's not relevant to anything else they said.
Update: I was in fact incorrect about the severity of the rubygems.org incident; their issue was a disclosure without a breach, exactly like ours. I've updated the original post and also issued a correction:
It's not a swipe at all. It's almost self-disparagement. They're being totally up front with "as history has shown, this could have been incredibly disastrous, but we got really lucky."
The 'we fixed it' link points to four new lines including these:
.replace(/>/g, '&lt;')
.replace(/</g, '&gt;')
I can't really tell without more background and context, but I'm surprised this doesn't turn > into &gt; and < into &lt;. Is this a mistake? The same code's still in the HEAD.
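For comparison, a conventional escape helper maps each character to its own entity and handles & first; this is just a sketch for illustration, not the code npm shipped:

    // sketch of a conventional HTML-escaping helper, for comparison only
    function escapeHtml (str) {
      return String(str)
        .replace(/&/g, '&amp;')   // & first, so later entities aren't double-escaped
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&#39;')
        .replace(/</g, '&lt;')    // < maps to &lt;
        .replace(/>/g, '&gt;')    // > maps to &gt;
    }

    console.log(escapeHtml('<img src=x onerror=alert(1)>'))
    // -> &lt;img src=x onerror=alert(1)&gt;

With the entities swapped as in the commit, the output is still inert (both characters get replaced), but the displayed angle brackets come out swapped.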
Nice disclosure, and can't fault the npmjs team given that they commissioned a security audit as soon as they possibly could.
I wonder if the npm, Inc. team told ^Lift about the disclosed vulns before ^Lift's own audit completed. I can imagine being tempted to see whether they'd discover it themselves, to gain more evidence on how comprehensive the audit was.
We told them as soon as we found out, because we needed them to go looking for the same hole in all of our code bases :-)
As part of the audit, ^Lift audited a lot of the third-party modules we use, and notified the authors of those packages separately (I'm not aware of the details, but I don't think there was anything major).
Full Disclosure: I work at ^Lift (liftsecurity.io)
To be honest, we usually bill out by time rather than code base size.
To determine costs, we:
* look at the application size
* estimate how long it will take us to get good coverage
* take in all the other factors (source provided vs blackbox)
* then we give an estimate based on how long we think it will take
I must say that I honestly believe that what we provide is totally worth the money. We work really hard to provide a clear assessment of the problems with recommendations on how to fix them.
We aren't an "automated tool" shop, we actually look at the app and understand how it works and then see how to break it.
But to actually answer your question, having NO IDEA what your application actually is:
(Keep in mind I am not the actual guy who does the bids, I do assessments and training. No promises, please don't fire me, etc.)
Prices vary drastically by project, and it would really depend on what we were looking at.
Just to be clear: any firm charging appsec rates should not be an automated-tool shop. I know there are some firms like that, but we banned scanners altogether the first year we were in business. I used to think we were cool for doing that, but really among the high-end firms that's table stakes.
Don't scope projects based on lines of code; you'll get shafted. Figure out the attack surfaces (for instance, how many app endpoints, how many URL routes, how many roles), come up with a total person/weeks scope, and then (especially if it's your first project) triage: capture the most important attack surfaces in a "pilot" or "90%" project to figure out how well you work with outside security teams.
A good firm will help you do this, gratis, if you're serious about funding the work. We do it "on spec" for most of our clients, even though that work sometimes ends up helping a competitor deliver the project.
It's fine if firms ask you for lines-of-code counts, but if that's the only question they ask, I'd consider that a red flag.
Thanks for the heads up. I had actually mentioned only KLOCs as an initial number just to have a rough idea; I understand it's not sufficient for a real quote, but I was just looking into ballparks here.
Thomas could give you better numbers here, but in general, appsec reviews cost as much as getting software development done. (i.e. For a security review worth the paper you print the report on, you're looking at $5k at the lower end if your application is simple or if someone wants to really do you a favor, and they get substantially more expensive than that. You can get someone to run an automated scan for $500 and give you a CSV file, but that is not maximally in your interests.)
I'd turn away any client who asked me to do appsec work, because I don't think I'd produce work of a sufficient quality to justify the sort of rates I charge, but I do think I'm probably good enough to roughly scope appsec projects on technologies I understand well. Example: Appointment Reminder is an architecturally simple Rails application. I think an audit of the marketing site, application, and architecture would reasonably require probably 1 to 2 billable weeks depending on how much I asked you to plumb e.g. line-by-line HIPAA requirements, and that would probably run in the $4k to $12k region based on my understanding of prevailing rates for appsec work. (I'm sure that if I had $25k budgeted for that audit many firms could find a way to get me my money's worth for every penny of that budget, by the way.)
A $4k billable week is incredibly cheap, so much so that I'd worry about the team delivering it. Our rates are high because big firms bid them up. Why are the cheap teams turning down free money?
If someone offers you a $4k week, make sure they know they're cutting you a deal.
Hey, just to be clear: I don't think it's wrong to charge less than the market rate, and while I'm more likely to do a project gratis than at 1/2 or 1/3 my rate, we've given people breaks before.
^Lift gave us a really reasonable rate based on the number of people, the amount of time spent, and the number of weeks that they'd be poking at stuff.
They were extremely easy to work with, and very fast about getting stuff to us and verifying when it was fixed, and I felt like we definitely got more than our money's worth.
I used to work as a security consultant for a while, and the final cost depends on how thoroughly you want to go. There's never an "I'm done" state. There's always something left to check.
1) app-agnostic bugs, such as XSS/CSRF and other blatant issues
2) app-specific bugs such as access bypass, goto-fails, other obvious bugs like eval(params[:serialized]), security measures switched off, mass assignment :) (a quick sketch of this class is below)
3) complex bug chains. Usually I end up with account hijacking or similar severity bugs by chaining few of unrelated and barely exploitable bugs, such as redirects, cookie encodings etc. This requires at least a week (which is $12k if you work with me).
4) infinity. Checking some unpopular ruby gems project uses. Checking popular ones. Checking rails codebase to be sure methods don't have "magic" arguments. Nobody goes that far usually, because attackers will have to do 2-4x more work to get same bugs you may find.
TL;DR, for quick & budget auditing a website like npm $3,200 and one day of work is enough, for any medium sized website people should take 1+ week.
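As a sketch of the item-2 "obvious bug" class, here's the Node analogue of eval(params[:serialized]) in a hypothetical Express handler (purely illustrative):

    // hypothetical Express route, purely to illustrate the "obvious bug"
    // class from item 2 above
    var express = require('express');
    var app = express();

    app.get('/restore', function (req, res) {
      // BAD: evaluating user input is remote code execution by invitation
      // var state = eval('(' + req.query.serialized + ')');

      // Better: treat it as data, never as code
      var state;
      try {
        state = JSON.parse(req.query.serialized);
      } catch (e) {
        return res.status(400).send('bad payload');
      }
      res.json(state);
    });

    app.listen(3000);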
So the audit was mostly about npm the website and the service for maintaining npm packages? That's a good first step.
Has there been any talk within the node community about auditing node modules themselves? Maybe start with the most popular? I could see this being popular with enterprise development, etc.
I want to say that Strong Loop made noise about doing something like this, but I haven't seen much on it of late.
One of the Node Security Project (https://nodesecurity.io/) main efforts is to audit all the npm modules in a community driven way.
We are accepting contributions from the community to build the tools that get the job done efficiently and to audit modules, disclosing vulnerabilities in a responsible manner.
I'll take a look, thanks. Some reviews will, of course, be manual in nature -- implementation correctness of digest auth, for instance, is one that comes to mind (I need to contribute that back to a particular module).
The original abandoning of the self-signed cert was because self-signed certs were a bad idea.
The issue that post refers to (it is a little unclear, because we ourselves were a little unclear what had gone wrong at that point) is when we broke older clients by moving to a non-GlobalSign cert. We cleared that up here: http://blog.npmjs.org/post/78165272245/more-help-with-self-s...
We had already planned to move to a new cert before the security disclosure, and hadn't anticipated the size of the problem with a non-GlobalSign cert, so although the two happened pretty much simultaneously, they weren't really related.
One of the things I love about Isaac is his empathy. Reading his blog posts and listening to his podcasts on Nodeup, it feels like he is writing/talking to me, and since I'm a developer, this makes a huge difference in my motivation.
I'm sorry you took it that way. The scope of our security hole was exactly as big as the Rubygems vulnerability. If I'd omitted that comparison, I was sure somebody would say "these guys were just as bad as ruby but they're covering that up!" At the same time, I wanted to make it clear that the only reason this wasn't a game-over disaster for us is because we were lucky. We weren't any smarter, or better designed. Just luckier.
I think the problem I had with it had to do with the way the sentences were constructed. For example:
"... this could have been a disaster, very much like the rubygems.org security breach in early 2013"
This implies that the issue you had wasn't as serious as the RubyGems issue.
Similarly, the following sentence likewise implies that the breach was not as severe:
"Unlike that incident, there’s no evidence that, other than ourselves, the engineers who reported the bugs, and a few members of the GitHub security team who knew about the issue, anyone knew about this hole."
This implies a confidence in the presumption that you weren't breached that you then backpedal on in the following sentence by saying "of course we're not positive because we didn't have logs".
Both of the sentences I cite lead a reasonable reader to a different impression than the one you say you were attempting to convey. I'm glad to see you clarify things here, but I hope you can see why people would misconstrue things based on the words in the post.
Glad to see the correction, it solves the one problem with an otherwise very well written incident report. Kudos for setting the record straight and taking the criticism so well.
> * We fixed it on February 17th
the fix scares the shit out of me:
https://github.com/isaacs/st/commit/5a0c1886737a20d78ae00b61...