
> Slack’s hashing function is bcrypt with a randomly generated salt per-password which makes it computationally infeasible that your password could be recreated from the hashed form.

I'm happy to hear they didn't just use MD5 with no salt, since that would be almost the same as storing it in plain text...

bcrypt + random salt sounds to me like the best practice nowadays. Is it still holding, or are there advances in GPU cluster costs on EC2 that make even bcrypt crackable? I think I heard it has a way to "adapt" to advances in computing. Is that simply adding more iterations based on the current CPU speed or something? How does that work?



There are some thoughts on the matter here: http://chargen.matasano.com/chargen/2015/3/26/enough-with-th....

But I would say yes, bcrypt is still best practice. Other commenters are right that bad passwords will still be recoverable, but using one of bcrypt/scrypt/PBKDF2 is due diligence. I would use whichever one is most easily available on your platform.


Why do articles talking about this always talk about specific work factors and choosing a correct work factor, as if it's fixed? I thought good practice was to choose the work factor dynamically so it's calibrated to whatever hardware you're running today.


Uhh.. it doesn't? While it does give some specific values as 'a reasonable starting point' the article suggests tuning these to the specific environment.

"Each of these algorithms requires setting of an appropriate work factor. This should be tuned to make digest computation as difficult as possible while still performing responsively enough for the application. For concurrent user logins, you may need <0.5ms response times on whatever hardware is performing the validation. For something like single user disk or file encryption, a few seconds might be fine. As a result, the specific work factor values appropriate for your application are dependent on the hardware running your application and the use case of the application itself. Thomas Pornin gave some great guidance on determining these values in a post to stackexchange"

The stackexchange post (which is linked in the article) can be found at http://security.stackexchange.com/questions/3959/recommended...


Yeah, what I'm getting at is that those posts don't actually come right out and say your code should be calibrating itself regularly. They talk about selecting a work factor by benchmarking your current hardware, but they leave it there, and one might come to the conclusion that once they've measured their hardware, they put "13" in a config file and call it done. I'm advocating for advice like "Don't think about work factors, think about time. Your configuration should be a time value, and when your app starts up, it should compute its own work factor based on this time value." If your code does this, there's no need for rules of thumb like "11 is a good starting point for a bcrypt work factor."
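Roughly like this, say, as a sketch in Python with the bcrypt package (the 250ms budget, cost range, and function name are just placeholders for illustration):

  # At startup, raise the cost until one hash takes at least the
  # configured time budget, then use that cost for new hashes.
  import time
  import bcrypt

  TARGET_MS = 250  # hypothetical config value: time budget per hash

  def calibrate_cost(start=10, ceiling=16):
      for cost in range(start, ceiling + 1):
          salt = bcrypt.gensalt(rounds=cost)
          t0 = time.perf_counter()
          bcrypt.hashpw(b"benchmark password", salt)
          if (time.perf_counter() - t0) * 1000 >= TARGET_MS:
              return cost
      return ceiling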


Ohh, okay. I see what you're getting at now.

Yah, that does seem like a reasonable design. The implementation wrinkle is that the hardness params have to be encoded in the digest itself, or otherwise stored alongside it. Since implementations commonly do this anyhow, that doesn't seem likely to pose much of a problem in practice (if any). The other issue is deciding when to calculate the hardness. On app initialization seems obvious, but you'd have to sample over some period of time to get a representative benchmark. I worry that this could exceed administrator tolerance for how long an app can reasonably take to start up, but it doesn't seem like a showstopper either.

Ultimately, though, I'm not sure the factors are THAT variable. You want to reconsider work factors as hardware advances, but I don't think running a work factor of N versus N+1 will make much practical difference over the span of a few months or even a few years. Still, given the goal of making it as hard as feasible while staying suitably performant in the context of a given system, it makes sense.


This doesn't make any sense: the goal of the work factor is to make things slower for the opponent, not for my own servers. My estimate of the computational power available to my opponent has nothing to do with how large the web servers I happen to use are.


The recommendation is to make it as slow as you can tolerate, which is based on how fast your web servers are.


Well, bcrypt + a random salt is great, but it still can't protect you from bad passwords. It's vulnerable to a dictionary or brute-force attack the same way everything else is.

You _can_ increase the difficulty of bcrypt, which essentially increases the iteration count (as you mentioned). Most bcrypt libraries ship with a default value, but the programmer can override it. The downside is that raising it requires rehashing all of your stored passwords, and since you only have the plaintext at login time, which limits how often this is done.
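The usual pattern is to upgrade opportunistically at login, since that's the only moment you hold the plaintext. A rough sketch with Python's bcrypt package (CURRENT_COST and the persistence step are stand-ins, not anything Slack or a particular library prescribes):

  import bcrypt

  CURRENT_COST = 12  # today's target work factor

  def verify_and_maybe_rehash(password: bytes, stored: bytes):
      if not bcrypt.checkpw(password, stored):
          return None  # wrong password
      # the cost is the second '$'-delimited field, e.g. b'$2b$10$...'
      if int(stored.split(b"$")[2]) < CURRENT_COST:
          stored = bcrypt.hashpw(password, bcrypt.gensalt(rounds=CURRENT_COST))
          # ...write the upgraded digest back to the user record here
      return stored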


Note: if your hashing algorithm is too compute-intensive, you have to take other measures to keep your login system from becoming a DoS vector. For example, with scrypt's recommended defaults for passwords, the .NET library takes almost half a second on a modern CPU; if you get more than a few dozen requests per second per system, you can be brought to a crawl without other mitigation in place.
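One cheap mitigation is to cap how many hashes you'll compute concurrently, so a burst of login attempts queues up instead of eating every core. A sketch in Python (the limit of 4 is arbitrary; tune it to your hardware):

  import threading
  import bcrypt

  _gate = threading.BoundedSemaphore(4)  # hypothetical concurrency cap

  def check_password(password: bytes, stored: bytes) -> bool:
      # blocks when 4 verifications are already in flight
      with _gate:
          return bcrypt.checkpw(password, stored)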


What makes it slow? Is it implemented in C#?

I would avoid slow implementations of password hashing algorithms. You want the overall operation to be slow due to the computations you're performing, but you want the implementations of those operations to be fast. Because the attacker's implementations of those operations will be fast.


I didn't compare to the C/C++ implementation; it was harder to get working on Windows, mainly because at the time many Node modules requiring a build step didn't build well on Windows, and my target environment was Windows. I did compare to a JS implementation[0] running in Node, which was about half as fast.

[0]: https://www.npmjs.com/package/js-scrypt


  bcrypt + random salt
bcrypt incorporates a random salt by definition, so it's redundant to add " + random salt".

http://en.wikipedia.org/wiki/Bcrypt
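You can see it in the output itself. With Python's bcrypt package, for instance, the cost and salt both live in the digest string, so verification needs nothing stored separately:

  import bcrypt

  digest = bcrypt.hashpw(b"hunter2", bcrypt.gensalt())
  print(digest)  # b'$2b$12$' + 22-char salt + 31-char hash
  # checkpw reads the cost and salt back out of the digest
  assert bcrypt.checkpw(b"hunter2", digest)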


Yes, you can set the work factor. It seems like the default is around 10 at the moment.

http://wildlyinaccurate.com/bcrypt-choosing-a-work-factor/
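And since the iteration count is 2^cost, each +1 to the work factor should roughly double the runtime, which is easy to check for yourself (Python's bcrypt package again):

  import time
  import bcrypt

  for cost in (10, 11, 12):
      t0 = time.perf_counter()
      bcrypt.hashpw(b"password", bcrypt.gensalt(rounds=cost))
      print(cost, round(time.perf_counter() - t0, 3), "s")  # ~2x per step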



