But I would say yes, bcrypt is still best practice. Other commenters are right that bad passwords will still be recoverable, but using one of bcrypt/scrypt/PBKDF2 is due diligence. I would use whichever one is most easily available on your platform.
Why do articles talking about this always talk about specific work factors and choosing a correct work factor, as if it's fixed? I thought good practice was to choose the work factor dynamically so it's calibrated to whatever hardware you're running today.
Uhh… it doesn't? While it does give some specific values as "a reasonable starting point", the article suggests tuning these to the specific environment:
"Each of these algorithms requires setting of an appropriate work factor. This should be tuned to make digest computation as difficult as possible while still performing responsively enough for the application. For concurrent user logins, you may need <0.5ms response times on whatever hardware is performing the validation. For something like single user disk or file encryption, a few seconds might be fine. As a result, the specific work factor values appropriate for your application are dependent on the hardware running your application and the use case of the application itself. Thomas Pornin gave some great guidance on determining these values in a post to stackexchange"
Yeah, what I'm getting at is that those posts don't actually come right out and say your code should be calibrating itself regularly. They talk about selecting a work factor by benchmarking your current hardware, but they leave it there, and one might come to the conclusion that once they've measured their hardware, they put "13" in a config file and call it done. I'm advocating for advice like "Don't think about work factors, think about time. Your configuration should be a time value, and when your app starts up, it should compute its own work factor based on this time value." If your code does this, there's no need for rules of thumb like "11 is a good starting point for a bcrypt work factor."
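To make that concrete, here's a minimal sketch of the "configure a time, not a work factor" idea, using stdlib PBKDF2 (where the iteration count is the work factor and cost scales roughly linearly with it). The function name and the doubling-then-scaling strategy are my own illustration, not from any particular library:

```python
import hashlib
import os
import time

def calibrate_pbkdf2_iterations(target_seconds=0.25, start=10_000):
    """Find a PBKDF2-SHA256 iteration count that costs roughly
    target_seconds on the current hardware: double until we cross the
    target, then scale linearly back toward it."""
    iterations = start
    password = b"benchmark-password"   # throwaway input, only for timing
    salt = os.urandom(16)
    while True:
        t0 = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
        elapsed = time.perf_counter() - t0
        if elapsed >= target_seconds:
            # PBKDF2 cost is linear in the iteration count, so scale to
            # land near the target; never drop below the floor.
            return max(start, int(iterations * target_seconds / elapsed))
        iterations *= 2
```

You'd run this once at app startup and use the result for all new hashes. For bcrypt the same idea applies, except the cost parameter is logarithmic (each increment doubles the work), so you'd increment the cost until a single hash crosses the time budget.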
Yah, that does seem like a reasonable design. The implementation wrinkle there is that the hardness params have to be encoded in the digest itself, or otherwise stored alongside it. Since implementations commonly do this anyhow, that doesn't seem likely to pose much of a problem in practice (if any). The other issue would be the details of when to calculate the hardness. On app initialization seems obvious, but you'd have to sample over some period of time to get a representative benchmark. I worry that this could exceed administrator tolerance for how long an app can reasonably take to start up, but this doesn't seem like a show stopper either.
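On the "params encoded in the digest" point, here's a minimal sketch of how that enables transparent upgrades: each stored hash records its own iteration count (in the spirit of the PHC string format), and verification compares it against the current policy and rehashes on a successful login. Stdlib PBKDF2 again; the storage format and function names are my own illustration:

```python
import base64
import hashlib
import hmac
import os

def hash_password(password, iterations):
    """Encode algorithm, work factor, and salt into the stored string
    itself, so every hash records how it was produced."""
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return "pbkdf2_sha256$%d$%s$%s" % (
        iterations,
        base64.b64encode(salt).decode(),
        base64.b64encode(dk).decode(),
    )

def verify_and_maybe_rehash(password, stored, current_iterations):
    """Verify against the parameters baked into the stored digest. If they
    lag the current policy, return an upgraded hash to write back."""
    _algo, iters, salt_b64, dk_b64 = stored.split("$")
    iters = int(iters)
    salt = base64.b64decode(salt_b64)
    expected = base64.b64decode(dk_b64)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iters)
    ok = hmac.compare_digest(dk, expected)   # constant-time comparison
    new_hash = None
    if ok and iters < current_iterations:
        new_hash = hash_password(password, current_iterations)
    return ok, new_hash
```

Because the parameters travel with each digest, old hashes keep verifying after the app recalibrates, and they get silently upgraded the next time the user logs in with the correct password.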
Ultimately though, I'm not sure the factors are THAT variable. You do want to reconsider work factors as hardware advances, but I don't think the line is so sharp that running a work factor of N versus N+1 will make much practical difference in the span of a few months or even a few years. Still, given the goal of making hashing as hard as feasible while staying suitably performant in the context of a given system, it makes sense.
This doesn't make any sense: the goal of the work factor is to make things slower for the opponent, not for my own servers. My estimate of the computational power available to my opponent has nothing to do with how large the web servers I happen to be running are.