As Stavros mentions, you can, and if you feel qualified, you should manage your own keys. Be that with some software authenticator you deem safe or write yourself, or with e.g. our keys that are open source, so you can modify anything to your liking, etc. etc.
I sense a bit of 90s security thinking from your arguments though, where every end user and mid-level admin handles security decisions they're frankly not qualified for.
This is what I meant by "safe defaults". Yes, some people use e.g. password managers, but no, most people don't. Yes, some people manage to use GPG to manage their ssh keys, but no, most people, even qualified ones, don't/can't/won't.
"Bad defaults with patches hopefully making it safe" is just not the way we should be heading.
I agree with all of your points, I'd just like to point out that, even if everyone ends up using software authenticators (password manager-style) with WebAuthn, we'll still be in a much better position than we are today, where people just use the same insecure password everywhere.
In FIDO-speak, "platform" authenticators are your laptop or phone, using their contained secure storage, vs "roaming" authnrs like our SoloKeys. Most people assume that the former will be the main way to use WebAuthn. Consumers using keys are mostly enthusiasts/early adopters/special needs.
Mainly in a corporate setting, a separate hardware key may provide a root of trust (and an audit trail, if the key is modified to be trackable), with which you can then unlock your devices in a self-service manner.
You're right that software authnrs are a bad idea.
For services that don't want the security to be pierced by such unsafe fallbacks, initial key attestation can whitelist the acceptable authenticators.
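(Rough sketch of the relying-party side, assuming your WebAuthn server library has already verified the attestation statement and hands you the authenticator's AAGUID; the names and the example AAGUID below are made up:)

    # Sketch: only accept registrations from an allowlist of authenticator models.
    # `attestation_result` and its `.aaguid` field are hypothetical; use whatever
    # your WebAuthn server library returns after verifying the attestation.
    ALLOWED_AAGUIDS = {
        "00000000-0000-0000-0000-000000000000",  # placeholder AAGUID of an approved model
    }

    def accept_registration(attestation_result) -> bool:
        aaguid = str(attestation_result.aaguid)
        # Reject software authenticators / unapproved hardware at enrollment time.
        return aaguid in ALLOWED_AAGUIDS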
One thing that is too infrequently highlighted is that FIDO2 is decentralized authentication between you and the services, unlike "login with big-corp".
The point is that ssh keys lying around on your laptop aren't the greatest idea either. Where is the root of trust? The password you type into the terminal, if you encrypt them?
FIDO2 starts with the idea of safe defaults, where either client devices (Android, laptop TPM,...) store the keys safely, or dongle vendors (like us, SoloKeys). These have a business interest in doing their job properly.
But there's nothing preventing software implementations, it's an open standard in that respect (I do have other issues with it but your specific concern is unfounded imho).
SoloKeys person here ;) You can implement software authenticators (listening on local USB port), I imagine some password manager people will do so eventually, or have a direct way to hook into requests. Krypton did this for U2F.
Hardware keys are for if you want hardware security, obviously they can't be free unless you want someone with a different business model to subsidise them.
Yes, software authenticators that use a security key for OTP are a good option. This is what Yubikey does with the Yubikey Authenticator. As with most security things, there are tradeoffs to each approach. The pro of a software authenticator is that you can have an unlimited number of accounts; the con is that it requires the user to install an app on their phone/computer (and in cases where the OTP shows up in a desktop app, it may be possible for a hacker to intercept it). With the OTPs being generated and typed by the key itself, the pro is that you don't have to install an app and login can be faster because the OTP is typed for you, no reading and typing it manually (it's also harder to phish). The con is that you can't store unlimited accounts, and since the key is typing the OTP, the key has to be physically connected to the phone/computer.
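(For reference, the core of what a software TOTP authenticator computes is tiny; a minimal RFC 6238 sketch using only Python's standard library, with an example secret:)

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                                   # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints the current 6-digit code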
Listening to local USB port? Hm... Why should I listen to a local USB port to exchange keys in a PKI? This only proves my point that WebAuthn is about hardware replacing passwords.
There's an ascending signature counter that's intended to prevent cloned devices (replay attacks are prevented separately with a server-generated challenge). One way around it is to give the clone (backup key) a very high initial signature counter, so that its first use invalidates the original (on loss). But yeah, it's a UX problem that hopefully will find a better/non-hacky solution than "register multiple keys for each site".
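Roughly, the relying-party side of that counter check looks like this (a sketch; the counter values come from the authenticator data your WebAuthn library parses):

    def check_signature_counter(stored: int, asserted: int) -> int:
        """Reject assertions whose counter hasn't advanced; a clone that lags
        behind the real key (or vice versa) eventually trips this check."""
        if asserted != 0 and asserted <= stored:
            raise ValueError("possible cloned authenticator: counter did not increase")
        return asserted  # persist as the new stored counter for this credential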
> But yeah it's a UX problem that hopefully will find a better/non-hacky solution than "register multiple keys for each site".
This seems like a huge blocker for adoption. I currently run into this issue with FIDO for 2fa - I store my backup key offsite, which means enrolling the second device requires me to make a special trip to retrieve the device. My current approach is to retrieve the backup token every few months and add it to all of the new services I have enrolled in, but I don't have a good system for remembering all of these services, so I inevitably forget one (despite only using the key on a few services)...
Write the services down. You don't need to keep the fact that you have FIDO tokens secret unless you're keeping them somewhere otherwise unsafe like under a rock in your garden. So a list titled "Services I've enabled for FIDO" with check columns for the tokens gets it done.
As I'm currently working on possible options to expose on-device keys and cryptography for our open source FIDO2 key (SoloKeys) beyond the FIDO use case, I'd be curious about opinions on just exposing and using the PKCS#11 API (Cryptoki) [0] directly.
Envisioned setup would entail: download (custom) `libsolo-pk11.so`, generate RSA or ECDSA key on the USB key, get public key via `ssh-keygen -D libsolo-pk11.so`, use via `ssh -I libsolo-pk11.so user@example.com`.
The equivalent thing can be done for TPMs with simple-tpm-pk11 [1] today.
Technically, I'd extend the FIDO2 CTAPHID transport with "vendor commands" [2] mapping the basic Cryptoki API, and call that from the custom PKCS#11 shared library, which is then just a simple shim/wrapper. No additional drivers needed (everyone has HID).
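(For the curious, the CTAPHID framing makes that fairly mechanical; a sketch of building the initialization packet for a hypothetical vendor command, with the command offset and payload made up:)

    import struct

    HID_REPORT_SIZE = 64
    TYPE_INIT = 0x80
    CTAPHID_VENDOR_FIRST = 0x40          # vendor-defined commands occupy 0x40..0x7F

    def ctaphid_init_packet(cid: int, vendor_offset: int, payload: bytes) -> bytes:
        """First (initialization) packet of a CTAPHID message: CID, CMD, BCNT, data."""
        cmd = TYPE_INIT | (CTAPHID_VENDOR_FIRST + vendor_offset)
        header = struct.pack(">IBH", cid, cmd, len(payload))   # 4 + 1 + 2 bytes
        body = payload[:HID_REPORT_SIZE - len(header)]
        return (header + body).ljust(HID_REPORT_SIZE, b"\x00")

    # Hypothetical "sign hash with on-device key #1" vendor command:
    pkt = ctaphid_init_packet(cid=0x11223344, vendor_offset=0x01, payload=b"\x01" + b"...")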
Issues I can foresee: Users too attached to GPG workflow. Installation of custom shared library. No SSH support (via PKCS#11) for Ed25519 yet. SSH support for ECDSA only in about-to-be-released OpenSSH 8.0. Vanilla PuTTY on Windows has no PKCS#11 support. Bad rap of PKCS#11 due to existing vendors adding proprietary and closed source extensions. And the fact that SSH (currently) presents all keys to the host - I'd really like to be able to specify which key to use.
Personally, I'm a bit allergic to the GPG/PCSC/PIV/CCID way of doing things... My itch-to-scratch is just having a few keys off my computers (in particular, portable), and perform (infrequent) signatures on the separate device. And do this via a (comparatively) sane, open standard.
Personally I'd prefer as open a standard as possible usable across the greatest swath possible (e.g. Chrome Windows/Mac, Chromium Linux, Firefox W/M/L, Android Chrome/Firefox).
Someone else seems to second lower-level standards as the best way [1].
I used TPM authentication with libsimple and recommended it to others with the assumption that I wouldn't need to back up any userspace data when upgrading the system. Turned out this is not the case: not only do you need the TPM password, but also certain files from /var from the old install.
Google states there is a process to go through if you lose both keys that takes up to 3 days. The question is, how strict is this, and how easy is it to lock yourself out.
https://login.swissid.ch does this too: disallow password managers from filling out the login. Upon asking them to fix: "Autofill completion is not allowed by us for security reasons. First, if that's the case, if someone gets to your PC, we can stop a hacking attempt and that's one of many reasons. For other questions, we are at your disposal."
They also only enforce SMS as two-factor authentication.
The idea of this SwissID is to become a nation-wide identity service, yet they manage to do everything wrong. Yeah, this annoys me to no end :(
Hah, that page actually allows you to test whether or not a certain email address is signed up for the service, which seems like an even worse idea given what they're to become.
You should validate credentials all at the same time. In general, you only fail a login at the end of the process, not halfway through. Also, every login failure, regardless of reason, should be accompanied by a short, random server-side sleep before returning (e.g. random between a couple hundred milliseconds and a second).
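Something like this, as a sketch (helper names like db_lookup / verify_password_hash / AuthError are hypothetical placeholders; the point is one code path, one error message, and a bit of random sleep on every failure):

    import secrets, time

    GENERIC_ERROR = "Invalid email or password."

    def login(email: str, password: str):
        user = db_lookup(email)                           # hypothetical DB lookup
        # Hash even for unknown users so both paths do comparable work.
        stored_hash = user.password_hash if user else DUMMY_HASH
        ok = verify_password_hash(stored_hash, password)  # e.g. bcrypt/argon2 verify
        if not (user and ok):
            time.sleep(0.2 + secrets.randbelow(800) / 1000)  # ~0.2 s to 1 s
            raise AuthError(GENERIC_ERROR)                # same message for every failure
        return start_session(user)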
Account enumeration is impossible to fully prevent, and as far as security vulnerabilities go, the risks associated with account enumeration are usually pretty irrelevant. It’s the sort of thing you’d see on a penetration testing report when the testers didn’t find any actual security vulnerabilities.
I generally agree with that, but it's worth mentioning that there are exceptions where being able to tell if an individual is registered can be sensitive information (Ashley Madison, Grindr, etc.)
Yeah, there are some unique threat models where determining that an account exists would be a sensitive information disclosure. In those cases users would be more willing to endure the potentially heavy-handed UX trade-offs required to adequately prevent it.
It’s the idea that knowing that an account exists somehow represents a compromise of the account’s security posture that I generally reject.
I still think that's vulnerable to timing attack. Timing attacks are the voodoo that should keep us up at night, a clever attacker can extract information that seems impossible.
With unlimited logins, it is indeed vulnerable to timing attacks. You could measure the mean time of a certain email against the mean time of another email. This effectively gets rid of the random delay.
Perhaps you could measure the time of the login process and adjust the random delay based on how long the process has taken. If you can get this to average to a <1ms difference between the "email exists" and "email doesn't exist" you could probably defeat any timing attacks over the network.
That, or just limit retries and have different random timeouts for every login. Now you can't try enough times to get a good estimate of the mean for each login path, and you can't use other logins to help you refine your estimate because each has different timeouts.
Once upon a time I thought this was a pretty serious avenue of attack and wrote login forms to always run on the server in constant time -- starting a timer at the beginning and only returning output after a fixed number of ms.
I mostly don't bother anymore, because an effective timing attack for account discovery against something that's doing everything else correctly should take so many attempts that it should wake up whatever brute force protection sites should be running now anyway.
Given the number of dumb automated brute force probes against just about anything with a login, you can't just allow an infinite number of requests from a single IP (or a handful of IPs).
Oh sure, didn't mean to come across as someone who worries deeply over account discovery. I just saw an opportunity to remind folks that foiling side-channel attacks is very non-trivial and you can put forth a good effort and still be surprised at the information you're leaking.
From what I read, they can tell differences when the randomness is in the hundredths-of-seconds range (they specifically said they patched to go from 10s to 10,000µs). The randomness I mention, assuming N is the maximum tolerable processing time for a successful login, should be 2N + random(2N, ~100N). Or just store a time at login start and force hit the same deadline every time (via a sleep for the difference from start), then add randomness on top. Of course, additional brute force detection/protection is ideal for repeated failures.
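As a sketch of that deadline-plus-jitter idea (handle_login is a hypothetical zero-argument login routine, N is your worst-case successful-login time):

    import secrets, time

    def respond_after_deadline(handle_login, n: float):
        """Do the real work, then sleep until 2N plus a random 2N..100N on top,
        so the response time stops tracking which internal branch was taken."""
        start = time.monotonic()
        result = handle_login()                            # hypothetical login routine
        target = 2 * n + (2 * n + secrets.randbelow(int(98 * n * 1000)) / 1000)
        remaining = target - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)
        return result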
The random sleep is to prevent obtaining enough samples and provide reasonable noise at this small sample size. Given enough samples until the end of time, patterns can be obtained. This is not breaking TLS here, this is login, and seconds of sleep vs microseconds makes a difference.
Randomness inserted like that can be filtered out statistically with a decent sample set - it's a gaussian distribution, so you'll still see the same timing differences.
This is why things have to be constant time, rather than random return time.
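A toy simulation makes the point: two paths that differ by 5 ms, buried under up to a second of uniform random sleep, are still separable once you average enough samples (numbers made up purely for illustration):

    import random, statistics

    def samples(base_ms: float, n: int) -> list[float]:
        # each "request": fixed work time plus 0..1000 ms of random added sleep
        return [base_ms + random.uniform(0, 1000) for _ in range(n)]

    exists = samples(35.0, 1_000_000)      # "email exists" path
    missing = samples(30.0, 1_000_000)     # "email doesn't exist" path
    print(statistics.mean(exists) - statistics.mean(missing))   # ~5 ms; the noise averages out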
Usually you do it by giving the same response for an invalid password and a non-existent email. I wanted to see how that particular page was leaking but the site wouldn’t load for me.
Not quite - best practice is to continue the initial setup - ie, "we've sent you a link, please click to activate your account".
Except if the email address is already in use, you email the address and let them know that. That way they only leak that info to the owner of the email address - and they can include a password reset link too.
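In code that's roughly (helper names are hypothetical):

    def handle_signup(email: str) -> str:
        user = db_lookup(email)                              # hypothetical lookup
        if user is None:
            send_email(email, activation_link(email))        # normal activation mail
        else:
            send_email(email, "You already have an account here. "
                              "Reset your password: " + password_reset_link(user))
        # Identical on-screen response either way; only the inbox differs.
        return "We've sent you a link, please click it to activate your account."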
How about for websites that give you some functionality without a verified email address? At that point, you can't let a user dink around if the address is in use.
Granted, this doesn't apply to eg banks, but there's plenty of websites where this could apply.
Don't ask for their email address then. What's the point in having an email address that you have no idea if it's correct or not? You might as well ask them to put in a random string of characters.
Absolutely. It's really more relevant with services you can't directly sign up for, such as an internal service in the company where user enumeration helps you find a target when the error messages are different.
No, but given a list of random addresses of Swiss citizens scraped from wherever, you can validate each one by checking them against this portal. If it shows up, it's an active address.
I too have a bad feeling about SwissID. Only SMS as two-factor authentication, turned off by default. No other methods planned, according to the support.
The fact that the Swiss Post requires it as login method makes me uneasy.
Forbidding pasting provides the security of having to click "Inspect elements" before the hacker can proceed.
(Most PW manager plugins I know already ignore pasting properties, though a bank I was a customer of circumvented that; the textbox was actually a DIV and they had coded the functionality of a textbox into it to prevent pasting)
Yea, I ran into a bank that did that too. Then they proceeded to ask me a half dozen "security" questions from a massive list I could choose. Most of which I didn't know the answer to.
I answered all of them (and put my answers down in my pw manager) with something like "X bank has terrible security, I hate this bank" - hoping that one day I'll have to answer those by phone haha.
I always answer the security questions with gibberish that I also save with my password manager. I now use a method like correct-horse-battery-staple to create answers, but I used to use long alphanumeric strings. I switched methods because, yes, one day I had to read the answer over the phone.
I'm a bit worried about social engineering there. "Oh, it's a bunch of gibberish" may pass muster with a support rep (in both of your approaches), leading to compromise.
Lately, I've been making up a seemingly correct, but random response (and different each time). My favorite vegetable? Sea cucumber! I store that in my password manager.
> I'm a bit worried about social engineering there. "Oh, it's a bunch of gibberish" may pass muster with a support rep (in both of your approaches), leading to compromise.
I can confirm that this is the case. I provided a gibberish answer to a security question for Blizzard. I didn't bother to write it down, relying on not forgetting my password.
I never forgot my password, but Blizzard shut down my account anyway because I was making payments with a card that was not listed as the account's "primary payment method". (The card I was using was listed on the account, but another card was the "primary payment method".) When I had to call support and answer my security question, the answer I'd filled in just meant that I wasn't required to provide the correct answer.
I've found it's better to give them plausible answers that are entirely fake. The make of your first car is an Aston Martin. Your nearest sibling lives in lunar colony 1.
This way "it's a bunch of gibberish" doesn't get past their security.
Clearly random answers are a problem. You're going to find support reps inclined to accept "oh, it's just something random", which means you're guaranteed to get compromised if you're a big enough target to spend some hours on.
Random but outwardly appearing valid ones are fine (but you'd want to avoid using the same answer on different sites). One site's "first car" could be Porsche 911, another's Aston Martin. Both aren't true, but the support rep doesn't know that.
I've had the same situation before, and I don't think I've ever had to read them the entire thing. Usually we did something like this:
Rep: "Tell me the answer to this question."
Me: "Ok, let's see.....ah. So, it looks like a random string of gibberish, right?"
Rep: "Um, well...(unsure if he's allowed to say Yes or No)"
Me: "Yeah, I use a password manager for all my stuff, so all my passwords are randomly generated. I didn't think I'd ever have to read it over the phone. Sorry about that! I can read it out for you, but it might take awhile. If I read you the first three characters and the last three characters, is that sufficient to demonstrate for you that I know the Answer?
Rep: "Yes, I think that would be fine."
Me: "Alright, then! First three, 'F', 'caret', 'capital O'. Last three, 'capital G', 'lowercase l', 'dollar sign'.
---
As I said, I've never had anyone challenge me to read the full thing out. When I explain why it is that way and give them the bookends, they are usually convinced that I'm me.
I use GPW strings for this use-case. GPW has weaknesses that make them somewhat poor for use as first-line passwords, but they're still really good for passwords that you need to read to someone over the phone.
In my most recent experience with them, the company allowed both the question AND the answer to be set. So, they had to read a random string to me, and I had to read one back. It went quite well, actually.
I do the same thing, but I only use strings that are maybe four or five characters of letters only. Most of these answers are expected to be things like people and street names, so I think it's still vastly more secure without looking like the system had an internal error.
The only change I make to this process is my security questions are stored in a separate password manager to my passwords. That way if I lose access to my passwords and actually need the stupid (ahem, security) questions I can find them.
I did that at a certain corp, my security answers were all long sentences completely unrelated to the question (mostly things like "This question is useless" which made for some interesting phone conversations until they got it)
Basically if you wanted to reset your employee password, which gave you access to corp vpn etc, you could call a 24/7 support line and give them your security answers.
The problem with this is that most of the questions were not things that are inherently secure, things like "what was the name of your primary school" are easy to guess or research.
> The problem with this is that most of the questions were not things that are inherently secure, things like "what was the name of your primary school" are easy to guess or research.
They're also inherently unanswerable in many cases.
As an example, I went to two different primary schools. I don't have a favorite musician or sports team, and the answer to "where did you meet your wife" might be the school, the city, or "in class".
Last time I had to update my Apple security questions a good 80% of the questions weren't ones I felt I could answer in a way that'd be memorable a few years later.
Not to mention, these things are usually case sensitive. Sure, I can remember my childhood address, but how did I capitalize it? Did I abbreviate street? If I abbreviated street, did I add a period to make it "St." or "St"?
Fortunately, I don't notice too many services requiring security questions these days. Unfortunately, most of them are banks or other services that probably also have my SSN.
* you leave the password in the clipboard, and another website copies it (used to be a thing, I think it's patched now)
* same case, but now a coworker comes to your unattended PC and retrieves the password by pasting it somewhere
* allowing pasting would undermine the idea that you should never write your password down, and lead to a proliferation of files called "passwords.txt" on everybody's desktops
None of these arguments is really good, but I can believe that they would be the result of a world without widespread password managers (also known as "the 90s") and of tradition.
> * allowing pasting would undermine the idea that you should never write your password down, and lead to a proliferation of files called "passwords.txt" on everybody's desktops
"Never write a password down" has always been a bad idea. A file named "passwd.txt" on my desktop still is better than using a trivial password or the same password on all sites. It still requires compromise of my machine and prevents the password from being recovered from a dump of the pw-hashes.
No, there's a reason password managers should be preferred. For example, sometimes (browser) sandbox escapes grant reading of arbitrary files. Take, for example, the recently discussed malware scanner that sent off the browsing history; it could read such a file and transfer it back.
Modern browsers and OS kernels have extensive mitigations against this. Reliably extracting a password from a browser process's heap would be newsworthy today.
I think “just” is apt. If you have a web request to send the password, you will have a url or username string very close by in memory that can be searched for.
I specifically picked an example of a malware that was capable of reading arbitrary files, but not arbitrary memory because the authors found a simple way to trick users into granting them this permission set, but not another.
A sandbox escape that allows the attacker to trick the browser into sending arbitrary files back is also substantially different to having malware on your system that can read arbitrary memory.
But that's not the point. The point is they have to break past your login screen, or, failing that, pull data from your storage while it's "offline" (i.e. not booted). If it's encrypted, they can't pull data off your drive externally, and as long as they can't log in you're fine. Plus all the data is still stored encrypted. It's not like it decrypts the drive when you boot, it just enables a decryption layer that decrypts data on the fly (AFAIK).
The data is encrypted, but as long as the encryption keys are in memory, they could be retrieved via either an attack against peripheral ports that can read memory (thunderbolt has proven vulnerable and USB too, iirc) or via a cold boot attack, possibly using freeze sprays. Such attacks against FDE have been demonstrated. A good password manager purges the keys after a bit or on lock. pass ties into the gpg ecosystem and thus allows having the keys on a smartcard, a capability I’d like to see in other PW-managers.
MacOS has the option to purge decryption keys from memory on lock, but that effectively puts the computer to sleep on lock. It’s more secure, but annoying as hell since all network connections die (VPN, ssh, ...)
True, there were a couple teams recently with proof of concept for a cold boot attack on BitLocker, so I guess it's still not so secure. But unless you've got some crazy blackhat or a three letter agency after you, I'd argue you're probably not at risk ;)
If you have a fancy "USB" port which allows connection of graphics cards (so basically a PCIe port, although it also accepts USB), chances are that you can do whatever you want with unrestricted DMA through this port. It seems that letting Windows use the IOMMU is only allowed on the Enterprise edition, which is basically unavailable to the general public. So facing determined and/or well-financed actors, it is as if the Windows login does not exist anymore for tons of Windows users.
Using the clipboard at all for security related things like temporarily storing a password is a bad idea. The clipboard is a big public billboard visible to anything running on your computer.
The fact that password managers use it at all is simply because it is the only hack that works to reliably get data into password boxes. Yes, it's a hack. The HTML5 spec should have exposed a mechanism to securely insert data into an element tagged for such a purpose. A one-way mechanism.
> Using the clipboard at all for security related things like temporarily storing a password is a bad idea.
(Emphasis mine.)
Well. The moment you have evil code running on your box, as you, then I'll naively assume you have a bigger problem to deal with anyway.
> The clipboard is a big public billboard visible to anything running on your computer.
And everything from client work to love letters in my home folder is available to anything that runs as me, unless I've gone out of my way to secure it - and succeeded.
Not saying the clipboard isn't a problem.
Not saying browsers shouldn't expose a carefully thought out API.
But the way I read your post it might scare people away from password managers and back to a single password or passwords written on papers stored within reach from the workplace.
> But the way I read your post it might scare people away from password managers and back to a single password or passwords written on papers stored within reach from the workplace.
Browser extension password managers are very much a step in the right direction. For most people, they strike the right balance between convenience and security. I guess I'm just a very paranoid developer who does not value that convenience as much as most.
A number of people dislike Wayland because applications can't watch the screen, keyboard input, clipboard etc outside of their own window. Really, that's one of its great strengths over X11.
Keepass tries to mitigate this, as well as keyloggers, by splitting auto-type into parts sent via both the clipboard and simulated keystrokes. An even better solution is probably one-time passwords with 2FA.
Does any password manager use a virtual keyboard to type the passwords in? That would avoid using the clipboard, but it wouldn't work with one of my banks, which doesn't even have an input box. They show a keyboard on screen and you have to click on the letters to type your password.
You have to type in your password WITH YOUR MOUSE??? Wow. Sounds like a great way to make sure everyone uses the minimum allowed length for their passwords...
You only realize that the field doesn’t support pasting once and don’t attempt it ever again. If it allows pasting, the password will be in the clipboard every time you log in, which arguably could be more times than one.
No, the next time I try again, and after it fails again I remember they didn't allow it and curse them for not fixing it already. Then the process repeats.
I think it's "Cargo Cult Security". My hypothesis is that someone once realized that having a password stored in the clipboard could be bad, so let's just ban pasting passwords. The logic here is obviously completely backwards, but once a meme is out there it's hard to stop it.
I believe that the argument they rely on is that if you're pasting it then you must have it written down somewhere so that you could have copied it.
It's not a completely left-field position - it's definitely wrong in a modern context - however, previous years of security advice did focus on not writing passwords down.
I have also heard that they believe the removal of the option to paste removes the ability of attackers to exercise brute force attacks against their site. This betrays a lack of understanding of multiple technologies, though.
It's just as worrying to parrot advice that hasn't made sense for years as if they'd dreamt it up themselves. Whoever is in charge of security should have updated their knowledge at some point in at least the last decade or so.
It's my experience that at some organizations, there is effectively no learning anything new beyond the hiring date from any outside source. They may hold very general training for uselessly shallow/fad stuff like "how to be an innovator" or "what the cloud means for our business" but those are generally not substantive efforts to improve the effectiveness of employees. They often set no goals and have no consequences for anyone. They're check boxes/busy work for upper management.
Indian SBI Card does this: it prevents pasting and sets autocomplete=off to stop the browser from remembering the password. I went into Inspect Elements, removed the autocomplete attribute, typed in my password, and Chrome then offered to remember it. Now on subsequent logins, clicking into the UserID field does not trigger Chrome's saved id/pw dropdown, but clicking in the Password field does.
For pasting of passwords, there is no security benefit; password managers ignore the restriction anyway.
"They" often also disable pasting of other duplicated fields, like bank routing number, email address. So this shows why it is done (but misapplied to password). It's so you don't just copy and paste a wrong value that is hard for them to verify and leads to support calls. By forcing you to retype from scratch, the theory is you will either get it right twice or the error will be flagged.
As a naive non-security expert, I'd consider it a risk to put passwords in the clipboard - what if there's something that can read your clipboard? JS on another site for example? What if you accidentally paste it on a 3rd party site?
That's copying it though. I know a decent password manager will clear the clipboard after an X amount of time too, mitigating the risk somewhat. But that's copying, not pasting in a field.
Whenever I see messages of this kind, a little doubt light turns on in my head. I wonder if it's possible I missed something, but I always have to conclude that, no, there isn't anything wrong with my thought process.
The Indian government's official website for managing the public retirement fund (NPS), eNPS, does this. The password needs to be changed every 90 days and has a max length of 14 characters, but this is documented nowhere. On the password change page, it will silently accept any 14+ character password and truncate it to 14. Then you try to log in with your actual password and it gives an error: Wrong Password.
In the U.S., Washington state's initial rollout of their ACA site did that. I gave it a big, long >20-char password, it created the account, I went to log in and...
How did I figure out what was going on? They would happily email your password in plain text. </facepalm>