I recently used Auth0 to implement passwordless login (via "magic link" emails) for a client project. Auth0's documentation is not great, but some of their blog posts are pretty good. In any case, if you're interested in WebAuthn, you could do worse than reading what Auth0 has to say about it: https://auth0.com/blog/web-authentication-webauthn-overview-...
I'm talking about products like Notion that only support magic links (or Google auth I guess, but I'm not doing that). Slack does it right. You can use a magic link or use your password.
I don't want to let the password go. It gives me the freedom to rightfully access my service if I just know the secret, without any entanglement with some app, device, or other account.
Is there a solution for the fact that all of your accounts will be secured by the same "source"? Isn't this almost close to using the same password on every site? I realize a physical secret is better than a password, but if someone gets their hand on your little FIDO device, do they instantly get access to all your accounts?
The big problem with using the same password on multiple sites is that if any of the sites record your password (because of maliciousness or incompetence), they can re-use your password to log in as you on any other site.
Using a security token is more like a password manager with random passwords everywhere than that (the attacker needs to get access to your password manager to get access to your accounts; it's not enough for someone to hack a single site you use), but more secure because it's generally not copyable and the attacker needs physical access to use it. (A virus on your computer can't clone your security token, even if it's plugged in.)
That's fair. Although, for my password manager, you need both password and 2FA to access it, whereas a FIDO key would just require stealing the physical key.
Does there exist a FIDO key (other than a phone) that requires a password to "enable" it? For example, when it's plugged into a new device, the key stays locked until you input some master password?
Most smartcards require you to authenticate to them before they will perform operations using their private cryptographic keys. I'm adding WebAuthn support to my smartcard middleware [0].
> Is there a solution for the fact that all of your accounts will be secured by the same "source"?
2FA is still an option (e.g. 1 thing that you have + 1 thing that you know), with the hardware token representing a more secure alternative to phone SMS messages.
> Isn't this almost close to using the same password on every site?
To get a shared password, you have to hack one of the hundreds of different services that password is used on, or phish the user, and the compromise often goes undetected for years.
To get the physical secret, you have to rob the user, who will notice they've been robbed the next time they attempt to login. Additionally, trying to login to a phishing website won't automatically auth them with the real website - I can't do secure key exchange algorithms in my head, but a hardware device can.
I've avoided getting a Yubikey because there's not an easy way to use it on my iPhone, sticking with TOTP. But that's a bit of an edge case. (iPhones lack NFC chips)
Actually, you're right - that capability has apparently been available since last year: https://www.yubico.com/2018/05/yubikey-comes-to-iphone-with-.... The link I cited is for lightning-connected Yubikey hardware, which is supposed to come out this year.
I think fundamentally most users don't understand anything more complicated than passwords. Passwords are easy. They make sense. A kindergartener understands the idea of a secret word that only they know.
Tokens, certificates, FIDO -- it's black magic. Therefore people don't trust it.
It has to be as easy and intuitive as passwords or it's a non-starter.
That's why the SMS codes (though insecure) are so popular. People understand "enter this number that I just texted to you"
Passwords are only easy if you're using them in an insecure fashion (sharing common passwords across multiple sites). Doing passwords right is actually really, really hard without the assistance of an external tool (password manager).
I get what you're saying though. Users are used to passwords, so moving to an alternative means of authentication will introduce a bit of friction. That said, I think that done right WebAuthn will actually be way easier to use than passwords. Users will just be able to sign in to their browser once, then use what is effectively single-sign-on for every site thereafter.
We're still quite a ways away from that point, but that's where we're headed.
> I think fundamentally most users don't understand anything more complicated than passwords. Passwords are easy. They make sense. A kindergartener understands the idea of a secret word that only they know.
I don't think it's that conceptually difficult to understand even for a layman.
The bare minimum understanding of web security is that authentication is the process of proving who you are (your identity). You can do it one of three ways (or a combination of them):
1. "Something you know" - Password, Background questions, etc.
2. "Something you have" - Yubikey, Smartcard, TOTP, SMS, email, etc.
3. "Something you are" - Biometrics
OpenID Connect is probably the most popular alternative to having a password for every single site. Especially on mobile, most apps usually have an option to sign up/login with your Google or Facebook account.
I'm still a bit bummed that OpenID (the original version) got lost to history. It's not really 'open' if you are handing over the keys to Facebook or Google.
What in the spec precludes this from being implemented in software?
[edit] Reading more of the spec it definitely seems like they meant for it to be possible to implement this in software. So while a physical FIDO device might be preferable, it shouldn't be necessary.
Does anyone else find these informal specifications difficult to digest?
The informative appendices link to papers on TPM and the like but it's hard to find a formal description of the protocol, or at least the sensitive parts, that could be independently validated or verified.
Has there been any work to formally verify/validate the design of this protocol that I'm not seeing?
You're not alone. I'm self-taught in English and it's not my first language. Although native English speakers have commended me, I still find reading technical texts taxing.
They fall into the category of any academic text, be it from a university, a research group, or a specification manual. I did not receive formal education in English, so I don't understand those formal words. Every other sentence there's something I have to look up, and then I'm in a rabbit hole.
Actually same goes for my native tongue in some respect since I dropped out of school before reaching university.
I've still managed to make a career in IT and often desire to read technical specifications but feel helpless when I try.
My strategy so far has been to wait for an implementation in a language I can understand like Python, hopefully.
Formal as in Formal Mathematics -- a specification with a precise definition that can be verified by a model checker to have the desired safety and, if necessary, liveness guarantees.
That's... actually not as bad as I was expecting it to be. If you're willing to limit your audience to modern browsers only, the only holdout is Safari; and on that score, what else is new.
It’s actually included as an Experimental Feature in the preview version of Safari, so there’s some hope that it will be present in the mainline version before too long.
It's not just whether the API is available but whether it's practical to use. I'm not sure which browsers recognize or support fingerprint readers, though all the implementations seem to support USB U2F.
Feels like a total failure to launch that the spec doesn't recommend the use of browser accounts as credential providers. Every single major browser has an associated web account (Firefox Account, Google account, Microsoft account, Apple ID, etc.) and could trivially use those accounts as authentication providers.
It's unsurprising this is the case: to be published as a Recommendation you have to demonstrate interoperable implementation experience.
The implementation report is at https://www.w3.org/2019/01/webauthn-report.html and shows Safari passing most of it, AFAICT. (Though it's based on Safari Technology Preview and is yet to ship.)
I got a YubiKey a year or so back and looked at this. It seems like Safari was holding out for a finalized spec, because before that it was a bit too Chrome-specific.
Almost there; now we just need some cross-platform implementations with synced credentials, and support from a couple major sites. Ideally some password managers will step in and implement support, and Google will add support to their own login flow as a primary authentication factor.
Use case: I create an account using a YubiKey on my desktop, then want to access that account from my mobile phone using a fingerprint. How does the website know I'm the same person?
Keybase has a nifty personal web-of-trust for this stuff, but (A) that ties you to a single strong identity and (B) you can't really use that identity outside of their services.
Realistically, that's what will happen. Your password manager will add WebAuthn support, you'll get a "Do you want to log in to this site? y/n" popup instead of a login box, and you'll click "yes" and be logged in.
Eventually, instead of your password manager having a billion passwords, one per site, it'll just consist of one cryptographic key.
Still waiting for Google Chrome and Firefox to support User Verification in the form of PIN prompts, plus Resident Keys, for true passwordless login (at the moment WebAuthn in Chrome is basically just 2FA, with no option for passwordless).
Somewhat random thought: is Challenge-Response sufficient, or should it be "Challenge-Challenge-Response" so that the client only answers a challenge it requested? Otherwise, what's to stop an XSS attack on page A from effectively MITMing page B by overriding the event listener for the login on page A, asking to sign for page B, then exfiltrating the response?
EDIT: looks like the dialog attempts to give you some information, but it doesn't say WHICH profile on the domain and people could certainly not pay attention to the domain in that prompt (I had to check if it existed because I hadn't noticed).
From what I understand, the way that FIDO defeats phishing is that it signs a response on the basis of the presented domain. If a phisher stands in the middle with a domain that looks, to human eyes, similar to the legitimate domain, then the attacker is returning to the legitimate domain a response that was signed for the wrong domain, causing the legitimate origin to reject the response.
If your site makes it even remotely possible to have an XSS attack on the login page (by not being a separate page with no user-provided input apart from the login credentials) then you're doing login pages wrong to begin with.
> If your site makes it possible to XSS the login page, you're doing login pages wrong.
Agreed, but the point was that a DIFFERENT service might be vulnerable and have XSS on their page which allows an attacker to request credentials for the real target. Your service isn't hacked, your users are.
> signs a response on the basis of the presented domain
This might do it, depending on what this means... Does this mean that if my address bar says `www.serviceA.com`, that domain gets included in the response? Then if I asked the client for their credentials with a challenge and an RP ID (`rp.id`, which I think is what identifies the relying party, IIRC) matching `www.serviceB.com`, it's possible that `www.serviceB.com` can reject the MITMed response because the attestation carries the wrong domain. The risk is similar to JWT implementations that only verify that the signature is valid without checking that the signature type is the kind expected (i.e., not `none`). So, a possible weakness, but nothing fatal.
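For the curious, here's a minimal sketch of what that server-side check looks like (the field names come from the WebAuthn spec; the expected origin, error handling, and function name are just illustrative). The browser itself fills in the origin inside clientDataJSON, and the authenticator's signature covers a hash of that blob, so script running on serviceA can't produce a response that serviceB will accept:

    // Sketch only: validate an assertion's client data before checking the signature.
    function checkClientData(clientDataJSON: Buffer, expectedChallenge: string): void {
      const clientData = JSON.parse(clientDataJSON.toString("utf8"));
      if (clientData.type !== "webauthn.get") {
        throw new Error("wrong ceremony type");
      }
      if (clientData.origin !== "https://www.serviceB.com") {
        throw new Error("response was signed for a different origin");
      }
      if (clientData.challenge !== expectedChallenge) {
        // the challenge here is base64url-encoded; assume expectedChallenge is stored the same way
        throw new Error("stale or foreign challenge");
      }
      // Only then verify the signature over authenticatorData || SHA-256(clientDataJSON)
      // using the public key registered for this credential ID.
    }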
I am curious how that works. I'll need to try it out.
If only Microsoft hadn't chosen the code name Hailstorm for its authentication proposal back in the day (and generally had a better image and a more open approach, etc.). That would have alleviated a lot of the pain earlier.
Hailstorm was always just a very early version of what today we see in OAuth/OpenID Connect.
The one that we should be truly sad didn't connect with enterprises/consumers was Vista-era CardSpace (http://en.wikipedia.org/wiki/Windows_CardSpace). That was an early play at what today we are finally seeing in FIDO / Webauthn standards, with a rather good UX to go with it (using the visual metaphor of plastic cards/credit cards for PKI identities).
Albeit with the usual problems that that version of Microsoft only supported Internet Explorer on Windows Vista+. The standards behind it (PKI and SAML) should have been interoperable enough that other implementations would have been possible, but the Microsoft of that era wouldn't have been the one to build it. Had they supported XP, and had they supported Firefox/Chrome maybe more people would have heard about CardSpace at all.
ETA: Wikipedia points out it did ship for XP, at least with the giant .NET Framework 3.0 upgrade that almost no one actually installed on XP. I had forgotten that.
Hailstorm wasn't really the same thing. It positioned Microsoft as a centralized identity provider, with MS holding all the user data and everyone else just connecting to proprietary Microsoft online services to check if the user was who they said they were. Kind of like Facebook Login, but built around 2000s-era-trendy technologies like XML and SOAP instead of JavaScript and JSON.
It's hard to see how Hailstorm wouldn't have run into the same issues people have today with Facebook Login, the big one being that it's maybe not awesome to have a gigantic, notoriously ethically-challenged competitor sitting directly between you and your users.
OK, so here are the options I've seen people deploy for 2FA:
1. Force users to register two U2F tokens. Google's 'advanced protection' requires this.
2. Have users print out one-time-use recovery codes and put them somewhere safe. This is what Google does without 'advanced protection'.
3. Require the user to provide a cell phone number, thus offloading the problem to cell phone companies, introducing all the insecurities that result from that. This is what Apple does (as far as I can tell)
4. AWS is similar, but requires both an e-mail and an automated phone call.
5. GitHub delegates 2FA recovery to Facebook, via their "Recover Accounts Elsewhere" feature.
6. Facebook allows the user to designate 'trusted contacts' who can get a code they can give to the user in person or over the phone.
7. Have the customer contact customer services, who follow a process companies are cagey about disclosing.
8. In corporate settings, just have them visit helpdesk in person.
#2 sounds most appealing. I live in Japan and don't have a reachable phone #. Wife and I simply FaceTime if we need anything. Otherwise I have no mobile means of authentication. There have been several occasions where I simply couldn't use a service because I could not authenticate via a phone number. I feel that companies that force this method of authentication have never considered my use case and couldn't care less if they lose me in the signup process. Too bad. I for one actually have the money to pay them, but hey, if you don't want it that's fine with me.
In addition to being able to add multiple devices, there are recovery scenarios that would ostensibly fall outside of the scope of WebAuthn. A service using WebAuthn could give you a set of one-time-use, high-entropy codes that can be printed and stored in a safe location. When you use those codes to regain access to an account for which you have lost your token(s), you would of course get an e-mail letting you know that someone (you, in this case) did that.
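Something like this, as a rough sketch (the code count, code format, and storage scheme are made up; only the idea of high-entropy, one-time codes with hash-only storage comes from the description above):

    // Sketch: generate one-time recovery codes; persist only their hashes server-side.
    import { randomBytes, createHash } from "crypto";

    function generateRecoveryCodes(count = 10): { code: string; hash: string }[] {
      return Array.from({ length: count }, () => {
        const code = randomBytes(10).toString("hex");                  // 80 bits of entropy per code
        const hash = createHash("sha256").update(code).digest("hex");  // store this, not the code
        return { code, hash };
      });
    }
    // Show the `code` values to the user exactly once (to print and file away);
    // mark each stored hash as consumed the first time it is redeemed.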
Not necessarily, it just has to not preclude a de facto standard solution. And it doesn't. The scheme described by OP above sounds fairly sane without compromising the overall security of the system (in the way that e.g, password reset forms and security questions do).
No, it's a shitty way of doing it that will never work in practice.
Nobody will remember where those are.
They will likely download the PDF and store it in Downloads, where it will be used by some Trojan to hijack their account, or they will lose it during a device switch or some data-loss event.
Telling people to store one-time codes securely and reliably, for every single account they own, and telling them that will only work for the services that bothered to add that onto their implementation of the standard, works for techies, but not the overall populace.
How is losing your key any different from losing your password? "Password" doesn't try to solve the problem of losing your password, that's beyond the scope. All recovery methods that apply to passwords also apply here.
If you lose a FIDO token now somebody else has a FIDO token. Unless they know specifically that it was yours the only thing they can do with it is use it as their own.
A good implementation of this approach allows you to add multiple tokens. GitHub does for example, but not all are good.
That will depend on the site. It's not a new problem to think about either.
For instance, you can set up multiple MFA mechanisms on Google, and I believe you can set up multiple U2F devices for any given account.
To this day you cannot set multiple MFA devices on an AWS account. No, enrolling multiple devices at the same time from the same screen does not count.
I think this is an issue of 2FA in general. If you can only have one second factor (and some backup codes), then you're going to be out of luck if something changes.
I can tell you that I own a couple of off-brand devices and a blue Yubico "Security Key".
But rather than specifically recommend things I will tell you what I believe you should care about:
1. Does it actually connect to the things you authenticate on? If you always authenticate on a MacBook Pro with only USB-C ports, then the USB-A Security Key is stupid because it'd need an adapter. For physical connections, if it shows the connector you can feel comfortable; this isn't 4Gbps video, it either works or it doesn't. But for stuff like Bluetooth, find somebody who has actually seen the thing you want to use working.
2. For the primary device (if you don't have identical ones), be sure how you are going to carry it. Will it go on your key ring, or in your wallet? If you have to carry an extra device and you're someone who has never owned an umbrella for more than a month, that's futile: they're too tiny to rely on getting them back but too expensive to throw away. Pick something you won't lose.
3. Robustness. Again for the primary device: the Yubico key I have (the USB-A one) has a good reputation here, with people leaving them in jeans pockets through a wash or dropping them onto concrete floors without trouble. Others, even from Yubico, vary, and you may be super clumsy or not.
Beyond that there are some technical things you could decide you really care about, hardware bugs, but none of them are exactly show stoppers that I've seen. And there are extra features, that Yubico device I own does FIDO2, which means it could be a true password _replacement_ not only a second factor. But I think that feature has even less chance of taking off than WebAuthn itself, so I didn't rate this in choosing the device.
> The last time I saw 2fa and fido talked about on here, someone recommended a set of 2 keys, but the ones they recommended are now out of stock.
There isn't really any reason to have a backup 2FA key. Just have TOTP set up on each account as a backup, so that way if you lose your 2FA key you can still log in that way. Then just order a new one. But having an extra 2FA key just sitting in your drawer on the off chance you leave your laptop in a taxi or whatever isn't really necessary.
Right now it's only major sites that support U2F anyway, so basically all of them allow you to have TOTP enabled as a backup. If you want you don't even need to enter the TOTP codes in your phone, you can just store the secret keys encrypted somewhere.
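If anyone's curious, "store the secret keys and compute the codes yourself" is only a few lines, since TOTP is just RFC 6238. A minimal sketch (this assumes you already have the raw secret bytes; most sites hand the secret out base32-encoded, so you'd decode that first):

    // Minimal TOTP (RFC 6238): 6 digits, 30-second time step, HMAC-SHA1.
    import { createHmac } from "crypto";

    function totp(secret: Buffer, now = Date.now()): string {
      const counter = Math.floor(now / 1000 / 30);
      const msg = Buffer.alloc(8);
      msg.writeBigUInt64BE(BigInt(counter));
      const hmac = createHmac("sha1", secret).update(msg).digest();
      const offset = hmac[hmac.length - 1] & 0x0f;                     // dynamic truncation (RFC 4226)
      const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 1_000_000;
      return code.toString().padStart(6, "0");
    }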
>Users log in with simple methods such as fingerprint readers, cameras, FIDO security keys, or their personal mobile device.
None of these methods is simple. I don't have a camera or fingerprint reader, idk what a FIDO security key is or how to get one, and a mobile phone can be lost or cease working at any moment, so it's not a reliable method of authentication.
A password sent over an encrypted connection and hashed+salted on the backend? It's an extremely reliable and proven method that has been used for decades!
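(For reference, the "hashed+salted on the backend" part really is a few lines nowadays. A sketch with Node's built-in scrypt; the parameters and storage format here are illustrative, not a recommendation:)

    import { randomBytes, scryptSync, timingSafeEqual } from "crypto";

    function hashPassword(password: string): string {
      const salt = randomBytes(16);
      const hash = scryptSync(password, salt, 64);       // memory-hard KDF with a per-user salt
      return `${salt.toString("hex")}:${hash.toString("hex")}`;
    }

    function verifyPassword(password: string, stored: string): boolean {
      const [saltHex, hashHex] = stored.split(":");
      const candidate = scryptSync(password, Buffer.from(saltHex, "hex"), 64);
      return timingSafeEqual(candidate, Buffer.from(hashHex, "hex"));
    }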
How easy will it be to implement? We should keep in mind the most dangerous guys out there store passwords in clear text in databases and other amateurish rookie mistakes. Having easy to use / impossible to f__k up libraries for every major platform is going to be critical.
There are two methods, IIRC. `get` and `create`. Everything is done with Challenge/Response with the browser handling the Private Stuff. It's hard to mess up, at a glance.
You ask the browser to create an asymmetric key pair. It returns the public key, which the server saves. On login, you provide a challenge to the browser to sign using the private key from earlier. It returns the signed message and the server verifies the signature.
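A minimal browser-side sketch of those two calls. The challenge and user fields would really come from your server; the RP name, domain, and user values below are placeholders:

    async function demo() {
      // Registration ("create"): the authenticator makes a key pair; the server
      // stores the returned public key and credential ID.
      const credential = (await navigator.credentials.create({
        publicKey: {
          challenge: crypto.getRandomValues(new Uint8Array(32)), // really: a server-issued challenge
          rp: { name: "Example", id: "example.com" },
          user: {
            id: new TextEncoder().encode("user-1234"),
            name: "alice@example.com",
            displayName: "Alice",
          },
          pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // -7 = ES256
        },
      })) as PublicKeyCredential;

      // Login ("get"): the authenticator signs the server's challenge with the
      // private key it holds; the server verifies with the stored public key.
      const assertion = (await navigator.credentials.get({
        publicKey: {
          challenge: crypto.getRandomValues(new Uint8Array(32)), // really: a fresh server-issued challenge
          allowCredentials: [{ type: "public-key", id: credential.rawId }],
        },
      })) as PublicKeyCredential;

      return { credential, assertion }; // send the relevant fields to the server for verification
    }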
If you mean "what if I only used Security Key A to register, but now I want to sign in with Security Key B?" the answer is that you can't; that's the wrong key. Register all the keys you want to use.
If you meant what if I registered with my Pixel phone and now want to sign in on my Windows PC, that just works fine. The client "state" lives in the Security Key (actually there is no state whatsoever in affordable designs), it's very clever cryptography.
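The "no state whatsoever" trick deserves a sketch because it is clever. This is loosely based on Yubico's published U2F key-wrapping approach and heavily simplified, so treat the details (labels, handle layout) as illustrative: the device keeps one master secret, derives the per-site private key on demand, and the "key handle" the site stores is just the material needed to re-derive it, plus a MAC so the device can recognise its own handles.

    import { createHmac, randomBytes } from "crypto";

    // One secret burned into the authenticator at manufacture; nothing else is ever stored on it.
    const deviceSecret = randomBytes(32);

    function register(rpId: string) {
      const nonce = randomBytes(32);
      // Per-site private key seed, derived rather than stored.
      const seed = createHmac("sha256", deviceSecret).update("key:" + rpId).update(nonce).digest();
      // MAC lets the device later confirm this handle is its own and bound to this rpId.
      const mac = createHmac("sha256", deviceSecret).update("handle:" + rpId).update(nonce).digest();
      const keyHandle = Buffer.concat([nonce, mac]); // stored by the relying party, not the device
      return { keyHandle, seed };
    }

    function authenticate(rpId: string, keyHandle: Buffer): Buffer {
      const nonce = keyHandle.subarray(0, 32);
      const mac = keyHandle.subarray(32);
      const expected = createHmac("sha256", deviceSecret).update("handle:" + rpId).update(nonce).digest();
      if (!mac.equals(expected)) throw new Error("handle not issued by this device for this site");
      // Re-derive exactly the same per-site seed as at registration time.
      return createHmac("sha256", deviceSecret).update("key:" + rpId).update(nonce).digest();
    }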
I was saying for a long time that a new protocol for a biometric-driven login scheme should become the new default. We use biometrics to log into our phone; then a password manager uses the same biometric on the same device to log me into a website by auto-populating the username and password for me. Afterwards I'll get a 2FA confirmation on the same device, which again I'll have to confirm via the same biometric. Instead of having so many moving parts, which all boil down to authenticating via a single vector (my fingerprint or eye) on a single device, we might as well have a new auth scheme and do away with insecure passwords and expensive password managers, replacing them all with a new biometric-driven login scheme.
Yes, there are still some issues that biometrics don't solve, but they should not be a concern to most websites. If everything authenticates me via my AppleID (which uses FaceID or Fingerprint) then I only need to remember one password for Apple - which is just the same as remembering one password for a third party password manager - except it's overall much safer and better for me as a user as I don't have to upload all my online identities to yet another third party that I don't know anything about (= password managers).
Biometrics are just fine as a username or one factor of a MFA, but they are terrible for usage as a password due to the simple fact that if they are ever compromised, they cannot be changed.
The truth is though that everyone is using biometrics to log into their device which controls everything from emails, to password managers and 2FA codes. Does it mean if your fingerprint gets compromised that you'll be unable to use the biometric feature of any device for the rest of your life?
It's a good point which you raise, but ultimately biometrics will be the best way to authenticate someone. They might have to evolve and get smarter and better, but if one day someone is able to reproduce all the unique attributes of who you are, then nothing will likely stop them from resetting your password manager, email, and whatnot either. They will socially engineer whatever they need, and even when a human verifies that you are you, they will probably be able to provide enough believable evidence. At that point it doesn't matter anymore whether they hacked a biometric login or socially engineered your password manager.
> The truth is though that everyone is using biometrics to log into their device which controls everything from emails, to password managers and 2FA codes. Does it mean if your fingerprint gets compromised that you'll be unable to use the biometric feature of any device for the rest of your life?
No, because the device is only using that to protect local storage and anything which leaves the device is using strong keys which can be rotated. If they don't have the device, the fingerprint doesn't matter. If they do have the device (and are within the timeout period, etc.), it's like any other credential compromise: you get a replacement, rotate passwords, etc. but the replay value is sharply capped because at no point is a network service depending on the component which can't be changed.
(If you have an attacker who gets a scan of your fingerprint/face and keeps stealing phones you need a restraining order; that's reasonably outside of the threat model for consumer devices)
This is also important since there's a subset of users who won't be able to use biometrics for some reason and the decoupled approach avoids making it impossible for them to use.
> Does it mean if your fingerprint gets compromised
Technically, your fingerprint is probably already compromised, just nobody's bothered to put the pieces together yet because you're not a high-enough value target.
Check out some of the CCC conference videos on YouTube, where they show how easy it is to reproduce someone's fingerprints to fool most biometrics.
However, once it becomes possible to do this at a low enough price point, that's when it realistically becomes a problem for the majority.
> The truth is though that everyone is using biometrics to log into their device
Not to be pedantic, but not _everyone_ uses biometrics to log into their device, whether due to lack of hardware or due to lack of trust in said hardware.
>Does it mean if your fingerprint gets compromised that you'll be unable to use the biometric feature of any device for the rest of your life?
No, it just means that it shouldn't be treated as a password in a username+password setup. It's still perfectly usable for a MFA setup.
>if someone is able to reproduce all your unique attributes of who you are then nothing will probably hold them back to reset your password manager, email and what not either
This is exactly why everyone really ought to be using MFA - biometrics are a good identifier and are strongest in conjunction with a knowledge-based or physical-item-based factor. These too can be defeated, but having to nick a physical object, trick the user into revealing a password or similar knowledge-based key, and reproduce a fingerprint/facial/retinal/whatever scan is much more time-consuming.
You acknowledge that biometrics have some issues they don't solve. Being easy to steal is one of them. The problem is that you leave your fingerprint all over the place, including all over your phone, and there are likely multiple pictures of you publicly available that can be used to construct a model to fool Face ID, etc. Most biometrics only provide really minimal security, and the ones that provide anything more don't provide much and are inconvenient.
I use my fingerprint to prevent people casually browsing my phone if I leave it on the table while I pee, but I wouldn't rely on it for more than that, and neither should other people.
You need something else (a key, password or something) to secure most things as well as just your fingerprint.
You miss a crucial point though: if you fake my fingerprint you still need my personal device to authenticate with it. You can't just use a copy of my fingerprint and set up a new iPhone with it without confirming on at least one other previously confirmed device or with a second factor. So when you need to fake my biometric AND get hold of my personal device, you have to solve the exact same problem as if I was using a password + password manager.
WebAuthn doesn't preclude the use of biometrics locally. Whether you securely store and use a private key in a discrete hardware key like a U2F token, or in a computing device's TPM chip secured (locally!) by a biometric access check; it boils down to the same mechanism WebAuthn describes.
WebAuthn rightly does not push biometrics beyond what you can do with them on a local device. It would be a privacy nightmare!
Biometry for service login is about the worst idea ever. The problem of biometric attributes not being secrets has already been mentioned by others, but what is at least as important is that I want to be able to use computers, and computers don't have biometric attributes. I want to be able to task my computer with watching my bank accounts, for example, and for that my computer needs to be able to log into my bank account. Using biometric authentication essentially means that corporations get a monopoly on using computers to scale the work they are able to accomplish, while they force me as an individual to do everything myself, or at best to have another corporation run a computer on my behalf.
I moved from primarily using a MacBook Pro to an iMac Pro a few months ago, and have struggled to find a non-awkward FIDO U2F key due to the ports being on the back. I'm really looking forward to a decent range of BLE U2F keys that are supported on Desktop and Mobile.
tl;dr: Every downside to 2fa is out of scope, so this doesn't solve them, and doesn't require sites solve them.
It then suggests using this as both factors.
Most of all: reliability.
All "something you have" factors share this one key issue.
Backup codes are not a solution; I'm not going to have those on me when I'm at a friend's house and get an alert that the server is dead but I left my token at home.
Customer service is not a solution. It's hard enough getting me to change my address in the millions of places that have it; now I have to call up to change my token, because I lost it and have no idea where the fuck I put the backup codes? Across the millions of websites I have an account on? Where each provides its own backup codes?
Backup tokens are barely a solution, in that they only work once: lose your backup token and you are back to the above. At the least you now have to buy another one to become the new backup and go and load it onto all of your sites.
I can't lose, break, forget at home, or otherwise invalidate a password. I can forget it outright, something we know a lot about and have workflows set up to deal with, some better than others, but I can't just one day lose it and get locked out of everything; I would have to forget all of my passwords simultaneously to do that.
What 2FA needs, if the people who care about it want to see adoption: cloneable tokens. I shouldn't need to re-set up my token across every site when I lose it. Go on about security all you want; as long as this is a barrier to entry it will stay a barrier.
Also, with fancy crypto, it would be piss easy to make a token key base where each token has its own key and that key can be revoked, but in a way where all tokens work out of the box once you add one to a site.
SQRL is a half measure; like SMS OTP it barely raises the bar, because it doesn't solve a key problem we actually see happening in the wild, and so that would just keep happening.
If site A is protected by SQRL, and I'm a bad guy, I can just live phish sign-ins for site A using SQRL from my phishing site, site B. The users all believe (as with other phishing attacks) that they're being asked for credentials by a legitimate site and so they provide them with SQRL, and I'm in.
This (very common and fully automatable) trick doesn't work on WebAuthn, which completely defeats phishing. This is because the fundamental idea in phishing is "humans are idiots; fool the human into mistaking site A for site B". In WebAuthn the credentials are mechanically derived from the site you're on, so for site A they will always be site A credentials, and for site B, site B credentials. Convincing page design, an urgent email "from the boss", clever use of IDNs to fake the URL - those fool the human but not the machine, and the human is taken out of the "what site is this?" decision by WebAuthn.
But the human is left _in_ the loop in another way that leverages our strengths. WebAuthn requires a physical interaction, typically a button press by the human. So a hypothetical attack that takes say, 50 million authentications, cannot work because the human will not press the button 50 million times while you do the attack. They'll get sick of it and go on Twitter to moan instead.
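Concretely, both of those properties show up in the data the server checks: the authenticator data that gets signed starts with a hash of the RP ID the browser enforced, and carries a "user present" flag that is only set after the physical touch. A hedged sketch of that server-side check (function name and errors are illustrative):

    import { createHash } from "crypto";

    function checkAuthenticatorData(authenticatorData: Buffer, expectedRpId: string): void {
      const rpIdHash = authenticatorData.subarray(0, 32); // bytes 0-31: SHA-256 of the RP ID
      const flags = authenticatorData[32];                // byte 32: flags (bit 0 = user present)
      const expected = createHash("sha256").update(expectedRpId).digest();
      if (!rpIdHash.equals(expected)) {
        throw new Error("assertion was produced for a different site");
      }
      if ((flags & 0x01) === 0) {
        throw new Error("no user presence: nobody touched the token");
      }
    }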
Nope. SQRL assumes that phishing simply won't work. Users are going to be shown example.com and realise that's not right and abort. That's their "protection". But phishing does work, people have tested. Some users notice the domain name is wrong. Some don't. All press on anyway.
Why? Because Humans have a psychological problem that makes them bad at giving up. We need to explicitly train safety critical people to go "Oh, this isn't working. I will now report that I failed" rather than keep trying. In normal people the drive to press on is almost unstoppable.
Hence "brick wall UI" design for things like HSTS. If you give humans two options, destroy everything versus admit defeat, they pick destroy everything, every single time. So we changed the UI to not have options. "Defeat" announces the UI. And, defeated, the human gives up and doesn't destroy everything. Hooray.
SQRL does not give the real identity to a phishing site. Full stop.
Now a phishing site can do a fake SQRL login, accepting any credentials. But the client will send a different identity (a derived keypair) with no information linking to the real identity, so the phisher will have no information with which to personalize their fake site.
The user might press on anyway and divulge some sensitive data if the fake site is really convincing. But the phisher cannot use any credentials passed to them against the “real” site.
WebAuthn does NOT solve this “look-like site exists and accepts any credentials” problem either, nor does any other authentication mechanism.
The only protection against what you describe is preventing registration of look-alike domain names entirely, and ensuring all DNS is secured with DNSSEC or TLS. Good luck with that; it's been tried.
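For anyone unfamiliar with how SQRL gets the "different identity per site" property mentioned above: roughly, one master key is HMAC'd with the site's domain, and that output seeds a site-specific signing keypair. A simplified sketch (uses tweetnacl for Ed25519; SQRL's actual derivation has more moving parts, so treat this as illustrative):

    import { createHmac, randomBytes } from "crypto";
    import nacl from "tweetnacl";

    const identityMasterKey = randomBytes(32); // SQRL derives this from the user's master password / rescue code

    function siteKeyPair(domain: string) {
      const seed = createHmac("sha256", identityMasterKey).update(domain).digest();
      return nacl.sign.keyPair.fromSeed(seed); // the public key is the identity this site, and only this site, sees
    }

    // "bank.example.com" and "bank-example.phish.tld" get unrelated keypairs,
    // so a phishing site learns nothing it can replay at the real one.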
I wish a larger conglomerate would steal the idea and implement it. I don't like having to carry some physical hardware to log in to some website. And the stateless nature of SQRL makes it quite easy to sync logins across multiple devices without having to rely on (or trust) a third party.
> I'm referring to passwordless solutions that involves things like QR codes, pictorial representations, 3rd party mobile apps, dedicated hardware devices or "magic" links sent via email.
Which of these is AuthN, in your view?
Edit:
Troy Hunt also writes ...
> WebAuthn has the potential to be awesome, not least of which because it's a W3C initiative and not a vendor pushing their cyber thing. But it's also extremely early days and even then, as with [insert things here], it will lead to a change in process that brings with it friction. The difference though - the great hope - is that it might redefine authentication to online services in an open, standardised way and ultimately achieve broad adoption. But that's many years out yet.
Troy specifically addresses WebAuthn in that post but only as a 2fa mechanism (which is an optional way to use it). He doesn't address it as a standard to replace passwords.
His premise of the post is that passwordless mechanisms are non-standard and difficult to use. WebAuthn can be used easily and implemented by anyone as it is now an open standard.
Troy's article is great, as always, and I'm not invalidating anything he said. But this particular post of his is intentionally a more short-term look at proprietary solutions, not a longer-term view of evolving standards.
Aside from the awful UX of client certs, which we could imagine being fixed, FIDO tokens are very narrowly conceived to solve the exact second factor problem - and nothing else.
If you do client certs you've got this whole identity thing baked into the certs. But the FIDO token doesn't have an identity, it only knows how to prove it's still the same FIDO token you had before. So that's immediately much better.
If I use a client cert to sign into GitHub and Facebook, it's a matter of moments for that to be correlated. If I share the client cert with my sister or a colleague, again easily correlated.
But with WebAuthn there's nothing to correlate. The only way to check that Bill and Suzy are using the same FIDO token is to wait until say Bill tries to log in, and ask his token to prove that it's still Suzy's token. This requires a physical interaction (e.g. button press) from Bill. If the guess was wrong you learn nothing but Bill notices it didn't work. So, maybe, if you're nearly certain but just want to be 100% that could work, but ordinarily it's not viable at all.
Also with a decent WebAuthn implementation (e.g., FIDO U2F hardware tokens, or using a TPM), the private key material cannot be copied by a compromised device, or even by someone with direct access to the hardware (unless they actually disassemble the hardware with quite professional hardware).
Client-side certificates are a great technology, but you can copy the certificates without the owner knowing it. Getting the password is just a matter of social engineering or (further) compromising the device. It beats plain username plus password though!
Well, that has been solved by smartcards for 2 decades already... More importantly, PKCS11 layer is completely independent of the browser internals, and works with just anything relying on common pk crypto: http, imap, smtp, sip...
Smartcards are not flawless; they never just work, there is always something that fails. I'm not saying that USB is flawless, but there is absolutely room for improvement here.
My experience on windows was just plug and play. I plug my bank's usb smartcard, wait for drivers to install, and just open its online banking page in Edge.
Nothing prevents delaying auth with a client cert until the user enters a login.
That's what a great lot of people don't understand about TLS. Your cert ID doesn't have to amount to a user ID.
You can let your user enter login as usual in a web form, or use a stored cookie for that, and only then look up if client cert matches the user record.
Moreover, for as long as you can be confident about cookie security, you can forego authenticating every connection in favour of using a session cookie once you did a smartcard auth.
One last tip: ensure keepalives are handled properly, so you don't have to reauth TCP connections over and over.
That's the not so secret sauce to fast client side certificate auth.
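A hedged sketch of the "request a cert but decide at the application layer" part in Node (the filenames and the user lookup are placeholders): you accept any certificate at the TLS layer, then after the user submits the login form you check that the presented cert matches that user's record and hand out a session cookie.

    import { createServer } from "https";
    import { readFileSync } from "fs";
    import type { TLSSocket } from "tls";

    const server = createServer(
      {
        key: readFileSync("server-key.pem"),
        cert: readFileSync("server-cert.pem"),
        ca: readFileSync("client-ca.pem"),
        requestCert: true,          // ask the client for a certificate...
        rejectUnauthorized: false,  // ...but don't fail the handshake; decide per-request instead
      },
      (req, res) => {
        const peer = (req.socket as TLSSocket).getPeerCertificate();
        // After the login form is posted, look up whether this cert's fingerprint
        // is the one on file for that username, then issue a session cookie so
        // later requests don't need to repeat the check.
        res.end(peer && peer.fingerprint256 ? `cert ${peer.fingerprint256}` : "no client cert presented");
      }
    );
    server.listen(8443);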