
The similarities with VPA/Alero end at the concept of QR-based login. It is a system for provisioning enterprise vendors and requires a substantial onboarding process. It is not an SDK for integration into consumer-facing / third-party applications.

Can't speak for how it works on the back end (though it clearly works differently from Keyri, given that VPA's QR code contains much more data and is therefore slower / less reliable to scan). In terms of security on the front end, VPA is phishable, as explained in earlier comment threads.


We do not use Hyperledger Aries, but thanks for showing us. I have a blockchain background, and Keyri is somewhat inspired by blockchain concepts, but we've stayed away from blockchain-based solutions for privacy reasons. The pseudonymity of traceable blockchain transactions (in the auth scenario, authentication request transmissions) does not provide adequate privacy. Apologies if I'm misinterpreting Aries - perhaps its ledger is not publicly viewable. I have other objections to blockchain-based identity solutions, but privacy is the main one.

Then there are other passwordless auth solutions employing "private blockchains". That term basically means "database" in my mind and is obviously not ideal from a privacy perspective.


Thank you for taking the time to respond, and congratulations on the launch!

Aries leverages Decentralized Identifiers [1] with Verifiable Credentials [2].

the "ledger" is where the public keys are stored. eg it could be a permission-ed ledger, similar to SSL certs only known/<want to be known> parties would publish their keys to the chain.

Example: Sovrin network [3]

Or it could be permissionless, maybe focused more towards IoT/whatever.

Example: ION Network [4] coming out of Microsoft.

The ledger is not a requirement to establish an identity, as shown with the did:peer [5] method.
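
For anyone unfamiliar: resolving a DID (whether against a ledger or peer-to-peer) yields a DID document listing the public keys. In shape it is roughly the following - an abbreviated sketch in the style of the did-core examples, not any particular network's format:

    // Abbreviated DID document, modeled on the did-core spec's illustrative examples.
    // The DID value and key material below are placeholders, not real identifiers.
    const didDocument = {
      "@context": [
        "https://www.w3.org/ns/did/v1",
        "https://w3id.org/security/suites/ed25519-2020/v1",
      ],
      id: "did:example:123456789abcdefghi",
      verificationMethod: [
        {
          id: "did:example:123456789abcdefghi#key-1",
          type: "Ed25519VerificationKey2020",
          controller: "did:example:123456789abcdefghi",
          publicKeyMultibase: "z6Mkf5rGMoatrSj1f4CyvuHBeXJELe9RPdzo2PKGNCKVtZxP", // placeholder key
        },
      ],
      // Which verification method(s) may be used to authenticate as this DID.
      authentication: ["did:example:123456789abcdefghi#key-1"],
    };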

Frankly, I think the usage of blockchain was more to get on the marketing bandwagon at the time. Messaging is now moving from "blockchain" to "distributed ledger".

1 - https://www.w3.org/TR/did-core/

2 - https://www.w3.org/TR/vc-data-model/

3 - https://sovrin.org/

4 - https://identity.foundation/ion/

5 - https://identity.foundation/peer-did-method-spec/index.html


Understood, thanks. I think the concept is excellent - truly a digital ID card that you can present with a simple cryptographic token, thus a real "proof of identity". Keyri is "proof of ownership of a trusted device", which, while being a narrower concept, we believe is more palatable from a go-to-market perspective, since companies prefer to proof identities in their own proprietary way.


I agree, the worst part of the Aries project is the extreme egos associated with it - but I do really like the ideals.

Another example of overreach: Trust over IP [1]

Again, congratulations on the launch. I am not advocating Aries, but I am very interested - especially in the intersection of actual users (what you are doing) and pie-in-the-sky ideals.

I wish you both the best :)

1 - https://trustoverip.org/


Appreciate the detail. It's quite helpful. Thanks very much!


Login on desktop happens through scanning a QR code on the service's login page using the service's app. On a mobile device, logging in happens by tapping a button and being verified by biometrics (FaceID etc.) or a passcode (if enabled by the developer).
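
To make the desktop flow concrete, here's a minimal sketch of what a QR login handshake along these lines could look like in TypeScript - the endpoints, field names, and polling approach are illustrative assumptions, not Keyri's actual API:

    // Desktop login page: ask the backend for a short-lived session, render it as a QR
    // code, then wait for the phone to scan and approve it. "api.example.com" is a placeholder.
    function renderQrCode(data: string): void {
      console.log("render QR encoding:", data); // stand-in for a real QR rendering library
    }

    async function startQrLogin(): Promise<string> {
      const createRes = await fetch("https://api.example.com/qr-sessions", { method: "POST" });
      const { sessionId } = await createRes.json();
      renderQrCode(sessionId);

      // Poll until the phone has scanned the code and approved the session (biometrics/passcode).
      while (true) {
        const statusRes = await fetch(`https://api.example.com/qr-sessions/${sessionId}`);
        const { status, authToken } = await statusRes.json();
        if (status === "approved") return authToken; // hand off to the normal web session
        await new Promise((resolve) => setTimeout(resolve, 1000));
      }
    }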

TOTP is an objectively worse UX - first you type in your username, then password, then open your phone, open the relevant app, read the code, and type in the code before it expires. With Keyri, you open the relevant app, tap a "scan" UI element, and point it at your screen. No typing, memorization, or race against the clock. Also, with TOTP, you're pulling out your phone and navigating to a specific app anyway, so I don't understand your UX objection. I'm also struggling to picture a situation in which a laptop or other device has connectivity but a phone does not. Presumably the laptop is on a WiFi network that the phone can also connect to. If the laptop is using some sort of satellite connection module, that module and/or laptop can fire up a hotspot. This connectivity problem would also arise in the push notification solution you propose in the next sentence.

Push notification solutions ("prompts") are defeatable using trivial man-in-the-middle phishing techniques. For example: https://github.com/kgretzky/evilginx2. Authenticator-initiated authentication solutions with two-way authentication like Keyri eliminate phishing.


Two options, each configurable by the developer implementing the SDK:

(1) When a user sets up their new phone using an iCloud / Google Drive backup of their old phone, the private keys will already be embedded in the relevant apps when they first open the app on the new phone. The developer can ask the user to decrypt the private key for the first session with a user-defined passcode.

(2) The SDK provides a QR backup system - users can export their private key in a QR code, print it out or save it on a USB drive, and then scan that code using their new phone. Alternatively, they can just open that QR backup screen on their old phone and scan that with the same app on their new phone. Google Authenticator recently released a key export feature like this (we had it before Google, but it's inspired by blockchain.com's wallet backup system from 2012).
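
As an illustration of option (2), a QR key export could look roughly like the sketch below: the private key is wrapped with a passcode-derived key before being encoded, so the printed QR alone is useless. The scheme, iteration count, and field names are assumptions for illustration, not Keyri's actual implementation:

    // Sketch: wrap a device private key with a passcode-derived AES key and produce
    // the JSON payload that a QR library would render for printing or saving.
    async function exportKeyAsQrPayload(privateKey: Uint8Array, passcode: string): Promise<string> {
      const salt = crypto.getRandomValues(new Uint8Array(16));
      const iv = crypto.getRandomValues(new Uint8Array(12));
      const baseKey = await crypto.subtle.importKey(
        "raw", new TextEncoder().encode(passcode), "PBKDF2", false, ["deriveKey"]);
      const aesKey = await crypto.subtle.deriveKey(
        { name: "PBKDF2", salt, iterations: 310000, hash: "SHA-256" },
        baseKey, { name: "AES-GCM", length: 256 }, false, ["encrypt"]);
      const wrapped = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, aesKey, privateKey);
      const b64 = (bytes: Uint8Array) => btoa(String.fromCharCode(...bytes));
      return JSON.stringify({ salt: b64(salt), iv: b64(iv), key: b64(new Uint8Array(wrapped)) });
    }

Importing on the new phone would simply reverse the steps: derive the same key from the passcode and decrypt the wrapped private key.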


So, if a user changes from Android to iOS or vice versa, there's no (automated) path for continued service?


Correct, that's currently the case. Users can use QR code backup/restore functionality if enabled by the developer to switch between iOS and Android. That would have to be done app-by-app. We're working on our own cloud backup system to automate this.

I think such transitions between smartphone OSs already entail significant credential transfer issues, since saved passwords also do not automatically move between OSs.

You'd have similar problems if you used "Sign in with Apple" for an app on an iOS device and then switched to Android.


Sent you an email. Anyone else - please let me know if you're having trouble with the demo here or email me - zain@keyri.co


That's a similar concept, though despite their claims of unphishability, it is phishable: the QR code is portable, and it is the only item that the authenticating device reads. The contents of the QR code simply don't matter as long as both of those things hold (and the code will always be portable). Keyri reads stuff other than the QR code.
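
To spell out the attack: because the code is portable, a phishing page can simply relay a code generated for the attacker's own pending session. The endpoint and field names below are made up purely for illustration:

    // Sketch of a generic QR-relay phishing attack against portable-QR login (no specific product's API).
    async function phishingPageHandler(): Promise<string> {
      // Attacker fetches a fresh login QR payload for *their own* pending session on the real site...
      const res = await fetch("https://real-service.example/api/login-qr");
      const { qrPayload } = await res.json();
      // ...and serves that exact payload on the phishing page. If the QR is the only thing the
      // phone reads, the victim's scan approves the attacker's session as the victim.
      return qrPayload;
    }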


It's unphishable because the keys used for each domain are different.


Regarding your first point: phones are already intrinsic to authentication, whether it's through SMS OTP, TOTP, or push notification verification. Wherever you have 2FA enabled (other than email magic link), you are generally SOL if you lose your phone. We are well past the days when people would forget their phone at home, and most people have their phones within reach. That said, our early customers are looking to deploy Keyri as an option parallel to password-based auth, which, while not ideal, is a smooth way to transition their users to a better UX that just happens to be more secure.

Regarding account sharing: agreed that the "robbing" language is harsh and should be toned down. That said, it is a problem that deserves a solution. For example, there are companies like data providers that charge businesses hundreds or thousands per month for access to their platforms, and they face massive account sharing issues from these businesses that can totally afford to pay for all of the seats they need but are not willing to pay because they don't need to - they can just share accounts among their employees. At the same time, I'd argue that any account sharing, even if it's for a $5/month streaming platform account, is unethical and a violation of TOS - companies should have access to tools that definitively prevent these violations. They currently already try to stop account sharing through IP logging, cookie tracking, etc., but those methods are not as reliable as changing the auth mechanism altogether to something like Keyri, in which credentials are not free-floating strings that can be passed from one person to another.

Regarding OpenID: OpenID providers (Google, FB, etc.) don't see your private keys, but by registering and logging in on various services through them, you are giving those platforms yet more data about yourself. That is why these platforms provide OpenID auth services for free. This privacy threat is nebulous, but privacy-conscious people like myself don't use OpenID for this reason.

Edit: an article on OpenID privacy issues from people who know more than me: https://people.inf.ethz.ch/basin/pubs/asiaccs20.pdf. Excerpt: "Unfortunately OpenID Connect is not privacy-friendly: the identity provider learns with each use which relying party the user logs in to. This necessitates a high degree of trust in the identity provider, and is especially problematic when the relying parties’ identity reveals sensitive information."
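
To make that excerpt concrete: in the standard OIDC authorization code flow, the user's browser is redirected to the identity provider with the relying party's identity in the query string, so the IdP necessarily learns which site you're logging in to and when. The values below are illustrative:

    // Sketch of the redirect a relying party sends the user to in the OIDC code flow.
    // Every parameter here is visible to (and typically logged by) the identity provider.
    const authUrl = new URL("https://idp.example.com/authorize");
    authUrl.searchParams.set("response_type", "code");
    authUrl.searchParams.set("client_id", "sensitive-clinic-portal"); // identifies the relying party
    authUrl.searchParams.set("redirect_uri", "https://clinic.example.org/callback");
    authUrl.searchParams.set("scope", "openid profile email");
    authUrl.searchParams.set("state", crypto.randomUUID()); // CSRF protection
    console.log(authUrl.toString()); // the IdP learns: which RP, which user, at what time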


Thanks for the response. The way I use passwords is way safer than Keyri, so not having the option limits those extra security-conscious users (you have certainly heard of hardware OTP devices like Yubikeys). Sure, you are likely right that, on average, a Keyri-like approach is more secure (just like biometrics), and that's definitely where your potential for business lies (with companies looking to increase that average).

As I said in a comment below, the fact that companies "can afford" is not the same as "it's worth it" to them, and per-seat pricing is "robbing" those customers when there is no increased value for the customer or increased cost to the provider: make a product that's valuable enough to be priced per-seat, and customers will pay for it (sure, some who can't afford it won't, but that's not lost revenue anyway)!

Finally, with OpenID, I can set up my own identity provider, or use a privacy-conscious one. Unfortunately, almost no web sites accept pure OpenID (they did for a while ~10 years ago), but instead only a limited set of "large" providers. However, a company can easily decide to support arbitrary OpenID providers instead of just Google SSO or Keyri, and then users can choose how much they care about their privacy and use an appropriate provider.

In short, web sites are not implementing OpenID authentication, but instead somewhat-custom SSO through Google/Facebook that mostly uses the OpenID Connect (OAuth) protocol for authorization (in a way, it could be any other protocol that preserves the security properties of OpenID Connect).


> The way I use passwords is way safer than Keyri

I don't see how that is possible.

(1) Keyri private keys cannot be stolen other than through smartphone malware, which is exceedingly rare, while password managers and older USB keys are vulnerable to desktop malware, which is much more common - both credential stealers and, in the case of older generations of Yubikeys, keyloggers. Hardware OTP devices are additionally vulnerable to man-in-the-middle phishing attacks (though the HN audience is generally savvy enough to not fall for phishing) - https://github.com/kgretzky/evilginx2.

(2) As long as you rely on passwords and TOTP, you're relying on the shared secret paradigm and trusting the relying party to handle your credentials properly. If the relying party's credential store is breached and the credentials were improperly stored (common even today), your credentials (both your password and OTP secrets) can be used by a bad actor to access your account. Public key systems like Keyri and FIDO2 substantially reduce this risk.
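
To illustrate the difference in (2): with a public-key scheme, the relying party stores only a verification key and checks a signature over a fresh challenge, so a breached credential store yields nothing an attacker can replay. A minimal sketch using WebCrypto (generic ECDSA challenge-response, not Keyri's exact protocol):

    // Sketch: per-device keypair challenge-response (ECDSA P-256 via WebCrypto).
    async function demo(): Promise<void> {
      const keyPair = (await crypto.subtle.generateKey(
        { name: "ECDSA", namedCurve: "P-256" }, false, ["sign", "verify"])) as CryptoKeyPair;

      // Relying party: stores only keyPair.publicKey; issues a fresh random challenge per login.
      const challenge = crypto.getRandomValues(new Uint8Array(32));

      // Device: proves possession of the private key by signing the challenge.
      const signature = await crypto.subtle.sign(
        { name: "ECDSA", hash: "SHA-256" }, keyPair.privateKey, challenge);

      // Relying party: verifies the signature. A leaked database of public keys lets an
      // attacker verify signatures, not produce them, so there is nothing to replay.
      const ok = await crypto.subtle.verify(
        { name: "ECDSA", hash: "SHA-256" }, keyPair.publicKey, signature, challenge);
      console.log("authenticated:", ok);
    }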

> As I said in a comment below, the fact that companies "can afford" is not the same as "it's worth it" to them

Please see my response below regarding account sharing. In short, eliminating account sharing in order to enforce TOS is an opportunity to (a) improve security (b) improve UX in cases where provisioning multiple users access to one account is warranted.

> Finally, with OpenID, I can set up my own identity provider, or use a privacy-conscious one.

As you note, the vast majority of web services don't support arbitrary identity providers or use privacy-conscious ones. History has proven that people don't set up their own identity providers. Additionally, the universe of "privacy-conscious" OIDC providers is limited (non-existent?).


> ... per-seat pricing is "robbing" those customers when there is no increased value for the customer or increased cost to the provider

A good example of a company doing that is Zendesk: as an engineer, I want to make a comment on a support ticket once every 3-6 months, but Zendesk would require my company to pay for another user license to do that. That's not value provided nor is there a cost for them in having another non-read-only account. They are attempting to rob their customers instead.


Eliminating account sharing does not preclude offering the ability to share seats. Zendesk could very well offer their customers a way to provision users like you with a limited account, or some other mechanism that allows commenting on a support ticket every now and then. For example, Netflix offers a mechanism to formally invite members of your household to your account for free, which is the scope of "account sharing" that they allow in their TOS.

Either way, it's in Zendesk's and Netflix's best interest to make sure that a given account is used only by the person they were told would use it when the account was purchased, both from a business perspective and a security perspective. How they can address the needs of their customers while enforcing their stated TOS with a mechanism like Keyri is up to them.


> That said, our early customers are looking to deploy Keyri as an option parallel to password-based auth, which, while not ideal, is a smooth way to transition their users to a better UX that just happens to be more secure.

While you should certainly hope that this is just a "transition step", I am sure you are treating it as a business risk as well: companies will frequently have people who are downgrading their security and convenience by moving to Keyri (e.g. people like me :), and they will always push back. After a while, those companies might decide that it's not worth it to keep both options available, and Keyri will be the one to go (some will, of course, decide otherwise). I am sure you will be tracking this, but it's worth pointing out that this risk is there and not insignificant :)


Account sharing is never unethical. It's my account and I'll damn well give access to whoever I please. Services such as Netflix mitigate this by limiting the amount of concurrent use, which is fine.


That's a fair point for services that limit use based on number of concurrent devices/streams. There are, however, many services that explicitly state the account may only be used by a given single user and those providers will seek legal action if that user shares their account. This ultimately leads to significant pain for both the company and user.


Going from "there are...many services" to "sharing accounts is robbing companies" is what I am having a problem with. That per-seat pricing "because companies can afford it" is robbing customers instead!

Hey, you are selling on the internet; do not make up "costs" just to increase your revenue. I am fine with restricting simultaneous usage where there is an actual cost to it (e.g. streaming), as long as that's clearly indicated (and as people have multiple devices these days, it should never be limited to one-at-a-time).


The contention regarding account sharing is that it is "robbing companies of revenue"; it is not about additional costs imposed on companies by account sharing. A non-negligible number of people engaged in account sharing are enjoying real value from the service(s) they are not paying for and would pay for if they could not share accounts. Hence account sharing causes a loss of potential revenue. As stated in another reply, if a company sees value in allowing customers to share accounts, they can build provisioning mechanisms that align with their TOS, as Netflix has done.


It's a fair point for any service. If the service doesn't actually provide a service they can control concurrent access to, then it doesn't cost them anything. So you're literally just trying to scam your customers. Gross.


> Wherever you have 2FA enabled (other than email magic link), you are generally SOL if you lose your phone.

I realize this is not exactly widespread (on either the user or the provider side), but as we are on HN: luckily, security keys exist and are cheap enough to have backups. I hate having to use my phone for 2FA (but also realize that I’m in a tiny minority there).


Fair point. As you implied, security key adoption, particularly for the consumer-facing web, is very low, as is support for more secure security keys (FIDO2) by consumer-facing web services. We're trying to bring that level of security to mass audiences through a simple UX that a minority audience (that dislikes relying on phones for authentication) may dislike. That said, we think our phone-based auth security and UX are better than those of SMS OTP, TOTP, and push notification verification, so hopefully we can convince that audience over time.


Ah yes, thanks for pointing that out. The iOS app store listing is ancient and terrible. The Google Play store listing is a bit better, but still from a time when I was doing all the design work :)


Keyri makes less sense for smartphone-only applications. The primary case we solve for is applications that have both mobile and desktop web interfaces. Phones are already essentially considered trusted devices, whether auth there happened via password, SMS OTP, OpenID, FIDO2, etc. Keyri bridges the "trust gap" between the trusted smartphone in the user's hand and the untrusted desktop computer / smart TV / whatever other screen they're sitting in front of via our QR + CV + HTTPS system.

Deploying WebAuthn / FIDO2 on desktop web/native apps is far more challenging given the standard's need for communication between the authenticator device and client device to happen via USB, Bluetooth, or NFC. USB is obviously out of the question for consumer-facing apps, and companies simply can't ask typical users to connect their devices via Bluetooth - setup and reliability are the UX issues with BT.


For the desktop scenario, the reason that "trust gap" exists is that the chain of custody is too murky, and if Keyri solves that, I didn't see how. Of course, maybe that's your secret sauce.

Specifically, how does the phone know which web page its owner is looking at on their laptop when they scanned the QR code? You need to arrange that it's not possible to take the QR code generated for you and present it to a sucker for them to scan instead, so that you're signed in as them. And like I said, if Keyri does that, then I don't see how.


> Specifically, how does the phone know which web page its owner is looking at on their laptop when they scanned the QR code? You need to arrange that it's not possible to take the QR code generated for you and present it to a sucker for them to scan instead so that you're signed in as them...

Isn't this a problem then with WhatsApp Web login too, which shows a QR code (presumably not tied to any one account, as the web client seemingly generates it without any user input) that the app then scans to initiate auth?

I was also wondering if it's a severe vulnerability, given that the phone (roaming authenticator) continues to be in the possession of the victim, and they retain the ability to revoke other keys / tokens (which could additionally be authz-restricted) shared with untrusted devices (client/platform), much like how one would revoke leaked API keys?

Btw, am I correct in guessing that FIDO2 solves this "trust gap" problem with CTAP2 relying on BLE, USB, NFC to prove user-presence? Thx.


> am I correct in guessing that FIDO2 solves this "trust gap" problem with CTAP2 relying on BLE, USB, NFC to prove user-presence?

FIDO2 isn't interested in this problem at all. From the point of view of FIDO2 the rpId ("Relying Party ID" the thing that distinguishes Apple from Facebook) is just arbitrary data selected by the application.

WebAuthn solves the problem by trusting the web browser to know which web site you're visiting. Specifically, the relying party (a web site you're trying to sign into) gets to pick a DNS name you'll authenticate against but the browser matches this DNS name against the HTTPS URL you're looking at, and rejects requests that don't match. The rpId is based on this DNS name, so a phishing site can't work.

e.g. You may think this page is from your bank, but your browser knows it's https://fake-bank.example/ and won't give it WebAuthn credentials for real-bank.example even though you firmly believe that's where you are.
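
A rough illustration of that check from the page's side (the credential ID and challenge bytes are placeholders supplied by the relying party):

    // Sketch: requesting a WebAuthn assertion. The browser enforces that rpId
    // ("real-bank.example") is a registrable suffix of the calling page's origin; a page
    // served from https://fake-bank.example/ gets an error instead of a signature.
    async function signIn(challenge: Uint8Array, credentialId: Uint8Array) {
      const assertion = await navigator.credentials.get({
        publicKey: {
          challenge,                                   // random bytes from the relying party
          rpId: "real-bank.example",                   // DNS name the credential is scoped to
          allowCredentials: [{ type: "public-key", id: credentialId }],
          userVerification: "preferred",
        },
      });
      return assertion; // sent back to the relying party, which verifies the signature
    }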


> Isn't this a problem then with WhatsApp Web login too

Yes, it's an issue with WhatsApp QR login and every other QR login implementation.

> I was also wondering if it's a severe vulnerability

It can be severe if the attacker only needs to be authenticated in the victim's account for a short while to do damage. For example, to withdraw cryptocurrency from an exchange account. Or, in the case of WhatsApp, to extract damaging personal info. Persistent access is not a prerequisite for the ability to do harm.

> Am I correct in guessing that FIDO2 solves this "trust gap" problem with CTAP2 by relying on BLE, USB, NFC to prove user-presence

Partially correct - there's two-way communication in FIDO2/CTAP2 in which the roaming authenticator confirms the "identity" of the web session before sending an auth request. The user-presence aspect is incidental to these three transports. We do the same thing, just with CV.


Nail on the head. That's the problem that the secret sauce solves. Without going too much into it (because I don't think it's a very defensible moat at the moment), the phone sees stuff on the screen other than the QR code, which bad actors cannot present to victims.


So are you saying that even if the bad actors found out how the secret sauce works, they would not be able to spoof a legit page? In other words, is the obscurity in the CV part purely for competitive reasons, or does it also serve a security purpose?


Yes, even if bad actors figured out the secret sauce, they couldn't defeat it easily. It's possible, but if they hack you the way they'd have to hack you in order to defeat our CV element, you'd have much bigger problems to worry about. So yes, the lack of disclosure is for competitive reasons, not for security purposes.


You referred to 'QR + CV + HTTPS system'. Is there more to the CV component than taking a photo?


Yes, there's more to it than just reading the QR code. As mentioned above, I don't think it's a very defensible tech differentiator right now (lots of better-funded cybersecurity companies out there), but in summary, the phone sees stuff on the screen other than the QR code.


Thanks very much for the heavily referenced post. As an aside, I built the prototype of this last year in a vacuum without knowing any passwordless solutions other than FIDO2 systems. My cofounder disabused me of the notion that this was a totally novel concept. I'd never heard of gazepass or sawolabs before and now feel even less original :). That said, I think you recognize the differences between our system and those two, so I won't get into those details unless you want me to.

Agreed, device continuity is the #1 challenge for truly passwordless systems. The Google/Apple cloud backup system we're currently on is a compromise to deliver a seamless UX for mass audiences in the majority of device transition cases. As soon as a user sets up their new phone using an iCloud / Google backup of their old phone, they will have Keyri private keys already embedded in their restored apps. Developers, optionally, can require users to input a PIN/passcode in order to restore the keys following a backup restoration.

For the minority of cases in which this cloud-backup-based device transition does not work smoothly, companies can offer customer support lines, which, as mentioned in another reply, will be far less busy than "forgot my password" CS lines, thereby making social engineering easier to detect.

Again, while not ideal, as mentioned in another reply, the current solution is based on Keychain (iOS) / KeyStore (Android), which are rather secure and private, and compromise of those systems entails... a really bad day for the victim given they're associated with saved passwords, emails, text messages, photos, etc.

And yes, we definitely plan to maintain our own cloud backup service. That is really hard to architect in a way that's both secure and frictionless, so we'll be designing that for some time. Evervault and scrt.network are great references - thank you.


Thanks! Btw, is keyri short for key-ring? In my native tongue, Gujarati, it means "Ant".

All the best.


Haha - you guessed it! Draft 0.1 was to name it Keyri and own the .ng domain name to create Keyri.ng. We opted against the Nigerian domain but kept Keyri.

