"If we assume the purpose of a WoT is to unambiguously and unimpeachably map public keys to human beings"
Why would we assume that? I can think of a few other use-cases:
* I want to verify the PGP keys used to sign packages in some GNU/Linux distribution
* I want to verify the keys used by anonymous remailers (or at least a PGP key used to sign a mixmaster / mixminion pub key)
* I want to verify the PGP key used by a business to sign official messages (such businesses do exist, as I found out a few months back)
It is also wrong to think that the purpose of the web-of-trust is to unambiguously or unimpeachably map public keys to anything. The web-of-trust is a heuristic that makes a particular kind of impersonation more difficult, so that PGP is more convenient to use. If you need something unambiguous you need to manually verify keys (which most people do anyway).
I think it is reasonable to observe that the current PGP WoT is designed to try to unambiguously and unimpeachably map public keys to entities (if not necessarily literal human beings). It is ambiguous to me in the original text whether the author was claiming that all WoTs must have that characteristic or if he was just describing the current one.
Personally I think this has been a problem with many security systems so far. You can't be 100% confident about anything, so building a system in which total confidence is a fundamental element is doomed to fail. See also the SSL/TLS cert system, in which it is assumed that you 100% trust every cert vendor in your key store, which was absurd even before we consider how the default key store has inflated over the years. If the WoT is going to work, it has to have some concept of levels of trust. I'm willing to sign my wife's key with the highest authority I can give. I'm willing to sign, with a medium degree of trust, the key of a dev I meet at some meetup who definitely seems to have the same personality and knowledge as the person I know online. I'm willing to sign other people's keys with low degrees of trust.
Sure, dealing with the consequences of partial trust is difficult. But since you can't have full trust, it is less difficult than our current systems' reliance on full trust, inasmuch as a thing that is possible is less difficult than a thing that is not possible.
> I think it is reasonable to observe that the current PGP WoT is designed to try to unambiguously and unimpeachably map public keys to entities (if not necessarily literal human beings).
I don't really think the PGP WoT can "unambiguously and unimpeachably" provide that mapping, or is even intended to. At best, I think that the WoT documents trust relationships between entities with keypairs.
I actually really like this feature of the WoT, because I think it does a good job of simulating the actual trust relationships between people in real life. In a private conversation, I might imagine a friend vouching for another person as trustworthy; a key signature from my friend does something similar, in a secure fashion. This is good because I don't want my web of trust or social network to be able to say "this key is certain to be trustworthy": after all, it can't actually guarantee that. But I don't mind seeing trust opinions from my friends, and their friends' friends.
If anything, I think the trust levels in GPG ("unknown", "marginal", "full") are too unclear, and I know that these labels mean different things to different people. I'd prefer the ability to add a short note to the signature so I could say something like,
"I know this person well and you can be confident that this key is theirs, but I don't think they're careful enough to trust their signatures."
I've never understood why a weighted WoT system has never become popular. I trust some friends implicitly, and I trust some other friends less. I trust friends of friends, but generally less than I trust direct friends. I'm still willing to trust someone whom two friends of friends know, and if you can trace me a dozen links to Kevin Bacon, I'll trust him, too. Sure, there's some hard graph theory and weighting to be done, but I can't imagine those aren't problems that can be solved with modern big-data techniques.
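A minimal sketch of one possible weighting model, where each signature edge carries a weight in (0, 1] and trust along a path multiplies (the names, weights, and propagation rule here are all invented for illustration):

```python
# Hypothetical weighted web of trust: each signature edge carries a weight
# in (0, 1], and the trust placed in a distant key is the best product of
# weights along any path. "Best product" search is just Dijkstra with
# multiplication instead of addition (valid because weights never exceed 1).
from heapq import heappush, heappop

def propagated_trust(edges, source, target):
    """edges: node -> list of (neighbor, weight); returns trust in [0, 1]."""
    best = {source: 1.0}
    queue = [(-1.0, source)]                 # negate scores for a max-heap
    while queue:
        neg_score, node = heappop(queue)
        score = -neg_score
        if node == target:
            return score
        if score < best.get(node, 0.0):
            continue                         # stale queue entry
        for neighbor, weight in edges.get(node, []):
            candidate = score * weight
            if candidate > best.get(neighbor, 0.0):
                best[neighbor] = candidate
                heappush(queue, (-candidate, neighbor))
    return 0.0                               # unreachable: no trust

# Made-up example graph: a direct friend, a meetup acquaintance, and a chain.
edges = {
    "me":         [("wife", 0.99), ("meetup_dev", 0.6)],
    "wife":       [("her_colleague", 0.8)],
    "meetup_dev": [("their_friend", 0.5)],
}
print(propagated_trust(edges, "me", "her_colleague"))  # 0.99 * 0.8 ≈ 0.792
print(propagated_trust(edges, "me", "their_friend"))   # 0.6 * 0.5 = 0.3
```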
If someone sends you an email and you partially trust the key, how does that map to the contents of the email? How does it map to executable code? Source code? Images? Digital signatures?
It's like saying some people's trust is a square circle. It's a grammatically correct sentence, but it doesn't map to any useful meaning.
As two examples: partial trust can mean you trust a key for different purposes, or with different levels of validity.
Different purposes: "Do I trust that this key correctly identifies this person?" is a separate question from "Do I trust this person to do proper verification before signing others' keys?" (i.e., trusted link in the Web of Trust)
Different validity: "I've met this person and verified they own the key", is different from, "They've identified themselves with two forms of government ID", is different from "I've known them all my life".
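For what it's worth, GnuPG already separates these two questions: you assign "ownertrust" to keys, and a key's validity is then computed from the signatures on it. Here's a rough sketch of the default rule as I understand it (one fully trusted signer, or three marginally trusted ones, makes a key valid; the thresholds correspond to gpg.conf's completes-needed and marginals-needed, and details like maximum chain depth are omitted):

```python
# Simplified sketch of GnuPG's default trust computation: a key becomes
# fully valid if signed by at least 1 fully trusted key or at least 3
# marginally trusted keys. (Real GnuPG also enforces a maximum
# certification chain depth and other details omitted here.)
COMPLETES_NEEDED = 1   # gpg.conf: completes-needed
MARGINALS_NEEDED = 3   # gpg.conf: marginals-needed

def key_validity(signers, ownertrust):
    """signers: keys that signed this key.
    ownertrust: key -> 'full' | 'marginal' (your own trust settings)."""
    full = sum(1 for s in signers if ownertrust.get(s) == "full")
    marginal = sum(1 for s in signers if ownertrust.get(s) == "marginal")
    if full >= COMPLETES_NEEDED or marginal >= MARGINALS_NEEDED:
        return "full"
    if full or marginal:
        return "marginal"   # some trusted signatures, but below threshold
    return "unknown"

ownertrust = {"alice": "full", "bob": "marginal", "carol": "marginal"}
print(key_validity({"bob", "carol"}, ownertrust))  # marginal
print(key_validity({"alice"}, ownertrust))         # full
```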
I agree 100% with the idea that we need to build the next generation web-of-trust so that from the user's perspective it looks like a social networking app/game.
Trust grows organically as you interact with others. Our computational model of "trust" needs to work the same way.
Part of the problem with existing web-of-trust usability is that it makes "trust" too explicit and coarse. It would make more sense to tell the user "this message is signed by the same person you've had 70 conversations with before, and who has liked 40 of your photos" or "this message is signed by someone you've never interacted with before, but they have a long history of interacting with your friends X, Y, and Z".
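A toy sketch of how a client might phrase those signals, assuming we log signed interactions per key (the categories, wording, and data structure are all made up):

```python
# Invented example: summarize signed interaction history instead of asking
# the user for an abstract trust level. Categories and wording made up.
from dataclasses import dataclass

@dataclass
class History:
    conversations: int   # past signed conversations with this key
    likes: int           # your photos this key has liked
    mutual_friends: int  # of your friends who have interacted with this key

def describe(h: History) -> str:
    if h.conversations:
        return (f"Signed by the same person you've had {h.conversations} "
                f"conversations with, who has liked {h.likes} of your photos.")
    if h.mutual_friends:
        return (f"Signed by someone you've never interacted with, but who "
                f"has a history of interacting with {h.mutual_friends} of "
                f"your friends.")
    return "Signed by someone with no history with you or your friends."

print(describe(History(conversations=70, likes=40, mutual_friends=0)))
print(describe(History(conversations=0, likes=0, mutual_friends=3)))
```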
I think signing someone's identity + other attributes is a very interesting idea. One potential problem with such a system is the "mixing of the worlds" issue: knowing my email address, you could look up my profile and see a lot of personal information about me (any attribute someone has signed for me).
A possible solution to this could be to use "partial disclosure" of attributes associated with an identity. In an authentication scenario, the server learns the attributes I disclose (or a function thereof), and nothing else. I think this is called a "zero knowledge proof". If I have to prove to an authority that I am over 18, I could reveal only the answer to ((time.now() - me.date_born) > 18 yrs) and not my actual birth date.
This idea was invented and developed by Prof. Stefan Brands, who was at McGill at some point but then started a company around the technology, which Microsoft later bought.
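Strictly as an illustration of the interface, not an actual zero-knowledge construction (a real system needs cryptography such as Brands' selective-disclosure credentials to make the answer verifiable), the holder-side predicate might look like this:

```python
# Illustrative only: the verifier receives one bit (over 18 or not), never
# the birth date itself. A real deployment would attach a cryptographic
# proof that this bit is consistent with a signed credential.
from datetime import date

class AgeCredential:
    def __init__(self, date_born: date):
        self._date_born = date_born          # stays with the holder

    def prove_over_18(self, today: date) -> bool:
        b = self._date_born
        age = today.year - b.year - ((today.month, today.day) < (b.month, b.day))
        return age >= 18                     # disclose the predicate, not b

cred = AgeCredential(date(2000, 5, 1))
print(cred.prove_over_18(date(2013, 1, 1)))  # False: only 12 years old
print(cred.prove_over_18(date(2019, 1, 1)))  # True, birth date undisclosed
```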
I was thinking: if the WoT ever becomes as easy/popular as social networks, it will be inherently untrustworthy. Has the author forgotten how common impersonation is on Facebook? And how many people access social networks from public machines or rootkitted/trojaned devices?
In trying to actually use PGP, I think the first step is well-integrated mobile clients! With regard to PGP, everything on iOS and Android was pretty much crap when I tried a week ago. If I can't use it on my mobile, I might as well not use it at all.
"We also need to take advantage of mobile computing technology. Secure key exchange has to occur through tamperproof channels... The rest of this proposal assumes that we can trust the hardware we own. This is a known-false assumption, and an urgent problem, but solving it is something that will have to be handled via other efforts."
But wait, the smartphone is not a tamperproof channel, and this borks the proposed scheme. Unless you've rooted your mobile, it is subject to remote control by the vendor. The attack would be swapping a public key, so that you're talking to someone other than whom you think you're talking to.
A desktop or mobile, running "Free" software, can be in practice secure enough for reasonable trust, but not a captive phone.
Kudos on the effort to get the WoT going, though – we really need it, as the CA scheme has been a house of cards.
Just a quick note, but rooting your phone does nothing to confirm or guarantee that your device is tamperproof. There are many binary blobs on the SoC (like Qualcomm's) that are present regardless of whether or not you control the root account.
"Unless you've rooted your mobile, it is subject to remote control by the vendor."
... in fact, even if you have rooted your mobile, depending on the baseband processor and how it is implemented, the carrier may still have complete control - as in, DMA control - of your device.
> A desktop or mobile, running "Free" software, can be in practice secure enough for reasonable trust, but not a captive phone.
Yeah, but there's a reason why rms uses a MIPS laptop; it's extremely difficult to find a full machine that can run completely on Free Software, and the non-free parts are often critical (kernel-level drivers and firmware).
This is an excellent suggestion. I imagine a future where when we meet people in real life, rather than ask, "what is your phone number?" we ask, "what is your public key?"
I doubt it. The trend has been towards things that are easier to remember than phone numbers, and public keys are much harder to remember. If public key encryption ever becomes commonplace for personal communication, it will probably be via identity-based encryption (IBE); if a web of trust is involved, it will be between IBE authorities.
We plan to implement a web of trust at the protocol level in Tent (https://tent.io) before 1.0. Tent already allows users to store arbitrary data on their own servers and send it via webhooks to other users.
The main issue with PGP – and really I mean the GPG implementation, which everyone but Symantec uses – is that it's far from user friendly. It's unintuitive even for experts. It's not impossible to use, but it's pretty painful to set up.
That, and it's widely misunderstood, in both its design and some of its functionality.
I'm not a massive fan of the PGP WoT: the way people sign each other's keys and then push the signatures to the keyservers is kind of like publishing your contact list for everyone to see.
And I say this as somebody who uses PGP many times a day.
If you are concerned about leaking your associations via public key signatures, you could use (and request that your contacts use) local/non-exportable signatures. That way you will be able to keep track of your trusted keys locally, but your signatures (and the associated metadata about your contacts) cannot be exported, either directly or to a keyserver. You can do this with `gpg --lsign-key`. Enigmail also exposes this option in their GUI, as a checkbox in the "Sign Key" dialog.
Of course, this reduces the utility of the web of trust, but within the current design of PGP this tradeoff is inevitable.
If you meet up with people specifically to sign their keys, all one can infer from a signature is that you have met this person – not quite a contact list, IMHO.