Watching the recorded feed from this summit is kind of nuts. You have Vint Cerf (co-inventor of TCP/IP) providing Q&A about a distributed digital archive of the internet and the first guy to ask a question is Carl Hewitt (developer of the Actor Model) and then the guy looking over his shoulder at Carl is Tim Berners-Lee (some web dude).
Yes, but apps? Apps are just a relic of the mobile 1.0 era, which should be dissolved by web 3.0. Yes, the monopolies built their devices and used them to create more monopolies. They should eventually be replaced by models of hardware/software symbiosis that get more benefit from collaborating (blockchain backends). It's very good to see a lot of blockchain-related projects at the conference; the path to web 3.0 is open!
Out of these, I thought Zeronet had the best demo. Dat and IPFS are both very promising. I also thought Matrix and Interledger show a lot of potential.
So wish we could have made it to this. In any case, ZeroTier was conceived with Internet decentralization motives in mind, specifically with the goal of making edge device connectivity easy. It can be and is used for other things, but that was the original motive.
Nice work you people are doing. I have no opinion on features and stuff ATM except to say that VPNs plus high usability and open source is a category I like seeing expand. As far as this list goes, I think what keeps you off it is this:
"ZeroTier endpoint nodes form a peer to peer network and use a set of pre-configured nodes called root servers (currently run by us, federation is planned) as stable anchor points for near-instantaneous zero-configuration peer location and connection setup."
Still centralized. Solve that then you might get on the list.
We wanted to build something useful in mainstream, casual, and commercial applications, not just for geeks. For that reason we took on the following non-negotiable design requirements:
- An endpoint can join the network in <~10s. No bootstrapping time.
- Any endpoint can reach any other endpoint in the world in <~10s.
- No configuration at all is required. It "just works." Any knob that must be tweaked or config that must be entered is a bug.
- The endpoint must be small enough to fit in an embedded device like a thermostat, light bulb, etc. (Or at least be able to be made that small without inordinate levels of pain.)
- Performance overhead must be on par with e.g. OpenVPN, GRE/IPSec, etc.
- Must be mobile-friendly. (phones, tablets, etc.)
- Must not conscript user devices into infrastructure roles without explicit opt-in.
- Very strong resistance to sybil and DDOS attacks, at least comparable to the current Internet BGP community.
- It must be able to scale to Internet size (tens of billions of devices) without disproportionate levels of pain or discontinuities where the system suddenly "melts down."
- The design must be simple enough to fully describe in a relatively concise RFC.
- The design should be no more centralized than other common Internet systems like DNS and BGP.
The current design satisfies all those goals. It's zero-config, runs on phones with minimal battery life impact, could be scaled down to embedded code and memory footprints without too terribly much effort, and is no more centralized than DNS or BGP.
I'm not sure I see the intrinsic advantage of trying to be less centralized than the Internet while still using the Internet for transport. A truly decentralized new Internet would have to use radio and user-provisioned DIY links. Centralization(X) = max(Centralization(P) for every part P of X).
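As a toy illustration of that rule (the component names and scores below are made up for the sketch, not measurements of anything real):

```python
# A system is only as decentralized as its most centralized dependency,
# so overall centralization is the max over its parts.
def centralization(parts: dict) -> float:
    """parts maps each dependency to a score from 0.0 (fully
    decentralized) to 1.0 (fully centralized)."""
    return max(parts.values())

# Made-up scores: an overlay network still inherits the centralization
# of the transport and routing infrastructure it rides on.
overlay = {
    "peer_links": 0.1,
    "root_servers": 0.5,
    "underlying_internet_routing": 0.6,
}
print(centralization(overlay))  # 0.6
```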
Pretty much everything popular right now in the decentralized Internet community is conclusively "out" for mobile and embedded use outside of niche applications where the user doesn't mind their phone becoming a hand-warmer and their battery life dropping to 45 minutes. In particular we almost certainly rule out:
- DHTs -- too much RAM, too slow, have a warmup/bootstrap time, hard to harden against sybil attacks, and solutions to these problems involve root-server-like centralization anyway so we're back where we started.
- Block chain -- way too compute and storage intensive by many orders of magnitude.
- Rumor mill and other noisy protocols -- way too bandwidth intensive for mobile and small devices, don't scale.
- Aggressive data replication and "raft consensus" type stuff -- too much storage and network overhead for mobile and embedded devices.
Right now our thinking revolves around making it possible to locally federate the root servers for on-site or in-personal-cloud use. But this has to be thought out very carefully so as not to negatively impact security or any of the other constraints above. We can't have people setting up sybil roots that can be used to DOS the network.
Our other thought is to create a separate community-driven institution to hold the root infrastructure. This is fraught with non-technical political difficulties of making sure this institution is well governed and sustainable.
The latter post makes excellent points and gives us significant pause about federation and delegation. We have to be able to keep improving things and to respond to threats (e.g. DDOS) rapidly.
--
Edit: meta:
I tend to disagree philosophically with the lack of pragmatism in the Internet decentralization community. It reminds me of OSI, which had some theoretically-superior ideas about networking but which never actually shipped anything that worked at scale. As a result we have IP, which works well but lacks some of the theoretical benefits of more thoroughly designed systems. Things that work always win over things that don't work. See also: semantic web vs. web+search, Project Xanadu vs. www.
Right now the dominant paradigm online is highly centralized cloud silo networks where all traffic is MITMed by design. I think making it trivially easy to network endpoints with an end-to-end encrypted network that "just works" is a huge improvement and could enable a lot of other things.
Also note that ZT carries standard protocols over standard virtualized networks: IPv4, IPv6, etc. This means that it doesn't impose lock-in on systems built with it. It's just neutral transport.
...who always points out that the banking, auditing, database, and eCommerce fields already achieved many of Bitcoin's goals with more efficiency and simpler algorithms. I particularly love your comment about how a centralized version of Bitcoin could run on an RPi. Haha. Similar to my statements here:
Note: The tangent with "contingencies" has me describing how it boils down to politics, laws, and human cooperation in the blockchain model anyway. So why not apply that to a more efficient model?
re ZeroTier design
Nice constraints. I'm going to copy your comment and excellent article for now to fully read and think on the technical specifics at another time when I have more time. For now, I think you might be overstating the problem with decentralization but are spot on about the crowds it attracts. ;) One thing that high-assurance work taught me is that we can't do everything perfectly. Our trick was to reduce our security, integrity, whatever problem down to some small component (eg TCB or kernel) that everything else leveraged. It looks as if you reinvented the concept to a degree by minimizing centralization, as it's the "trusted" part. It might help, though, to tell (or remind) you of another thing high-assurance cemented in: it's often easier to do untrusted computation followed by a trusted verifier that is simpler than the computation.
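A minimal sketch of that compute-then-verify split, using sorting as a stand-in problem (nothing ZeroTier-specific, just the general principle):

```python
from collections import Counter

def untrusted_sort(items):
    # Stand-in for any complex or untrusted computation.
    return sorted(items)

def trusted_verify(original, result):
    # The trusted part is far simpler than the computation itself:
    # check the output is ordered and is a permutation of the input.
    ordered = all(a <= b for a, b in zip(result, result[1:]))
    return ordered and Counter(original) == Counter(result)

data = [5, 3, 9, 1]
out = untrusted_sort(data)
assert trusted_verify(data, out)
```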
Let's apply this principle to the trusted part of a scheme that minimizes centralization. Instead of going all-in on central or decentral, we can use my concept to run a central model that produces traces of what went in and came out. This is applied to as little of the scheme as possible. Maybe just registration, authentication, IP hopping, whatever. The supernodes are run by different non-profits, people, countries, and so on according to the same rules with their own financial support (or they drop out). Each receives updates on what the others, or a subset of others, are doing in the form of in/out states. Each performs fast, simple verification of that, which for some things is basically just storing it into an in-memory database with disk persistence in case someone asks for it. Mismatches are corrected in standard ways, automated or by people. Like with banks, each organization is responsible for its users, with mutually-suspicious auditing increasing their honesty. Users get a sane default on the number of and which supernodes to contact, with what thresholds and such. For remailer designs, I always made sure to force cooperation between at least two jurisdictions hostile to each other.
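A rough sketch of the kind of in/out trace a supernode could publish and its peers could cheaply check; the record fields are made up purely for illustration, not anyone's actual design:

```python
import hashlib, json

def entry_hash(prev_hash: str, record: dict) -> str:
    # Chain each trace entry to the previous one so a supernode
    # can't silently rewrite its published history.
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_trace(log: list, record: dict) -> None:
    prev = log[-1]["hash"] if log else "genesis"
    log.append({"record": record, "hash": entry_hash(prev, record)})

def verify_trace(log: list) -> bool:
    # The check a peer supernode performs is trivial compared to
    # whatever computation produced the records.
    prev = "genesis"
    for entry in log:
        if entry["hash"] != entry_hash(prev, entry["record"]):
            return False
        prev = entry["hash"]
    return True

log = []
append_trace(log, {"op": "register", "node": "abc123"})     # hypothetical record
append_trace(log, {"op": "auth", "node": "abc123", "ok": True})
assert verify_trace(log)
```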
Interestingly enough, the job that Google's F1 RDBMS is doing is much harder than what I just described. It's running AdWords with a strong-consistency model. CockroachDB is trying to clone it. Rather than using them, I'm just saying a strong-consistency DB model with checking over computation might get the benefits of both centralized and decentralized. The last benefit is that replicating and checking essentially centralized programs gives us the ability to use decades of work in reliability and security engineering on the implementations. Purely P2P and decentralized models are too new for high security despite what their proponents wish. So many problems and solutions waiting to be discovered. "Tried and true beats novel or new" is my mantra for high-confidence systems.
Note: Might try out ZeroTier as well given you seem to have open-sourced the most critical part.
Everything in ZeroTier is open source except the web UI for my.zerotier.com and currently the Android and iOS GUIs. (The latter might change soon since we made the apps free.)
ZeroTier's root servers run exactly the same code as ordinary nodes. They're just "blessed" in something called a "world" (see world/ and node/World.hpp) and run at stable IPs with a lot of bandwidth. There are currently 12 of them divided into two six-node clusters:
The role of the root servers is pretty minimal. They relay packets and provide P2P RENDEZVOUS services for NAT traversal. All of this is built into the ZT protocol (see node/Packet.hpp). Technically any peer can do what the roots do but the roots exist to provide zero-conf/no-bootstrap operation and a secure always-on "center" to the network.
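As an aside, here is a toy sketch of the general rendezvous idea used for NAT traversal; this is not ZeroTier's actual RENDEZVOUS message format, just the concept of a root observing each peer's public endpoint and introducing the peers to each other so both can punch holes at the same time:

```python
# Toy illustration of the rendezvous concept behind NAT traversal,
# not ZeroTier's wire protocol. The "root" only brokers introductions.
class Root:
    def __init__(self):
        self.seen = {}  # node_id -> (public_ip, public_port) observed by the root

    def register(self, node_id, public_endpoint):
        # In reality the root learns this from the source address of
        # the node's own outbound packets.
        self.seen[node_id] = public_endpoint

    def rendezvous(self, a, b):
        # Tell each peer the other's observed public endpoint; both then
        # send packets to each other simultaneously to open NAT mappings.
        return {"to_a": self.seen[b], "to_b": self.seen[a]}

root = Root()
root.register("peer-a", ("203.0.113.10", 41641))   # documentation-range IPs
root.register("peer-b", ("198.51.100.7", 52000))
print(root.rendezvous("peer-a", "peer-b"))
```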
It would in theory be possible to create some kind of consensus system whereby the world could be defined by the community, but we'd want this to be extremely sybil-resistant otherwise someone could take down the whole net by electing a bunch of sham nodes.
ZeroTier is being used for Bitcoin stuff, payment networks, etc., and we do get attacked. We've had several DDOS attempts and other attempts to attack the system. So far nothing's succeeded in impacting it much.
"ZeroTier's root servers run exactly the same code as ordinary nodes. They're just "blessed" in something called a "world" (see world/ and node/World.hpp) and run at stable IPs with a lot of bandwidth. There are currently 12 of them divided into two six-node clusters:"
Looks good. I've seen similar things work in five-nines types of setups. Potential there. Some components and clustering might be simple enough for medium-to-high assurance. That nodes benefit from peer review of open code is good. That they're all running the same code is a claim we can't check without trusted hardware plus attestation. You also can't verify that yourself unless you have endpoint and SCM security that can survive penetration and/or subversion by high-strength attackers. That problem applies to most products, though.
I overall like it at the high level I'm reviewing at. The only drawback I see is that it appears to have been written in C++. Is that correct? If so, people can't apply state-of-the-art tools to either prove absence of bugs in code (eg Astree, SPARK), verify its properties (eg Liquid Types, AutoCorres), or automatically transform it to be safer (eg Softbound+CETS, SAFEcode, Code Pointer Integrity). What few tools are available for C++ are expensive and more limited. A rewrite needs to happen at some point to deal with that. Perhaps Rust, as it solves dynamic allocation and concurrency problems that even Ada ignores, plus it was partly validated by Dropbox's deployment of it in low-level, critical components.
"but we'd want this to be extremely sybil-resistant otherwise someone could take down the whole net by electing a bunch of sham nodes."
I could only speculate on that stuff. It's not my strong area and still a new-ish field. What I do know is that many systems work by (a) having simple, clear rules; (b) maintaining audit logs of at least what happens between mistrusting entities; (c) auditing that; (d) forcing financial or other corrections based on detected problems. Rules and organizations are the tricky part. From that point, it's just your code running on their servers or datacenter of their choosing.
One scheme I thought about was getting IT companies, universities, or nonprofits involved that have a long history of acting aboveboard on these things. Make sure their own security depends on it. Then you have at least one per country in a series of countries where a government can't or is unlikely to take it down. Start with privacy, tax, and data havens plus First World countries with the best, cheapest backbone access. That knocks out most of the really subversive stuff right off the bat. What remains is a small amount of subversion potential plus the bigger problem of politics on protocol or rule decisions.
"and we do get attacked. We've had several DDOS attempts and other attempts to attack the system. So far nothing's succeeded in impacting it much."
Glad to see you're making it on that. Surviving those is another benefit of centralized models. It carries over to centralized-with-distributed-checking as well if you use link encryptors and/or dedicated lines to at least the key supernodes. That's for the consensus and administrative parts, I mean.
Replaced Astree with Saturn as most won't be able to afford Astree. Do test Softbound, SAFEcode, and CPI on various libraries and vulnerabilities to find what works or doesn't. Academics need feedback on that stuff and help improving those tools. There's a serious performance hit for full memory safety like Softbound+CETS, but knocking out that many vulnerabilities might easily be worth some extra servers. Have fun. :)
Thanks for that. Looks like only SPARK is available on the operating system that I use. That is one big problem with research software and software research: it is often disconnected from the programmer community.
Nearly all of these relate to blockchain tech, which is interesting. I do ask: why blockchain? Are modern blockchains still cycling away on silicon to find hashes? It seems a terrifically inefficient way to build a system. I can understand the distributed hash table / transaction tree idea, but I struggle to grasp the rationale for proof-of-work systems.
Proof of work is the only known way to achieve consensus among entities that don't trust anyone. It also provides a way to "fairly" distribute new currency among entities who don't trust anyone. It's not clear to me that the decentralized Web needs consensus, its own currency, or total lack of trust.
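For anyone unfamiliar with the mechanism being questioned, here is a toy proof-of-work loop; the difficulty scheme is simplified and is not Bitcoin's actual target arithmetic:

```python
import hashlib

def proof_of_work(block_data: bytes, difficulty: int = 4) -> int:
    """Find a nonce whose hash with the block data starts with
    `difficulty` hex zeros. Toy parameters, purely for illustration."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Finding the nonce is expensive; checking it is a single hash. That
# asymmetry is what lets mutually distrusting nodes agree on the chain
# with the most work behind it.
nonce = proof_of_work(b"example block")
print(nonce)
```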
Which is why I like layered approaches like IPFS that separate the merkle-tree layer (hashed content) from the naming layer (IPNS, DNS, Namecoin, etc.).
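A minimal sketch of that separation (illustrative only, not IPFS's or IPNS's actual record formats): the content layer is an immutable hash of the bytes, and the naming layer is just a mutable pointer to whichever hash is current.

```python
import hashlib

# Content layer: addresses are derived from the bytes themselves, so
# they are immutable and verifiable by anyone who fetches the data.
def content_address(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Naming layer: a mutable mapping from a stable name to the latest
# content address. In real systems the pointer is signed (IPNS) or
# anchored in DNS/Namecoin; here a dict just shows the split.
names = {}
v1 = content_address(b"<html>my site, version 1</html>")
names["my-site"] = v1

v2 = content_address(b"<html>my site, version 2</html>")
names["my-site"] = v2  # only the pointer changes; v1's address still resolves to v1
```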
There are definitely valid uses of blockchain tech, but requiring it as a central part of the architecture is a recipe for (re)centralization because eventually it will always grow beyond home-computer scale.
> It's not clear to me that the decentralized Web needs consensus, its own currency, or total lack of trust.
I'm pretty sure I would say that the above is not how things work in human society. Limited trust, fiat currency, and regular disagreements are the norm.
Sybil attacks do not work in small communities that members may choose to form where the members already know each other.
If a system forces all users to be part of some large, Borg-like, distributed hash table, or ledger, then by my definition it's not "fully decentralized".
Indeed, if you don't plan on writing distributed systems that work for more than a few people, you don't need to worry about Sybil attacks. However, the nice thing about the internet is that it connects billions of people, so here we are.
I think there's a lot of historical evidence over the last few thousand years that people naturally form small communities, or at least small groups within large communities.
Today, people can, in theory, choose from among billions of peers to form these small groups. And the groups can if they so choose connect with each other, via a network of networks.
This internet "connects billions of people". True. But your company's LAN probably does not connect that many.
If a user started creating numerous fake identities on the LAN, then it's likely she would be detected.
Is it possible to create distributed "LANs" over the internet?
(rhetorical question)
Another commenter questioned why a distributed Web needs "lack of trust".
People in small groups can and do trust each other. No computers are needed to make this happen.
Fortunately the two approaches are not mutually exclusive. There are no rules about how the "distributed Web" must be constructed. As the old saying goes, there's more than one way to do it.
While I love the idea of a Decentralised Internet, I would say that it does not solve the problems that well.
An awful lot of these techs are completely blockchain- and consensus-dependent. Which makes sense (you are in a Byzantine environment, after all), but that is far from being a solved, ready-to-use idea. Really far. And it is awfully inefficient in terms of power.
Additionally, except for the WiFi router project, none of them really solve anything. They just push the boundaries a bit.
I still like the fact that we have people working on it, and I will keep working toward it, but we are so, so far from solutions.
Still super nice to see some of the things here. Thanks for the hard work :)
Technically it uses hash chains, which blockchains are based on, so it is tangentially related. But IPFS is basically just Git (also built on hash chains) with networking add-ons; no consensus algorithm (none needed).
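A toy illustration of the Git-style hash linking being described (not IPFS's actual object format): each object embeds the hash of its parent, so anyone holding the newest hash can verify the whole history without any consensus step.

```python
import hashlib, json

def put(store: dict, obj: dict) -> str:
    # Content-addressed store: the key is the hash of the object itself.
    key = hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()
    store[key] = obj
    return key

store = {}
c1 = put(store, {"parent": None, "data": "first commit"})
c2 = put(store, {"parent": c1, "data": "second commit"})
# Anyone holding c2's hash can fetch and verify the full chain back to
# the root; there is nothing to vote on, so no consensus is needed.
```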
Take the blockchain out of Ethereum, and you still have a pretty cool deterministic state-transition machine. My understanding of Urbit is very limited, but I believe it has a similar concept to Ethereum without a consensus protocol.
They're probably worth distinguishing somehow for newcomers. They're both transaction logs.
Ethereum is a transaction log with a consensus mechanism. Anyone can append to it. It's a scroll: any group or society can use apps that check the scroll to come to conclusions about the state of their interactions.
Urbit is a transaction log that only the owner can append to. It's a journal: users can safely append to their journal with any app, and their apps can read the whole journal to display data from multiple apps in desirable ways.
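A toy sketch of the "journal" model (my illustration; an HMAC with a local secret stands in for a real public-key signature such as Ed25519): validity is just "the owner's key produced this entry," with no consensus round.

```python
import hmac, hashlib

OWNER_KEY = b"owner-secret"  # stand-in for the owner's signing key

def sign(entry: bytes) -> bytes:
    # A real system would use a public-key signature so anyone can
    # verify; HMAC keeps this sketch dependency-free.
    return hmac.new(OWNER_KEY, entry, hashlib.sha256).digest()

def append(journal: list, entry: bytes) -> None:
    journal.append((entry, sign(entry)))

def verify(journal: list) -> bool:
    # No consensus mechanism: the only check is that every entry
    # carries the owner's signature.
    return all(hmac.compare_digest(sig, sign(entry)) for entry, sig in journal)

journal = []
append(journal, b"installed app X")
append(journal, b"posted message Y")
assert verify(journal)
```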
Which raises the question: why is Urbit necessary? It would be relatively simple to port Ethereum's consensus protocol to "only a user with key X can sign new blocks," and each block would contain exactly one transaction.
Mainly because (as Vitalik says) "the whole Ethereum network has the power of a 1999 cell phone."
Although Urbit (like Ethereum) is precisely defined without dependencies, Urbit is not a consensus computation platform. It's for processing your own data on your own (virtual or physical) machine.
Consensus computing is incredibly inefficient and should be used only where absolutely required. Where consensus is not absolutely required, computing should be localized under the user's control. People often forget to include this component in their designs of the decentralized future. But in fact, Ethereum needs something like Urbit and vice versa.
One way to think about the difference between Ethereum and Urbit: it's like the difference between a superconductor and a regular conductor. On the one hand, superconductors are qualitatively different and fundamentally more powerful. On the other hand, there are no superconductors in your iPhone.
I'd actually put Blockstack in the same category as Ethereum. From a transaction-log perspective, it uses the Bitcoin blockchain for consensus on the log, and from an application-development perspective, you can build apps using Blockstack (it gives you naming, auth, and storage).
> IPFS, or InterPlanetary File System, is a distributed file storage system that aims to replace HTTP. Both the Internet Archive and Neocities serve web content with IPFS.
Can someone explain to someone less smart (i.e. me) why IPFS and HTTP are mutually exclusive, as the above description seems to suggest?
Not sure; I understand they are different protocols, but unless IPFS turns into a protocol capable of describing application semantics (which is what HTTP is), I don't see how it can possibly replace HTTP, since file transfer is not its main purpose after all. What I'm saying, I guess, is that I don't understand why HTTP and IPFS can't coexist quite peacefully, i.e. IPFS being a lower-level transport protocol upon which HTTP provides application semantics. Or is that the aim? I don't fully understand.
They coexist already. IPFS just gives you a standardized way to store files in a global p2p content-addressable filesystem (or private content-addressable filesystems, if you don't connect your IPFS server to the global network). Imagine if you tried to visit news.ycombinator.com via HTTPS while visiting Mars; the timeouts would have to be quite large.
If instead the static assets, like HTML, JavaScript, comments, and stories, are stored in IPFS, then you can load them from your local IPFS server node instead. You would probably even access that node via HTTP(S), and in principle the URL in your URL bar would be exactly the same as it is now. The rendering would all have to be done in JS, though, which is a bit of a change. You would configure your local IPFS node to periodically update its local cache so that the site is as up to date as you desire. (There's some handwaving there, but I guess you could just use a cron job to request the content, forcing it to be cached.)
In practice, posting a story or a comment might have to look rather different in that implementation. It would probably look like a throwback to UUCP or FidoNet, where distant nodes contact each other regularly to upload messages. You would use PKI to discard messages from non-users and link the good ones into your branch of the filesystem so that viewers could see them.
Things like this that require logins are where most of the handwaving is with IPFS; the actual filesystem parts seem to work pretty well.
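To make the "you'd still access it over HTTP" point concrete, here is a minimal fetch through a local IPFS node's HTTP gateway; the gateway address assumes the common default of 127.0.0.1:8080, and the CID is a placeholder, not a real object:

```python
# Minimal sketch: content stored in IPFS is still fetched over plain HTTP,
# just from a gateway served by your own local node. Assumes a local IPFS
# daemon with its gateway on the default 127.0.0.1:8080; the CID below is
# a hypothetical placeholder.
import urllib.request

cid = "QmExamplePlaceholderCid"          # placeholder content hash
url = f"http://127.0.0.1:8080/ipfs/{cid}"

with urllib.request.urlopen(url) as resp:
    html = resp.read().decode()
print(html[:200])
```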
HTTP is a centralized protocol; IPFS is a decentralized one. Decentralization is considered a core requirement of the replacement protocol by these people, so HTTP can't fit the bill.