You don't have to convince me. I'm "that guy" (probably)...
https://news.ycombinator.com/item?id=10845128
...who always points out that the banking, auditing, database, and eCommerce fields already achieved many of Bitcoin's goals with more efficiency and simpler algorithms. I particularly love your comment about how a centralized version of Bitcoin could run on an RPi. Haha. Similar to my statements here:
https://news.ycombinator.com/item?id=11184214
Note: The tangent with "contingencies" has me describing how it boils down to politics, laws, and human cooperation in the blockchain model anyway. So, why not apply that to a more efficient model?
re ZeroTier design
Nice constraints. I'm going to copy your comment and excellent article to fully read and think on the technical specifics when I have more time. For now, I think you might be overstating the problem with decentralization, but you're spot on about the crowds it attracts. ;) One thing high-assurance work taught me is that we can't do everything perfectly. Our trick was to reduce our security, integrity, or whatever problem down to some small component (eg a TCB or kernel) that everything else leveraged. It looks as if you reinvented the concept to a degree by minimizing the centralized part, since it's the "trusted" part. It might help, though, to tell (or remind) you of another thing high-assurance cemented for me: it's often easier to do untrusted computation followed by a trusted verifier that is simpler than the computation itself.
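To make that compute-versus-verify asymmetry concrete, here's a toy sketch in Python (my own illustration, nothing to do with ZeroTier's code): the untrusted side does the expensive sort, while the trusted checker only makes a cheap pass to confirm the output is an ordered permutation of the input.

    # Toy illustration: untrusted computation + simpler trusted verifier.
    # The "untrusted" side does the expensive work; the trusted side only checks it.
    from collections import Counter

    def untrusted_sort(items):
        # Expensive step, run on hardware/code we don't fully trust.
        return sorted(items)

    def trusted_verify(original, claimed_sorted):
        # Cheap, simple check: same multiset of elements, in non-decreasing order.
        if Counter(original) != Counter(claimed_sorted):
            return False
        return all(a <= b for a, b in zip(claimed_sorted, claimed_sorted[1:]))

    data = [5, 3, 9, 1]
    result = untrusted_sort(data)
    assert trusted_verify(data, result)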
Let's apply this principle to the trusted part of a scheme that minimizes centralization. Instead of going all-in on either central or decentralized, we can use my concept to run a central model that produces traces of what went in and what came out. This is applied to as little of the scheme as possible: maybe just registration, authentication, IP hopping, whatever. The supernodes are run by different non-profits, people, countries, and so on according to the same rules, each with their own financial support (or they drop out). Each receives updates on what the others, or a subset of the others, are doing in the form of in/out states. Each performs fast, simple verification of those, which for some things is basically just storing them into an in-memory database with disk persistence in case someone asks for them. Mismatches are corrected in standard ways, automated or by people. Like with banks, each organization is responsible for its own users, with mutually-suspicious auditing increasing their honesty. Users get sane defaults on how many and which supernodes to contact, and with what thresholds. For remailer designs, I always made sure to force cooperation between at least two jurisdictions hostile to each other.
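Here's a rough sketch of the verification loop I have in mind, in Python. Everything in it is hypothetical and purely illustrative (the class and field names are mine, not any real protocol): each supernode records the (input, output) states it saw, exchanges them with peers, and flags any entry where two operators disagree about what the central computation produced.

    # Hypothetical supernode that stores in/out traces and cross-checks them
    # against what other supernodes report. Mismatches are flagged for the
    # standard correction process (automated or human).
    import json

    class Supernode:
        def __init__(self, name, db_path):
            self.name = name
            self.db_path = db_path          # disk persistence, in case someone asks later
            self.traces = {}                # in-memory DB: key -> (input_state, output_state)
            self.mismatches = []

        def record_local(self, key, input_state, output_state):
            self.traces[key] = (input_state, output_state)
            self._persist()

        def receive_peer_update(self, peer_name, key, input_state, output_state):
            # Fast, simple verification: if we saw the same operation, the
            # in/out states must match; otherwise just store the peer's claim.
            if key in self.traces and self.traces[key] != (input_state, output_state):
                self.mismatches.append((peer_name, key, self.traces[key],
                                        (input_state, output_state)))
            else:
                self.traces.setdefault(key, (input_state, output_state))
            self._persist()

        def _persist(self):
            with open(self.db_path, "w") as f:
                json.dump({k: list(v) for k, v in self.traces.items()}, f)

    # Two mutually-suspicious operators checking each other:
    a = Supernode("nonprofit-a", "/tmp/a.json")
    b = Supernode("university-b", "/tmp/b.json")
    a.record_local("reg:alice", "request", "registered")
    b.receive_peer_update("nonprofit-a", "reg:alice", "request", "registered")  # agrees
    b.receive_peer_update("nonprofit-a", "reg:alice", "request", "rejected")    # disagrees
    # b.mismatches now records the conflicting claim for correction.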
Interestingly enough, the job that Google's F1 RDBMS is doing is much harder than what I just described: it's running AdWords with a strong-consistency model, and CockroachDB is trying to clone it. I'm not saying to use them specifically; I'm just saying a strong-consistency DB model with checking over computation might get the benefits of both centralized and decentralized. A last benefit is that replicating and checking essentially centralized programs gives us the ability to apply decades of work in reliability and security engineering to the implementations. Purely P2P and decentralized models are too new for high security, despite what their proponents wish; so many problems and solutions are still waiting to be discovered. "Tried and true beats novel or new" is my mantra for high-confidence systems.
Note: Might try out ZeroTier as well, given you seem to have open-sourced the most critical part.
Everything in ZeroTier is open source except the web UI for my.zerotier.com and currently the Android and iOS GUIs. (The latter might change soon since we made the apps free.)
ZeroTier's root servers run exactly the same code as ordinary nodes. They're just "blessed" in something called a "world" (see world/ and node/World.hpp) and run at stable IPs with a lot of bandwidth. There are currently 12 of them divided into two six-node clusters:
The role of the root servers is pretty minimal. They relay packets and provide P2P RENDEZVOUS services for NAT traversal. All of this is built into the ZT protocol (see node/Packet.hpp). Technically any peer can do what the roots do but the roots exist to provide zero-conf/no-bootstrap operation and a secure always-on "center" to the network.
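Conceptually, the rendezvous step looks something like this heavily simplified Python sketch (an illustration only, not the actual packet handling in node/Packet.hpp): a root that already talks to both peers learns their public NAT-mapped addresses and tells each side where the other is, so they can start sending packets directly and punch through their NATs.

    # Conceptual rendezvous for NAT traversal (illustration only, not the ZT wire protocol).
    # The root sees both peers' public (NAT-mapped) addresses because they already
    # talk to it; it forwards each peer's address to the other so both can attempt
    # direct hole-punching.

    class RendezvousRoot:
        def __init__(self):
            self.public_endpoints = {}   # peer_id -> (public_ip, public_port)

        def register(self, peer_id, public_ip, public_port):
            # Learned from the source address of packets the peer sends us.
            self.public_endpoints[peer_id] = (public_ip, public_port)

        def rendezvous(self, peer_a, peer_b):
            # Tell each side where the other appears to be on the public internet.
            a_addr = self.public_endpoints[peer_a]
            b_addr = self.public_endpoints[peer_b]
            return {peer_a: b_addr, peer_b: a_addr}

    root = RendezvousRoot()
    root.register("alice", "198.51.100.7", 41641)
    root.register("bob", "203.0.113.9", 53210)
    print(root.rendezvous("alice", "bob"))
    # Each peer now sends UDP toward the returned address; if both NATs allow it,
    # traffic flows peer-to-peer and the root drops back to a passive relay role.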
It would in theory be possible to create some kind of consensus system whereby the world could be defined by the community, but we'd want this to be extremely sybil-resistant otherwise someone could take down the whole net by electing a bunch of sham nodes.
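Very roughly, one sybil-resistant way to do it might look like the sketch below (pure speculation on my part, not a design we have): a new world definition only takes effect if a threshold of the currently blessed roots sign it, so stuffing the vote with fresh identities gains an attacker nothing.

    # Sketch of threshold-approved "world" updates (speculative, not ZeroTier's design).
    # A world update only takes effect if enough of the currently blessed roots
    # sign off on it; signatures are faked here, real code would use Ed25519 etc.

    CURRENT_ROOTS = {"root1", "root2", "root3", "root4", "root5", "root6"}
    THRESHOLD = 4   # e.g. 2/3 of the existing roots must approve

    def signature_valid(root_id, proposed_world, signature):
        # Placeholder: stands in for real public-key signature verification.
        return signature == f"signed:{root_id}:{hash(frozenset(proposed_world))}"

    def accept_world_update(proposed_world, signatures):
        # Only signatures from roots that are already blessed count, which is what
        # makes flooding the vote with fresh sybil identities useless.
        approvals = {rid for rid, sig in signatures.items()
                     if rid in CURRENT_ROOTS and signature_valid(rid, proposed_world, sig)}
        return len(approvals) >= THRESHOLD

    proposed = {"root2", "root3", "root4", "root5", "root6", "root7"}
    sigs = {rid: f"signed:{rid}:{hash(frozenset(proposed))}"
            for rid in ["root2", "root3", "root4", "root5"]}
    print(accept_world_update(proposed, sigs))   # True: four existing roots approved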
ZeroTier is being used for Bitcoin stuff, payment networks, etc., and we do get attacked. We've had several DDOS attempts and other attempts to attack the system. So far nothing's succeeded in impacting it much.
"ZeroTier's root servers run exactly the same code as ordinary nodes. They're just "blessed" in something called a "world" (see world/ and node/World.hpp) and run at stable IPs with a lot of bandwidth. There are currently 12 of them divided into two six-node clusters:"
Looks good. I've seen similar things work in five-nines types of setups, so there's potential there. Some components and the clustering might be simple enough for medium-to-high assurance. That the nodes benefit from peer review of open code is good. That they all run the same code is a claim we can't check without trusted hardware plus attestation. You also can't verify that yourself unless you have endpoint and SCM security that can survive penetration and/or subversion by high-strength attackers. That problem applies to most products, though.
Overall, I like it at the high level I'm reviewing it at. The only drawback I see is that it appears to have been written in C++. Is that correct? If so, people can't apply state-of-the-art tools to either prove the absence of bugs in the code (eg Astree, SPARK), verify its properties (eg Liquid Types, AutoCorres), or automatically transform it to be safer (eg Softbound+CETS, SAFEcode, Code Pointer Integrity). What few tools are available for C++ are expensive and more limited. A rewrite needs to happen at some point to deal with that. Perhaps in Rust, as it solves dynamic allocation and concurrency problems that even Ada ignores, plus it was partly validated by Dropbox deploying it in low-level, critical components.
"but we'd want this to be extremely sybil-resistant otherwise someone could take down the whole net by electing a bunch of sham nodes."
I could only speculate on that stuff. It's not my strong area, and it's still a new-ish field. What I do know is that many systems work by (a) having simple, clear rules; (b) maintaining audit logs of at least what happens between mistrusting entities; (c) auditing those logs; (d) forcing financial or other corrections based on detected problems. Rules and organizations are the tricky part. From that point, it's just your code running on their servers or a datacenter of their choosing.
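To make (b) through (d) concrete, here's a toy illustration in Python (entirely hypothetical names and events): two mistrusting operators each keep an append-only log of the cross-organization events they saw and periodically compare them; any divergence is surfaced for correction.

    # Illustrative mutual audit between two mistrusting operators (my sketch, no
    # relation to any real deployment). Each keeps an append-only log of the
    # cross-organization events it observed; periodic comparison surfaces
    # discrepancies that then drive financial or other corrections.
    import hashlib, json

    class AuditLog:
        def __init__(self, operator):
            self.operator = operator
            self.entries = []    # append-only list of (sequence, event) pairs

        def append(self, event):
            self.entries.append((len(self.entries), event))

        def digest(self):
            # A hash over the whole log makes the fast path "compare one value".
            return hashlib.sha256(json.dumps(self.entries).encode()).hexdigest()

    def audit(log_a, log_b):
        if log_a.digest() == log_b.digest():
            return []            # fast path: both saw the same history
        # Slow path: find the exact entries that differ so humans (or automation)
        # can assign blame and force a correction. (A real version would also
        # handle logs of different length.)
        mismatches = []
        for (_, ev_a), (_, ev_b) in zip(log_a.entries, log_b.entries):
            if ev_a != ev_b:
                mismatches.append((ev_a, ev_b))
        return mismatches

    org_a, org_b = AuditLog("org-a"), AuditLog("org-b")
    org_a.append({"user": "alice", "action": "register"})
    org_b.append({"user": "alice", "action": "register"})
    org_a.append({"user": "bob", "action": "register"})
    org_b.append({"user": "bob", "action": "revoke"})    # disagreement
    print(audit(org_a, org_b))   # -> the pair of conflicting entries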
One scheme I thought about was getting IT companies, universities, or nonprofits involved that have a long history of acting aboveboard on these things. Make sure their own security depends on it. Then you have at least one per country across a series of countries where the government can't, or is unlikely to, take it down. Start with privacy, tax, and data havens, plus First World countries with the best, cheapest backbone access. That knocks out most of the really subversive stuff right off the bat. What remains is a small amount of subversion potential plus the bigger problem of politics around protocol or rule decisions.
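A toy version of that placement rule, with made-up organizations and country pairs just to show the policy as code: a proposed supernode set passes only if it spans enough countries and includes at least one pair of jurisdictions unlikely to cooperate on a takedown.

    # Hypothetical placement check: does a proposed set of supernode operators
    # satisfy the jurisdictional-diversity rules from the scheme above?
    # Country pairs "unlikely to cooperate" are made up for illustration.

    MIN_COUNTRIES = 3
    NON_COOPERATING_PAIRS = {frozenset({"CH", "RU"}), frozenset({"IS", "CN"})}

    def placement_ok(operators):
        # operators: list of (org_name, country_code)
        countries = {country for _, country in operators}
        if len(countries) < MIN_COUNTRIES:
            return False
        # Require at least one pair of jurisdictions unlikely to cooperate with
        # each other, so no single legal process can reach the whole set.
        return any(pair <= countries for pair in NON_COOPERATING_PAIRS)

    proposed = [("univ-zurich", "CH"), ("nonprofit-moscow", "RU"), ("isp-reykjavik", "IS")]
    print(placement_ok(proposed))   # True: 3 countries, CH/RU pair present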
"and we do get attacked. We've had several DDOS attempts and other attempts to attack the system. So far nothing's succeeded in impacting it much."
Glad to see you're making it through that. Surviving those is another benefit of centralized models. It carries over to centralized-with-distributed-checking as well if you use link encryptors and/or dedicated lines to at least the key supernodes. That's for the consensus and administrative parts, I mean.
Replace Astree with Saturn, as most won't be able to afford Astree. Do test Softbound, SAFEcode, and CPI on various libraries and vulnerabilities to find what works or doesn't. Academics need feedback on that stuff and help improving those tools. There's a serious performance hit for full memory safety like Softbound+CETS, but knocking out that many vulnerabilities might easily be worth some extra servers. Have fun. :)
Thanks for that. Looks like only SPARK is available on the operating system that I use. That is one big problem with research software and software research: it is often disconnected from the programmer community.