That's serendipitous. I've been reading Mark's dissertation [1] for the past week, as well as Rees's paper [2] on W7 (his full O-Cap Scheme language).
A lot of great ideas. The ideas behind Mark's E represent a critical half of the solution. I think Google et al. are going in the wrong direction: churning out hundreds of millions of lines of code and yet not reducing the complication. (Talk to anyone who has tried to deploy a production-ready Kubernetes cluster.)
There's a wide gap in the market for a product that makes building distributed systems straightforward. Applying Alan Kay's question, "Is it really 'Complex'? Or did we just make it 'Complicated'?" [3], I believe it can be done in under 100k LOC, small enough that a single developer could grok the whole system in under a month.
I'm still working through the details and building prototypes. Allow me to mind-dump the basic ideas and guiding principles.
It's a fully distributed, orthogonally persistent, concurrent, object-capability operating system built upon Scheme. It's guided by the principle that code is data and data is code, and therefore both can be sharded and replicated as demand dictates.
So, let me break it down piece by piece. First, it's an operating system, which means this is what you boot: you boot directly into a Scheme runtime. Second, it's fully distributed at the operating-system level, which means you can spin up new machines in a cluster, on demand, and they'll join the rest of the group autonomously. Third, it's orthogonally persistent, which eliminates the need for a file system: the "files" are your objects, i.e., structured data, and the operating system is responsible for serializing the machine's runtime. Fourth, it's an object-capability system with real object-oriented programming based upon messaging. Finally, concurrency is achieved similarly to E, Erlang, Go, etc.
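To make the object-capability point concrete, here's a minimal sketch in plain Scheme (these names are mine, not from any actual runtime): an object is a closure that responds to messages, a capability is simply a reference to that closure, and attenuation is wrapping one closure in another.

```scheme
;; A counter "object": its state lives inside the closure, and the
;; only way to affect it is to send it a message.
(define (make-counter)
  (let ((n 0))
    (lambda (msg . args)
      (case msg
        ((increment) (set! n (+ n 1)) n)
        ((read)      n)
        (else        (error "unknown message" msg))))))

;; Attenuation: hand out a read-only facet of the same counter.
;; Whoever holds this reference can observe but never increment.
(define (read-only counter)
  (lambda (msg . args)
    (if (eq? msg 'read)
        (counter 'read)
        (error "capability does not permit" msg))))

(define c (make-counter))
(c 'increment)            ; => 1
(define rc (read-only c))
(rc 'read)                ; => 1
```

The point of the sketch is that there's no ambient authority anywhere: if you were never handed `c`, you can't touch the counter, and if you were handed only `rc`, you can't mutate it.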
Now, one of the more interesting parts comes in the sharding and replication of data and code, which is built into the runtime. Some parts will require declarative user specification (think Kubernetes, replicating a process that represents a web server to all machines in the cluster), and other parts will be handled autonomously by the runtime (think sharding a list that's grown too big to fit on one machine's disk) using split/concatenation plans provided either by the runtime or the user. For example, a user may have a custom data structure that requires special splitting.
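A hypothetical shape for such a plan (again, an illustrative sketch, not a real API): the runtime only needs two procedures from the user, one that splits a value into shards and one that reassembles it. For an ordinary list that might look like:

```scheme
;; Helpers (take/drop as in SRFI-1, defined here for self-containment).
(define (take lst n)
  (if (or (zero? n) (null? lst))
      '()
      (cons (car lst) (take (cdr lst) (- n 1)))))

(define (drop lst n)
  (if (or (zero? n) (null? lst))
      lst
      (drop (cdr lst) (- n 1))))

;; Split plan: break a list into shards of at most n elements.
(define (split-plan lst n)
  (if (<= (length lst) n)
      (list lst)
      (cons (take lst n) (split-plan (drop lst n) n))))

;; Concatenation plan: reassemble the shards into the original value.
(define (concat-plan shards)
  (apply append shards))

(split-plan '(1 2 3 4 5) 2)                 ; => ((1 2) (3 4) (5))
(concat-plan (split-plan '(1 2 3 4 5) 2))   ; => (1 2 3 4 5)
```

The invariant the runtime would rely on is simply that `concat-plan` composed with `split-plan` is the identity; a custom data structure (say, a search tree) would supply its own pair of procedures satisfying the same contract.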
This is all glued together by an optimizer/planner/scheduler that's meant to maximize the performance of processes running on the entire distributed system. Of course, this must be solved by an online decision algorithm, similar to demand paging or the ski-rental problem.
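To illustrate the ski-rental flavor of the decision (my framing, not a committed design): suppose the planner can keep forwarding requests to a remote object at cost 1 each, or pay a one-time cost B to replicate it locally. The classic break-even rule, "rent" until you've spent B and then "buy", never pays more than twice what an omniscient planner would:

```scheme
;; Break-even rule: forward (rent) for B-1 requests, then replicate
;; (buy) on request B. `requests` is how many requests actually arrive,
;; which the online planner can't know in advance.
(define (online-cost requests buy-cost)
  (if (< requests buy-cost)
      requests                           ; never reached break-even
      (+ (- buy-cost 1) buy-cost)))      ; B-1 forwards, then replicate

;; What an omniscient planner pays: the cheaper of the two choices.
(define (optimal-cost requests buy-cost)
  (min requests buy-cost))

(online-cost 3 10)    ; => 3  (optimal is also 3)
(online-cost 50 10)   ; => 19 (optimal is 10; ratio stays below 2)
```

The same 2-competitive structure shows up wherever the planner must commit to a costly action (replicating, migrating, materializing a shard) without knowing future demand.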
Many of the properties you find across different distributed systems fall out of this design for "free". And by eliminating the complications that have been built up and papered over in Linux, containers, orchestration, data management, etc., you arrive at the essential complexity of what's needed to build safe, secure, distributed systems.
Heck, I'm building a web crawler right now, and I salivate at the idea of having a system like this that would allow me to just build the damn web crawler and stop messing around with the complications.
P.S. I've glossed over A LOT of the details, and it probably all sounds confusing. But that's because I'm shoulder-deep in the details and have yet to come up for air and think about the communication/messaging side of the product. I'd love to talk in more detail with anyone who's interested. Really excited about this space.