I'm using gRPC/Protobuf because it supports plenty of languages: C++, C#, Python, Dart, even Rust, and others (Node, PHP, etc.). What is the state of Cap'n Proto when it comes to this? I tried to look at the C# project, but the page was missing on GitHub.
Admittedly, this is a huge weakness of Cap'n Proto. The C++ implementation (which is the one I use and work on personally) is mature. There are pretty solid implementations in Rust and Go, too. But it falls off after that, with most implementations being serialization-only and at various levels of (im)maturity.
There's not a lot I can do about this. Cap'n Proto adoption doesn't directly drive revenue for anyone in particular, so I can't hire an army of engineers to throw at it... People who want better Cap'n Proto support in each language need to step up to help make it happen.
One thing I am looking at doing is making it easier for per-language serialization implementations to bind to the C++ RPC implementation. This might make a lot of sense, since the serialization implementations have wide APIs but shallow implementation details, while the RPC implementation is a pretty narrow API with very complex implementation. And it turns out Cap'n Proto messages are super-easy to pass between languages since the in-memory format is by design the same across languages -- passing around byte buffers tends to be pretty easy.
Some of it is also that, much like the comparison to schema-less protocols, it's a bit apples-to-oranges at the RPC level, since Cap'n Proto's RPC is so much more expressive -- but also complex to implement. I think only some of the difference in available implementations is due to reduced engineering effort; the long tail of serialization-only implementations is a testament to the fact that implementing Cap'n Proto RPC is Not Trivial. Unfortunately, I think a lot of this complexity is inherent to what Cap'n Proto RPC is trying to do.
> One thing I am looking at doing is making it easier for per-language serialization implementations to bind to the C++ RPC implementation
Depending on how much more mature the C++ implementation is, you might consider using the Rust version for this instead. I've toyed around with exposing Rust to C (for a TCP-based message protocol[0], as chance would have it), and it worked pretty great.
While the Rust implementation of Cap'n Proto is one of the better ones, it's still received only a tiny fraction of the engineering investment that the C++ implementation has.
I am wondering if D, Nim, and Zig would be able to just leverage the C++ version of the Cap'n Proto library directly?
(I think D has built-in C++ API support across compilers, not sure about the others)
Probably not. Remember that Cap'n Proto (like Protobuf) involves defining protocol schemas in an IDL and then using a code generator to generate classes with getters and setters and such in each language. Programs that use Cap'n Proto often use these generated APIs throughout their codebase. While you could perhaps take these generated classes and wrap them wholesale, there are two big problems with doing so:
1) You end up with APIs that are not idiomatic for the calling language. For instance, D supports properties, where C++ uses separate getters and setters. Also, FFI wrappers tend to add an additional layer of ugliness in order to translate features that don't exist in the calling language. If it were an API you only used in one small part of your code maybe this would be fine, but spread all over your codebase would be awful.
2) The generated getters and setters are designed to be inlined for best performance, but cross-language inlining is often not possible. In fact, most FFI wrappers incur a runtime performance penalty to convert between different conventions, and this penalty is going to be extra-severe when calling functions that are intended to be lightweight.
So this is why I say that the serialization layer -- which includes all this generated code that apps interact with directly -- should be native to the language.
But, you could use the native serialization layer to construct messages, and then pass it off to the C++ RPC implementation. The RPC implementation has a fairly narrow API surface with an extremely complex implementation behind it, so it's a perfect candidate for this.
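For concreteness, the IDL-plus-codegen workflow described above looks like this (a hypothetical schema; the file ID would normally come from `capnp id`):

```capnp
# person.capnp -- hypothetical example schema
@0xbf5147cbbecf40c1;

struct Person {
  name @0 :Text;
  age @1 :UInt16;
}
```

In C++, the generated code provides `Person::Reader` with `getName()`/`getAge()` and `Person::Builder` with `setName()`/`setAge()`; it's these generated accessors, spread throughout application code, that need to be native and inlinable rather than wrapped over FFI.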
All the protobuf implementations I've worked with (especially protoc descendants) just feel like they wrapped the C implementation with some FFI and called it a day. They're all ugly and unidiomatic. So it's not exactly a high bar to meet.
For my needs, I'm ignoring the "ugly" bits. I'm looking for statically typed checks - e.g. avoiding spelling errors. Also discoverability - e.g. start typing the name of your service, press ".", and it gives you the options, then Alt+Space shows what you can provide - it's really easy with C# and Visual Studio.
I've heard of similar quality issues with other RPC libraries (either Thrift or Avro, I can't remember which). In my cross-language work, everything becomes very functional and non-idiomatic due to the overhead.
got it. thank you for the explanation.
I am planning to add a 'multiplayer' feature, where multiple participants need to quickly exchange positional and surrounding attributes.
Currently the system is in Java with a JS front end. I feel that the JSON serialization I currently use is not the right thing...
But at the same time, I care about the 'size in kb' of the JS front end.
Therefore I've been learning about the options.
A big selling point for me would be a built-in IPC mechanism instead of TCP or UDP - be it mailslots (Windows), named pipes, shared files, etc. - it doesn't matter. There are some projects that implement IPC over gRPC, but they're not part of the actual project.
The reason I'm asking is simple: I don't want to deal with port allocation on a CI machine.
Cap'n Proto works great over unix sockets. For sandboxing use cases in Sandstorm and Cloudflare Workers, I've commonly used it over anonymous socketpairs -- definitely no ports involved there. :)
In fact, you can adapt the RPC system to operate over any kind of byte stream transport pretty easily, by implementing the kj::AsyncIoStream interface. Or if you already have a standard file descriptor (or iocp-compatible HANDLE in Windows), you can use that.
One fancier thing that's still on the roadmap is shared-memory IPC. Cap'n Proto's zero-copy serialization was really built for this, but so far for all my real-world projects, unix sockets have been fast enough, so I haven't been forced to fully implement a shared memory transport yet. Maybe soon?
Sure, you could do that. You'd need to write a little shim to bind separate input-stream and output-stream FDs into a single AsyncIoStream, but that shouldn't be hard.
Have you looked into ZeroMQ? It has a built-in IPC transport without having to worry about ports. Though I'm not sure exactly what you mean by "port allocation" in this context.