Wow ok... lots of incorrect assumptions here. Just for fun let's address some of them.
> but that has a name, it's called polling, not streaming.
It's not polling. The idea is that the callback is called multiple times to send all the chunks, for one invocation of `streamingCall()`. Sorry if that wasn't clear.
BTW, Promise Pipelining is only involved in the client -> server streaming example.
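For concreteness, here's a rough sketch of what that looks like from the sending side in C++. This is just an illustration: `ByteSink` and `sendChunk` are made-up names, not a real interface shipped with Cap'n Proto, and the real generated code details may differ.

```cpp
// Illustrative sketch only: `ByteSink`/`sendChunk` are hypothetical names,
// and the generated header for that hypothetical schema is assumed included.
// One logical "streaming call" turns into many sendChunk() RPCs.
kj::Promise<void> sendAll(ByteSink::Client sink,
                          kj::ArrayPtr<const kj::byte> data,
                          size_t chunkSize) {
  if (data.size() == 0) {
    return kj::READY_NOW;  // all chunks sent
  }
  size_t n = kj::min(chunkSize, data.size());

  auto req = sink.sendChunkRequest();
  req.setChunk(data.slice(0, n));  // one chunk per call

  // For methods declared `-> stream`, send() returns a Promise<void> and the
  // library applies flow control, so several calls can be in flight at once
  // without unbounded buffering.
  return req.send().then([sink, data, n, chunkSize]() {
    return sendAll(sink, data.slice(n, data.size()), chunkSize);
  });
}
```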
> - in the same paragraph an "object-capability model" is introduced as a concept, but not explained
> - `sendChunk @0 (chunk :Data)` doesn't make sense to a beginner without explanation; what's `@0` and why do I care?
> - second paragraph, vocab: what's a "temporary RPC object"? Contrary to the precise, albeit unexplained, vocabulary in the first paragraph, this is vague.
This is a news post about a new release of an existing tool. You're expected to be familiar with the tool already, or if you are not, you can go read the rest of the web site to learn about it.
> - second paragraph: "think of this like providing a callback function in an object-oriented language", when it should be "in a functional programming language" (callbacks aren't OOP, they are by definition functional programming)
As others have pointed out, you are taking a very superficial and literalist definition of OOP and FP. That said, I should have said "callback object", because that's what the example actually illustrates.
> is this service on the server side, or the client side?
Cap'n Proto is a peer-to-peer protocol, not a client-server protocol. Either side can export interfaces and either side can initiate calls.
> - Why name a message `Data` when it's clearly NOT a chunk of data in layer-3/layer-4, but rather a layer-7 artifact with retries and checksumming implemented to ensure complete and accurate message delivery?
`Data` is a basic data type in Cap'n Proto. It means an array of bytes. You can use any other type here if you want, it's just an example.
> > because the protocol can only achieve optimum throughput if a sender sends a sufficiently large quantity of data before being required to stop and wait until a confirming message is received from the receiver, acknowledging successful receipt of that data
> Which is not the case for Cap'n'Proto
What is "not the case"? The whole point of this streaming feature is to do exactly what's described in your Wikipedia quote.
> And there's no discussion of end-to-end problems like BufferBloat
BufferBloat is mentioned several times in the post (though I called it "queuing latency", describing the symptom rather than the cause).
> Going to https://capnproto.org/rpc.html immediately puts me off by inventing "time travel", calling it "promise pipelining" and showing an impossible trace diagram (you can't have messages go backwards in time).
"Time travel" and "infinitely faster" are obviously tongue-in-cheek claims.
> However, even when using the example of a file system (which is about as exception-intense as you can imagine), exceptions are ignored.
They are not ignored. Exceptions propagate to dependent calls. So if you send a chain of pipelined calls and the first call throws, all the later calls resolve by throwing the same exception. Eventually the caller waits for something and discovers the exception.
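To make that concrete, here's a minimal sketch of the pattern in C++. The `Filesystem`/`openFile`/`read` names are made up for illustration, not the actual example from the docs.

```cpp
// Hypothetical schema: interface Filesystem { openFile(path) -> (file :File); }
//                      interface File { read() -> (data :Data); }
// `fs` is a Filesystem::Client; `waitScope` comes from the event loop setup.
auto openReq = fs.openFileRequest();
openReq.setPath("/does/not/exist");
auto openPromise = openReq.send();           // this call will eventually throw

// Pipelined call on the not-yet-returned File capability:
auto readPromise = openPromise.getFile().readRequest().send();

// openFile()'s exception propagates to the dependent read() call, so the
// caller sees it whenever it finally waits on something:
auto response = readPromise.wait(waitScope);  // rethrows the original exception
```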
> it turns out they haven't actually performed the compile-time indirection, but actually block the calling thread
No, the calling thread does not block.
> like any random do-it-yourself-RPC framework out there.
Haha yeah that's me, just some amateur that knows nothing about network protocols...
No, that part of the protocol defines mobile code -- code which an RPC caller can ask the remote callee to execute directly on the remote machine. It's intentionally limited because Cap'n Proto is not trying to be a general-purpose code interpreter. Most RPC systems don't have this at all.
KJ Promises -- the underlying async framework that Cap'n Proto's C++ implementation is built on -- let you write arbitrary code using monadic control flow. But that arbitrary code executes on your own machine, not the remote machine.
It would be nice to see how this would be interpreted into an AST and executed as an active message on the server (receiver).
That said, I brought it up because the copy alluded to it. It's a great time sink to build an interpreter, even if it's only acting on a unit of work whose variant is strictly decreasing; just look at Linq-to-SQL and IQbservable<T> a decade ago.
> KJ Promises
Side note: another bit of copy that greatly frustrates me, as I now try to find the docs on the above async stuff:
> Essentially, in our quest to avoid latency, we’ve resorted to using a singleton-ish design, and singletons are evil [linked to a page that crashes for HTTPS-everywhere users (me) with PR_END_OF_FILE_ERROR].
(Besides the broken link,) singletons are not always evil. I know you know this, I know the copy is tongue-in-cheek again and being ironic, BUT POE'S LAW FOR CRYING OUT LOUD :D https://en.wikipedia.org/wiki/Poe%27s_law — "Poe's law is an adage of Internet culture stating that, without a clear indicator of the author's intent, it is impossible to create a parody of extreme views so obviously exaggerated that it cannot be mistaken by some readers for a sincere expression of the views being parodied"
...so KJ Promises; I can't find that mentioned. I only find Promise Pipelining; but that must be your Ops that allow for field traversal? The site has this copy:
> [RPC Page] With pipelining, our 4-step example can be automatically reduced to a single round trip with no need to change our interface at all.
But I must be from another planet, because I really don't understand:
- first you have a pretty decent design of files that mimic local files
- now you instead showcase what rich messages look like, calling that a "[message?] singleton", linking to a broken site
- path string manipulation exists in every standard lib, it's not something we implement
- if someone wants to perform multiple ops on a file - let's say read a chunk of it
* only Data needs to be reused for there to be no copies (contrary to the copy), but that's also a problem in the first decoupled example
* often, almost always, when I read about what an RPC system can do, I'm not in the memory-management mind-set, so I don't care about re-allocating resources
* caching is not a relevant solution; on the contrary, it's completely irrelevant in this context and it detracts from understanding what you want me to understand
* caches aren't error-prone when used right, like with immutable data, or read-through caches as transparent proxies can do, but all of this is beside the point
- then there's a discussion about "giving a Filesystem" to someone, when it's really all in my program
* hard-coding a path is out of scope; that's about engineering process, not about the software. You ask "But what if they [have] hard-coded some path", I answer "yes, so what?"
* what if we don't trust them (our own code?) — no, it's not an AuthZ decision made locally, it's made remotely, so you want to be explicit about the error cases here, but there's nothing about it — instead the copy says "now we have to implement [authN authZ systems]" — but again, this has nothing to do with merging small interfaces into a larger interface; it's a problem even with the small interfaces
- the section ends with the broken link and then the next section states "Promise Pipelining solves all of this!" — but noo, there's so much mentioned above, and the premise is unclear and I have no idea what exactly promise pipelining solves!
And then you go with the calculator example; but the file is large and I still don't know what "Promise Pipelining" means, where to look. I see a lot of construction of values going on and then a blocking wait (polling of the event-loop, but that's also not the point). There are so many bugs in that copy, that it's really hard to know where to start detangling it! With that copy, I would never in my life touch the underlying code! It should be fixed! (sorry, I'm getting into a state here, but that copy... wow)
And this is the kicker that seals the deal:
> Didn’t CORBA prove
Ehm, WTF? Why not contrast with gRPC? But also, why not clarify the above first so I can use that understanding myself? If I'm not a newbie at this, why do you mention CORBA? Do I look like the kind of person that would ask that question? It's demeaning to the reader.
And you mention object capabilities; that's a HUGE area of research, and I've never worked with a live system that uses it. But here it's casually mentioned, as if building such a system is a walk in the park, without explaining how.
---
So here I am after another frustrated 40 minutes on the site, and I still haven't found the docs on KJ Promises.
Very nice of people to down-vote because they don't agree with what I write; I'm not being mean or evil here, I'm just stating an opinion and trying to back that opinion up with clear reasoning, references and links to actual research. Down-voting then seems very much like group-think to me.
> It's not polling. The idea is that the callback is called multiple times to send all the chunks, for one invocation of `streamingCall()`. Sorry if that wasn't clear.
The point of my comment was exactly that there were many things in the release notes (and subsequent visits to the site) which were unclear.
> As others have pointed out, you are taking a very superficial and literalist definition of OOP and FP.
Precisely, which is my point; why use that terminology at all? It's neither specifically OOP nor FP, but if it's something, it's more FP than OOP to program with functions. Either way, it's super-superficial (pun intended! :)) and detracts from your message in your release notes!
> `Data` is a basic data type in Cap'n Proto. It means an array of bytes. You can use any other type here if you want, it's just an example.
But my point is whether the RPC framework guarantees complete messages, or only complete "chunks", to arrive atomically?
> What is "not the case"? The whole point of this streaming feature is to do exactly what's described in your Wikipedia quote.
You're not solving the problem as far as I can read in your article, you've only pointed it out. In my second link, you'll see there are a) second-, third-, etc. hop buffers to take into consideration at the sending side, b) there's timing to take into consideration, c) there's the whole "send a large message initially" heuristic to take into consideration; all in all it turns into the end-to-end argument: you need knowledge on both sides to optimise this, and preferably an out-of-band channel.
> "Time travel" and "infinitely faster" are obviously tongue-in-cheek claims.
It's obvious the literal claims are, but it may not be obvious to readers that your trace diagram is also tongue-in-cheek; at least clearly break with the jokey attitude afterwards and perhaps show an actual trace diagram of how it works (because the trace diagram is only partially a lie; the pipelining is still an interesting concept that I think people want to understand!).
> No, the calling thread does not block.
In your sample here (https://github.com/capnproto/capnproto/blob/master/c++/sampl...) you block to get the result. What I'm saying is that it's very unclear how you compose your flow without doing the blocking wait to get intermediate results in between RPC calls. (Exactly: you seem to lack a monadic control flow, specifically the tie-in that endofunctors would give you, and given this lack, you cannot handle exceptions out of band from the happy path via Choice/Either/Result types.)
> Haha yeah that's me, just some amateur that knows nothing about network protocols...
No, it's not about you. I'm sure you're very competent. I'm sure Cap'n Proto is competent too; what I'm talking about is the way the framework is described and how the messaging looks to someone with close to two decades of programming experience. I hope you take it like this: as constructive criticism. I'm already very happy that you replied, because just as I believe you're a staunch proponent of your RPC framework and believe in it, I believe in my gut feeling when it comes to software, and my gut is telling me the above about your marketing message.
> But my point is whether the RPC framework guarantees complete messages, or only complete "chunks", to arrive atomically?
Method calls arrive atomically. There is no concept of "chunks" in the RPC system itself; that was just what I used in the example. In practice by far the most common use case for streaming I've seen is byte streaming, e.g. large file downloads, so that's what the example used, but it's not limited to that.
> you need knowledge on both sides to optimise this, and preferrably an out of band channel.
Yes, Cap'n Proto has knowledge from both sides. The "Return" message from each call serves as an application-level acknowledgment that the message has been received and processed. This is enough information for the sender to maintain a send window that places an upper bound on buffer bloat. Calculating an ideal window size is tricky and the current solution of stealing the OS's choice is, as admitted in the post, a hack. But all the necessary information is there to do something better in the future.
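(To illustrate the bookkeeping idea only, not Cap'n Proto's actual flow-control code: each outstanding call's size counts against a window until its Return arrives.)

```cpp
// Conceptual sketch only -- not Cap'n Proto's real implementation.
// "Return received" is the application-level ack described above.
struct SendWindow {
  size_t windowSize;      // max bytes allowed in flight at once
  size_t inFlight = 0;    // bytes sent whose Return hasn't arrived yet

  bool canSend(size_t chunkBytes) const {
    return inFlight + chunkBytes <= windowSize;
  }
  void onSend(size_t chunkBytes)   { inFlight += chunkBytes; }
  void onReturn(size_t chunkBytes) { inFlight -= chunkBytes; }
};
```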
`.wait()` is a convenience method that means "run the event loop until this promise resolves". It can't be used recursively (so, can't be used from a callback called by the event loop), which means it can only be used at the top level of the program, e.g. in the program's main() function. Typically it is used in client code which really has one main task that it's trying to do. It turns out this pattern is a lot clearer and cleaner than the usual approach where you'd say "run the event loop now" and then have some other thing you call elsewhere to say "please break and exit from the event loop".
(Actually, with this release, the above isn't quite true anymore. .wait() can also be used in fibers. A fiber is an alternate call stack running in the same thread as the event loop, but not running the event loop itself. A fiber can call .wait() on a promise to switch back to the main stack and run the event loop until the promise resolves. This is a hack, not recommended for common use, but can be very useful for adapting synchronous libraries to work in an asynchronous program.)
Most KJ async code, though, does not use `.wait()`. It uses `promise.then(callback)`, which is exactly the monadic control flow you are looking for.
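A minimal sketch of that split, using a hypothetical `Api` interface (the schema and address are made up; real setup details vary):

```cpp
#include <capnp/ez-rpc.h>
#include <kj/string.h>
// Assumes a hypothetical schema: interface Api { getName @0 () -> (name :Text); }

kj::Promise<kj::String> fetchGreeting(Api::Client api) {
  auto req = api.getNameRequest();
  // .then() is the monadic flow: chain more work onto the promise without
  // ever blocking the thread.
  return req.send().then([](capnp::Response<Api::GetNameResults> response) {
    return kj::str("Hello, ", response.getName());
  });
}

int main() {
  capnp::EzRpcClient client("localhost:1234");   // address is illustrative
  Api::Client api = client.getMain<Api>();

  // .wait() appears only here, at the top level: it runs the event loop
  // until the promise resolves (or rethrows a propagated exception).
  kj::String greeting = fetchGreeting(api).wait(client.getWaitScope());
  return 0;
}
```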
> someone with close to two decades of programming experience
Cool, I just hit the 30-year mark myself.
> I believe in my gut feeling when it comes to software — and my gut is telling me the above about your marketing message.
Exactly. Look, here's what I think happened here: You looked at this page, you saw the silly marketing, and your "gut feeling" told you that this guy is an amateur who needed to be put in his place. So then you started seeking out details that you could criticize as amateurish. But there's a lot to grok here, and instead of actually digging into every detail in depth, you started filling in the bits you didn't understand with your own assumptions. And in your assumptions, you assumed the details must be amateurish, because that's what your gut told you. So then you end up criticizing the amateurish details you yourself made up.
For example, you assumed that KJ promises don't use monadic control flow, when they most emphatically do. How did you get there? You looked at some code and found one example that didn't happen to use the monadic flow (unlike 99% of KJ async code), and then you assumed the rest because of course an amateur wouldn't know about monads.
This is why you were downvoted. (Not by me. Everyone else could see what was happening.)
To be fair, this is totally normal human behavior -- it's called "confirmation bias". This particular flavor of it is especially common among programmers and HN readers, in my experience. (I do it myself all the time.)
Now sure, maybe my post and the web site in general could have been better at explaining the details. But my advice for you is, next time, try to recognize when you don't actually know the details, and then try to assume the best possible details, or at least ask questions, rather than assuming the worst.
> Exactly. Look, here's what I think happened here: You looked at this page, you saw the silly marketing, and your "gut feeling" told you that this guy is an amateur who needed to be put in his place. So then you started seeking out details that you could criticize as amateurish.
Again, I'm not attacking you as a person or your work, I'm attacking the shoddy copy. But yes, this whole thread is about what the first impression of your web site is like, and it's amateurish. Perhaps you're someone who can live with that and the lost conversions (?) from visits it results in; maybe you're not. In any case, from running businesses myself, I know that getting clear and honest feedback is worth its weight in gold.
> instead of actually digging into every detail in depth
Which is precisely my point that I won't do. If you can't communicate clearly to your visitors on their first visit, but expect them to dig into the details, you've lost that battle.
> For example, you assumed that KJ promises don't use monadic control flow, when they most emphatically do. How did you get there? You looked at some code and found one example that didn't happen to use the monadic flow.
Again, what I found has nothing to do with you. I have yet to actually see the examples you mention in your docs, and the pages I've linked to and the calculator sample don't show your statements to be true (based only on that). Perhaps, instead of being so tongue-in-cheek on your introduction page, you could assume the visitor is not an amateur, and show what you've got (you can move the funny bits to after the introduction, when the reader is in on the joke)?
> The "Return" message from each call serves as an application-level acknowledgment that the message has been received and processed. This is enough information for the sender to maintain a send window that places an upper bound on buffer bloat.
THIS is the kind of copy you could use. Also: not using a confusing `Data` record name, which conflates L4 and L7 semantics (even if that data is identical in memory to a record), not using `my-` prefixed variables, but building a use-case like "suppose this was for streaming music", or something.
If uptake is an aim, focusing much more on languages other than C++ is probably wise, too. And some diagrams to explain what trade-offs have been made (can you load-balance Cap'n Proto like you can gRPC? Is the "-Proto" part Protobuf, so I can use an existing schema registry in my "enterprise"? Is it safe, "like HTTP/2"? That is, the newbie stuff in relation to the world around you).
> Cool, I just hit the 30-year mark myself.
Time flies, eh? ;)
> But my advice for you is, next time, try to recognize when you don't actually know the details, and then try to assume the best possible details, or at least ask questions, rather than assuming the worst.
I'm aware I don't know the details! I've said so repeatedly. If I read the details while having a conversation about how someone who's never encountered Cap'n Proto before reacts to its material, I'm no longer that person.
From that person's perspective, you have a bit of a problem with marketing; I've actually bounced off your front page about three times in the last six (maybe?) years and the copy/messaging has always put me off. I'm sure I'm not the only one. Just ping me if you'd actually like to have someone review that copy for you.