Cap'n Proto 0.8: Streaming flow control, HTTP-over-RPC, fibers (capnproto.org)
259 points by kentonv on April 23, 2020 | 72 comments


Kenton,

THANK YOU for not only designing Cap’n Proto but also continuing to work on it.

Before today, I had a slight concern that, given Sandstorm's situation, Cap'n Proto would fall in priority. Perhaps Cloudflare Workers are now making a great case for it?

My use case has been Cap'n Proto w/ Python. Being able to do RPC is icing on the cake.

I will be looking into streaming fields of entities (classes) soon - the idea being that decentralized microservices have local knowledge of how to transform and output a field, and other modules/microservices just reach out to them and ask for it.

A silly question: do you foresee any issues getting Cap’n Proto 0.8 to work over a completely locked down Docker environment that only allows an HTTPS proxy as the connection between nodes that use Cap’n Proto RPC?

PS: Thoughts on why Cap’n Proto did not win over MessagePack? MessagePack had a better JS implementation? Is it the convenience of not needing to define a schema when using MessagePack vs Cap’n Proto schema requirement?


> A silly question: do you foresee any issues getting Cap’n Proto 0.8 to work over a completely locked down Docker environment that only allows an HTTPS proxy as the connection between nodes that use Cap’n Proto RPC?

If it supports WebSocket, it should be relatively easy to layer Cap'n Proto RPC on top of that. Alternatively, some proxies allow full-duplex HTTP (request and response bodies streaming simultaneously), which could also be enough to bootstrap a connection on top of -- but in practice that tends to run into a lot of problems.

Otherwise, that's tough. HTTP is fundamentally a one-way, FIFO request-response protocol, whereas Cap'n Proto is multi-directional and asynchronous. Starting a separate HTTP connection for each call -- with connections initiated in both directions -- would be pretty ugly and have lots of issues with synchronization and routing.

> PS: Thoughts on why Cap’n Proto did not win over MessagePack? MessagePack had a better JS implementation? Is it the convenience of not needing to define a schema when using MessagePack vs Cap’n Proto schema requirement?

I don't really consider MessagePack a direct competitor to Cap'n Proto. It's more of a competitor to JSON. Schema-driven vs. non-schema-driven changes everything about how you use a serialization.

A more apples-to-apples comparison is Protobuf. Protobuf is much more popular for a simple reason: it has had a lot more engineering investment, leading to mature implementations in more languages and lots of great tooling that capnp doesn't have (yet). No amount of clever design can beat that.


> If it supports WebSocket, it should be relatively easy to layer Cap'n Proto RPC on top of that.

How would the new streaming functionality be implemented over WebSockets? WS has no flow control. You can check bufferedAmount but I found it to be fairly useless[0]. Maybe it's improved in the last 1.5 years or I was using it wrong.

> HTTP is fundamentally a one-way, FIFO request-response protocol, whereas Cap'n Proto is multi-directional and asynchronous. Starting a separate HTTP connection for each call -- with connections initiated in both directions -- would be pretty ugly and have lots of issues with synchronization and routing.

I think you could get a long way using HTTP/2 and server-sent events.

[0]: https://github.com/websockets/ws/issues/492

EDIT: added link to bufferedAmount issue


> How would the new streaming functionality be implemented over WebSockets? WS has no flow control.

Sure it does. WS is just a framing protocol on top of a regular TCP connection. Though it sounds like you're not talking about the protocol so much as the JavaScript API, which perhaps doesn't give you enough visibility into the underlying TCP socket state.

But a BBR-like algorithm could still work. Basically (massively oversimplified):

1) Determine the connection latency based on the fastest response you've seen.

2) Determine the connection throughput by tracking the highest throughput you've ever seen.

3) Set your window size to be a little bit more than latency * throughput.

This "should" saturate the connection with just a little bit of buffering. If it doesn't saturate the connection, then your measured throughput will go up until it does.

This way there is no need to ask the OS or browser to tell you how much is buffered...

Disclaimer: I have yet to actually implement something like this. Obviously, it gets tricky in the details.
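The three steps above might look something like this in code. This is an illustrative Python sketch of the idea only, not anything from Cap'n Proto; all names (`WindowEstimator`, `on_ack`, the initial window constant) are invented for the example:

```python
# Hedged sketch of a BBR-like window estimate, per the three steps above.
# Illustrative only; not Cap'n Proto's actual implementation.

class WindowEstimator:
    def __init__(self):
        self.min_rtt = float("inf")   # fastest round trip seen (seconds)
        self.max_throughput = 0.0     # highest bytes/sec ever observed

    def on_ack(self, bytes_acked, rtt):
        # 1) Latency estimate: the fastest response you've seen.
        self.min_rtt = min(self.min_rtt, rtt)
        # 2) Throughput estimate: the highest throughput you've ever seen.
        self.max_throughput = max(self.max_throughput, bytes_acked / rtt)

    def window(self, slack=1.25):
        # 3) Window = a little more than latency * throughput (the
        # bandwidth-delay product), so the pipe stays full with only
        # slight buffering.
        if self.min_rtt == float("inf"):
            return 64 * 1024  # arbitrary initial window before any samples
        return slack * self.max_throughput * self.min_rtt
```

If the window is too small to saturate the connection, measured throughput rises on the next samples, which grows the window until it does saturate, exactly the self-correcting behavior described above.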


> Sure it does. WS is just a framing protocol on top of a regular TCP connection. Though it sounds like you're not talking about the protocol so much as the JavaScript API, which perhaps doesn't give you enough visibility into the underlying TCP socket state.

Every WS implementation I've ever seen is nonblocking on both sides. Are you aware of any that aren't? I'm not even sure the spec allows for that.

But yes you are correct that you can implement more advanced algos on top.

EDIT: Sounds like blocking implementations do exist, but unfortunately I'm constrained to a browser environment, and all the browsers are nonblocking.


Heh. The WebSocket implementation I wrote for KJ does in fact provide backpressure (the send() method returns a promise that resolves when it's a good time to send the next message). I guess I'm surprised to hear that most don't...


I think it's more likely I've just been too deep in JS-land. Does KJ send() do any internal buffering, or wait for the OS to tell it to send more?


I've been looking into streams in detail recently because I'm preparing for the OpenJSF exam... maybe this is relevant to you too: https://nodejs.org/es/docs/guides/backpressuring-in-streams/


Thanks for the link. I'm actually quite familiar with that article. It's been very useful for me when designing omnistreams. You may find some of the links on the bottom of this page useful:

https://github.com/omnistreams/omnistreams-spec


It waits for the OS.


off the top of my head https://godoc.org/golang.org/x/net/websocket has a synchronous read/write api


Does that block until messages are actually transferred, or just shove them in a golang buffer and return? Now that you mention it I feel like maybe I have seen blocking implementations in golang and Python a while back, which would make sense.


Skimming the implementation, there is buffering, but it looks like it's bounded, so if you tight-loop Write() you should still end up blocking on the OS at some point.


Makes sense. Thanks for correcting my misconception!


> HTTP is fundamentally a one-way, FIFO request-response protocol...

I have to disagree partially with this... HTTP allows a server to start streaming a response back as soon as it has received all header fields from the client, and since HTTP/1.1 it can stream back an infinite sequence of chunks, which the client can be doing at the same time - effectively resulting in a two-way communication channel, as long as the underlying TCP connection allows it (which is normally true, unless you run into bad proxy implementations in the middle). Server-sent events are an incomplete implementation of this idea. I have used this in the past, before websockets, and I fail to see the advantage of using websockets over this.


You're quoting me from two posts up, but I also specifically called out full-duplex HTTP in that post.

HTTP/1.1 does indeed support full duplex. However, most proxies don't. E.g. if you put nginx in front of your application server, you might lose the ability to do full duplex streaming right there. Or, if your client is behind a normal forward proxy, they might not be able to get full duplex. But, it can be pretty hard to figure out what's wrong. So, in practice I don't recommend relying on HTTP/1.1 full duplex. To be honest, I don't recommend HTTP/2 full duplex either, because HTTP/2 is just an optimization applying to individual network hops, and you still have the same proxy software in between screwing things up (and maybe even downgrading the request to HTTP/1.1 for the next hop).

Personally I recommend using WebSocket when you expect full duplex. That way, all intervening proxy servers either know that you are expecting full duplex (because they recognize WebSocket), or connection setup will fail fast.


I realized I replied to the wrong comment :) but I had some issue with the suggestion regarding server-sent events... anyway, thanks for answering, I really appreciate your work and it's nice to hear your input on this.


Not sure a straight comparison to MessagePack is really fair, since capnproto provides a lot more functionality. Unless you know what it's gaining you the syntax probably comes across as more complicated than necessary.


I'm using grpc/protobuf for the fact that it supports plenty of languages: C++, C#, Python, Dart, even Rust and others (Node, PHP, etc.). What is the state of Cap'n Proto when it comes to this? I've tried to look at the C# project, but the page was missing on GitHub.


Admittedly, this is a huge weakness of Cap'n Proto. The C++ implementation (which is the one I use and work on personally) is mature. There are pretty solid implementations in Rust and Go, too. But it falls off after that, with most implementations being serialization-only and at various levels of (im)maturity.

There's not a lot I can do about this. Cap'n Proto adoption doesn't directly drive revenue for anyone in particular, so I can't hire an army of engineers to throw at it... People who want better Cap'n Proto support in each language need to step up to help make it happen.

One thing I am looking at doing is making it easier for per-language serialization implementations to bind to the C++ RPC implementation. This might make a lot of sense, since the serialization implementations have wide APIs but shallow implementation details, while the RPC implementation is a pretty narrow API with very complex implementation. And it turns out Cap'n Proto messages are super-easy to pass between languages since the in-memory format is by design the same across languages -- passing around byte buffers tends to be pretty easy.


Some of it is also that, much as with schema-less protocols, it's a bit of an apples-to-oranges comparison at the RPC level, since capnproto's RPC is so much more expressive -- but also complex to implement. I think only some of the difference in available implementations is due to reduced engineering effort; the long tail of serialization-only implementations is testament to the fact that implementing cap'n proto rpc is Not Trivial. Unfortunately, I think a lot of this complexity is inherent to what cap'n proto rpc is trying to do.


> One thing I am looking at doing is making it easier for per-language serialization implementations to bind to the C++ RPC implementation

Depending on how much more mature the C++ implementation is, you might consider using the Rust version for this instead. I've toyed around with exposing Rust to C (for a TCP-based message protocol[0], as chance would have it), and it worked pretty great.

[0]: https://github.com/anderspitman/messend-rs


While the Rust implementation of Cap'n Proto is one of the better ones, it's still received only a tiny fraction of the engineering investment that the C++ implementation has.


I am wondering if D, Nim and Zig would be able to just leverage the C++ version of the Cap'n Proto library directly? (I think D has built-in C++ API support across compilers, not sure about the others)


Probably not. Remember that Cap'n Proto (like Protobuf) involves defining protocol schemas in an IDL and then using a code generator to generate classes with getters and setters and such in each language. Programs that use Cap'n Proto often use these generated APIs throughout their codebase. While you could perhaps take these generated classes and wrap them wholesale, there are two big problems with doing so:

1) You end up with APIs that are not idiomatic for the calling language. For instance, D supports properties, where C++ uses separate getters and setters. Also, FFI wrappers tend to add an additional layer of ugliness in order to translate features that don't exist in the calling language. If it were an API you only used in one small part of your code maybe this would be fine, but spread all over your codebase would be awful.

2) The generated getters and setters are designed to be inlined for best performance, but cross-language inlining is often not possible. In fact, most FFI wrappers incur a runtime performance penalty to convert between different conventions, and this penalty is going to be extra-severe when calling functions that are intended to be lightweight.

So this is why I say that the serialization layer -- which includes all this generated code that apps interact with directly -- should be native to the language.

But, you could use the native serialization layer to construct messages, and then pass it off to the C++ RPC implementation. The RPC implementation has a fairly narrow API surface with an extremely complex implementation behind it, so it's a perfect candidate for this.


All the protobuf implementations I've worked with (especially protoc descendants) just feel like they wrapped the C implementation with some FFI and called it a day. They're all ugly and unidiomatic. So it's not exactly a high bar to meet.


For my needs, I'm ignoring the "ugly" bits. I'm looking for statically typed checks - e.g. avoid spelling errors. Also discoverability - e.g. start typing the name of your service press "." and it gives you the options, then Alt+Space and what you can provide - it's really easy with C# And Visual Studio.

That said, it's really ugly as an API.


I've heard of similar quality issues with other RPC libraries (either Thrift or Avro, I can't remember which). In my cross-language work, everything becomes very functional and non-idiomatic due to the overhead.


got it. thank you for the explanation. I am planning to add a 'multiplayer' feature, where multiple participants need to quickly exchange positional and surrounding attributes.

Currently the system is Java + a JS front end. I feel that the JSON serialization I currently use is not the right thing..

But at the same time, I care about the 'size in kb' of the JS front end. Therefore I have been learning the options.


A big selling point for me would be a built-in IPC mechanism instead of TCP or UDP - be it mailslots (Windows), named pipes, shared files, etc. - it does not matter. Now there are some projects that implement IPC over gRPC, but not as part of the actual project.

Why am I asking for this? For the simple reason that I don't want to deal with port allocation on a CI.


Cap'n Proto works great over unix sockets. For sandboxing usecases in Sandstorm and Cloudflare Workers, I've commonly used it over anonymous socketpairs -- definitely no ports involved there. :)

In fact, you can adapt the RPC system to operate over any kind of byte stream transport pretty easily, by implementing the kj::AsyncIoStream interface. Or if you already have a standard file descriptor (or iocp-compatible HANDLE in Windows), you can use that.

One fancier thing that's still on the roadmap is shared-memory IPC. Cap'n Proto's zero-copy serialization was really built for this, but so far, for all my real-world projects, Unix sockets have been fast enough, so I haven't been forced to fully implement a shared memory transport yet. Maybe soon?
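For illustration, the anonymous-socketpair idea mentioned above (a connected pair of sockets with no ports involved) looks like this in a Python sketch. This only shows the transport primitive itself, not Cap'n Proto on top of it:

```python
# Sketch: an anonymous socketpair as a byte-stream transport.
# No ports, no addresses - the kernel hands back two connected endpoints,
# and one end can be passed to a sandboxed child process.
import socket

parent, child = socket.socketpair()  # connected pair of unix-domain sockets

child.sendall(b"hello from the sandbox")
data = parent.recv(1024)

parent.close()
child.close()
```

Any RPC system that can run over an arbitrary byte stream (as described above via kj::AsyncIoStream) can use a pair like this as its wire.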


It's a bit of a frivolous question, but would it be easy to use stdin/stdout as a transport for capnp?


Sure, you could do that. You'd need to write a little shim to bind separate input stream and output stream FDs into a single AsyncIoStream but that shouldn't be hard.


Have you looked into ZeroMQ? It has built-in shared memory IPC without having to worry about ports. Though I'm not sure exactly what you mean by "port allocation" in this context.


Kenton, have you looked into what Fuchsia is doing with FIDL? https://fuchsia.googlesource.com/docs/+/ea2fce2874556205204d... (not sure if recent page, but good enough)

Just wondering about your opinion... Thanks!


I've heard it mentioned but haven't had the chance to look closely.


I just want to say, I've been following Capn for a few years now, excellent work, keep it up! I feel it's a better architecture overall than protobuf and wish I had more time to contribute to the project.


Thanks!


kenton, I admire your work on Cap'n Proto and Sandstorm.

Question about time traveling promises - I read the documentation, and it sounds to me like - taken to the logical extreme, that would effectively mean an interpreter - because at some point, you don't just want to use return values from one call as parameters in the other - you'd also want e.g. to build a one-network-roundtrip "create-if-not-exists" call from "exists(name)" and "create(name)", which would require the second call's activation to be dependent on the first call's result (rather than its parameters) - which is just a small change.

But if you consider that, and error handling, and a few other relatively simple cases, you quickly end up with an informally specified, bug ridden, incomplete implementation of Emacs Lisp inside the RPC implementation.

So, my question is - where do you draw the line, and how do you decide to draw it? I don't believe there's a right answer, but I wonder about your philosophy.


Indeed, the comments allude at this possibility (note the TODO):

https://github.com/capnproto/capnproto/blob/77f20b4652e51b5a...

But in practice, the only kind of "script" we support currently is following a chain of named fields followed by invoking a new RPC method, unconditionally, as in:

    fooResult = cap.foo();
    quxResult = fooResult.bar.baz.qux();

This seems to satisfy the vast majority of real-world use cases.

I would only add other operations if I identified some use case where it turns out to be a really big performance win. So far I haven't seen any.


Does anybody know if the Go implementation is still maintained? The author is unresponsive and there have not been any updates in quite a while.

https://github.com/capnproto/go-capnproto2


There was a commit yesterday? But it does seem to have slowed down.


Go figure, just as soon as I post my concern to HN! Oh well, here’s to hoping development picks up again :)

Edit: unfortunately the commit seems to be to a funding.yml. While I really can’t afford to contribute myself, I hope someone does.


Impressive work! I just wish it had a less awkward API for serialisation and deserialisation. Compare the Rust example for Capnp with Protobuf:

https://github.com/capnproto/capnproto-rust/blob/master/exam...

https://docs.rs/prost/0.6.1/prost/trait.Message.html#method....

(Ok I couldn't actually find an example for Prost because all you do is create a normal Rust `struct` and call `encode()` on it.)


Yeah, this is kinda the cost of not having an encode/decode step. The Haskell implementation (of which I am the primary author) provides a higher-level API with "normal" data types for cases where performance requirements aren't stringent enough to merit the extra burden on the developer. I'm mostly interested in RPC, so I rarely use the low-level API myself...


This is awesome. The new streaming stuff is particularly interesting to me, and it's very impressive that you managed to implement it with no protocol changes. I have a couple questions. Please forgive any misconceptions as I've never used capnproto myself, since all the streaming I've done has to work in the browser, and as far as I know capnproto doesn't work over WebSocket or WebRTC transports. But I've long been impressed with and inspired by capnproto and sandstorm.

Main question: is there a reason you opted for traditional window flow control a la TCP, as opposed to "request-N" style like in reactive streams[0] (see rsocket[1] for a great implementation)?

So with request-N, a server->client stream would look something like this:

    interface MyInterface {
      streamingCall @0 (callback :Callback) -> (requester :Requester);

      interface Callback {
        sendChunk @0 (chunk :Data) -> ();
      }

      interface Requester {
        request @0 (n :UInt32) -> ();
      }
    }

(Note: Cap'n Proto's schema language has no `int` type; `UInt32` is used here.)

And the server will only ever send as much data as has been requested by the client with requester calls. This results in really elegant flow control that takes into account both the network, and the client's capacity to consume, without the necessity of tracking windows. The receiver simply calls request(1) each time it processes a message. If you want a buffer you can just start with an assumed N=10, 100 etc.

I've found this worked really well when implementing omnistreams[2], which is basically a very thin streaming/multiplexing layer for WebSockets, since WS doesn't have any flow control. (fibridge[3] is a good example of it in action). I started with a window-style but once I learned about reactive streams the request model was much easier to reason about for me.

[0]: https://github.com/reactive-streams/reactive-streams-jvm

[1]: https://github.com/rsocket/rsocket

[2]: https://github.com/omnistreams/omnistreams-spec

[3]: http://iobio.io/2019/06/12/introducing-fibridge/
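As a rough illustration of the request-N model described above, here is a hedged Python sketch of credit-based sending. The class and method names are invented for the example; this is not omnistreams' or rsocket's actual API:

```python
# Sketch of "request-N" (credit-based) flow control, reactive-streams style.
# The receiver grants credits; the sender only sends while it holds credit.
import collections

class Sender:
    def __init__(self):
        self.credits = 0
        self.pending = collections.deque()  # chunks waiting for credit
        self.sent = []                      # stand-in for the wire

    def request(self, n):
        # Called (remotely) by the receiver: "I can take n more chunks."
        self.credits += n
        self._drain()

    def offer(self, chunk):
        # Called by the producing application.
        self.pending.append(chunk)
        self._drain()

    def _drain(self):
        # Send only as much as the receiver has asked for.
        while self.credits > 0 and self.pending:
            self.credits -= 1
            self.sent.append(self.pending.popleft())
```

The receiver calling `request(1)` after processing each message, or starting with `request(10)` to keep a buffer, falls out naturally from this structure.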


Hmm, to me, what you describe still sounds window-based, it's just that the receiver chooses the window size. The question then is: how does the receiver decide on a good size? If it chooses a window that is too small, it won't fully utilize the available bandwidth. If it chooses one too big, it'll create queuing delay.

This is a very hard question to answer and many academic papers have been written on the subject. But the strategies I thought about seemed easy enough to compute on the sender side, and the sender is the one that ultimately needs to know the window size in order to decide when to send more data.

But I can totally imagine that there are applications where the receiver knows better how much data it wants to request at a time. You can, of course, use a pattern like you suggest to accomplish that, without any help from the RPC system.

Regarding WebSockets, you could totally make Cap'n Proto RPC run over WebSocket. It wouldn't even be much work to hook up the C++ RPC implementation to KJ's HTTP library which supports WebSocket. The harder problem is that there isn't currently a JavaScript implementation of capnp RPC... :/


> If it chooses a window that is too small, it won't fully utilize the available bandwidth. If it chooses one too big, it'll create queuing delay.

Yeah, that's a valid concern, and one I've run into in practice.

It's true that in environments where the server has access to TCP socket information, traditional windowing will have an advantage for performance. You may even be able to do some sort of detection as to how saturated the interface is from other processes.

As I see it, the main advantage of the pull-based backpressure I described is the simpler mental model, making it easier to reason about and implement. So in environments with limited system information for the sender (i.e. WebSockets, which know basically nothing about how full the buffers are), you don't have to pay the extra complexity cost with no benefit.


Hmm, but if the puller doesn't actually know what value of `n` is ideal, then what benefit is there to a pull-based model vs. having the pusher choose an arbitrary `n`?


The network isn't the only resource in play. The puller is hypothetically more aware of the size of its buffers, processing capacity, internet connection speed, etc. But again, to me the primary advantage is the mental model. For omnistreams the implementation ended up being almost the same as the ACK-based system I started with, but shifting the names around and inverting the model in my head made it much easier to work with.


Fair enough.

FWIW, Cap'n Proto's approach provides application-level backpressure as well. The application returns from the RPC only when it's done processing the message (or, more precisely, when it's ready for the next message). The window is computed based on application-level replies, not on socket buffer availability.

My experience was that in practice, most streaming apps I'd seen were doing this already (returning when they wanted the next message), so turning that into the basis for built-in flow control made a lot of sense. E.g. I can actually go back and convert Sandstorm to use streaming without actually introducing any backwards-incompatible protocol changes.
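The pattern described above (the sender keeps a bounded number of calls in flight, and sends more only as the application finishes processing earlier ones) can be sketched like this. This is an illustrative asyncio analogue, not Cap'n Proto's actual implementation, and all names here are made up:

```python
# Sketch of application-level backpressure: each "RPC" completes only when
# the application is ready for the next message, and the sender keeps at
# most `window` calls in flight.
import asyncio

async def handle_chunk(chunk, results):
    # Stand-in for the application processing a streamed message; returning
    # signals "ready for the next one".
    await asyncio.sleep(0)
    results.append(chunk)

async def stream(chunks, window=4):
    results = []
    in_flight = set()
    for chunk in chunks:
        if len(in_flight) >= window:
            # Window full: wait for at least one call to complete
            # before sending more.
            done, in_flight = await asyncio.wait(
                in_flight, return_when=asyncio.FIRST_COMPLETED)
        in_flight.add(asyncio.ensure_future(handle_chunk(chunk, results)))
    if in_flight:
        await asyncio.gather(*in_flight)  # drain remaining calls
    return results
```

A slow `handle_chunk` automatically throttles the loop, which is the point: the window is governed by application-level replies, not socket buffer state.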


Ah I think I misread the announcement to mean you were using the OS buffer level information. But if I understand correctly you're just using the buffer size as a heuristic for the window size, then doing all the logic at the application level?

If that's the case, then implementation-wise these approaches are probably very similar, and window/ACK is the normal way of doing this, and also the pragmatic approach in your case.


Yep. I probably should have gone into more detail on that, and about the problem of slow-app-fast-connection. Oh well.


I would hope you'd be amused by: https://blog.apnic.net/2020/01/22/bufferbloat-may-be-solved-...

and I am curious if you have considered a fq_codel-like approach to message queuing? Sending a whole socketbuf kind of scares me.


>The harder problem is that there isn't currently a JavaScript implementation of capnp RPC... :/

Does capnproto (C++ or Rust implementation) compile against wasm?


Probably not without some work, but I think it'd be possible to get there, and I've definitely been considering that as a way forward. I worry that getting the code footprint down to the point of being reasonable for a web app might be tricky but we'll see.


I'd really like to read an experience report of using the promise pipelined / time-travelling RPC mechanism in production.


For me, new to cap'n'proto, this blog post doesn't cut the mustard because of multiple red flags:

- to start, it's self-congratulatory in stating that streaming already exists "[via] promise pipelining" -- but that has a name, it's called polling, not streaming. Making asynchronicity explicit doesn't make a protocol streaming.

- in the same paragraph an "object-capability model" is introduced as a concept, but not explained

- second paragraph: "think of this like providing a callback function in an object-oriented language", when it should be "in a functional programming language" (callbacks aren't OOP, they are by definition functional programming)

- second paragraph, vocab: what's a "temporary RPC object"? Contrary to the precise, albeit unexplained, vocabulary in the first paragraph, this is vague.

- creating examples with "MyInterface" being the service shows a lack of creativity and a rather low ability to communicate; is this service on the server side, or the client side? No one knows, and "Callback" is not a good name for a callback, it should be "SendEmailWithData" or something that makes sense.

- `sendChunk @0 (chunk :Data)` doesn't make sense to a beginner without explanation; what's `@0` and why do I care?

Here's what made me write this comment despite the threshold annoyance in commenting:

- Why name a message, `Data` when it's clearly NOT a chunk of data in layer-3/layer-4, but rather a layer-7 artifact with retries and checksumming implemented to ensure complete and accurate message delivery?

Finally, the article goes on to discuss flow control via a proxy variable: your OS's TCP send buffer size. But the linked Wikipedia article states:

> because the protocol can only achieve optimum throughput if a sender sends a sufficiently large quantity of data before being required to stop and wait until a confirming message is received from the receiver, acknowledging successful receipt of that data

Which is not the case for Cap'n'Proto (admittedly it states it uses a hack). And there's no discussion of end-to-end problems like BufferBloat, which are very hard to solve by only looking at your own buffer https://en.wikipedia.org/wiki/Bufferbloat#Solutions_and_miti... — or even the semantics of "blocking on server's return value" (Is it enough for the receiving process to have the message in memory? The type system showcased seems to tell that story)

The article also doesn't state how a simple RPC call works. Going to https://capnproto.org/rpc.html immediately puts me off by inventing "time travel", calling it "promise pipelining" and showing an impossible trace diagram (you can't have messages go backwards in time).

But when explaining it, it's really RPC message coalescing and compile-time reference indirection, from the promise to the underlying object instance as it is after executing the coalesced message pipeline. However, even when using the example of a file system (which is about as exception-intense as you can imagine), exceptions are ignored.

Looking through the Calculator example (https://github.com/capnproto/capnproto/blob/master/c++/sampl...) it turns out they haven't actually performed the compile-time indirection, but actually block the calling thread like any random do-it-yourself-RPC framework out there.

What a RPC framework should do is give you:

- an extremely clear serialisation model that is outside of the framework

- a clear API

- clear guarantees / invariants on how it manages the complexities of network programming

In short: it must be very clear in what it promises. Cap'n'proto is not.


> callbacks aren't OOP, they are per definition functional programming

You can't just make things up on the spot.


Passing around pointers to functions is more functional programming than object oriented programming. You're literally programming by passing functions around with callbacks. But obviously, this is not the main point of the comment; the argument that it's "like OOP" (which it's not), is what I'm attacking.


I know very little about functional programming (so I should probably just stop and wait for a more knowledgable commenter, but here I go...) but even I know that functional programming doesn't mean "programming that involves functions".

It's a totally different programming paradigm that involves specifying a collection of facts relating function inputs to their outputs and letting the compiler/interpreter figure out how to turn that into a program. There are lots of ways that these sorts of programs are different from the type of programming you're used to, including pattern matching, lazy evaluation, and, yes, passing around functions as values. But a particularly surprising one is that often the order of statements doesn't make any difference, because the program is not just executed sequentially starting at the top and working its way down. OOP is a subtype of imperative programming, which is the usual programming you're used to where you just write a series of instructions that are executed top-to-bottom (except for function calls and flow control like "if" and "for", but even there you're explicitly specifying what should be executed next).

Apologies if you knew all of that, but it seemed you like didn't because surely no one that really understands functional programming would confuse it with imperative programming involving some callbacks.

---

I think the parent commenter called you out on this even though it's not the main thrust of your argument because it's a really significant misuse of terminology and shows a lack of understanding of fundamental programming concepts. Pedantically it's a bit of an ad hominem attack, but you seem to be coming down hard on Cap'n Proto especially because of its misuse of terminology and it's ironic that you're the one misusing it.

(Another example of this is where you object to "streaming", which means what the document say it does, which you call "polling" but that really means having to proactively check every so often whether something is done rather than getting a callback. [Edit: these are actually orthogonal concepts because even with a non-streaming request you could either be notified or have to poll for the single response. I think you have just totally missed what "streaming" means here.])

I am fighting really hard against the urge to dig into your comment in more detail. But I will leave it at one more thing that you seem to have missed: You seem to be talking as if this page is a first introduction to Cap'n Proto, when it's not; it's just a changelog entry intended for people that already know what the library is. Of course features are mentioned without a proper introduction; changelogs are typically of the form "add cancellation parameter to the floog() function" without explaining what "floog()" is. Adding all that detail would actually make them less useful, because it's just noise to the target audience that buries the real content, which is what's actually changed.


I do really appreciate that you try to explain this from first principles. First, to explain myself: I'm primarily programming in F#, which is a hybrid functional AND object-oriented language (I also use TS, Elm, Haskell, C#, Python, Ruby, etc.). I prefer to write software in a functional style though, so I'd rather use closures and self-recursive functions acting as message loops than the dynamic dispatch of "telling" objects to do things.

> and, yes, passing around functions as values

If we go back to the foundations of what I would call functional programming, Lambda Calculus as defined by Church, I like to give the evaluation of booleans as a prime example of how functional programming computes by passing callbacks around: https://en.wikipedia.org/wiki/Lambda_calculus#Logic_and_pred...

Since this was arguably the very first functional programming "language" that existed in the world (even one of the very first ways to do structured computation!), and it almost exclusively uses callbacks, I'm very confident in saying that callbacks are more functional programming than anything else.
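To make this concrete, here is a tiny sketch (Python standing in for the lambda calculus) of the Church encoding of booleans from that Wikipedia section, where a truth value is literally a function that selects which of two continuations to "call back":

```python
# Church booleans: a truth value is a function that picks one of two
# continuations -- selection happens by calling the chosen one back.
TRUE = lambda a: lambda b: a     # λa.λb.a
FALSE = lambda a: lambda b: b    # λa.λb.b

AND = lambda p: lambda q: p(q)(p)   # λp.λq.p q p
OR = lambda p: lambda q: p(p)(q)    # λp.λq.p p q
NOT = lambda p: lambda a: lambda b: p(b)(a)

def to_bool(church) -> bool:
    # Decode by applying the Church boolean to the two possible answers.
    return church(True)(False)
```

No conditionals anywhere: branching is done entirely by passing functions as arguments and applying them.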

Functional programming still computes heavily via function composition; the principle of passing callbacks around is at the very center of functional programming. To name a few instances of callbacks being passed around: currying "context" values, passing a function to "act in a context" (hole-in-the-middle/delegator), or closing (closures) over mutable state to enable runtime re-configuration.

What most people don't realise about functional programming (until they've done it a long time) is that it's not in the syntactic features, nor in the type system, that functional programming really shines; it's in the compositionality of it.

> But a particularly surprising one is that often the order of statements don't make any difference

This is generally a false statement. You can mean many things here: dataflow programming (https://en.wikipedia.org/wiki/Dataflow_programming), or the ability of a monadic computation expression to choose when in time it executes its built-up construction, or how lazy evaluation can share chunks (bits of compute) that are beta-(normalised)-normal to each other.

What is true, however, is that callbacks are not ONLY a core concept in functional programming (even if they're arguably MORE core in FP); even PL/1 and Cobol have address pointers that get passed as callbacks, but there it's more used for cross-system interop than for composition. Assembly and even C also heavily use callbacks, but those are not OOP. The dynamic dispatch you have in object hierarchies in OOP is not even such a close concept to callbacks (https://en.wikipedia.org/wiki/Dynamic_dispatch).

> write a series of instructions that are executed top-to-bottom

That's procedural programming, not OOP: https://en.wikipedia.org/wiki/Procedural_programming

---

> I think the parent commenter called you out on this even though it's not the main thrust of your argument because it's a really significant misuse of terminology and shows a lack of understanding of fundamental programming concepts. Pedantically it's a bit of an ad hominem attack, but you seem to be coming down hard on Cap'n Proto especially because of its misuse of terminology and it's ironic that you're the one misusing it.

Obviously I disagree with this (see above, with references!). The way he answered is unconstructive and only leads to hard feelings. I'm coming down hard on the way the Cap'n Proto article is written, and I can back it up and argue my point. I'm in no way attacking the author, or objectively saying that Cap'n Proto is bad — only that I think the way it's making its case is.

> Another example of this is where you object to "streaming"

I'm objecting not to "streaming" but to how the code makes it look. It's really unclear to someone who doesn't know the Cap'n Proto syntax. I think you're missing the point that my review is from someone who really understands these concepts, reading the article and forming an opinion of whether to give Cap'n Proto a chance based on it. Also, the article only says streaming means "returning multiple responses", which is an API-level concern, not a protocol concern (not *necessarily* a protocol concern: the protocol is what should be explained; for example gRPC uses HTTP2 push to do streaming responses)

> You seem to be talking as if this page is a first introduction to Cap'n Proto, when it's not, it's just a changelog entry intended for people that already know what the library is

It is the first introduction I had to Cap'n Proto, and after reading the article I also read the main site's introduction. I'm giving comments on how I perceive Cap'n Proto based on that.


> ... foundations of what I would call functional programming, Lambda Calculus as defined by Church ...

Ah! Now we’re getting to the nub of it. Yes, the lambda calculus, absolutely correct.

> ... almost exclusively uses callbacks ...

Whoops! This is absolutely wrong. The functions in the lambda calculus cannot have side effects (like saving a file or displaying a message to the user) and they can only depend on their inputs (e.g. they cannot return a number entered by a user, data from a socket, the current time, etc.). In the software and computer science world they are called "pure functions". Mathematicians (including Church) would simply call them "functions" because that, by definition, is what a mathematical function is (it is a subset of the Cartesian product of the domain and codomain, very much like a lookup table in computer programming except it can be infinite or even uncountably infinite).

These are very different from the callbacks being discussed in the original post. A callback - as the name suggests - is a function passed from a higher-level abstraction to a lower-level abstraction to be notified when something happens. Those exist purely for their side effects - that's the whole point of a callback - and usually have no return value at all.
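The contrast fits in a few lines (a Python sketch; the names `square`, `on_event`, and `fire_event` are made up for illustration):

```python
# Pure function: the result depends only on the input, no side effects.
def square(x: int) -> int:
    return x * x

# Imperative-style callback: invoked purely for its side effect
# (mutating a log) and with no meaningful return value -- the kind
# of callback being discussed in the original post.
event_log: list[str] = []

def on_event(message: str) -> None:
    event_log.append(message)

def fire_event(callback) -> None:
    # A lower-level component "calls back" into the handler it was given.
    callback("something happened")
```

The first is what lambda calculus functions are; the second only makes sense in a language with side effects.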

> Functional programming still computes heavily via function composition; the principle of passing callbacks around ... [e.g.] via currying "context" values, by passing a function to "act in a context" (hole-in-the-middle/delegator)

Sure, except you're passing (pure) functions around, not "callbacks" (at least for it to count as true functional programming).

> closing (closures) over mutable state

As you said, F# is a hybrid language, and mutable state is on the imperative side (admittedly a concession made even by stricter functional languages like LISP).

> This is generally a false statement. ... how lazy evaluation can share chunks (bits of compute)

It would be true in a pure functional language, and is only false to the extent that real-world "functional" languages actually have some imperative features.

> > write a series of instructions that are executed top-to-bottom

> That's procedural programming, not OOP:

No, it's imperative programming, as I said in my previous comment. Procedural programming and OOP are both subtypes of imperative programming. On the Wikipedia page you linked to, look at the box on the right hand side, and note how imperative says "(contrast declarative)" next to it, and vice-versa. The box shows that declarative includes functional and dataflow programming.

> Assembly and even C also heavily use callbacks

Agreed, and they're a lot more similar to the type of callbacks being described in the original post than pure functions: they're totally imperative and allowed side effects. But they don't normally have included state - you usually get a void* for your extra data but no functions to copy and destroy that, unlike C++ lambdas and std::function (Python uses garbage collection to avoid needing those for state attached to function objects). So using them ends up with a slightly different feeling to the developer than callbacks in OOP languages, which is why the article specified callbacks in OOP languages rather than callbacks in general.

----

There are other details I could pick apart in your original comment or other replies. But there is a broader point. I know this isn't terribly constructive, but I think it's important to address the elephant in the room: frankly, none of your comments make any sense. You misuse terminology all over the place, but then pedantically pick apart individual technical terms in other people’s comments, mostly because they've used the terms with their real meanings rather than your imagined meanings. The commenter _pmf_ chose to highlight your incorrect criticism of "callbacks" and I followed up on it, but almost everything else in your original post deserves the same treatment; that was just one example. I really think people downvoted your original comment rather than replying because they took a look at it and thought "this is a lost cause".

In the comment I'm replying to, you say "I'm very confident in saying that" and complete the sentence with something that is wrong, and several people have already told you so. But this attitude comes across implicitly elsewhere. I implore you, please stop trying to just push your understanding of things, and instead lean back and consider that the comments contradicting you could actually be right.

This especially applies to looking back at Kenton's replies, even if you ignore what I'm saying (fair enough). I realise he gave up replying to you eventually but his replies were very reasonable and clear explanations and you responded with just an explosion of more confusion that would take more and more effort to clear up - I think he has the patience of a saint for addressing as much of your comments as he did.

Bear in mind that no one else in the comments had trouble with either the original article or the introduction to Cap'n Proto in general. No one came along and agreed with anything you said, which does happen when bad articles are posted here. In general (of course there are exceptions), Hacker News commenters are clever and reasonable. If you find yourself surrounded by clever reasonable people and they're all wrong and you alone are right, it is time to consider if it is really the other way round.


I ended up putting the wrong reply here to this fragment, sorry about that:

> > This is generally a false statement. ... how lazy evaluation can share chunks (bits of compute)

> It would be true in a pure functional language, and is only false to the extent that a real-world "functional" languages actually have some imperative features.

What I had intended to say was that, yes, this is the sort of feature of functional languages I was talking about. In a true (perhaps theoretical) functional programming language, all functions are pure, so you can arbitrarily reorder function calls (relative to other calls of the same function and to calls to other functions) and this cannot possibly have an effect on the meaning of the program. You can even coalesce multiple calls to the same function with the same parameters.

As I'm sure you realise, this can't work for functions that have side effects, e.g. reading bytes from a socket. Again, these are things that can happen for functions, including callbacks, in imperative programming languages (or the imperative bit of mixed or mostly-functional programming languages).


Wow ok... lots of incorrect assumptions here. Just for fun let's address some of them.

> but that has a name, it's called polling, not streaming.

It's not polling. The idea is that the callback is called multiple times to send all the chunks, for one invocation of `streamingCall()`. Sorry if that wasn't clear.
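In rough pseudocode terms (a Python sketch, not Cap'n Proto's actual API), the distinction looks like this:

```python
# Sketch only: one invocation of the streaming call pushes every chunk
# by invoking the callback -- the caller never has to poll ("is there
# more data yet?"); the callee drives delivery.
def streaming_call(data: bytes, chunk_size: int, send_chunk) -> None:
    for i in range(0, len(data), chunk_size):
        send_chunk(data[i:i + chunk_size])

received: list[bytes] = []
streaming_call(b"abcdefgh", chunk_size=3, send_chunk=received.append)
```

Polling would instead mean the caller repeatedly asking whether more data has arrived; here the callback is simply invoked once per chunk.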

BTW, Promise Pipelining is only involved in the client -> server streaming example.

> - in the same paragraph an "object-capability model" is introduced as a concept, but not explained

> - `sendChunk @0 (chunk :Data)` doesn't make sense to a beginner without explanation; what's `@0` and why do I care?

> - second paragraph, vocab: what's a "temporary RPC object"? Contrary to the precise, albeit unexplained, vocabulary in the first paragraph, this is vague.

This is a news post about a new release of an existing tool. You're expected to be familiar with the tool already, or if you are not, you can go read the rest of the web site to learn about it.

> - second paragraph: "think of this like providing a callback function in an object-oriented language", when it should be "in a functional programming language" (callbacks aren't OOP; they are by definition functional programming)

As others have pointed out, you are taking a very superficial and literalist definition of OOP and FP. That said, I should have said "callback object", because that's what the example actually illustrates.

> is this service on the server side, or the client side?

Cap'n Proto is a peer-to-peer protocol, not a client-server protocol. Either side can export interfaces and either side can initiate calls.

> - Why name a message, `Data` when it's clearly NOT a chunk of data in layer-3/layer-4, but rather a layer-7 artifact with retries and checksumming implemented to ensure complete and accurate message delivery?

`Data` is a basic data type in Cap'n Proto. It means an array of bytes. You can use any other type here if you want, it's just an example.

> > because the protocol can only achieve optimum throughput if a sender sends a sufficiently large quantity of data before being required to stop and wait until a confirming message is received from the receiver, acknowledging successful receipt of that data

> Which is not the case for Cap'n'Proto

What is "not the case"? The whole point of this streaming feature is to do exactly what's described in your Wikipedia quote.

> And there's no discussion of end-to-end problems like BufferBloat

BufferBloat is mentioned several times in the post (though I called it "queuing latency", describing the symptom rather than the cause).

> Going to https://capnproto.org/rpc.html immediately puts me off by inventing "time travel", calling it "promise pipelining" and showing an impossible trace diagram (you can't have messages go backwards in time).

"Time travel" and "infinitely faster" are obviously tongue-in-cheek claims.

> However, even when using the example of a file system (which is about as exception-intense as you can imagine), exceptions are ignored.

They are not ignored. Exceptions propagate to dependent calls. So if you send a chain of pipelined calls and the first call throws, all the later calls resolve by throwing the same exception. Eventually the caller waits for something and discovers the exception.
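A rough sketch of that propagation (Python; the `Pipeline` class, `open_file`, and the file name are invented for illustration, not Cap'n Proto's API):

```python
# Toy model of pipelined calls: once a step fails, every dependent
# step is skipped and resolves to the same exception, which surfaces
# only when the caller finally waits on a result.
class Pipeline:
    def __init__(self, value=None, error=None):
        self.value, self.error = value, error

    def then(self, fn):
        if self.error is not None:
            return self  # dependent call inherits the earlier failure
        try:
            return Pipeline(value=fn(self.value))
        except Exception as e:
            return Pipeline(error=e)

    def wait(self):
        if self.error is not None:
            raise self.error  # exception surfaces at the wait point
        return self.value

def open_file(name: str):
    raise FileNotFoundError(name)  # the first call throws...

# ...so the pipelined read() never runs, and wait() rethrows.
result = Pipeline(value="report.txt").then(open_file).then(
    lambda f: f.read())
```

The second `.then()` is never executed; the caller sees the original `FileNotFoundError` when it waits.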

> it turns out they haven't actually performed the compile-time indirection, but actually block the calling thread

No, the calling thread does not block.

> like any random do-it-yourself-RPC framework out there.

Haha yeah that's me, just some amateur that knows nothing about network protocols...


> it turns out they haven't actually performed the compile-time indirection

Here's how much monadic control flow he actually has in Cap'n Proto: https://github.com/capnproto/capnproto/blob/77f20b4652e51b5a... — nothing except fields. Gut feeling was correct then.

https://news.ycombinator.com/item?id=22972728


No, that part of the protocol defines mobile code -- code which an RPC caller can ask the remote callee to execute directly on the remote machine. It's intentionally limited because Cap'n Proto is not trying to be a general-purpose code interpreter. Most RPC systems don't have this at all.

KJ Promises -- the underlying async framework that Cap'n Proto's C++ implementation is built on -- let you write arbitrary code using monadic control flow. But that arbitrary code executes on your own machine, not the remote machine.


It doesn't have to have a Turing-complete interpreter; it just has to be provably terminating, and you can build most use-cases as an active message.

What I'm after with the monadic control flow is the error cases; let's say you have

> music.getPlaylist(ps => ps.userId == "u123").findTopSongs(10).enqueue(qInstance) => Result<C, Error>

it would be nice to see how this would be interpreted into an AST and executed as an active message on the server (receiver).

That said, I brought it up because the copy alluded to it. It's a great time sink to build an interpreter, even if it's only acting on a unit of work whose variant is strictly decreasing, just look at Linq-to-SQL and IQbservable<T> a decade ago.

> KJ Promises

Side note: another copy that greatly frustrates me, as I now try to find the docs on the above async stuff:

> Essentially, in our quest to avoid latency, we’ve resorted to using a singleton-ish design, and singletons are evil [linked to a page that crashes for HTTPS-everywhere users (me) with PR_END_OF_FILE_ERROR].

(Besides the broken link,) singletons are not always evil. I know you know this, I know the copy is tongue-in-cheek again and being ironic — BUT POE's LAW FOR CRYING OUT LOUD :D https://en.wikipedia.org/wiki/Poe%27s_law — "Poe's law is an adage of Internet culture stating that, without a clear indicator of the author's intent, it is impossible to create a parody of extreme views so obviously exaggerated that it cannot be mistaken by some readers for a sincere expression of the views being parodied"

...so KJ Promises; I can't find that mentioned. I only find Promise Pipelining; but that must be your Ops that allow for field traversal? The site has this copy:

> [RPC Page] With pipelining, our 4-step example can be automatically reduced to a single round trip with no need to change our interface at all.

But I must be from another planet, because I really don't understand:

- first you have a pretty decent design of files that mimic local files

- now you instead showcase what rich messages look like, calling that a "[message?] singleton", linking to a broken site

- path string manipulation exists in every standard lib, it's not something we implement

- if someone wants to perform multiple ops on a file, let's say read a chunk of it:

  * only `Data` needs to be reused for there to be no copies (contrary to the copy), but that's also a problem in the first decoupled example

  * often, almost always, when I read about what an RPC system can do, I'm not in the memory-management mind-set, so I don't care about re-allocating resources

  * caching is not a relevant solution; to the contrary, it's completely irrelevant in this context and it's detracting from understanding what you want me to understand

  * caches aren't error-prone when used right, like with immutable data, or read-through caches as transparent proxies can do, but all of this is beside the point

- then there's a discussion about "giving a Filesystem" to someone, when it's really all in my program

  * hard-coding a path is out of scope; that's about engineering process, not about the software. You ask "But what if they [have] hard-coded some path", I answer "yes, so what?"

  * what if we don't trust them (our own code?) — no, it's not an AuthZ decision locally, it's remote, so you want to be explicit about the error cases here, but there's nothing about it — instead the copy says "now we have to implement [authN/authZ systems]" — but again, this has nothing to do with merging small interfaces into a larger interface; it's a problem even with the small interfaces

- the section ends with the broken link, and then the next section states "Promise Pipelining solves all of this!" — but no, there's so much mentioned above, the premise is unclear, and I have no idea what exactly promise pipelining solves!

And then you go with the calculator example; but the file is large and I still don't know what "Promise Pipelining" means, or where to look. I see a lot of construction of values going on and then a blocking wait (polling of the event loop, but that's also not the point). There are so many bugs in that copy that it's really hard to know where to start detangling it! With that copy, I would never in my life touch the underlying code! It should be fixed! (sorry, I'm getting into a state here, but that copy... wow)

And this is the kicker that seals the deal:

> Didn’t CORBA prove

Ehm, WTF? Why not contrast with gRPC? But also, why not clarify the above first so I can use that understanding myself? If I'm not a newbie at this, why do you mention CORBA? Do I look like the kind of person that would ask that question? It's demeaning to the reader.

And you mention object capabilities; that's a HUGE area of research, of which I've worked with no live system using it. But here it's casually mentioned, as if building such a system is a walk in the park, without explaining how.

---

So here I am after another frustrated 40 minutes on the site, and I still haven't found the docs on KJ Promises.


Very nice of people to down-vote because they don't agree with what I write; I'm not being mean or evil here, I'm just stating an opinion and trying to back that opinion up with clear reasoning, references and links to actual research. Down-voting then seems very much like group-think to me.

> It's not polling. The idea is that the callback is called multiple times to send all the chunks, for one invocation of `streamingCall()`. Sorry if that wasn't clear.

The point of my comment was exactly that there were many things in the release notes (and subsequent visits to the site) which were unclear.

> As others have pointed out, you are taking a very superficial and literalist definition of OOP and FP.

Precisely, which is my point; why use that terminology at all? It's neither specifically OOP nor FP, but if it's something, it's more FP than OOP to program with functions. Either way, it's super-superficial (pun intended! :)) and detracts from your message in your release notes!

> `Data` is a basic data type in Cap'n Proto. It means an array of bytes. You can use any other type here if you want, it's just an example.

But my point is whether the RPC framework guarantees complete messages, or only complete "chunks", to arrive atomically?

> What is "not the case"? The whole point of this streaming feature is to do exactly what's described in your Wikipedia quote.

You're not solving the problem as far as I can read in your article; you've only pointed it out. In my second link, you'll see there are a) second-, third-, etc. hop buffers to take into consideration at the sending side, b) timing to take into consideration, c) the whole "send a large message initially" heuristic to take into consideration; all in all it turns into the end-to-end argument — you need knowledge on both sides to optimise this, and preferably an out-of-band channel.

> "Time travel" and "infinitely faster" are obviously tongue-in-cheek claims.

It's obvious the literal claims are, but it may not be obvious to readers that your trace diagram is also tongue-in-cheek; at least clearly break with the jokey attitude afterwards and perhaps show an actual trace diagram of how it works (because the trace diagram is only partially a lie; the pipelining is still an interesting concept that I think people want to understand!).

> No, the calling thread does not block.

In your sample here you block https://github.com/capnproto/capnproto/blob/master/c++/sampl... to get the result. What I'm saying is that it's very unclear how you compose your flow without doing the blocking wait to get intermediate results in between RPC calls. (Exactly: you seem to lack a monadic control flow, specifically the tie-in that endofunctors would give you, and given this lack, you cannot handle exceptions out of band from the happy path via Choice/Either/Result types)

> Haha yeah that's me, just some amateur that knows nothing about network protocols...

No, it's nothing about you personally. I'm sure you're very competent. I'm sure Cap'n Proto is competent too; what I'm talking about is the way the framework is described and how the messaging looks to someone with close to two decades of programming experience. I hope you take it like this: as constructive criticism. I'm already very happy that you replied, because just as I believe you're a staunch proponent of your RPC framework and believe in it, I believe in my gut feeling when it comes to software — and my gut is telling me the above about your marketing message.


> But my point is whether the RPC framework guarantees complete messages, or only complete "chunks", to arrive atomically?

Method calls arrive atomically. There is no concept of "chunks" in the RPC system itself; that was just what I used in the example. In practice by far the most common use case for streaming I've seen is byte streaming, e.g. large file downloads, so that's what the example used, but it's not limited to that.

> you need knowledge on both sides to optimise this, and preferably an out-of-band channel.

Yes, Cap'n Proto has knowledge from both sides. The "Return" message from each call serves as an application-level acknowledgment that the message has been received and processed. This is enough information for the sender to maintain a send window that places an upper bound on buffer bloat. Calculating an ideal window size is tricky and the current solution of stealing the OS's choice is, as admitted in the post, a hack. But all the necessary information is there to do something better in the future.
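As a toy illustration of how a Return-based send window can bound buffering (a Python sketch of the semantics described above, not Cap'n Proto's internals; the class and byte counts are invented):

```python
# Each in-flight call consumes window space until its "Return" message
# (an application-level ack) arrives, putting an upper bound on how
# much unacknowledged data can be sitting in buffers at once.
class SendWindow:
    def __init__(self, window_bytes: int):
        self.window_bytes = window_bytes
        self.in_flight = 0

    def can_send(self, size: int) -> bool:
        return self.in_flight + size <= self.window_bytes

    def on_send(self, size: int) -> None:
        assert self.can_send(size), "would overflow the send window"
        self.in_flight += size

    def on_return(self, size: int) -> None:
        # Ack from the callee: the call was received and processed.
        self.in_flight -= size
```

The sender stops submitting when `can_send()` is false and resumes as Returns arrive; choosing `window_bytes` well is the hard part mentioned above.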

> In your sample here you block https://github.com/capnproto/capnproto/blob/master/c++/sampl.... to get the result.

`.wait()` is a convenience method that means "run the event loop until this promise resolves". It can't be used recursively (so, can't be used from a callback called by the event loop), which means it can only be used at the top level of the program, e.g. in the program's main() function. Typically it is used in client code which really has one main task that it's trying to do. It turns out this pattern is a lot clearer and cleaner than the usual approach where you'd say "run the event loop now" and then have some other thing you call elsewhere to say "please break and exit from the event loop".

(Actually, with this release, the above isn't quite true anymore. .wait() can also be used in fibers. A fiber is an alternate call stack running in the same thread as the event loop, but not running the event loop itself. A fiber can call .wait() on a promise to switch back to the main stack and run the event loop until the promise resolves. This is a hack, not recommended for common use, but can be very useful for adapting synchronous libraries to work in an asynchronous program.)

Most KJ async code, though, does not use `.wait()`. It uses `promise.then(callback)`, which is exactly the monadic control flow you are looking for.
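In Python terms (an asyncio analogy for illustration, not KJ's actual C++ API; `then` and `fetch_value` are invented names), that style looks like:

```python
import asyncio

def then(awaitable, fn):
    # Monadic-style chaining: wrap the awaitable so fn runs on its
    # result, yielding a new awaitable -- analogous to promise.then().
    async def chained():
        return fn(await awaitable)
    return chained()

async def fetch_value() -> int:
    return 20  # stand-in for an asynchronous RPC result

# Chain two transformations; nothing blocks until the event loop runs.
result = asyncio.run(
    then(then(fetch_value(), lambda x: x + 1), lambda x: x * 2))
```

Each `then` returns a new future-like value, so ordinary code composes asynchronous steps without ever blocking the thread between them.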

> someone with close to two decades of programming experience

Cool, I just hit the 30-year mark myself.

> I believe in my gut feeling when it comes to software — and my gut is telling me the above about your marketing message.

Exactly. Look, here's what I think happened here: You looked at this page, you saw the silly marketing, and your "gut feeling" told you that this guy is an amateur who needed to be put in his place. So then you started seeking out details that you could criticize as amateurish. But there's a lot to grok here, and instead of actually digging into every detail in depth, you started filling in the bits you didn't understand with your own assumptions. And in your assumptions, you assumed the details must be amateurish, because that's what your gut told you. So then you end up criticizing the amateurish details you yourself made up.

For example, you assumed that KJ promises don't use monadic control flow, when they most emphatically do. How did you get there? You looked at some code and found one example that didn't happen to use the monadic flow (unlike 99% of KJ async code), and then you assumed the rest because of course an amateur wouldn't know about monads.

This is why you were downvoted. (Not by me. Everyone else could see what was happening.)

To be fair, this is totally normal human behavior -- it's called "confirmation bias". This particular flavor of it is especially common among programmers and HN readers, in my experience. (I do it myself all the time.)

Now sure, maybe my post and the web site in general could have been better at explaining the details. But my advice for you is, next time, try to recognize when you don't actually know the details, and then try to assume the best possible details, or at least ask questions, rather than assuming the worst.


> Exactly. Look, here's what I think happened here: You looked at this page, you saw the silly marketing, and your "gut feeling" told you that this guy is an amateur who needed to be put in his place. So then you started seeking out details that you could criticize as amateurish.

Again, I'm not attacking you as a person or your work, I'm attacking the shoddy copy. But yes, this whole thread is about what the first impression of your web site is like, and it's amateurish. Perhaps you're someone who can live with that, and how that results in lost conversions (?) from visits, maybe you're not. In any case, from running businesses myself, I know that getting clear and honest feedback is worth its weight in gold.

> instead of actually digging into every detail in depth

Which is precisely my point that I won't do. If you can't communicate clearly to your visitors on their first visit, but expect them to dig into the details, you've lost that battle.

> For example, you assumed that KJ promises don't use monadic control flow, when they most emphatically do. How did you get there? You looked at some code and found one example that didn't happen to use the monadic flow.

Again, what I found has nothing to do with you. I have yet to actually see the examples you mention in your docs, and the pages I've linked to and the calculator sample don't show your statements to be true (only based on that). Perhaps, instead of being so tongue-in-cheek on your introduction page, assume the visitor is not an amateur and show what you've got (you can move the funny bits to after the introduction, when the reader is in on the joke)?

> The "Return" message from each call serves as an application-level acknowledgment that the message has been received and processed. This is enough information for the sender to maintain a send window that places an upper bound on buffer bloat.

THIS is the kind of copy you could use. Also: not using a confusing `Data` record name, which conflates L4 and L7 semantics (even if that data is identical in memory to a record), not using `my-` prefixed variables, but building a use-case like "suppose this was for streaming music", or something.

If uptake is an aim, focusing much more on languages other than C++ is probably wise, too. And some diagrams to explain what trade-offs have been made (can you load-balance Cap'n Proto like you can gRPC? Is Cap'n Proto compatible with Protobuf, so I can use an existing schema registry in my "enterprise"? Is it safe, "like HTTP2"? — the newbie stuff in relation to the world around you).

> Cool, I just hit the 30-year mark myself.

Time flies, eh? ;)

> But my advice for you is, next time, try to recognize when you don't actually know the details, and then try to assume the best possible details, or at least ask questions, rather than assuming the worst.

I'm aware I don't know the details! I've said so repeatedly; if I read the details while having a conversation about how someone who's never encountered Cap'n Proto before reacts to its material, I'm not that person anymore.

From the above person's perspective, you have a bit of a problem with marketing; I've actually bounced off your front page about three times the last six (maybe?) years and the copy/messaging has always put me off. I'm sure I'm not the only one. Just ping me if you actually do like having someone review that copy for you.


> lost conversions

I'm not selling anything. This isn't a business.

> I've actually bounced off your front page about three times the last six (maybe?) years and the copy/messaging has always put me off.

Working as intended. :)



