
> If it chooses a window that is too small, it won't fully utilize the available bandwidth. If it chooses one too big, it'll create queuing delay.

Yeah, that's a valid concern, and one I've run into in practice.

It's true that in environments where the server has access to TCP socket information, traditional windowing will have a performance advantage. You might even be able to detect how saturated the interface is from other processes.

As I see it, the main advantage of the pull-based backpressure I described is the simpler mental model, making it easier to reason about and implement. So in environments where the sender has limited system information (e.g. WebSockets, which expose basically nothing about how full the buffers are), you don't have to pay the extra complexity cost for no benefit.



Hmm, but if the puller doesn't actually know what value of `n` is ideal, then what benefit is there to a pull-based model vs. having the pusher choose an arbitrary `n`?


The network isn't the only resource in play. The puller is hypothetically more aware of the size of its buffers, processing capacity, internet connection speed, etc. But again, to me the primary advantage is the mental model. For omnistreams the implementation ended up being almost the same as the ACK-based system I started with, but shifting the names around and inverting the model in my head made it much easier to work with.
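
To make that concrete, the pull model is roughly this shape (a hand-wavy TypeScript sketch with made-up names, not the actual omnistreams wire protocol): the receiver hands out request(n) credits as it frees up capacity, and the sender only transmits while it holds credit.

    type Message = Uint8Array;

    interface Channel {
      send(msg: Message): void;                    // enqueue a data message for the peer
      onMessage(cb: (msg: Message) => void): void; // deliver incoming data messages
    }

    // Sender side: only transmit while the receiver has outstanding credit.
    class PullSender {
      private credit = 0;
      private queue: Message[] = [];

      constructor(private channel: Channel) {}

      // Called when a "request(n)" control message arrives from the receiver.
      grantCredit(n: number): void {
        this.credit += n;
        this.flush();
      }

      write(msg: Message): void {
        this.queue.push(msg);
        this.flush();
      }

      private flush(): void {
        while (this.credit > 0 && this.queue.length > 0) {
          this.channel.send(this.queue.shift()!);
          this.credit--;
        }
      }
    }

    // Receiver side: pull more only when it has finished processing.
    class PullReceiver {
      constructor(
        private requestMore: (n: number) => void,          // sends "request(n)" to the sender
        private process: (msg: Message) => Promise<void>,  // application-level work
      ) {}

      start(channel: Channel, batch = 4): void {
        this.requestMore(batch);
        channel.onMessage(async (msg) => {
          await this.process(msg);  // do the work first
          this.requestMore(1);      // then pull exactly one more
        });
      }
    }

The receiver never has to guess how the sender's buffers look; it just asks for as much as it can currently handle.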


Fair enough.

FWIW, Cap'n Proto's approach provides application-level backpressure as well. The application returns from the RPC only when it's done processing the message (or, more precisely, when it's ready for the next message). The window is computed based on application-level replies, not on socket buffer availability.

My experience was that in practice, most streaming apps I'd seen were doing this already (returning when they wanted the next message), so turning that into the basis for built-in flow control made a lot of sense. E.g. I can go back and convert Sandstorm to use streaming without introducing any backwards-incompatible protocol changes.
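
As a rough sketch of the caller side (illustrative TypeScript only, not the real Cap'n Proto API): the reply to each write call is itself the backpressure signal, so the caller never runs more than a fixed number of calls ahead of what the application has finished processing.

    type Chunk = Uint8Array;

    // Hypothetical RPC stub: the promise resolves when the receiver has
    // processed the chunk (or is ready for the next one).
    type WriteRpc = (chunk: Chunk) => Promise<void>;

    async function streamWithAppBackpressure(
      chunks: AsyncIterable<Chunk>,
      write: WriteRpc,
      windowSize = 4,               // max unreplied calls in flight
    ): Promise<void> {
      const inFlight: Promise<void>[] = [];

      for await (const chunk of chunks) {
        inFlight.push(write(chunk));

        // The window is driven by application-level replies, not by how
        // much room the socket buffer happens to have.
        if (inFlight.length >= windowSize) {
          await inFlight.shift();   // wait for the oldest reply
        }
      }
      await Promise.all(inFlight);  // drain the remaining replies
    }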


Ah, I think I misread the announcement as meaning you were using OS-level buffer information. But if I understand correctly, you're just using the buffer size as a heuristic for the window size, then doing all the logic at the application level?

If that's the case, then implementation-wise these approaches are probably very similar; window/ACK is the normal way of doing this, and also the pragmatic approach in your case.
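
Something like the following, where the only real difference from a pull model is whether you count credits granted or ACKs outstanding (again, just a sketch):

    // ACK/window framing of the same idea: the sender tracks unacknowledged
    // messages against a window chosen up front (a plain constant here; a
    // real implementation might derive it from buffer sizes).
    class WindowedSender {
      private unacked = 0;
      private queue: Uint8Array[] = [];

      constructor(
        private send: (msg: Uint8Array) => void,
        private window = 8,
      ) {}

      write(msg: Uint8Array): void {
        this.queue.push(msg);
        this.flush();
      }

      // Called when the receiver acknowledges one processed message.
      ack(): void {
        this.unacked--;
        this.flush();
      }

      private flush(): void {
        while (this.unacked < this.window && this.queue.length > 0) {
          this.send(this.queue.shift()!);
          this.unacked++;
        }
      }
    }

Squint and it's the same loop as the pull version, with `window - unacked` playing the role of the credit counter.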


Yep. I probably should have gone into more detail on that, and about the problem of slow-app-fast-connection. Oh well.


I would hope you'd be amused by: https://blog.apnic.net/2020/01/22/bufferbloat-may-be-solved-...

and I am curious if you have considered an fq_codel-like approach to message queuing? Sending a whole socketbuf kind of scares me.



