Hacker News

One of my side projects is intended to address this: https://lwn.net/Articles/976836/

The idea is a syscall for getting a ringbuffer for any supported file descriptor, including pipes. For pipes, if both ends support using the ringbuffer, they'll map the same ringbuffer: zero-copy IO, potentially without calling into the kernel at all.

Would love to find collaborators for this one :)



At least for user space usage, I'm not sure a new kernel mechanism is needed. Quite a while ago I implemented a user space (single producer / single consumer) ring buffer that uses an eventfd to mimic pipe behavior and functionality quite closely (i.e. being able to sleep and poll for ring-buffer full/empty conditions), but otherwise operates locklessly and without syscall overhead.


> and for pipes, if both ends support using the ringbuffer they'll map the same ringbuffer

Is a standardized way planned for signaling to the other end of the pipe that ring buffers are supported, so this could be handled transparently in libc? If not, I don't really see what advantage it gets you over shared memory plus a futex for synchronization, at least for pipes.


Presumably the same interface still works if the other side is using read/write.


correct


Presumably ringbuffer_wait() can also be signalled through making it 'readable' in poll()?


yes, I believe that's already implemented; the more interesting thing I still need to do is make futex() work with the head and tail pointers.


I wonder if existing ring buffer interfaces will consider using this or if we'll see an xkcd927 situation. Regardless, this seems like an interesting endeavour.


Buffering is there for a reason and this approach will lead to weird failure modes and fragility in scripts. The core issue is that any stream producer might go slower than any given consumer. Even a momentary hiccup will totally mess up the pipe unless there is adequate buffering, and the amount needed is system-dependent.


I think the OP's proposal has buffering.

It is different from a pipe - instead of using read/write to copy data from/to a kernel buffer, it gives user space a mapped buffer object and they need to take care to use it properly (using atomic operations on the head/tail and such).

If you own the code for the reader and writer, it's like using shared memory for a buffer. The proposal is about standardizing an interface.


What makes this any different than other buffer implementations that have a max size? Buffer fills, writes block. What failure mode are you worried about that can't occur with pipes which are also bounded?


Maybe I misunderstand, but if the ring buffer is full isn't it ok for the sender to just block?


Yeah, and if the ring buffer is empty it's okay for the receiver to just block... exactly as happens today with pipes



