Buffering is there for a reason and this approach will lead to weird failure modes and fragility in scripts. The core issue is that any stream producer might go slower than any given consumer. Even a momentary hiccup will totally mess up the pipe unless there is adequate buffering, and the amount needed is system-dependent.


I think the OP's proposal has buffering.

It is different from a pipe - instead of using read/write to copy data to and from a kernel buffer, it hands user space a mapped buffer object, and the processes have to use it correctly themselves (atomic operations on the head/tail indices and so on).

If you own the code for the reader and writer, it's like using shared memory for a buffer. The proposal is about standardizing an interface.
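For concreteness, here's roughly what that can look like: a single-producer/single-consumer ring buffer in an anonymously shared mapping, with C11 atomics on the head/tail indices. This is my own sketch to illustrate the idea - the names and layout are made up, not the proposal's actual interface:

  /* Hypothetical SPSC ring buffer in shared memory; illustration only,
     not the proposal's interface. The writer advances head, the reader
     advances tail, and each side loads the other's index with acquire
     ordering before touching the data. */
  #define _DEFAULT_SOURCE
  #include <stdatomic.h>
  #include <stdint.h>
  #include <string.h>
  #include <sys/mman.h>

  #define RING_CAP 4096              /* power of two so indices can be masked */

  struct ring {
      _Atomic uint32_t head;         /* next slot the writer will fill  */
      _Atomic uint32_t tail;         /* next slot the reader will drain */
      uint8_t data[RING_CAP];
  };

  /* Map one ring to be shared with a child, e.g. created before fork(). */
  static struct ring *ring_create(void) {
      void *p = mmap(NULL, sizeof(struct ring), PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
      return p == MAP_FAILED ? NULL : memset(p, 0, sizeof(struct ring));
  }

  /* Try to append n bytes; returns 0 if there isn't enough free space. */
  static int ring_write(struct ring *r, const uint8_t *buf, uint32_t n) {
      uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
      uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
      if (RING_CAP - (head - tail) < n)
          return 0;                  /* full: caller decides how to wait */
      for (uint32_t i = 0; i < n; i++)
          r->data[(head + i) & (RING_CAP - 1)] = buf[i];
      atomic_store_explicit(&r->head, head + n, memory_order_release);
      return 1;
  }

  /* Take up to n bytes; returns how many were actually copied out. */
  static uint32_t ring_read(struct ring *r, uint8_t *buf, uint32_t n) {
      uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
      uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);
      uint32_t avail = head - tail;
      if (n > avail) n = avail;
      for (uint32_t i = 0; i < n; i++)
          buf[i] = r->data[(tail + i) & (RING_CAP - 1)];
      atomic_store_explicit(&r->tail, tail + n, memory_order_release);
      return n;
  }

The point of standardizing an interface, as noted above, is presumably so that two programs that don't share code can still agree on a layout like this instead of every pair rolling its own.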


What makes this any different from other buffer implementations that have a max size? Buffer fills, writes block. What failure mode are you worried about that can't occur with pipes, which are also bounded?


Maybe I misunderstand, but if the ring buffer is full, isn't it OK for the sender to just block?


Yeah, and if the ring buffer is empty, it's okay for the receiver to just block... exactly as happens today with pipes.
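Just to illustrate, blocking can be layered on top of the non-blocking ring_write/ring_read sketch upthread (again my own invention, not the proposal's API); a real implementation would presumably sleep on a futex or an eventfd instead of yielding in a loop:

  #include <sched.h>

  /* Hypothetical blocking wrappers: full ring -> writer waits,
     empty ring -> reader waits, mirroring write()/read() on a pipe. */
  static void ring_write_blocking(struct ring *r, const uint8_t *buf, uint32_t n) {
      while (!ring_write(r, buf, n))
          sched_yield();             /* full: back off until the reader drains */
  }

  static uint32_t ring_read_blocking(struct ring *r, uint8_t *buf, uint32_t n) {
      uint32_t got;
      while ((got = ring_read(r, buf, n)) == 0)
          sched_yield();             /* empty: back off until the writer produces */
      return got;
  }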



