> This isn't just about the filesystem being full btw. If you imagine a command like ./foo.py | head -n 10, it only makes sense for the 'head' command to close the pipe when it's done, and foo.py should be able to detect this and stop printing any more output.
The usual way of handling this is by not (explicitly) handling it. Writes to a closed pipe are special: by default they don't just fail with an error status that the program then all too often ignores, they raise a SIGPIPE signal whose default action is to kill the process. Extra steps are needed to not kill the process. No other kind of write error gets this special treatment that I am aware of.
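For what it's worth, those "extra steps" usually look something like this (a rough sketch, not production code): ignore SIGPIPE so the write fails with EPIPE instead, and then actually check for that error.

#include <errno.h>
#include <signal.h>
#include <stdio.h>

int main(void) {
    signal(SIGPIPE, SIG_IGN);  /* opt out of "kill the process on a broken pipe" */
    for (int i = 0; i < 1000000; i++) {
        if (printf("line %d\n", i) < 0 || fflush(stdout) == EOF) {
            if (errno == EPIPE) {   /* reader closed its end: stop writing, exit cleanly */
                return 0;
            }
            perror("write");        /* some other write error */
            return 1;
        }
    }
    return 0;
}

Piped into head -n 1, this stops cleanly once the reader is gone instead of being killed.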
That's a POSIX thing. It doesn't apply to all C implementations, but it does apply to many more than just Linux-based ones. You've not got a closed pipe, so you wouldn't see it; you've just got a closed file descriptor. Try running it as
./a.out | :
and you will probably see it. I say probably because there is a timing aspect as well: the write may happen before the pipe gets closed, in which case it will not fail, but that is unlikely.
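If you want to confirm it really was SIGPIPE rather than an ordinary exit, bash (specifically) keeps each pipeline member's status in PIPESTATUS; 141 is 128 plus SIGPIPE's signal number 13:

./a.out | :; echo "${PIPESTATUS[0]}"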
Yeah I should've said POSIX, my bad. But yeah my point was it's not plain C behavior.
And yes on Linux I do see it with your no-op example now. Though for some reason not with 'head'... what's going on? Is it not closing the pipe when it exits?
$ printf '%s\n' '#include <stdio.h>' '#include <unistd.h>' 'int main() { setvbuf(stdout, NULL, _IONBF, 0); int r = puts("Starting...\n"); r += fputs("First\n", stdout); fflush(stdout); usleep(1000000); fprintf(stderr, "%d\n", r); }' | cc -x c - && ./a.out | head -n 1
Starting...
19
We know that b and c both happen after a, and that d happens after c. However, we do not know whether b happens before c, between c and d, or after d. Your a.out process will only get killed by SIGPIPE if it happens after d.
On my system, running a.out under strace slows it down enough to change the timing and produce the SIGPIPE you were expecting. Alternatively, you can insert artificial delays into your test program, such as a sleep() call between the two lines of output, and see the same result.
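For example, something along these lines (a sketch in the spirit of your test, not the exact program) makes the ordering deterministic:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    setvbuf(stdout, NULL, _IONBF, 0);  /* unbuffered: each write goes straight to the pipe */
    puts("First");                     /* head -n 1 prints this line and exits */
    sleep(2);                          /* plenty of time for head to exit and close the read end */
    puts("Second");                    /* this write hits a closed pipe -> SIGPIPE by default */
    fprintf(stderr, "never reached\n");
    return 0;
}

Piped into head -n 1, the second puts() lands after head has exited, so the process is killed by SIGPIPE and the stderr line never appears.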
Sorry, I think I edited my comment while you were replying. But I just noticed the problem in the most recent version was that I didn't write to stdout after the usleep(), so it never raised SIGPIPE. Thanks.