I have a project that uses a proprietary SDK for decoding raw video. I output the decoded data as pure RGBA through a pipe that FFmpeg reads, so it can re-encode the video to a standard codec. FFmpeg can't include the non-free SDK in its source, and it would be wildly impractical to store the raw RGBA in a file first. So pipes are the only way to do it; there are valid reasons to use high-throughput pipes.
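For anyone curious what that looks like in practice, here's a minimal sketch of the pipe approach in C. The geometry (1920x1080 @ 30 fps), the output file, and the decode_next_frame() stand-in for the SDK call are all illustrative assumptions, not the actual project code:

    #include <stdio.h>
    #include <stdlib.h>

    #define WIDTH  1920
    #define HEIGHT 1080
    #define FRAME_BYTES (WIDTH * HEIGHT * 4) /* RGBA = 4 bytes/pixel */

    /* Hypothetical stand-in for the SDK decode call: fills the buffer
     * with a moving gradient, stops after 90 frames (3 s at 30 fps). */
    static int decode_next_frame(unsigned char *rgba) {
        static int n = 0;
        if (n >= 90) return 0;
        for (int i = 0; i < WIDTH * HEIGHT; i++) {
            rgba[i * 4 + 0] = (unsigned char)(i + n);   /* R */
            rgba[i * 4 + 1] = (unsigned char)(i >> 8);  /* G */
            rgba[i * 4 + 2] = (unsigned char)(n * 2);   /* B */
            rgba[i * 4 + 3] = 255;                      /* A */
        }
        n++;
        return 1;
    }

    int main(void) {
        /* ffmpeg reads raw RGBA from stdin and re-encodes to H.264. */
        FILE *ff = popen(
            "ffmpeg -f rawvideo -pix_fmt rgba -s 1920x1080 -r 30 -i - "
            "-c:v libx264 -pix_fmt yuv420p out.mp4", "w");
        if (!ff) { perror("popen"); return 1; }

        unsigned char *frame = malloc(FRAME_BYTES);
        if (!frame) { pclose(ff); return 1; }

        while (decode_next_frame(frame)) {
            if (fwrite(frame, 1, FRAME_BYTES, ff) != FRAME_BYTES) {
                perror("fwrite");
                break;
            }
        }

        free(frame);
        pclose(ff); /* flushes the pipe and waits for ffmpeg to exit */
        return 0;
    }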
What percentage of CPU time is used by the pipe in this scenario? If pipes were 10x faster, would you actually notice any difference in wall-clock time or overall CPU usage while the decoding SDK is generating the raw data and ffmpeg is processing it? Are these video processing steps anywhere near memory-copy speeds?
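One way to put a number on that: a rough micro-benchmark that forks a child to drain a pipe and times how fast the parent can push data through it. The 8 MB chunk size and 4 GB total volume are arbitrary choices for the sketch; compare the resulting GB/s against your frame size times frame rate:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define CHUNK (8u << 20)    /* 8 MB per write */
    #define TOTAL (4ull << 30)  /* push 4 GB in total */

    int main(void) {
        int fd[2];
        if (pipe(fd) != 0) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {         /* child: just drain the pipe */
            close(fd[1]);
            char *buf = malloc(CHUNK);
            while (read(fd[0], buf, CHUNK) > 0)
                ;
            _exit(0);
        }

        close(fd[0]);
        char *buf = calloc(1, CHUNK);
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        for (unsigned long long sent = 0; sent < TOTAL; sent += CHUNK)
            if (write(fd[1], buf, CHUNK) < 0) { perror("write"); break; }

        close(fd[1]);           /* EOF for the child */
        wait(NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec)
                    + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.2f GB/s through the pipe\n", (TOTAL / 1e9) / secs);
        return 0;
    }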
You have the dependency either way, but if you use the library you can ship one big executable with no external dependencies, and it can actually be fast.
If there weren't a problem to solve, they wouldn't have said anything. If you want something different, you have to do something different.
It looks like FFmpeg does support reading from sockets natively[1]; I didn't know that. That might be a better solution in this case. I'll have to look into some C code for writing my output to a socket and try that sometime.
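In case it helps, a hedged sketch of what that C code might look like, assuming ffmpeg is started first in TCP listen mode. The port, geometry, and the all-black placeholder frames are made up for the example:

    ffmpeg -f rawvideo -pix_fmt rgba -s 1280x720 -r 30 \
           -i 'tcp://127.0.0.1:9000?listen' -c:v libx264 out.mp4

Then the producer connects and streams frames:

    #include <arpa/inet.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define FRAME_BYTES (1280 * 720 * 4)

    /* Write exactly len bytes, retrying on short writes
     * (TCP is free to split them, unlike a blocking pipe). */
    static int write_all(int fd, const unsigned char *p, size_t len) {
        while (len > 0) {
            ssize_t n = write(fd, p, len);
            if (n <= 0) return -1;
            p += n;
            len -= (size_t)n;
        }
        return 0;
    }

    int main(void) {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(9000);
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(s, (struct sockaddr *)&addr, sizeof addr) != 0) {
            perror("connect");
            return 1;
        }

        /* Placeholder frame; a real decoder would fill this per frame. */
        unsigned char *frame = calloc(1, FRAME_BYTES);
        for (int i = 0; i < 90; i++)    /* 3 seconds of black at 30 fps */
            if (write_all(s, frame, FRAME_BYTES) != 0) break;

        free(frame);
        close(s);                       /* EOF ends the ffmpeg input */
        return 0;
    }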
At some point I had a similar issue (though not related to licensing), and it turned out to be faster to do a high-bitrate H.264 encode of the stream before sending it over the socket to FFmpeg than to send the raw RGBA data, even over localhost… (There was some minimal quality loss, of course, but it was completely irrelevant in the big picture.)
No, because I had hardware H.264 encoder support. :-) (The decoding in FFmpeg on the other side was still software, but it was seemingly much cheaper to do an H.264 software decode.)