
I have a project that uses a proprietary SDK for decoding raw video. I output the decoded data as pure RGBA in a way FFmpeg can read through a pipe to re-encode the video to a standard codec. FFmpeg can't include the non-free SDK in their source, and it would be wildly impractical to store the pure RGBA in a file. So pipes are the only way to do it; there are valid reasons to use high-throughput pipes.
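A minimal sketch of what that pipe setup can look like, assuming a 1920x1080, 30 fps stream and an x264 re-encode (those numbers and the output settings are placeholders, not the actual project's):

    #include <stdio.h>
    #include <stdlib.h>

    #define W 1920
    #define H 1080

    int main(void) {
        /* ffmpeg reads raw RGBA frames from stdin ("-i -") and re-encodes them */
        FILE *ff = popen(
            "ffmpeg -f rawvideo -pix_fmt rgba -s 1920x1080 -r 30 -i - "
            "-c:v libx264 -pix_fmt yuv420p out.mp4", "w");
        if (!ff) return 1;

        unsigned char *frame = calloc((size_t)W * H, 4);
        for (int i = 0; i < 300; i++) {          /* e.g. 10 seconds of video */
            /* ... fill `frame` from the proprietary SDK's decoder here ... */
            fwrite(frame, 4, (size_t)W * H, ff);
        }

        free(frame);
        return pclose(ff) ? 1 : 0;
    }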


What percentage of CPU time is used by the pipe in this scenario? If pipes were 10x faster, would you really notice any difference in wall-clock time or overall CPU usage while this decoding SDK is generating the raw data and ffmpeg is processing it? Are these video processing steps anywhere near memory-copy speeds?


> So pipes are the only way to do it

Let's not get carried away. You can use ffmpeg as a library and encode buffers in a few dozen lines of C++.
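A rough sketch of the encode side against libavcodec's C API (error handling omitted; the dimensions, frame rate, codec, and frame count are assumptions):

    #include <libavcodec/avcodec.h>

    int encode_buffers(void) {
        const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        ctx->width = 1920;                       /* assumed dimensions */
        ctx->height = 1080;
        ctx->time_base = (AVRational){1, 30};
        ctx->pix_fmt = AV_PIX_FMT_YUV420P;
        if (avcodec_open2(ctx, codec, NULL) < 0) return -1;

        AVFrame *frame = av_frame_alloc();
        frame->format = ctx->pix_fmt;
        frame->width = ctx->width;
        frame->height = ctx->height;
        av_frame_get_buffer(frame, 0);

        AVPacket *pkt = av_packet_alloc();
        for (int i = 0; i < 300; i++) {
            av_frame_make_writable(frame);
            /* ... copy the decoded pixels into frame->data[] here ... */
            frame->pts = i;
            avcodec_send_frame(ctx, frame);
            while (avcodec_receive_packet(ctx, pkt) == 0) {
                /* hand pkt->data / pkt->size to a muxer or output file */
                av_packet_unref(pkt);
            }
        }
        avcodec_send_frame(ctx, NULL);           /* flush the encoder */
        while (avcodec_receive_packet(ctx, pkt) == 0)
            av_packet_unref(pkt);

        av_packet_free(&pkt);
        av_frame_free(&frame);
        avcodec_free_context(&ctx);
        return 0;
    }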


The parent comment mentioned license incompatibility, which I guess would still apply if they used ffmpeg as a library.


If the license is incompatible, it would still be incompatible regardless of whether you use library API calls or pipes.


And you go from having a well-defined, modular interface that's flexible at runtime to a binary dependency.


You have the dependency either way, but if you use the library you can have one big executable with no external dependencies and it can actually be fast.

If there wasn't a problem to solve, they wouldn't have said anything. If you want something different, you have to do something different.


The context of this discussion is that it would be better if pipes were faster. Then you would have more options.


I was replying to their claim that "pipes are the only way to do it".


ffmpeg's library is notorious for being a complete and utter mess


It worked extremely well when I did something almost exactly like this. I gave it buffers of pixels in memory and it spit out compressed video.


What about domain sockets?

It's clumsier, to be sure, but if performance is your goal, the socket should be faster.


It looks like FFmpeg does support reading from sockets natively[1]; I didn't know that. That might be a better solution in this case. I'll have to look into some C code for writing my output to a socket and try that sometime.

[1] https://ffmpeg.org/ffmpeg-protocols.html#unix
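For what it's worth, the writer side could look something like this in C, assuming ffmpeg's unix: protocol is left in its default mode where it connects to an existing socket (the socket path and the ffmpeg flags in the comment are placeholders):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Listen on a unix socket and let ffmpeg connect to it, e.g. with
     *   ffmpeg -f rawvideo -pix_fmt rgba -s 1920x1080 -r 30 \
     *          -i unix:/tmp/video.sock -c:v libx264 out.mp4
     */
    int serve_video_socket(const char *path) {
        int srv = socket(AF_UNIX, SOCK_STREAM, 0);
        if (srv < 0) return -1;

        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        unlink(path);                            /* remove a stale socket file */
        if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0) return -1;
        if (listen(srv, 1) < 0) return -1;

        int fd = accept(srv, NULL, NULL);        /* blocks until ffmpeg connects */
        close(srv);
        return fd;                               /* write() raw RGBA frames to fd */
    }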


Why should sockets be faster?


Sockets remap pages without moving any data, while pipes have to copy the data between fds.


Why not just store the output of the proprietary codec in an AVFrame that you'd pass to libavcodec in your own code?
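That could look roughly like this, assuming the SDK hands back RGBA buffers and libswscale does the conversion to the encoder's pixel format (the function name and the YUV420P target are assumptions):

    #include <stdint.h>
    #include <libavutil/frame.h>
    #include <libswscale/swscale.h>

    /* Wrap one decoded RGBA buffer in an AVFrame that libavcodec can encode. */
    AVFrame *rgba_to_avframe(const uint8_t *rgba, int w, int h) {
        AVFrame *out = av_frame_alloc();
        out->format = AV_PIX_FMT_YUV420P;
        out->width = w;
        out->height = h;
        av_frame_get_buffer(out, 0);

        struct SwsContext *sws = sws_getContext(w, h, AV_PIX_FMT_RGBA,
                                                w, h, AV_PIX_FMT_YUV420P,
                                                SWS_BILINEAR, NULL, NULL, NULL);
        const uint8_t *const src[4] = { rgba, NULL, NULL, NULL };
        const int src_stride[4] = { 4 * w, 0, 0, 0 };
        sws_scale(sws, src, src_stride, 0, h, out->data, out->linesize);
        sws_freeContext(sws);
        return out;  /* pass to avcodec_send_frame(), then av_frame_free() */
    }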


At some point, I had a similar issue (though not related to licensing), and it turned out to be faster to do a high-bitrate H.264 encode of the stream before sending it over the socket to FFmpeg than to send the raw RGBA data, even over localhost… (There was some minimal quality loss, of course, but it was completely irrelevant in the big picture.)


> There was some minimal quality loss, of course, but it was completely irrelevant in the big picture

But then the solutions are not comparable anymore, are they? Would a lossless codec instead have improved speed?


No, because I had hardware H.264 encoder support. :-) (The decoding in FFmpeg on the other side was still software. But it was seemingly much cheaper to do an H.264 software decode.)


H.264 has a lossless mode.
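With libx264 that's typically requested via CRF 0; a minimal sketch, assuming an already-allocated encoder context (note that a truly lossless path from RGBA would also need to avoid the lossy RGB-to-YUV conversion, e.g. via libx264rgb):

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    /* Ask libx264 for its lossless mode (CRF 0) on an existing context. */
    void make_lossless(AVCodecContext *ctx) {
        av_opt_set(ctx->priv_data, "crf", "0", 0);
    }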



