This article outlines how to run FFmpeg on AWS Lambda:
https://intoli.com/blog/transcoding-on-aws-lambda/
It includes favorable cost comparisons to Elastic Transcoder. It's not self-hosted, but it looks like a significant cost improvement. One could also set up a 'transcode farm' of virtual or physical machines running FFmpeg in parallel, with some light scripting for automation. Searching for 'render farm' might yield some ideas for distributed image computation. Just some ideas...
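The basic shape of the approach is: download the source from S3 into /tmp, shell out to a static ffmpeg binary bundled with the deployment package, and upload the result. A minimal Go sketch of that shape (not from the article; the ./ffmpeg path, output bucket name, and scale filter are illustrative assumptions):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

const outputBucket = "transcoded-videos" // hypothetical destination bucket

func handler(evt events.S3Event) error {
	sess := session.Must(session.NewSession())
	downloader := s3manager.NewDownloader(sess)
	uploader := s3manager.NewUploader(sess)

	for _, rec := range evt.Records {
		bucket, key := rec.S3.Bucket.Name, rec.S3.Object.Key

		// Both input and output have to fit in /tmp (512MB total).
		in := filepath.Join("/tmp", filepath.Base(key))
		out := in + ".mp4"

		f, err := os.Create(in)
		if err != nil {
			return err
		}
		_, err = downloader.Download(f, &s3.GetObjectInput{
			Bucket: aws.String(bucket),
			Key:    aws.String(key),
		})
		f.Close()
		if err != nil {
			return err
		}

		// Shell out to the static ffmpeg binary shipped in the package root.
		cmd := exec.Command("./ffmpeg", "-i", in, "-vf", "scale=-2:720", "-y", out)
		if b, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("ffmpeg: %v: %s", err, b)
		}

		of, err := os.Open(out)
		if err != nil {
			return err
		}
		_, err = uploader.Upload(&s3manager.UploadInput{
			Bucket: aws.String(outputBucket),
			Key:    aws.String(key + ".mp4"),
			Body:   of,
		})
		of.Close()
		if err != nil {
			return err
		}

		// Clean up so repeated invocations on a warm container don't fill /tmp.
		os.Remove(in)
		os.Remove(out)
	}
	return nil
}

func main() { lambda.Start(handler) }
```

Bundling a static ffmpeg build is the usual trick here, since the Lambda runtime doesn't ship one.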
Over the past few months I ended up building a transcoder on Lambda that is handling, as of today, 500k+ videos per month and growing. It replaced a set of EC2 instances similar to what you described.
A few caveats I ran into:
1) You are limited in how much /tmp on a Lambda instance can hold (512MB total). For the transcodes I'm doing, some of the videos exceed that size just to download, and those jobs fail. I have a fallback using the older EC2-based method to handle these very large files (see the routing sketch after this list).
2) These, obviously, need a pretty decent amount of RAM to run.
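For caveat 1, the routing can be as simple as checking the object's size with HeadObject before deciding which pipeline gets the file. A sketch of that idea, where enqueueEC2Job and transcodeOnLambda are hypothetical stand-ins for the two pipelines described above:

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// Leave half of the 512MB /tmp for the transcoded output file.
const tmpBudget = 256 << 20

// Hypothetical stand-ins for the two pipelines described above.
func enqueueEC2Job(bucket, key string) error     { return nil }
func transcodeOnLambda(bucket, key string) error { return nil }

func route(sess *session.Session, bucket, key string) error {
	head, err := s3.New(sess).HeadObject(&s3.HeadObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return err
	}
	if *head.ContentLength > tmpBudget {
		// Too large to download into /tmp: fall back to the EC2 path.
		return enqueueEC2Job(bucket, key)
	}
	return transcodeOnLambda(bucket, key)
}
```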
But the benefits are really worth it for us. Elastic Transcoder, for our needs, was going to be >$15k/mo. Our current transcoder costs around $400/mo. The previous EC2-based iteration was a step up for us, but it sometimes took a while to spin up. Doing it with Lambda has actually been less expensive, provided a faster experience for our customers, and was significantly easier to build than the EC2-based option.
Not bad for a project to see if I could write something in Go.
Couldn’t you attach an EBS or EFS volume and use that for temporary storage?
Sure, though even on SSD it's likely to be more expensive than local storage on the Lambda instance. Still, that might be better than using EC2.
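If the function does have an EFS file system mounted (Lambda supports mounting EFS via access points, which expose the file system at a configured local path), the handler-side change is small: prefer the mount when present, fall back to /tmp. A tiny sketch, where EFS_MOUNT_PATH is a hypothetical environment variable set in the function's configuration, not a Lambda built-in:

```go
package main

import (
	"os"
	"path/filepath"
)

// workDir returns the scratch directory for downloads: the EFS mount if
// one is configured and present, otherwise Lambda's local /tmp.
func workDir() string {
	if efs := os.Getenv("EFS_MOUNT_PATH"); efs != "" {
		if info, err := os.Stat(efs); err == nil && info.IsDir() {
			return efs
		}
	}
	return "/tmp"
}

func scratchPath(name string) string {
	return filepath.Join(workDir(), filepath.Base(name))
}

func main() {
	println(scratchPath("big-video.mov"))
}
```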