Around 65% of humans are lactose intolerant, so depending on where exactly you teleport them to, it might be a completely normal thing. I'd imagine the immune system would have the capacity to develop in the same way too, so really it should work out fine.
Lactose tolerance in Europeans likely arose with early PIE groups as they began domesticating horses and oxen. Perhaps several times independently in different groups.
Lactose tolerance in populations is linked with pastoralism, and, if I am remembering correctly, colder climates as well.
Most humans today are not lactose tolerant as adults - it’s actually the exception.
It's restricted to constant QP rate, and I haven't managed to get it to produce a playable file yet. Maybe I'm holding it wrong. But anyway, it's exciting seeing bits of this land.
This uses the hash muxer in ffmpeg, which consolidates all streams into one. Use the streamhash muxer to emit hashes per-stream, which can isolate any changes to specific streams.
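For anyone who wants to see the difference, a minimal sketch of the two muxers (the input file name is just a placeholder):

```
# hash muxer: one hash over all mapped streams
ffmpeg -i input.mkv -map 0 -f hash -

# streamhash muxer: one hash line per stream, so a change in e.g. the
# audio stream shows up only on that stream's line
ffmpeg -i input.mkv -map 0 -f streamhash -
```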
--only=<ONLY>
hash only an input file container's first audio or video stream, if available. dano will fall back to default behavior if no such stream is available. [possible values: audio, video]
No, 16-bit PCM is the default audio codec. If no `-c` is specified for a stream, ffmpeg will encode using the default codec. But if `-c X` is declared, where X is `copy` or something else, then that is honored.
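A sketch of that distinction, using the streamhash muxer from above (the input file name is a placeholder):

```
# No -c given: ffmpeg decodes and hashes with the default codecs for the
# hash muxers (rawvideo for video, pcm_s16le i.e. 16-bit PCM for audio)
ffmpeg -i input.mkv -map 0 -f streamhash -

# -c copy is honored: the packets are hashed as-is, without decoding
ffmpeg -i input.mkv -map 0 -c copy -f streamhash -
```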
Thanks. Yes, I meant officially. I was hoping we could set the stage for VVC a little earlier. I know VVC is not popular on HN, or literally anywhere on the Internet, but I do hope to see it moving forward instead of something like MPEG-5 EVC, which is somewhat dead in the water.
I don't know that having so many codecs is a good thing unless they really add something. How does it compare to AV1 (which I was under the impression is becoming the natural successor to HEVC, with hardware support)?
Compared to AV1, VVC / H.266 is expected to offer a 20-30% reduction in bitrate at similar quality and a similar level of computational complexity. It is already deployed and used in the real world in China and India. I believe Brazil is looking to use it as its next-generation broadcasting codec, along with LCEVC.
First off, ffmpeg is amazing, I'm very thankful to everyone involved in it.
> dnn filter libtorch backend
What's ffmpeg's plan regarding ML-based filters? Looking through the filter documentation, it seems like the filters use three different backends: tensorflow, torch, and openvino. That doesn't seem optimal; is there any discussion about consolidating on one backend?
ML filters need model files, and the filters take a path to a model file as one of their arguments. This makes them really difficult to use: if you're lucky you can find a suitable model to download somewhere, otherwise you need to find a separate model training project and dataset and run that first. Are there any plans on streamlining ML filters and model handling for ffmpeg? Maybe a model file repository, with an option of installing these in an official models path on the system?
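For reference, this is roughly what the model-path workflow looks like today; the filter and option names come from the ffmpeg filter documentation, but the model files (espcn.pb, model.xml) and tensor names are placeholders you would have to train or track down yourself:

```
# sr (super resolution) filter with the tensorflow backend
ffmpeg -i input.mp4 -vf "sr=dnn_backend=tensorflow:model=espcn.pb" out.mp4

# generic dnn_processing filter with the openvino backend;
# input/output are the model's tensor names
ffmpeg -i input.mp4 \
  -vf "dnn_processing=dnn_backend=openvino:model=model.xml:input=x:output=y" \
  out.mp4
```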
Most image and video research uses ML now, but I don't get the impression that ffmpeg tries to integrate these modern technologies well yet. Being able to do, for instance, spatial and temporal super resolution using standard ffmpeg filters would be a big improvement, and I think things like automatic subtitles using Whisper would be a good fit too. But it should start with a coherent ML strategy regarding inference backends and model management.
I think I read about this a few months ago but don't remember the details. What exactly does this do? Does it result in faster encoding/decoding if you have multiple filter graphs (for example, a single command line that transcodes to new audio, extracts an image, and creates a low-res version)?
Loopback decoders are a nice concept. So could I use this to create a single ffmpeg command to extract images periodically (say 1/s) and then merge them into a horizontal strip (using the loopback decoder for this part)?
You don't need a loopback decoder for that. The periodic extraction will depend on a filter, and you can just clone and send the output of that filter to the tiling filter.
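One way this can be done, as a sketch (tile needs its grid size up front, so the 10x1 here assumes a roughly 10-second clip):

```
# fps grabs one frame per second, split clones that stream: one copy is
# written out as individual images, the other is laid out as a single
# horizontal strip by the tile filter
ffmpeg -i input.mp4 -filter_complex \
  "[0:v]fps=1,split=2[imgs][strip];[strip]tile=10x1[tiled]" \
  -map "[imgs]" frame_%03d.png \
  -map "[tiled]" -frames:v 1 strip.png
```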
Had to go to ChatGPT for help. It appears that you need to know how many tiles to stitch. I was hoping to have that dynamically determined. Not sure if loopback will help.
I wonder if this also means that Chrome and Edge will be able to use this acceleration for their ffmpeg backend (instead of relying on MediaFoundation)?