
Except for their immune systems or lactose tolerance.

65% of humans have lactose intolerance, so depending on where exactly you teleport them to, it might be a completely normal thing. I'd imagine the immune system will have the capacity to develop in the same way too, so really it should work out fine.

As an immunologist, I see no reason why the newborn from tens of thousands of years ago wouldn't be perfectly suited for the modern world.

Lactose tolerance in Europeans likely arose with early PIE groups as they began domesticating horses and oxen. Perhaps several times, independently, in different groups.

Lactose tolerance in populations is linked with pastoralism and, if I am remembering correctly, with colder climates as well.

Most humans today are not lactose tolerant as adults - it’s actually the exception.


Remittances for 2023 made up 23% of state GDP.

Table 5.2, pg 51 of https://iimad.org/wp-content/uploads/2024/06/KMS-2023-Report...


> You can’t solve the problem of understanding norms by writing copy

You can mitigate the problem by adding a short summary before, or in between, content they're likely to read.


Not quite frosted, but I went for a glass pane effect on the sidebar at https://www.gyan.dev/ffmpeg/builds/


Your site is very relaxing and pretty on PC.


For real, I'm not a fan of flashy colors on websites, and that palette looks very calming.


> There's a feature request for the open source & cross-platform x265 codec to support transparency, but it doesn't seem to be going anywhere.

x265 has added support for alpha very recently, but only using their standalone CLI. https://bitbucket.org/multicoreware/x265_git/commits/c8c9d22...


Ohhhh! I totally missed this. Thanks for the heads up! I wonder if it was triggered by this article, or a weird coincidence.


It's restricted to constant-QP rate control, and I haven't managed to get it to produce a playable file yet. Maybe I'm holding it wrong. But anyway, it's exciting seeing bits of this land.


This uses the hash muxer in ffmpeg, which consolidates all streams into one. Use the streamhash muxer to emit hashes per-stream, which can isolate any changes to specific streams.
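For example (a minimal sketch; the -hash option picks the algorithm, md5 by default):

    # one combined hash over the mapped audio and video streams
    ffmpeg -i input.mkv -map 0:v -map 0:a -f hash -hash sha256 -

    # one hash line per stream
    ffmpeg -i input.mkv -map 0:v -map 0:a -f streamhash -hash sha256 -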


See the --only flag too:

    --only=<ONLY> 
    hash an input file container's first audio or video stream only, if available.  dano will fall back to default behavior, if no stream is available. [possible values: audio, video]


I noticed that both muxers convert audio to signed 16-bit PCM by default. Is there a way to avoid this behavior without specifying a codec?


No, 16-bit PCM is the default audio codec. If no `-c` is specified for a stream, ffmpeg will encode using the default codec. But if `-c X` is declared where X=`copy` or something else, then that is honored.
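So to hash the original packets without the PCM conversion, something like this (a minimal sketch):

    # default: audio is decoded and re-encoded to pcm_s16le before hashing
    ffmpeg -i input.flac -f streamhash -

    # hash the compressed packets as-is instead
    ffmpeg -i input.flac -c copy -f streamhash -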


My upload of the source package only finished 20 minutes ago. The Winget maintainer should update their manifest shortly.


API users shouldn't pay attention to the ffmpeg version but to those of the libraries. API and ABI breaks only happen at major version bumps of those libraries.
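For example, to check those (a minimal sketch, assuming pkg-config files are installed for your build):

    pkg-config --modversion libavcodec   # a major bump here signals an API/ABI break
    ffmpeg -version                      # also lists each libav* version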


The VVC decoder is available, but it's flagged as experimental, so you have to pass `-strict experimental` before the VVC input's `-i`.
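For example (a sketch; input.266 stands in for a raw VVC bitstream):

    ffmpeg -strict experimental -i input.266 output.mp4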


Thanks. Yes, I meant officially. I was hoping we could set the stage for VVC a little earlier. I know VVC is not popular on HN, or literally anywhere on the Internet, but I do hope to see it moving forward instead of something like MPEG-5 EVC, which is somewhat dead in the water.


I don't know that having so many codecs is a good thing unless they really add something. How does it compare to AV1 (which I was under the impression is becoming the natural successor to HEVC, with hardware support)?


Compared to AV1, VVC / H.266 is expected to offer a 20-30% reduction in bitrate at similar quality and a similar level of computational complexity. And it is already deployed and used in the real world in China and India. I believe Brazil is looking to use it as its next-generation broadcast codec, along with LCEVC.




VVC is marked as experimental while fuzzing continues on it.


Changelog:

- DXV DXT1 encoder

- LEAD MCMP decoder

- EVC decoding using external library libxevd

- EVC encoding using external library libxeve

- QOA decoder and demuxer

- aap filter

- demuxing, decoding, filtering, encoding, and muxing in the ffmpeg CLI now all run in parallel

- enable gdigrab device to grab a window using the hwnd=HANDLER syntax

- IAMF raw demuxer and muxer

- D3D12VA hardware accelerated H264, HEVC, VP9, AV1, MPEG-2 and VC1 decoding

- tiltandshift filter

- qrencode filter and qrencodesrc source

- quirc filter

- lavu/eval: introduce randomi() function in expressions

- VVC decoder (experimental)

- fsync filter

- Raw Captions with Time (RCWT) closed caption muxer

- ffmpeg CLI -bsf option may now be used for input as well as output

- ffmpeg CLI options may now be used as -/opt <path>, which is equivalent to -opt <contents of file <path>> (see the sketch after this list)

- showinfo bitstream filter

- a C11-compliant compiler is now required; note that this requirement will be bumped to C17 in the near future, so consider updating your build environment if it lacks C17 support

- Change the default bitrate control method from VBR to CQP for QSV encoders.

- removed deprecated ffmpeg CLI options -psnr and -map_channel

- DVD-Video demuxer, powered by libdvdnav and libdvdread

- ffprobe -show_stream_groups option

- ffprobe (with -export_side_data film_grain) now prints film grain metadata

- AEA muxer

- ffmpeg CLI loopback decoders

- Support PacketTypeMetadata of PacketType in enhanced flv format

- ffplay with hwaccel decoding support (depends on vulkan renderer via libplacebo)

- dnn filter libtorch backend

- Android content URIs protocol

- AOMedia Film Grain Synthesis 1 (AFGS1)

- RISC-V optimizations for AAC, FLAC, JPEG-2000, LPC, RV4.0, SVQ, VC1, VP8, and more

- Loongarch optimizations for HEVC decoding

- Important AArch64 optimizations for HEVC

- IAMF support inside MP4/ISOBMFF

- Support for HEIF/AVIF still images and tiled still images

- Dolby Vision profile 10 support in AV1

- Support for Ambient Viewing Environment metadata in MP4/ISOBMFF

- HDR10 metadata passthrough when encoding with libx264, libx265, and libsvtav1
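A sketch of the new -/opt syntax mentioned above (filenames here are hypothetical):

    # store a long filtergraph in a file...
    echo "scale=1280:-2,format=yuv420p" > graph.txt
    # ...then load it as the option value; equivalent to
    # -vf "scale=1280:-2,format=yuv420p"
    ffmpeg -i input.mp4 -/vf graph.txt output.mp4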


First off, ffmpeg is amazing; I'm very thankful to everyone involved in it.

> dnn filter libtorch backend

What's ffmpeg's plan regarding ML-based filters? Looking through the filter documentation, it seems like filters use three different backends: tensorflow, torch, and openvino. That doesn't seem optimal; is there any discussion about consolidating on one backend?

ML filters need model files, and the filters take a path to a model file as one of their arguments. This makes them really difficult to use: if you're lucky, you can find a suitable model to download somewhere; otherwise you need to find a separate model-training project and dataset and run that first. Are there any plans to streamline ML filters and model handling for ffmpeg? Maybe a model file repository, with an option of installing models into an official path on the system?
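For instance, using the existing sr filter means supplying your own model (a minimal sketch, assuming a build with the TensorFlow backend enabled; espcn.pb is a hypothetical model file you'd have to obtain or train yourself):

    # super-resolution with a TensorFlow-backed model
    ffmpeg -i input.mp4 -vf "sr=dnn_backend=tensorflow:model=espcn.pb" output.mp4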

Most image and video research uses ML now, but I don't get the impression that ffmpeg integrates these modern techniques well yet. Being able to do, for instance, spatial and temporal super-resolution using standard ffmpeg filters would be a big improvement, and I think things like automatic subtitles using Whisper would be a good fit too. But it should start with a coherent ML strategy regarding inference backends and model management.


> - ffmpeg CLI now all run in parallel

I think I read about this a few months ago but don't remember the details. What exactly does this do? Does it result in faster encoding/decoding if you have multiple filter graphs (for example, a single command line that transcodes to new audio, extracts an image, and creates a low-res version)?

> - ffmpeg CLI loopback decoders

No idea what this is...

Edit: threading => https://ffmpeg.org//index.html#cli_threading, loopback => https://ffmpeg.org/ffmpeg.html#Loopback-decoders

Loopback decoders are a nice concept. So could I use this to create a single ffmpeg command to extract images periodically (say 1/s) and then merge them into a horizontal strip (using the loopback decoder for this part)?


You don't need a loopback decoder for that. The periodic extraction will depend on a filter, and you can just clone and send the output of that filter to the tiling filter.
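For example (a sketch; the tile count, 5 here, is a placeholder and has to be fixed up front):

    # grab one frame per second, clone that stream, and emit both the
    # individual frames and a 5x1 strip
    ffmpeg -i input.mp4 -filter_complex \
        "[0:v]fps=1,split=2[imgs][t];[t]tile=5x1[strip]" \
        -map "[imgs]" frame_%03d.png \
        -map "[strip]" -frames:v 1 strip.png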


Had to go to ChatGPT for help. It appears that you need to know how many tiles to stitch. I was hoping to have that dynamically determined. Not sure if loopback will help.

ChatGPT said:

    ffmpeg -i input.mp4 -vf "fps=1,tile=3x1" -frames:v 1 output_stitched.png

Gemini:

    ffmpeg -i input_video.mp4 -vf "fps=1,scale=220:-1" -c:v png output.png


> - D3D12VA hardware accelerated H264, HEVC, VP9, AV1, MPEG-2 and VC1 decoding

I wonder if this also means that Chrome and Edge will be able to use this acceleration for their ffmpeg backend (instead of relying on MediaFoundation)?


And also, thank you for packaging.

