Most consumer-grade audio hardware really only does playback. We've been doing software audio since around the turn of the century.

In Chrome's implementation, none of the mixing, DSP, etc. goes through the hardware, and I'm quite certain that's the case for every other browser out there.




Audio controllers do at least handle hardware-accelerated decoding of audio streams in e.g. H.264, though, yes?

But my question was more like: is Web Audio a mess mostly because it's an attempt to expose the features of the twenty-odd different OS audio backends on Windows/Mac/Linux, where the odd inclusions and exclusions map to whatever functionality all those backends happen to share, and which Chrome can therefore expose?


> is Web Audio a mess mostly because it's an attempt to expose the features of the twenty-odd different OS audio backends

That is a good guess, but no. The main features of the Web Audio API (built-in nodes, etc.) are not backed by any kind of OS-level backend; it's all implemented in software in the browser. The spec design was based on what someone thought were useful units of audio processing. It's not a wrapping/adaptation of some pre-existing functionality.
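
To illustrate (a minimal sketch, not from the spec; the node types are real Web Audio built-ins, the parameter values are arbitrary): every node in this graph is DSP the browser runs on the CPU, and only the final mixed buffer ever reaches the audio hardware.

    const ctx = new AudioContext();
    // (Browsers may require a user gesture before the context actually runs.)

    // Built-in nodes: an oscillator source, a filter, and a gain stage.
    // All of this processing is implemented in software by the browser.
    const osc = new OscillatorNode(ctx, { type: 'sawtooth', frequency: 220 });
    const filter = new BiquadFilterNode(ctx, { type: 'lowpass', frequency: 800 });
    const gain = new GainNode(ctx, { gain: 0.5 });

    // The graph: osc -> filter -> gain -> hardware output.
    osc.connect(filter).connect(gain).connect(ctx.destination);
    osc.start();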


H.264 is a video codec.

If you mean AAC or MP3, which are usually used in the audio track alongside H.264 in an MP4 or MPEG-2 TS container: nobody outside of low-power/embedded bothers to decode the audio codec in hardware; it's just not worth it.
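
You can see this from the Web Audio side, too: decodeAudioData() takes the compressed bytes and hands back uncompressed PCM, decoded in software by the browser per the point above. A minimal sketch (the file URL is hypothetical):

    // Inside an async function (or a module with top-level await).
    const ctx = new AudioContext();

    // Fetch a compressed stream (AAC/MP3/...) and decode it in software.
    const resp = await fetch('/audio/track.m4a');  // hypothetical URL
    const encoded = await resp.arrayBuffer();

    // decodeAudioData() resolves to an AudioBuffer of raw PCM samples.
    const pcm = await ctx.decodeAudioData(encoded);
    console.log(pcm.sampleRate, pcm.numberOfChannels, pcm.duration);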



