Hacker News

Very impressive.

It'd be interesting to see what the lift would be to get encoding & decoding running in WebAssembly (wasm). Further, it'd be really neat to take something like the tflite_model_wrapper[1] and get it backed by something like tfjs-tflite[2], perhaps running atop, for example, tfjs-backend-webgpu[3].
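As a rough sketch of what that pipeline could look like today: tfjs-tflite can load a .tflite flatbuffer in the browser (it currently executes via XNNPACK-in-wasm, so the WebGPU backend below would only accelerate surrounding tfjs ops, not the tflite graph itself). The model path and input shape here are placeholders, not Lyra's actual artifacts.

```typescript
import * as tf from '@tensorflow/tfjs-core';
import '@tensorflow/tfjs-backend-webgpu';
import * as tflite from '@tensorflow/tfjs-tflite';

async function runInference(): Promise<void> {
  // Prefer WebGPU for tfjs ops; fall back if the browser lacks it.
  try {
    await tf.setBackend('webgpu');
  } catch {
    console.warn('WebGPU backend unavailable; using the default backend');
  }
  await tf.ready();

  // loadTFLiteModel fetches the flatbuffer and wraps it behind the
  // familiar tfjs predict() interface. Path is hypothetical.
  const model = await tflite.loadTFLiteModel('/models/encoder.tflite');

  const input = tf.zeros([1, 320]); // placeholder audio frame shape
  const output = model.predict(input) as tf.Tensor;
  console.log(await output.data());
}
```

The interesting gap the parent comment points at is exactly here: the tflite runtime is compiled into wasm with its backend baked in, rather than dispatching to whatever accelerator the page has access to.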

Longer run, the WebNN[4] spec should hopefully simplify and bake some of these libraries into the web platform, making inference much easier to run. But there's still an interesting challenge & question that I'm not sure how to tackle: how to take native code, compile it to wasm, but have some of the implementation provided elsewhere.
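For flavor, the WebNN API shape roughly looks like the following: build a small graph with MLGraphBuilder and let the browser place it on CPU/GPU/NPU. The spec is still evolving, so method names and descriptor fields here track one draft and may change; this is an illustration, not a stable API.

```javascript
async function tinyWebNNDemo() {
  // Ask the browser for an ML context; deviceType is a hint, per the draft spec.
  const context = await navigator.ml.createContext({ deviceType: 'gpu' });
  const builder = new MLGraphBuilder(context);

  // Two float32 inputs of shape [1, 4], added elementwise.
  const desc = { dataType: 'float32', dimensions: [1, 4] };
  const a = builder.input('a', desc);
  const b = builder.input('b', desc);
  const c = builder.add(a, b);

  const graph = await builder.build({ c });

  const aBuf = new Float32Array([1, 2, 3, 4]);
  const bBuf = new Float32Array([1, 1, 1, 1]);
  const cBuf = new Float32Array(4);
  const results = await context.compute(graph, { a: aBuf, b: bBuf }, { c: cBuf });
  console.log(results.outputs.c);
}
```

The appeal for a codec like Lyra is that the neural half of the pipeline could run on whatever accelerator the platform exposes, while the DSP half stays in wasm.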

At the moment, Lyra V2 can already use XNNPACK[5], which does have a pretty good wasm implementation. But being able to swap out implementations, so that for example we could use the GPU or other accelerators, could still bring real benefits on various platforms.

[1] https://github.com/google/lyra/pull/89/files#diff-ed2f131a63...

[2] https://www.npmjs.com/package/@tensorflow/tfjs-tflite

[3] https://www.npmjs.com/package/@tensorflow/tfjs-backend-webgp...

[4] https://www.w3.org/TR/webnn/

[5] https://github.com/google/XNNPACK



Why would you want to run codecs in WASM? Makes no sense to me.


Forwards and backwards compatibility, and not having to rely on every vendor to ship support for the codec you want to use in your software.


Because the web is awesome, and anyone can use something built for it with zero friction.



