Pretty disappointing to see it using XLA. Anything in the TensorFlow ecosystem is notoriously difficult to build on a local machine and is generally considered a dependency nightmare. I would've thought libtorch bindings would be much easier.
Our compiler backends are pluggable. We went with XLA because it seemed the most accessible option to us three months ago, but you should be able to bring in libtorch or any other tensor compiler.
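For a concrete picture of that pluggability, here is a minimal sketch assuming the `@defn_compiler` module attribute from the Nx announcement; `EXLA` is the XLA backend discussed in this thread, while `MyTorchBackend` is a hypothetical libtorch-based compiler someone could slot in instead:

```elixir
defmodule Demo do
  import Nx.Defn

  # Compile this numerical definition with the XLA-based backend (EXLA).
  @defn_compiler {EXLA, platform: :host}
  defn double(t), do: Nx.multiply(t, 2)

  # Swapping compilers would be a one-line change; MyTorchBackend is a
  # hypothetical module implementing the same compiler contract:
  #   @defn_compiler MyTorchBackend
end
```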
That's actually one of the things I am really looking forward to: seeing what other compilers people will integrate. I am aware some of the neural-network libs have pluggable compilers, but it will be interesting to see it done at the tensor level. :)
> Nx = the umbrella project for a collection of libraries. Nx itself is the core library; the other libraries depend on it.
> If you come from Python, it can be thought of as roughly analogous to NumPy (see the first sketch below). There is a long way to go, but that is being worked on.
> The "slowness of Elixir" comes from immutability, copying, etc. Performance comes from a module that is part of Nx called numerical definitions (`defn`).
> Inside a numerical definition you write a subset of Elixir; the standard Elixir kernel is replaced with a numerical kernel (see the `defn` sketch below).
> Based on Google's XLA (Accelerated Linear Algebra).
> You give it a computation graph and EXLA compiles it to run efficiently on CPU or GPU.
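To illustrate the NumPy analogy from the notes, a minimal sketch using the core `Nx` tensor API; these are the basic eager operations, evaluated in pure Elixir unless a compiler backend is plugged in:

```elixir
# Immutable tensors with NumPy-style operations.
t = Nx.tensor([[1, 2], [3, 4]])

Nx.shape(t)        #=> {2, 2}
Nx.add(t, 10)      #=> tensor [[11, 12], [13, 14]]
Nx.sum(t)          #=> tensor 10
Nx.transpose(t)    #=> tensor [[1, 3], [2, 4]]
```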
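And a sketch of a numerical definition compiled through EXLA, based on the softmax example from the Nx announcement; the `@defn_compiler` attribute and `platform:` option are as shown there and may have evolved since. Inside `defn`, operators like `/` are rewritten by the numerical kernel to operate on tensors:

```elixir
defmodule MyDefn do
  import Nx.Defn

  # Only a subset of Elixir is allowed inside defn, and the standard
  # kernel is swapped for a numerical one, so `/` below is tensor
  # division, not float division.
  @defn_compiler {EXLA, platform: :host}   # e.g. platform: :cuda for GPU
  defn softmax(t) do
    Nx.exp(t) / Nx.sum(Nx.exp(t))
  end
end

MyDefn.softmax(Nx.tensor([1.0, 2.0, 3.0]))
```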