
Thanks, that actually explains it pretty well:

> Nx = project for a collection of libraries. Nx is the core library; the other libraries depend on it.

> If you come from Python, it can be thought of as kind of like NumPy. Long way to go, but they are working on that.

> The “slowness in Elixir” people worry about comes from immutability, copying, etc. Performance instead comes from a module that is part of Nx called Numerical Definitions.

> Inside a numerical definition you write a subset of Elixir; the Elixir kernel is replaced with a numerical kernel.

> Based on Google's XLA (Accelerated Linear Algebra).

> You can give it a computation graph and EXLA compiles it to run efficiently on CPU or GPU.
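For the curious, a numerical definition looks like a regular Elixir function declared with `defn`. This is a minimal sketch mirroring the canonical softmax example from the Nx announcement (the module name here is made up):

    defmodule MyTensorOps do
      import Nx.Defn

      # Inside defn, only a subset of Elixir is allowed, and operators
      # like / are rebound to their tensor equivalents, so the body
      # builds a computation graph instead of executing eagerly. A
      # compiler backend can then optimize the whole expression.
      defn softmax(t) do
        Nx.exp(t) / Nx.sum(Nx.exp(t))
      end
    end

Calling `MyTensorOps.softmax(Nx.tensor([1, 2, 3]))` then evaluates the definition, on whichever backend is configured.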



Pretty disappointing to see it using XLA. Anything TensorFlow-related is notoriously difficult to build on a local machine and is generally considered a dependency nightmare. I would've thought libtorch bindings would be much easier.


Our compiler backends are pluggable. We went with XLA because it seemed the most accessible to us three months ago, but you should be able to bring in libtorch or any other tensor compiler.
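To make the pluggability concrete, here is a sketch of how the early examples showed selecting a compiler for a given definition, via a module attribute (the exact configuration API may differ between versions, and the module name here is made up):

    defmodule Compiled do
      import Nx.Defn

      # Ask for this definition to be compiled with EXLA rather than
      # the default pure-Elixir evaluator; a different tensor compiler
      # could be plugged in via the same attribute.
      @defn_compiler EXLA
      defn double(t) do
        Nx.multiply(t, 2)
      end
    end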

That's actually one of the things I am really looking forward to: seeing what other compilers people will integrate. I am aware some of the neural network libraries have pluggable compilers, but it will be interesting to see it done at the tensor level. :)


That's great to hear. I'd love to see the Bazel monstrosity that is TensorFlow's XLA get easier to build because of Nx. Not holding my breath, though!


Yeah, I would love it if we could depend only on XLA. I think other communities like Julia could benefit from that too.


So, two questions. Is this a standard library module, and does this run on the GPU?



