
have you talked/written about this in a longer form anywhere? I'd love to read how you got into this if so.

There's plenty of Rust-talk here on HN, and plenty of numerical/ML talk, but they seem like very non-overlapping user groups. Particularly curious why you chose Rust over Julia or even Go, given that (to an outsider) both seem to have a lot more people excited to do ML with them.



I don't really have time at the moment to write this up in a blog post or article. Given the current state of the field, it is often handy to be able to bind to C or C++ libraries (such as liblinear, Tensorflow, etc.).

I used to use Go when Rust 1.0 was not out yet. I was generally happy with Go. My primary gripes were:

(1) C calls are relatively expensive, which is ok for long-running functions (e.g. running a Tensorflow session), but not nice for calling C code in a tight loop.
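
To make (1) concrete, here is a minimal cgo sketch; the trivial `add` function is made up purely to show the pattern. Every call into C pays a fixed boundary cost, which is negligible for a long-running Tensorflow session but dominates when the C function itself does almost nothing:

    package main

    /*
    static int add(int a, int b) { return a + b; }
    */
    import "C"

    import (
        "fmt"
        "time"
    )

    func main() {
        // Each C.add call crosses the cgo boundary; that fixed per-call cost
        // dwarfs the work done by the C function itself in a tight loop.
        const n = 1000000
        start := time.Now()
        sum := 0
        for i := 0; i < n; i++ {
            sum += int(C.add(C.int(i), C.int(1)))
        }
        fmt.Printf("sum=%d, %v per cgo call\n", sum, time.Since(start)/n)
    }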

(2) Go gives no control over memory alignment, while C++ machine learning libraries generally prefer alignment on a certain boundary. IIRC Go aligns slices on a 16-byte boundary and Eigen (which is used in Tensorflow) wants arrays that are aligned on a 32-byte boundary. I don't know if this is still true, but Tensorflow used to make a copy of an array that was incorrectly aligned. So, you had to resort to doing the allocation in C and then using the SliceHeader hack to give the array something of a native Go interface. But since the backing memory is allocated in C-land and Go does not have general destructors (only defer), you have to rely on finalizers for deallocation (which give no guarantee about when they are called) and/or rely on the user to explicitly call a resource deallocation method. Of course, that does not really gel well with slices.
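
A rough sketch of the workaround described in (2), with made-up names (alignedFloats, newAlignedFloats): allocate aligned memory in C, point a Go slice header at it, and fall back on a finalizer plus an explicit Free method for cleanup:

    package main

    /*
    #include <stdlib.h>
    */
    import "C"

    import (
        "fmt"
        "reflect"
        "runtime"
        "unsafe"
    )

    // alignedFloats owns a 32-byte-aligned buffer allocated in C-land and
    // exposes it as a regular Go slice.
    type alignedFloats struct {
        ptr  unsafe.Pointer
        Data []float32
    }

    func newAlignedFloats(n int) *alignedFloats {
        // aligned_alloc is C11; strictly, the size should be a multiple of
        // the alignment, which is glossed over here for brevity.
        p := C.aligned_alloc(32, C.size_t(n*4))
        f := &alignedFloats{ptr: p}

        // The SliceHeader hack: point a Go slice at memory the GC knows
        // nothing about.
        hdr := (*reflect.SliceHeader)(unsafe.Pointer(&f.Data))
        hdr.Data = uintptr(p)
        hdr.Len = n
        hdr.Cap = n

        // Finalizers make no promise about when they run, so callers should
        // still call Free explicitly; the finalizer is only a backstop.
        runtime.SetFinalizer(f, (*alignedFloats).Free)
        return f
    }

    func (f *alignedFloats) Free() {
        if f.ptr != nil {
            C.free(f.ptr)
            f.ptr = nil
            f.Data = nil
        }
    }

    func main() {
        v := newAlignedFloats(8)
        for i := range v.Data {
            v.Data[i] = float32(i)
        }
        fmt.Println(v.Data)
        v.Free() // explicit release; don't wait for the finalizer
    }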

(3) Rust + LLVM optimizes far better than the Go compiler. This is not a secret. Parametric polymorphism provides many more opportunities for inlining than dynamic dispatch, and LLVM inlines and optimizes aggressively. I spent a lot of time comparing compiled Rust and compiled Go assembly, and the differences are quite shocking. That said, the Go people have been working on better inlining recently, and with the possibility of generics in Go 2.0, things will probably get better.
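
The dispatch difference in (3), sketched from the Go side with a hypothetical Vector interface: the interface version pays an indirect call per element that the compiler generally cannot inline, whereas the concrete version (or a monomorphized Rust generic) compiles down to a plain tight loop:

    package main

    import "fmt"

    // Vector is a made-up abstraction; calls through it use dynamic
    // dispatch via the interface's method table.
    type Vector interface {
        Len() int
        At(i int) float64
    }

    type Dense []float64

    func (d Dense) Len() int         { return len(d) }
    func (d Dense) At(i int) float64 { return d[i] }

    // dotDyn pays an indirect call for every At and Len, which blocks
    // inlining and bounds-check elimination.
    func dotDyn(a, b Vector) float64 {
        var s float64
        for i := 0; i < a.Len(); i++ {
            s += a.At(i) * b.At(i)
        }
        return s
    }

    // dotConcrete works on the concrete type, so the accesses inline and the
    // loop stays tight; roughly what a monomorphized generic gives you in Rust.
    func dotConcrete(a, b Dense) float64 {
        var s float64
        for i := range a {
            s += a[i] * b[i]
        }
        return s
    }

    func main() {
        a, b := Dense{1, 2, 3}, Dense{4, 5, 6}
        fmt.Println(dotDyn(a, b), dotConcrete(a, b))
    }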

That said, I think that in larger teams, Go may still be the better choice. It is easier to pick up. It is harder to write code that is 'too clever'. It has good packages, such as gonum, which provides a lot of linear algebra and plotting functionality. Tensorflow has direct support for Go, including for building computation graphs, etc.
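
As a small taste of gonum, here is a basic matrix product with its mat package (just a generic usage sketch, not code from the parent commenter):

    package main

    import (
        "fmt"

        "gonum.org/v1/gonum/mat"
    )

    func main() {
        // A 2x3 times 3x2 product; c ends up 2x2.
        a := mat.NewDense(2, 3, []float64{1, 2, 3, 4, 5, 6})
        b := mat.NewDense(3, 2, []float64{7, 8, 9, 10, 11, 12})

        var c mat.Dense
        c.Mul(a, b)

        fmt.Printf("%v\n", mat.Formatted(&c))
    }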

I would love to try Julia! But I did not progress much beyond the first few steps of the tutorial ;). I think I would like it a lot for quick exploration, in the vein of Python + numpy.


We do have people interested, but it's not a ton so far.

One big reason that numerics aren't the best in Rust right now is the lack of integer generics. Some people are waiting for that before bothering to write libraries, since it would change those libraries so much anyway.



