
100%.

With that said... there is a reason why ML went with Python. GPU programming requires C-based libraries. NodeJS does not have a good FFI story, and neither do Rust or Go. Yes, there's support, but Python's FFI support is actually better here. Zig is still too immature.

The world deserves a Python-like language with a better type system, a better distribution system, and far fewer dynamism footguns: less rope for people to hang themselves with.


What about Elixir?

https://pragprog.com/titles/smelixir/machine-learning-in-eli...

A Practical Guide to Machine Learning in Elixir - Chris Grainger

https://www.youtube.com/watch?v=Es08MRtSkoE


I've actually done a fair bit of ML work in Elixir. In practice I found:

1) It's generally harder to interface with existing libraries and models. For example, whisperX [0] combines generic Whisper speech-recognition models with additional techniques like dynamic time warping to produce transcripts with more accurate timestamp alignment, something that was very helpful when generating subtitles. But because most of that logic lives only in the Python library, using it from Elixir means writing a lot more tooling around the existing Bumblebee Whisper implementation [1].

but,

2) It's way easier to ship models I built and trained entirely with Elixir's ML ecosystem (EXLA, Nx, Bumblebee). I trained a few models doing basic visual recognition tasks (detecting scene transitions, credits, title cards, etc.), using the existing CLIP model as a visual frontend and training a small classifier on CLIP's output. It was pretty straightforward in Elixir, and I love that I can run the exact same code on my laptop and server without dealing with lots of dependency and environment issues.

Livebook is also incredibly nice. My typical workflow has become prototyping things in Livebook with some custom visualization tools I made, then connecting to a Livebook instance running on EC2 to do the actual training run. From there, shipping and using the model is seamless: I just publish the wrapping module as a library on our corporate GitHub, which lets anyone else import it straight into Livebook and use it.

[0] https://github.com/m-bain/whisperX

[1] https://hexdocs.pm/bumblebee/Bumblebee.Audio.Whisper.html


Thanks for sharing your experience with Elixir and ML.

Hopefully over time Elixir's ML ecosystem will become even better.


> NodeJS does not have a good FFI story, and neither do Rust or Go. Yes, there's support, but Python's FFI support is actually better here.

Huh. I've found Rust's FFI very pleasant to work with. I understand that Zig's is second to none, but what does Python offer in this domain that Rust (or Go) doesn't?


Rust's problem is similar to Go's: the language makes some very strong guarantees, and FFI breaks those guarantees, so working with FFI in those languages "infects" the codebase and undermines the value-add of using the language in the first place.

In Rust's case, it's the necessity of wrapping FFI in unsafe. Memory deallocation, e.g. cudaFree(), is just part of the underlying reality; manually managing memory in a language with a borrow checker rather defeats the purpose of using a language with a borrow checker in the first place. Python lets library authors write __enter__ and __exit__ dunder methods so that deallocation is handled correctly via context managers, which is a much more elegant abstraction. Yes, in Rust you can implement the Drop trait, but then the caller needs to remember to put the object in its own block... like I said, it's definitely possible in Rust, it's just not as nice a story.
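
To make that concrete, here is roughly what the Drop version looks like. A minimal sketch, untested, assuming the CUDA runtime is linked; real code would generate the bindings with bindgen or use an existing CUDA crate:

    use std::ffi::c_void;

    // Hand-written bindings to two CUDA runtime calls.
    extern "C" {
        fn cudaMalloc(dev_ptr: *mut *mut c_void, size: usize) -> i32;
        fn cudaFree(dev_ptr: *mut c_void) -> i32;
    }

    // RAII wrapper: the device buffer is freed when the value goes
    // out of scope, roughly what a Python __exit__ would do.
    pub struct DeviceBuffer {
        ptr: *mut c_void,
        len: usize,
    }

    impl DeviceBuffer {
        pub fn new(len: usize) -> Result<Self, i32> {
            let mut ptr = std::ptr::null_mut();
            // SAFETY: cudaMalloc writes a valid device pointer on success.
            let status = unsafe { cudaMalloc(&mut ptr, len) };
            if status == 0 { Ok(Self { ptr, len }) } else { Err(status) }
        }

        pub fn len(&self) -> usize { self.len }
    }

    impl Drop for DeviceBuffer {
        fn drop(&mut self) {
            // SAFETY: ptr came from cudaMalloc and is freed exactly once.
            unsafe { cudaFree(self.ptr) };
        }
    }

Workable, but compared to a with block it's a fair amount of ceremony for one deallocation call.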


I don't see how what you describe wouldn't apply in general to FFI between any two languages with different resource-management philosophies. In particular:

> Yes, in Rust you can implement the Drop trait, but then the caller needs to remember to put the object in its own block...

Why would you need to remember to put the object in its own block? If you want to manually control deallocation, just call drop manually (or put the object in its own block if you really prefer). If you don't care, just let the Rust compiler pick a time to drop. In both cases, the most important guarantee – that drop doesn't happen while references to the object live – is still upheld.
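
A sketch of both options, reusing the hypothetical DeviceBuffer wrapper from upthread:

    fn run() -> Result<(), i32> {
        let buf = DeviceBuffer::new(1 << 20)?;
        // ... launch kernels against buf ...

        // Eager: free right now with an explicit call; no extra block
        // needed, and using `buf` afterwards is a compile error.
        drop(buf);

        // Lazy: just let the compiler drop (and thus cudaFree) it at
        // the end of the enclosing scope.
        let other = DeviceBuffer::new(1 << 20)?;
        let _ = other.len();
        Ok(())
    } // `other` is freed here automatically

And the borrow checker rejects any use of buf after the drop(buf), which is strictly more than the Python version guarantees.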


> Python lets library authors write __enter__ and __exit__ dunder methods to ensure that memory deallocation is handled correctly via Python context managers, which is a much more elegant abstraction

What's stopping you from writing a WrapperPtr type and implementing Drop for it in Rust? That would achieve the same thing as the dunder methods in Python.
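
A scoped helper even recovers the with-block shape. A sketch, assuming an RAII type like the hypothetical DeviceBuffer upthread:

    // Scoped helper in the spirit of a context manager: the closure
    // runs with the buffer, and cleanup is unconditional (a panic
    // unwinds through here and still runs Drop).
    fn with_device_buffer<R>(
        len: usize,
        f: impl FnOnce(&DeviceBuffer) -> R,
    ) -> Result<R, i32> {
        let buf = DeviceBuffer::new(len)?;
        let result = f(&buf);
        Ok(result) // buf is dropped, and the device memory freed, here
    }

Called as with_device_buffer(4096, |buf| { ... }), it reads almost exactly like Python's with device_buffer(4096) as buf:.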


Real-world experience, which the author probably lacks in the other languages. It would be an absurd statement otherwise.

> The world deserves a Python-like language with a better type system, a better distribution system, and far fewer dynamism footguns: less rope for people to hang themselves with.

Nim. The tooling is still immature though.


C#/.NET? (Its overly strong focus on worthless backwards compatibility and slow (very slow) progress on basic language features notwithstanding.)

Admittedly I haven't used C# in a few years, but to my knowledge it's much more ergonomic than Java, and personally it's my preferred language. The only thing stopping me from using it more is that it has a much smaller community than Java, Python, etc. Wondering what you think is missing.

It’s called Java.


