I don't think this is right. The process will crash, and the Supervision strategy you are using will determine what happens from there. This is what the BEAM is all about. The thing with NIFs is that they can crash the entire VM if they error.
Erlang's (Elixir's) error management approach is actually "let it crash".
This is based on the acknowledgment that if you have a large number of long-running processes, at some point something will crash anyway, so you might as well be good at managing crashes ;-)
Yes, but that's not Rust's error management strategy. Most Rust code isn't written with recovery from panics in mind, so it can have unintended consequences if you catch panics and then retry.
How so? The whole point of unwinding is to clean up gracefully on panics; how did it break for you?
It's also not like there is much of a choice here. Unwinding across FFI boundaries (e.g. out of the NIF call) is undefined behaviour, so the only other option is aborting on panics.
Yeah, this would at least cause me to email my boss and say "Can you just confirm, you want me to transfer $25 million to this account? I'll hold off until you give me confirmation in writing"
Hell, I do this if our tester hasn't managed to go over some aspect of our release. That way I get it in writing from the product owner that he has OK'd it, and if he sends me a Teams message I ask him to email me confirmation.
Reminds me of that old urban legend about the trader who ordered the coal futures that eventually showed up as actual coal. In that story there's always an element where the subordinates who have to carry out the transactions have been abused to the point of never questioning his decisions.
Yep sometimes I go so far as printing the email and sticking it in a meatspace folder on my desk. Just depends on how important that sign off really is and the consequences of not being able to produce it.
I think we need to give it time. Python had a slow and steady growth from 1991 until today and it has eaten so much of the data analysis and ML world (backed by C++, C and more recently Rust).
But Python doesn't scale vertically very well. A language like Elixir can grow to fit the size of the box it is on, making use of all the cores, and then scale horizontally as well with distributed Elixir/Erlang, without too much additional ceremony.
Elixir getting a good story around webdev (Phoenix and LiveView) and more recently a good story around ML is going to increase its adoption, but it's not going to happen overnight. Maybe tomorrow's CTOs will take the leap.
Strong disagree; this is mostly a skill issue. I have written C++ modules for Python with pybind11 to speed up some code significantly, but I ended up reverting to pure Python once I learned how to move memory around efficiently. Numpy is very good at what it does, and if you really have some custom code that needs to go faster you can run it externally through something like pybind11. If you are writing ultra-low-latency code then you're right, but you can make Python really fast if you are hyper-aware of how memory is managed; I recommend tracemalloc. For instance, instead of pickling numpy arrays to send them to child processes, you can use shared memory and mutexes to define a common buffer that can be represented as a numpy array and shared between parent and child processes. That's a massive performance win right there, and most people simply never realize Python is capable of it.
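In case it's useful, here's a minimal sketch of that shared-memory approach using the stdlib's multiprocessing.shared_memory (Python 3.8+). The array size, the `worker` function and the doubling are just placeholders for whatever the real workload is:

```python
import numpy as np
from multiprocessing import Process, Lock, shared_memory

def worker(shm_name, shape, dtype, lock):
    # Attach to the existing shared block and view it as a numpy array (no copy, no pickling)
    shm = shared_memory.SharedMemory(name=shm_name)
    arr = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    with lock:          # the "mutex" guarding writes to the shared buffer
        arr *= 2.0
    del arr             # drop the view before closing the block
    shm.close()

if __name__ == "__main__":
    data = np.arange(1_000_000, dtype=np.float64)
    shm = shared_memory.SharedMemory(create=True, size=data.nbytes)
    arr = np.ndarray(data.shape, dtype=data.dtype, buffer=shm.buf)
    arr[:] = data       # one copy in; the child then works on the same memory in place
    lock = Lock()

    p = Process(target=worker, args=(shm.name, arr.shape, arr.dtype, lock))
    p.start()
    p.join()

    print(arr[:5])      # [0. 2. 4. 6. 8.] -- the parent sees the child's writes
    del arr
    shm.close()
    shm.unlink()        # free the block once everyone is done with it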
I said Python doesn't scale well, and you say "it does if you use an escape hatch to a faster language".
Sure. Writing C++ that utilises your resources effectively and then writing bindings so you can use it from Python is great. But with Elixir, if I've got 8 cores and 8 processes, those 8 processes run in parallel.
If I want raw CPU speed I can write something in Rust, C, C++ or Zig and then call it while still using Elixir semantics.
Not to mention that with Nx you can write Elixir code that compiles to run on the GPU, without writing any bindings.
If you are going to iterate through some of the result but not all of it, the generator means you aren't throwing away a bunch of work you've already done.
It can also be more cache-friendly, since it doesn't need to allocate a whole new list's worth of memory.
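Rough illustration of the partial-consumption point, with `expensive` standing in for whatever per-item work you're actually doing:

```python
def expensive(x):
    return x * x

data = range(1_000_000)

as_list = [expensive(x) for x in data]   # does all 1,000,000 calls up front and allocates the whole list
as_gen  = (expensive(x) for x in data)   # does no work yet and holds no results

# If you only need the first item that passes some test, the generator stops
# after a handful of calls; the list version already paid for everything.
first_big = next(v for v in as_gen if v > 10_000)
print(first_big)   # 10201 (101 * 101)
```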
One of the downsides of it being lazy is that if list x is mutated between when you create the generator and when you consume it, those changes are reflected in the generator.
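Quick demo of that gotcha: the appended element shows up when the generator is finally consumed, but not in the already-built list.

```python
xs = [1, 2, 3]

gen  = (x * 10 for x in xs)   # lazy: nothing evaluated yet
comp = [x * 10 for x in xs]   # eager: a snapshot of xs as it is right now

xs.append(4)                  # mutate the source list before consuming the generator

print(list(gen))   # [10, 20, 30, 40] -- the generator sees the appended element
print(comp)        # [10, 20, 30]     -- the list comprehension does not
```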
I've done some micro-benchmarks and it really depends on what you are doing. Profiling with cProfile or py-spy, or using %timeit in an IPython shell, will tell you if it makes a difference.
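For example, something like this with the stdlib timeit module (or %timeit in IPython) is enough to compare the two for your own workload; the sum-of-squares body here is just an arbitrary stand-in:

```python
import timeit

setup = "data = range(1_000_000)"

list_version = "sum([x * x for x in data])"
gen_version  = "sum(x * x for x in data)"

print("list comp:", timeit.timeit(list_version, setup=setup, number=10))
print("genexpr:  ", timeit.timeit(gen_version, setup=setup, number=10))
```

Which one wins depends on the workload, which is the point: measure it rather than assume.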