Hacker News | el_oni's comments

Rustler catches panics before they crash the VM and raises them on the Elixir side as an exception. So your process might crash, but the VM won't.


That's a neat way to get corrupted state in your application, especially when users of said language don't realize that their language has exceptions.

I wrote this recently about Go, but it equally applies to any Rust application that tries to recover from a panic.

https://kristoff.it/blog/go-exceptions-unconvinced/


I don't think this is right. The process will crash, and the Supervision strategy you are using will determine what happens from there. This is what the BEAM is all about. The thing with NIFs is that they can crash the entire VM if they error.


Erlang's (and Elixir's) error management approach is actually "let it crash".

This is based on the acknowledgment that if you have a large number of long-running processes, at some point something will crash anyway, so you may as well be good at managing crashes ;-)

https://dev.to/adolfont/the-let-it-crash-error-handling-stra...


Yes, but that's not Rust's error management strategy. Most Rust code isn't written with recovery from panics in mind, so it can have unintended consequences if you catch panics and then retry.


This is terrible, actually. And I've run into it, causing a memory leak.


How so? The whole point of unwinding is to gracefully clean up on panics; how did it leak for you?

It's also not like there is much of a choice here. Unwinding across FFI boundaries (e.g. out of the NIF call) is undefined behaviour, so the only other option is aborting on panics.


Yes. Abort early in unit tests, core dump so it never makes it to prod


The panic is converted to an Erlang error exception. You have to explicitly ignore it to make unit tests pass in spite of it.

I am still interested in the situation you observed.


It's sufficiently well known that, as a British 30-something, I understood what was being alleged just from the headline.


Very cool, this should help me separate my different testing environments.

Thanks for sharing!


The boring company released a flamethrower[0] presumably to raise some capital.

[0]https://www.boringcompany.com/not-a-flamethrower


I've been using d2 recently [0]. It's similar enough to Mermaid, but with the CLI you can output SVG and PNG and get some decent-looking diagrams.

[0] https://d2lang.com/


Yeah, this would at least cause me to email my boss and say "Can you just confirm, you want me to transfer $25 million to this account? I'll hold off until you give me confirmation in writing"

Hell, I do this if our tester hasn't managed to go over some aspect of our release. That way I get it in writing from the product owner that he has OK'd it, and if he sends me a Teams message I ask him to email me confirmation.


Reminds me of that old urban legend about the trader who ordered the coal futures that eventually showed up as actual coal. In that story there's always an element where the subordinates who have to carry out the transactions have been abused to the point of never questioning his decisions.


There is a very well written version of this from many years ago on the Daily WTF:

https://thedailywtf.com/articles/special-delivery

It is written well enough that I could just about convince myself this actually happened!


Yep, sometimes I go so far as printing the email and sticking it in a meatspace folder on my desk. It just depends on how important that sign-off really is and the consequences of not being able to produce it.


I think we need to give it time. Python had a slow and steady growth from 1991 until today and it has eaten so much of the data analysis and ML world (backed by C++, C and more recently Rust).

But Python doesn't vertically scale very well. A language like Elixir can grow to fit the size of the box it is on, making use of all the cores, and then without too much additional ceremony scale horizontally as well with distributed Elixir/Erlang.

Elixir getting a good story around webdev (Phoenix and LiveView), and more recently a good story around ML, is going to increase its adoption, but it's not going to happen overnight. Maybe tomorrow's CTOs will take the leap.


> Python doesn't vertically scale very well

Strong disagree; this is a skill issue. I have written C++ modules for Python with pybind11 to speed up some code significantly, but ended up reverting to pure Python once I learned how to move memory efficiently. NumPy is very good at what it does, and if you really have custom code that needs to go faster you can run it externally through something like pybind11.

If you are writing ultra-low-latency code then you're right. But you can make Python really fast if you are hyper-aware of how memory is managed; I recommend tracemalloc. For instance, instead of pickling NumPy arrays to send them to child processes, you can use shared memory and mutexes to define a common buffer which can be represented as a NumPy array and shared between parent and child processes. That's a massive performance win right there, and most people simply never realize Python is capable of such things.
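A minimal sketch of the shared-memory approach described above, using the stdlib `multiprocessing.shared_memory` module (Python 3.8+) and NumPy; the function name and sizes are illustrative, not from the original comment, and locking is omitted for brevity:

```python
# Share a NumPy buffer with a child process via shared memory
# instead of pickling the array across the process boundary.
from multiprocessing import Process, shared_memory

import numpy as np


def double_in_place(shm_name, shape, dtype):
    # Attach to the existing shared block and view it as an ndarray.
    shm = shared_memory.SharedMemory(name=shm_name)
    arr = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    arr *= 2  # mutate in place; no copy crosses the process boundary
    shm.close()


if __name__ == "__main__":
    # 1M float64 values = 8 MB of shared memory.
    shm = shared_memory.SharedMemory(create=True, size=8 * 1_000_000)
    arr = np.ndarray((1_000_000,), dtype=np.float64, buffer=shm.buf)
    arr[:] = 1.0

    p = Process(target=double_in_place, args=(shm.name, arr.shape, arr.dtype))
    p.start()
    p.join()

    print(arr[0])  # 2.0 — the child's write is visible to the parent
    shm.close()
    shm.unlink()
```

In a real system you would pair this with a lock or mutex (e.g. `multiprocessing.Lock`) if parent and child can write concurrently.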


I said Python doesn't scale well and you say "it does if you use an escape hatch to a faster language".

Sure, writing C++ that utilises your resources effectively and then writing bindings so you can use it from Python is great. But with Elixir, if I've got 8 cores and 8 processes, those 8 processes run in parallel.

If I want raw CPU speed I can write something in Rust, C, C++ or Zig and then call it, still using Elixir semantics.

Not to mention that with Nx you can write Elixir code that compiles to run on the GPU, without writing any bindings.


I went back and rewrote the systems in pure Python after learning how to manage memory more efficiently using common tools, methods, and libraries.


> Strong disagree, this is a skill issue.

I think the point with Elixir and Nx is that any difficulty in parallel performance is abstracted away


You can read the transcript.

It's overtraining syndrome, caused by too much training without enough rest. Athletes who try to push through struggle to perform as well.


It depends on what you are doing with the value.

If you are going to iterate through some of the resulting thing but not all of it, then the generator means you aren't throwing away a bunch of the work you've done.

It can also be more cache-friendly: it doesn't need to allocate a whole new list's worth of memory.
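The allocation difference is easy to see with `sys.getsizeof` (exact byte counts vary by interpreter version; the variable names here are just for illustration):

```python
import sys

# The list comprehension allocates storage for every element up front.
squares_list = [n * n for n in range(100_000)]

# The generator expression is a constant-size object; elements are
# produced one at a time as you iterate.
squares_gen = (n * n for n in range(100_000))

print(sys.getsizeof(squares_list))  # hundreds of kB for the list object alone
print(sys.getsizeof(squares_gen))   # a couple hundred bytes, regardless of length
```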

One of the downsides of it being lazy is that if list x is mutated between when you create the generator and when you consume it, those changes are reflected in the generator.

I've done some micro-benchmarks and it really depends on what you are doing. Profiling with cProfile or py-spy, or using %timeit in an IPython shell, will tell you if it makes a difference.
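Outside IPython, the stdlib `timeit` module does the same job; a sketch of such a micro-benchmark (which form wins depends on the workload, so no winner is asserted here):

```python
import timeit

setup = "data = list(range(10_000))"

# Sum via a list comprehension: materializes the full list first.
list_time = timeit.timeit("sum([n * n for n in data])", setup=setup, number=200)

# Sum via a generator expression: no intermediate list.
gen_time = timeit.timeit("sum(n * n for n in data)", setup=setup, number=200)

print(f"list comp: {list_time:.3f}s  genexp: {gen_time:.3f}s")
```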


P.S. To make that function an actual generator you could use `yield from` instead of `return`.


the function isn't a generator, but that value is a generator expression: https://peps.python.org/pep-0289/
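The distinction the two comments above are drawing can be checked with `inspect.isgeneratorfunction`; the function names here are hypothetical stand-ins for the one being discussed:

```python
import inspect


def returns_genexp(xs):
    # An ordinary function whose *return value* happens to be a
    # generator expression. The function itself is not a generator.
    return (n * 2 for n in xs)


def actual_generator(xs):
    # `yield from` makes the function itself a generator function.
    yield from (n * 2 for n in xs)


print(inspect.isgeneratorfunction(returns_genexp))    # False
print(inspect.isgeneratorfunction(actual_generator))  # True

# Both produce the same values when consumed.
print(list(returns_genexp([1, 2])))   # [2, 4]
print(list(actual_generator([1, 2]))) # [2, 4]
```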


What other books did you read that you would recommend?


I enjoyed “Data Structures and Algorithms in Python”!

