
Is there any update on BEAMJIT?

It was super promising 3 or so years ago. But I haven't seen an update.

Erlang is amazing in numerous ways but raw performance is not one of them. BEAMJIT is a project to address exactly that.

https://www.sics.se/projects/beamjit




It's still ongoing work. My personal bet, however, is more on modernizing HiPE (by using the LLVM backend more).


Any ETA on when we can start using BEAMJIT?


Given that it has been postponed a couple of times, no. JITs are hard to pull off, and it will probably go through a period of worse stability before it matures. Another problem is getting a JIT to be faster than the interpreter: Erlang's BEAM uses threaded code and macro-instructions, so internally it already looks a lot like a JIT.

The big gains would be in inlining across module boundaries and type speculation. But I hold that if we could compile bundles of modules in HiPE, we would have the same gain for a fraction of the development and maintenance effort.
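For concreteness, this is roughly what per-module native compilation with HiPE looks like (the module name my_module is hypothetical); it works one module at a time, which is exactly why cross-module inlining is out of reach today:

    %% From the Erlang shell: compile one module to native code via HiPE.
    %% The module name my_module is hypothetical.
    1> c(my_module, [native, {hipe, [o3]}]).
    {ok,my_module}

    %% Or ahead of time with erlc:
    $ erlc +native my_module.erl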

The biggest lure of native code generation would be that we could get rid of a lot of C code in the system, since the native codegen would be able to rival the C code in speed. Many Erlang programs spend shockingly little time in the emulator loop, especially if they are communication-heavy.

If you need speed today, don't underestimate a port program. In my tests, you can pipeline about a million requests per second per core back and forth to an OCaml program. So if each piece of work is on the order of a millisecond or more, this is usually a feasible strategy, especially because OS isolation means you can handle exceptions in the OCaml program from the Erlang side by restarting the port.
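As an illustration, here is a minimal sketch of the Erlang side of such a port (the worker path ./ocaml_worker and the raw binary request format are hypothetical); with {packet, 4} both sides just read and write 4-byte length-prefixed messages over stdin/stdout:

    %% Minimal port wrapper; the executable path and protocol are hypothetical.
    -module(ocaml_port).
    -export([start/0, call/2]).

    start() ->
        %% {packet, 4} length-prefixes every message with a 4-byte header.
        open_port({spawn_executable, "./ocaml_worker"},
                  [{packet, 4}, binary, exit_status]).

    call(Port, Request) when is_binary(Request) ->
        true = port_command(Port, Request),
        receive
            {Port, {data, Response}} ->
                {ok, Response};
            {Port, {exit_status, Status}} ->
                %% The worker died; the supervising process can restart the port.
                {error, {exit_status, Status}}
        after 5000 ->
            {error, timeout}
        end.

If the worker crashes, the port closes and the supervising Erlang process simply restarts it, which is the isolation benefit mentioned above.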


Would you say the OCaml-port-as-an-optimization strategy only makes sense if the program is compute-bound, though?

The reason I ask is that we're running an Erlang+JInterface program, and the performance advantage the JVM has over BEAM is smaller than I would have expected. Even batching requests up into big pieces, it's still about 30% slower than running the same stuff in Elixir, without all the copying. But the reason we're doing JVM stuff at all is to re-use a whole bunch of code we already had written, and I expected it would be a marginal performance win as well, but it's not.

Perhaps we're doing it wrong, too.



