Just from my point of view, the actor model seems ideal for simulating three-dimensional systems at various levels of fidelity. An actor could do the maths for one region (say, a cube of air in a climate-change simulation) and send messages about its state to the six surrounding regions/actors (the cubes above, below, left, right, front and back), affecting them in turn. The more cubes in the system, the higher the quality of the simulation.
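
Roughly like this, as an Erlang sketch (the cell module, the message shapes and the averaging update rule are all invented for illustration, not taken from the article):

    -module(cell).
    -behaviour(gen_server).
    -export([start_link/1, set_neighbours/2]).
    -export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

    %% One process per cube. State: the local value, the (up to six)
    %% neighbour pids, and the neighbour values received this step.
    start_link(InitialValue) ->
        gen_server:start_link(?MODULE, InitialValue, []).

    set_neighbours(Pid, Neighbours) ->
        gen_server:cast(Pid, {neighbours, Neighbours}).

    init(InitialValue) ->
        erlang:send_after(100, self(), tick),  %% free-running local clock
        {ok, #{value => InitialValue, neighbours => [], inbox => []}}.

    handle_cast({neighbours, Ns}, State) ->
        {noreply, State#{neighbours := Ns}};
    handle_cast({state, Value}, State = #{inbox := Inbox}) ->
        {noreply, State#{inbox := [Value | Inbox]}}.

    handle_info(tick, State = #{value := V, neighbours := Ns, inbox := Inbox}) ->
        %% Invented update rule: relax toward the neighbour average.
        NewV = case Inbox of
                   [] -> V;
                   _  -> (V + lists:sum(Inbox) / length(Inbox)) / 2
               end,
        [gen_server:cast(N, {state, NewV}) || N <- Ns],
        erlang:send_after(100, self(), tick),
        {noreply, State#{value := NewV, inbox := []}}.

    handle_call(get_value, _From, State = #{value := V}) ->
        {reply, V, State}.

A real simulation would want a coordinated step barrier rather than each cell ticking on its own timer, but the shape is the same.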

Obviously, this being on the BEAM, you'd get the ability to run these actors across different machines fairly easily, hopefully scaling quite well.
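
Something like this would spread the cells over whatever nodes are connected (a sketch; it assumes the cell module above is loaded on every node and the nodes are already joined via net_adm:ping/1 or similar):

    -module(grid).
    -export([start_grid/1]).

    %% Round-robin the cells across all connected BEAM nodes.
    start_grid(InitialValues) ->
        Nodes = [node() | nodes()],
        Indexed = lists:zip(lists:seq(0, length(InitialValues) - 1), InitialValues),
        [begin
             Node = lists:nth((I rem length(Nodes)) + 1, Nodes),
             {ok, Pid} = rpc:call(Node, cell, start_link, [V]),
             Pid
         end || {I, V} <- Indexed].

Casts to a remote pid work exactly like local ones, which is the appealing part.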

You could also build FDTD (finite-difference time-domain) systems like this, and even acoustics simulations. I wonder if this is something the authors here are thinking about?
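
For what it's worth, the per-cell arithmetic an FDTD scheme needs is tiny; e.g. one leapfrog step of the 1D wave equation (a sketch; C2 is the squared Courant number, which must be =< 1 for stability):

    -module(fdtd).
    -export([step/5]).

    %% u(t+1) = 2u(t) - u(t-1) + C2 * (u_left - 2u + u_right)
    %% Each cell needs only its own last two values plus its two
    %% (or, in 3D, six) neighbours' current values.
    step(UPrev, U, Left, Right, C2) ->
        2 * U - UPrev + C2 * (Left - 2 * U + Right).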



Modelling a problem like this with tons of Erlang processes tends not to go very well:

- It's slow, really slow, and memory-heavy. All communication between processes involves copying the message and a context switch.

- It's easy to introduce deadlocks (A calls B, B calls C, C calls A; now the system is stuck, as in the sketch after this list) or unbounded message-queue growth (if you use casts rather than calls).

- Everything is a single unit of fault tolerance (if one cell crashes, the entire simulation has to be re-initialized from a known state), so all the processes need to be linked together in a massive web, which more or less defeats the point of process isolation.
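
The call-cycle deadlock in particular is easy to reproduce; two processes suffice (a contrived sketch, names made up):

    -module(deadlock_demo).
    -behaviour(gen_server).
    -export([run/0, init/1, handle_call/3, handle_cast/2]).

    run() ->
        {ok, A} = gen_server:start(?MODULE, [], []),
        {ok, B} = gen_server:start(?MODULE, [], []),
        gen_server:cast(A, {peer, B}),
        gen_server:cast(B, {peer, A}),
        %% A's handler calls B, B's handler calls A back; A is stuck
        %% inside its own call, so everything blocks until the default
        %% 5-second gen_server:call timeout kills the chain.
        gen_server:call(A, ping).

    init([]) -> {ok, undefined}.

    handle_cast({peer, Pid}, _State) -> {noreply, Pid}.

    handle_call(ping, _From, Peer) ->
        {reply, gen_server:call(Peer, ping), Peer}.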

I think it might make sense in the abstract actor model; it just doesn't map cleanly onto Erlang processes.



