Hacker News

The creator of Soketi here. Even if I have 50-100 concurrent connections on most of my projects, I don't want to be throttled by how many messages I can distribute to my end users. I might have < 100 users, but maybe I have to push fresh updates every 5 seconds. What do I do then?
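To put rough numbers on that workload (using only the figures from the comment above, and assuming each update fans out to every connected user), the message volume adds up quickly even at a small connection count:

```javascript
// Rough sizing of the workload described above: ~100 connected
// users, one fresh update pushed to each of them every 5 seconds.
// These are the comment's numbers, not a benchmark.
function messagesPerDay(connections, updateIntervalSeconds) {
  const perSecond = connections / updateIntervalSeconds; // fan-out rate
  return perSecond * 60 * 60 * 24; // scale to a full day
}

// 100 users with one update each every 5 s is 20 msg/s,
// or 1,728,000 messages/day.
console.log(messagesPerDay(100, 5)); // prints 1728000
```

Which is why hosted services that meter by message count can throttle you long before connection count is a concern.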


You use a stack which supports it. Elixir and Erlang support millions of concurrent websocket connections on a single, albeit beefy, server.

https://phoenixframework.org/blog/the-road-to-2-million-webs...


The library is built on top of uWebSockets, which is fairly well known for being highly optimised and performant, especially if you skip the Node.js wrapper and use the underlying C/C++ library directly. Does anyone know what tradeoffs or different performance characteristics one would expect to see from e.g. Elixir/Erlang vs uWebSockets?


Elixir/Erlang are surely slower in raw throughput, but the BEAM is a fantastic runtime and compensates with fault tolerance and predictable performance. I know I prefer my server to be as responsive at 1 million connections as at a hundred.


How much slower, though? I think we're talking about everyone in a Phoenix channel getting the message in < 100 ms?


I can't believe this article is 7 years old.



