
50000 concurrent users? That's small enough to do with a single thread on a modern server.... What are they scaling?


Your point is valid. This isn't ultra-high scale; it's just the name of the series I intend to keep publishing under. It's more about building a simple service that can handle instant usage at launch.

We also used Pusher for global in-app notifications, which had to scale past 1,000,000 concurrent connections.
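For a sense of what that involves, here's a minimal sketch of the kind of server-side call we're talking about, using Pusher's Node SDK from TypeScript. The credentials, channel and event names below are placeholders, not our actual setup:

    import Pusher from "pusher";

    // Placeholder credentials, not real app values.
    const pusher = new Pusher({
      appId: "APP_ID",
      key: "APP_KEY",
      secret: "APP_SECRET",
      cluster: "mt1",
      useTLS: true,
    });

    // Broadcast one in-app notification to every client subscribed to the channel.
    pusher
      .trigger("global-notifications", "announcement", { body: "New feature is live" })
      .catch((err) => console.error("trigger failed", err));

Pusher holds the million-plus WebSocket connections on their side; our servers only make calls like the one above.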

Aside: 50000 concurrent requests on a single thread, huh? :) Unless I'm missing something, that's 50 seconds assuming a request takes only 1ms to process. Sounds like magic.


1ms is high for a request, assuming that you touch only RAM in a pattern with high locality (which you can usually manage when routing chat messages). Take a look at the TechEmpower benchmarks: most of the Java and C++ frameworks can manage 1M req/sec for JSON serialization:

https://www.techempower.com/benchmarks/#section=data-r10&hw=...
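For reference, the JSON serialization test is roughly the following shape (a TypeScript/Node sketch for illustration, not one of the benchmarked Java or C++ frameworks):

    import { createServer } from "http";

    // The "JSON serialization" test: serialize a tiny object and return it on
    // every request. The benchmark counts how many of these trivial responses
    // a framework can serve per second.
    const server = createServer((_req, res) => {
      const body = JSON.stringify({ message: "Hello, World!" });
      res.writeHead(200, {
        "Content-Type": "application/json",
        "Content-Length": Buffer.byteLength(body),
      });
      res.end(body);
    });

    server.listen(8080, () => console.log("listening on :8080"));

At 1M req/sec that's roughly 1µs of work per request, so 1ms per chat message leaves a lot of headroom.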


> Aside: 50000 concurrent requests on a single thread, huh? :) Unless I'm missing something, that's 50 seconds assuming a request takes only 1ms to process. Sounds like magic.

1ms to route a chat message is plenty.

E.g. on a single AWS m1.small, Prosody[1] can process 40,000 stanzas per second.

[1] http://prosody.im


It's not just routing. It has to read-modify-write from the DB first.
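i.e. something like this per message (a rough TypeScript sketch assuming a Redis-backed history; the key layout and the duplicate check are only illustrative, not anyone's actual implementation):

    import Redis from "ioredis";

    const redis = new Redis(); // assumes a local Redis; connection details omitted

    interface ChatMessage {
      from: string;
      text: string;
    }

    // Per-message work: read recent history, decide what to do with the new
    // message, then write the result back.
    async function handleMessage(roomId: string, msg: ChatMessage): Promise<void> {
      const key = `room:${roomId}:history`;

      // Read: fetch the tail of the conversation.
      const recent = await redis.lrange(key, -1, -1);

      // Modify: e.g. drop an exact duplicate of the previous message.
      if (recent[0] === JSON.stringify(msg)) return;

      // Write: append the message and cap the stored history.
      await redis.rpush(key, JSON.stringify(msg));
      await redis.ltrim(key, -500, -1);
    }

That round trip, not the routing itself, is usually where the per-message time goes.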


Clarified the 1MM number in the post.


It really depends on a lot of things, such as how often users send messages on average and what kinds of messages (JSON or plain text?). The latest version of SocketCluster can handle 25,000 concurrent users per CPU core, each sending a JSON message every 6 seconds, so the 50K number does match my own benchmarks if you assume that each user sends a JSON message every 12 seconds on average.
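The arithmetic, for anyone checking (TypeScript, but it's just division):

    // Back-of-the-envelope check of the numbers above.
    const usersPerCore = 25_000;          // SocketCluster figure quoted above
    const messageIntervalSec = 6;         // one JSON message per user every 6 s
    const msgsPerSecPerCore = usersPerCore / messageIntervalSec; // ~4,167 msg/s

    // Halve the message rate (one every 12 s) and the same per-core budget
    // carries 50,000 users.
    const users = 50_000;
    const requiredMsgsPerSec = users / 12; // ~4,167 msg/s

    console.log(msgsPerSecPerCore.toFixed(0), requiredMsgsPerSec.toFixed(0));

So the per-core message throughput is the real constraint, not the raw connection count.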


50,000 people chatting + persistence, albeit temporary.



