Hacker News

Conversely, rent or buy one bare-metal server. That's how we ran until we hit around 300k users, back in 2008.



I think it’s kind of crazy that we have 64 core processors available, but still need so many servers to handle only a hundred thousand users. That’s what, a few thousand requests per second max?

Having many servers gives you redundancy and horizontal scalability, but also comes at a high complexity and maintenance cost. Also, with many machines communicating over the network, latency and reliability can become much harder to manage.

Most smaller companies can probably get away with a single powerful server plus one standby for failover, and two more for the database in a similar primary/replica arrangement. I suspect this would also yield better performance and reliability. I'm curious whether the author tried vertical scaling first or went straight to horizontal scaling.


The bottleneck on a single big server setup is the network bandwidth available to serve all those 100k users. If you run a simple site you can probably slap a CDN in front to serve your static assets so they don't clog your pipe, but if your app uses more bandwidth per user than a typical website and that traffic can't be offloaded to a CDN, then a single server might not have enough bandwidth for 100k users and you'll be forced to scale horizontally even though the server still has plenty of CPU and I/O capacity. You might be able to buy more bandwidth, but your mileage may vary, as dedicated server vendors usually cap their offerings at 1-3 Gbps per server.


That would have to be 100k concurrent users each streaming at 100 kbps to saturate a 10 Gbps connection. And 10 Gbps links are not hard to find these days; I came across several offerings when browsing bare-metal server options recently, and they were not that expensive either.

And as a side note, anyone pushing that kind of data is not going to be able to afford cloud egress prices unless they are making a mint on those users. Saturating a 10 Gbps connection would cost you around $450 an hour at AWS rates.
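The back-of-envelope math above can be checked in a few lines. The $0.09/GB egress price is an assumption (a commonly cited AWS internet-egress rate; actual tiers and regions vary), which lands in the same ballpark as the $450/hour figure:

```python
# Saturation check: how many concurrent users at 100 kbps fill a 10 Gbps link?
users = 100_000
kbps_per_user = 100
total_gbps = users * kbps_per_user / 1_000_000  # kbps -> Gbps

# Egress cost of saturating that link for one hour.
# Assumes $0.09/GB, a commonly cited cloud egress rate; real pricing is tiered.
price_per_gb = 0.09
gb_per_hour = total_gbps / 8 * 3600  # 10 Gbps = 1.25 GB/s
cost_per_hour = gb_per_hour * price_per_gb

print(f"{total_gbps:.0f} Gbps, {gb_per_hour:.0f} GB/hour, ${cost_per_hour:.0f}/hour")
# 10 Gbps, 4500 GB/hour, $405/hour
```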


European providers seem to be more generous with bandwidth. In other regions, if you want 10 Gbps per server you probably need to talk to someone first, and there's a nonzero chance they can't fulfill it if their datacenter isn't that big.


You can have multiple NICs on a single server.
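For what it's worth, a minimal sketch of aggregating two NICs into one logical link on Linux with iproute2. The interface names (eth0/eth1), the address, and LACP mode are all assumptions; 802.3ad bonding also requires matching configuration on the switch:

```shell
# Create a bond device using 802.3ad (LACP) aggregation.
ip link add bond0 type bond mode 802.3ad

# Enslave both physical NICs (they must be down before joining the bond).
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0

# Bring the bond up and assign it an address (placeholder address).
ip link set bond0 up
ip addr add 192.0.2.10/24 dev bond0
```

With LACP the two links share load and survive a single cable or port failure, though a single TCP flow still tops out at one link's speed.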


Can one add additional network interfaces?





