
You need to do load testing to determine this. A request's wall-clock time includes many delays unrelated to the work the server actually does, so it's not as simple as 1/0.03: of those 30 ms, the actual server time could be 0.0001 s, or 0.025 s. You also have to consider whether multiple cores are working, whether non-linear algorithms are running, and who knows what else.
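For example (illustrative numbers, not measurements): if 25 ms of a 30 ms response time is network latency and only 5 ms is actual server work, a single core could in principle handle

    1 / 0.005 s = 200 requests/second

rather than the 1 / 0.030 ≈ 33 you'd get from the naive calculation.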

Best way to figure it out is to use an application like Apache Bench from a powerful computer with a good internet connection, throw a lot of concurrent connections at the site, and see what happens.
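Something along these lines (the URL and numbers here are just placeholders; note that ab requires the URL to include a path, hence the trailing slash):

    # 10,000 requests total, 100 kept in flight at a time
    ab -n 10000 -c 100 'https://example.com/'

-n is the total number of requests and -c is the concurrency level.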



I think it makes sense to test from the server itself, because otherwise I would be testing the network infrastructure as well. While that is interesting too, I am trying to figure out what the server (VM) can handle first.
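For example (assuming the site also answers plain HTTP on the loopback interface, which it may not), something like this would take the external network out of the picture entirely:

    ab -n 1000 -c 100 'http://127.0.0.1/'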

I just tried Apache Bench:

    ab -n 1000 -c 100 'https://www.mysite.com/'

    Concurrency Level:      100
    Time taken for tests:   1.447 seconds
    Complete requests:      1000
    Failed requests:        0
    Requests per second:    691.19 [#/sec] (mean)
    Time per request:       144.679 [ms] (mean)
    Time per request:       1.447 [ms] (mean, across all concurrent requests)
Wow, that is fast. Around 700 requests per second!
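The numbers are consistent with each other, and the two "Time per request" lines are the same measurement seen from two angles:

    1000 requests / 1.447 s                  ≈ 691 requests/second
    144.679 ms per request / 100 concurrent = 1.447 ms across all requests

So each individual request took ~145 ms, but with 100 in flight at once the server completed one roughly every 1.4 ms.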

Upping it 10x to 10k requests ...
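(The exact command isn't quoted above; presumably it was something like:)

    ab -n 10000 -c 100 'https://www.mysite.com/'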

    Requests per second:    844.99 [#/sec] (mean)
Even faster!



