Consider, though, that back in those days (when your server would probably be the equivalent of a dual-socket Pentium 166-MMX), most clients were coming in over slow links like 33.6-56 kbps dialup, and serving them wasn't a problem at all. Links were slow, users were patient, timeouts were high, and webpages were sort of slim. Although if you ask me, they've always been heavy, just within the constraints of their time.
Then, of course, there was ISDN and xDSL, which would give you a true-to-god whopping 128 kbit/s for a while. 64 kbps if you were cheap. It took a while to get to affordable multiples of Mbit/s.
Now that there's at least a 10 Mbps uplink from each residential subscriber, it doesn't take long to DoS even a beefy server.
And I'd say that server-side, things improved vastly with the advent of FastCGI and its equivalents. Back in the heyday of your P166-MMX server, it was CGI with Perl, spawning a process for each incoming request, or Apache's "blazing-fast" server-side includes, or other things like that. Maybe mod_perl with its caveats on memory sharing.
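To make the contrast concrete, here's a minimal sketch (assuming the CPAN FCGI module; the handler bodies are purely illustrative): classic CGI has the web server fork and exec a fresh perl for every hit, while FastCGI keeps one long-lived process looping over requests, so you pay interpreter startup and compile cost once instead of per request.

    #!/usr/bin/perl
    # Classic CGI: the web server spawns this script anew for EVERY request,
    # paying process startup and compile cost each time.
    print "Content-Type: text/plain\r\n\r\n";
    print "hello from a brand-new process\n";

    #!/usr/bin/perl
    # FastCGI: one persistent process, started once, serving requests in a loop.
    use FCGI;
    my $request = FCGI::Request();
    my $hits = 0;
    while ($request->Accept() >= 0) {
        print "Content-Type: text/plain\r\n\r\n";
        print "hello, request #", ++$hits, " from the same process\n";
    }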
Anyway, you're right in that whenever you show them a wider pipe, they will find more stuff to congest it with.