I'd add GPUs and RAM sizes to that list. A huge number of computing tasks fall into either the "throw more FPU at it" or the "throw more IO at it" category. Many techniques have been developed to optimize performance around particular bottlenecks, but in the last 10 years we've gotten to a place where FPU cores cost a few cents each and thousands of them can be crammed into a single system, and where hundreds of gigabytes up to several terabytes of RAM are affordable and can be equipped in a single machine. Going from a system that is hugely IO bound because its data lives on spinning disks to one where a multi-gigabyte database just lives in RAM 100% of the time and everything else sits on SSDs is a speedup of several orders of magnitude. And going from "you get one or two FPU ops per clock cycle" to "here's thousands of FPU ops per clock cycle" has translated into orders-of-magnitude improvements as well.
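To put rough numbers on that IO claim, here's a quick back-of-the-envelope sketch. The latency figures are just common ballpark values (not measurements from any particular system), but the ratios show why moving hot data from spinning disk to RAM is worth several orders of magnitude:

```python
# Ballpark random-access latencies (illustrative assumptions, not benchmarks):
disk_seek_s = 10e-3   # spinning-disk seek: ~10 ms
ssd_read_s = 100e-6   # SSD random read:   ~100 us
ram_read_s = 100e-9   # RAM access:        ~100 ns

# Speedup ratios when hot data moves up the hierarchy
print(f"disk -> SSD: ~{disk_seek_s / ssd_read_s:,.0f}x")   # ~100x
print(f"disk -> RAM: ~{disk_seek_s / ram_read_s:,.0f}x")   # ~100,000x
```

Even if the exact figures are off by a factor of a few for any given drive or DIMM, the disk-to-RAM gap stays around five orders of magnitude, which is why "the whole database fits in RAM" changes the game.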
Additionally, software has gotten better. Nginx is just plain more streamlined than Apache, and simple caching techniques have really increased the bang for your buck you get out of hardware these days, at least in the server space.