The reason to want small pages is that the page is often the smallest unit the operating system can work with, so bigger pages can be less efficient – you need more RAM for the same number of memory-mapped files, tricks like guard pages or mapping the same memory twice for a ring buffer have a bigger minimum size, and so on.
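For concreteness, here is a minimal POSIX sketch (my own, not from the thread) of the guard-page trick: the guard cannot be smaller than one page, so a bigger page size raises the floor on what the trick costs.

```c
/* Sketch: reserve a buffer plus one extra page and make that page
 * inaccessible, so any overrun faults immediately. The guard always
 * costs at least one full page. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    long page = sysconf(_SC_PAGESIZE);
    size_t usable = 16 * (size_t)page;

    /* Map the usable region plus one extra page for the guard. */
    unsigned char *base = mmap(NULL, usable + (size_t)page,
                               PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED) return 1;

    /* Revoke all access to the last page. */
    mprotect(base + usable, (size_t)page, PROT_NONE);

    printf("page size %ld bytes, guard overhead %ld bytes\n", page, page);
    munmap(base, usable + (size_t)page);
    return 0;
}
```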
The reason to want pages of exactly 4K is that software is often tuned for this and may even require it, by virtue of not being written in a sufficiently hardware-agnostic way (similar to why running lots of software on big-endian systems can be hard).
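The hardware-agnostic approach is to ask the OS for the page size at runtime rather than baking in 4096; a tiny sketch:

```c
/* Query the page size instead of assuming 4096; on POSIX systems this is
 * sysconf(_SC_PAGESIZE) (e.g. 4096 on typical x86-64, 16384 on Apple silicon). */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long page = sysconf(_SC_PAGESIZE);
    printf("page size: %ld bytes\n", page);
    return 0;
}
```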
The reasons to want bigger pages are:
- there is more OS overhead tracking tiny pages
- as well as caches for memory, CPUs have caches (TLBs) for the mapping between virtual memory and physical memory, and this mapping is tracked at page-size granularity. These caches are very small (as they have to be extremely fast), so bigger pages mean memory accesses are more likely to hit a mapping already in the cache, which means faster memory accesses.
- first-level CPU caches are often indexed by the offset within a minimum-size page, so the maximum size of such a cache is page-size * associativity. I think it can be harder to increase the latter than the former, so bigger pages could allow for bigger caches, which can make some software perform better (a quick sketch of the arithmetic follows this list).
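Some back-of-the-envelope numbers for those last two points; the TLB entry count and associativity below are assumptions (plausible for a modern core), not measurements:

```c
/* Rough arithmetic for TLB reach and the VIPT cache-size cap at different
 * (minimum) page sizes. The 1536-entry TLB and 8-way associativity are
 * assumed values for illustration only. */
#include <stdio.h>

int main(void) {
    long long tlb_entries = 1536;   /* assumed second-level TLB size */
    long long ways = 8;             /* assumed L1 associativity */
    long long pages[] = {4096, 16384, 65536, 2LL * 1024 * 1024};

    for (int i = 0; i < 4; i++) {
        long long p = pages[i];
        printf("%8lld-byte pages: TLB reach %5lld MB, max VIPT L1 %6lld KB\n",
               p, tlb_entries * p / (1024 * 1024), ways * p / 1024);
    }
    return 0;
}
```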
The things you see in practice are:
- x86 supports 2MB and 1GB pages, as well as 4KB pages. Linux can either give you pages of these larger sizes directly (a fixed pool is typically reserved up front by the OS), or it can use a feature called ‘transparent hugepages’ where sufficiently aligned, contiguous smaller pages can be merged. This mostly helps with the first two problems (a sketch of both approaches follows this list)
- I think the Apple M-series chips have a 16K minimum page size, which might help with the third problem, but I don’t really know much about them
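To make the first bullet concrete, here is a minimal Linux sketch of both routes; it assumes 2MB huge pages (the common x86-64 size) and that explicit hugepages may need to be reserved first, e.g. via /proc/sys/vm/nr_hugepages:

```c
/* Explicit hugetlb pages via MAP_HUGETLB, and a hint to transparent
 * hugepages via madvise(MADV_HUGEPAGE). Error handling kept minimal. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t sz = 2u * 1024 * 1024;   /* one 2MB huge page */

    /* Explicit huge page: fails if no hugepages have been reserved. */
    void *a = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    printf("MAP_HUGETLB: %s\n",
           a == MAP_FAILED ? "failed (none reserved?)" : "ok");

    /* Transparent hugepages: map normally, then ask the kernel to back
     * the range with huge pages when it can (the range should ideally be
     * 2MB-aligned for the merge to happen). */
    void *b = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (b != MAP_FAILED)
        madvise(b, sz, MADV_HUGEPAGE);
    return 0;
}
```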
I believe this is true for x86 as a whole, but on NT any large page must be mapped with a single protection applied to the entire page, so if the page contains read-only code and read-write data, the entire page must be marked read-write.
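A hedged Windows sketch of that point: large-page allocations go through VirtualAlloc with MEM_LARGE_PAGES, take a single protection flag for the whole region, and require the SeLockMemoryPrivilege to be held.

```c
/* One protection (PAGE_READWRITE here) applies to the entire large page;
 * you cannot mark part of it read-only or execute-only. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    SIZE_T large = GetLargePageMinimum();   /* 2MB on typical x86-64 */
    if (large == 0) { puts("large pages not supported"); return 1; }

    void *p = VirtualAlloc(NULL, large,
                           MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                           PAGE_READWRITE);
    if (p == NULL) {
        printf("VirtualAlloc failed: %lu\n", GetLastError());
        return 1;
    }

    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}
```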
4K seems appropriate for embedded applications. Meanwhile 4M seems like it would be plenty small for my desktop. Nearly every process is currently using more than that; even the lightest is still coming in at a bit over 1M.
Yet when I look at the running processes on my desktop, something like 90% of them have more than 16M resident. So it doesn't appear that even an 8M page size would waste much on a modern desktop during typical usage.
If I'm mistaken about some low level detail I'd be interested to learn more.
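For what it's worth, one rough Linux-only way to do that kind of survey is to walk /proc and read each process's VmRSS; a sketch with minimal error handling:

```c
/* Print the resident set size (VmRSS) of every process listed in /proc. */
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    DIR *proc = opendir("/proc");
    if (!proc) return 1;

    struct dirent *e;
    while ((e = readdir(proc)) != NULL) {
        if (!isdigit((unsigned char)e->d_name[0])) continue;  /* PIDs only */

        char path[280], line[256];
        snprintf(path, sizeof path, "/proc/%s/status", e->d_name);
        FILE *f = fopen(path, "r");
        if (!f) continue;

        while (fgets(line, sizeof line, f))
            if (strncmp(line, "VmRSS:", 6) == 0)
                printf("pid %s %s", e->d_name, line + 6);
        fclose(f);
    }
    closedir(proc);
    return 0;
}
```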
64K is the largest page size that the ARM architecture supports. The larger page size provides advantages for applications that allocate large amounts of memory.
Yes! Data workloads fare considerably better with larger pages, less TLB pressure, and a higher cache hit rate. I wrote a tutorial about this and how to figure out whether it will be a good trade-off for your use-case: https://amperecomputing.com/tuning-guides/understanding-memo...
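One quick first check on Linux, before digging into tuning guides, is whether transparent hugepages are enabled at all; a small sketch that just prints the current sysfs policy:

```c
/* Print the THP policy, e.g. "always [madvise] never". */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/sys/kernel/mm/transparent_hugepage/enabled", "r");
    if (!f) { puts("THP sysfs entry not found"); return 1; }

    char line[128];
    if (fgets(line, sizeof line, f))
        printf("THP policy: %s", line);
    fclose(f);
    return 0;
}
```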