At university we designed an architecture[1] where you had to test for page-not-present yourself. The idea was to see whether we could make a simpler architecture in which all interrupts were handled synchronously, so you'd never have to save and restore the pipeline. Division by zero didn't trap either - you had to check before dividing. IIRC the conclusion was that it was possible but somewhat tedious to write a compiler for[2], plus you had to have a trusted compiler, which is a difficult sell (see the sketch below the footnotes for the kind of code it would have to generate).
[1] But sadly didn't implement it in silicon! FPGAs were much more primitive back then.
[2] TCG in modern QEMU has similar concerns: it also needs to worry about code crossing page boundaries, and it also has a kind of "trusted" compiler (inasmuch as everything must go through TCG).
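To make [1] a bit more concrete, here is a rough C sketch (not from the original project) of the kind of guard code such a compiler would have to emit around every operation that can normally fault. The page_present() and fault_in_page() helpers are hypothetical stand-ins for whatever instruction or page-table probe the real architecture would expose.

    #include <stdint.h>

    /* Hypothetical helpers, stubbed out so the sketch compiles; a real machine
       would expose an instruction or a known page-table layout instead. */
    static int  page_present(const void *addr) { (void)addr; return 1; }
    static void fault_in_page(const void *addr) { (void)addr; /* synchronous call into the pager */ }

    /* What the compiler would have to emit around every load that might miss. */
    static int32_t checked_load(const int32_t *p)
    {
        while (!page_present(p))      /* naive check-then-use */
            fault_in_page(p);
        return *p;
    }

    /* Division by zero doesn't trap either, so it gets a guard as well. */
    static int32_t checked_div(int32_t num, int32_t den)
    {
        if (den == 0)
            return 0;                 /* or branch to an error handler; a policy choice */
        return num / den;
    }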
> where you had to test for page not present yourself.
I would think that implies you need calls like "lock this page for me" and "unlock this page for me", because using a page after merely getting a "yes" to "is this page present?" is asking for race conditions.
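Roughly the difference, reusing the hypothetical helpers from the sketch above and adding equally hypothetical pin/unpin calls:

    /* Racy: the pager may evict the page between the check and the access. */
    if (page_present(p))
        x = *p;

    /* Pinned: the page is guaranteed resident until it is released again. */
    pin_page(p);       /* "lock this page for me" */
    x = *p;
    unpin_page(p);     /* "unlock this page for me" */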
This is why you need a trusted compiler. Basically it's "insecure by design": the whole point of the optimization is to avoid asynchronous exceptions entirely, so there's no need to implement them in the pipeline. The machine code must somehow be forced to perform these checks.
There have been architectures which required a trusted compiler (e.g. the Burroughs mainframes) or a trusted verifier (the JVM, NaCl). But it certainly brings along a set of problems.
It's unclear from here whether this is even an optimization. It looked a lot more compelling back in the mid 90s.
You’re thinking about computer architecture as designed today. There’s no reason there couldn’t be a common data structure defined that the CPU uses to select a backup process, much as it uses page-table data structures in main memory to resolve TLB misses.
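Purely as an illustration of the idea (not any real ISA), such a structure could be a fixed-layout descriptor in main memory that the hardware walks on a fault, e.g.:

    #include <stdint.h>

    /* Made-up field names and layout, just to show the shape of the idea. */
    struct backup_task_descriptor {
        uint64_t page_table_root;   /* address space the backup process runs in */
        uint64_t instruction_ptr;   /* where it starts executing */
        uint64_t stack_pointer;     /* stack it switches to */
        uint64_t saved_regs[16];    /* area the hardware fills with the faulting
                                       context's registers before switching */
    };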
It was slow, so operating system devs didn't use it, and so it was removed. Probably because the hardware properly saved all the registers, while software can save only the few it actually needs (and sometimes miss something).
In effect: we don't know how secure it was...
But if it was good and Intel removed it, then why does Intel keep so much useless crap in? Good parts get removed, bad ones are "needed for backward compatibility"... Can someone finally say backward compatibility with WHAT? DOS 4.0? Drivers for pre-Windows-only modems using plain ISA or PCI slots?
Or maybe it's just like the EVE Online code (a few years ago?) - nobody knows how some parts work anymore...
That was just one example; there are many other things the CPU can do that will generate a fault (trying to execute an illegal instruction, for example).