I think he's suggesting something more like the 80376, an obscure embedded 386 that booted straight into protected mode. So you'd have an x86-64 CPU that boots straight into long mode, letting you remove things like real mode and virtual 8086 mode. AFAIK with UEFI it's the boot firmware that handles switching into 32/64-bit mode, not the OS loader or kernel, so it would be transparent to the OS and programs.
But in order to not break a lot of stuff on desktop Windows (and ancient unmaintained custom software on corporate servers) you'd still have to implement the "32 bit software on 64 bit OS" support. That probably means you don't actually simplify the CPU much.
Of course some x86 extensions do get dropped occasionally, but only things like AMD's 3DNow! (I guess AMD's market share meant few used it anyway) and Intel's TSX transactional memory extension, which was just broken.
I think the idea is to go ahead and break lots of stuff on desktop Windows (and ancient unmaintained custom software on corporate servers). Let that software keep running on x86_64 hardware, but offer an additional choice--an x64-ng--that can only run a subset of software, but runs it even better than x86_64 can. You don't fill an entire datacenter with these, just a few aisles. Then you let people choose them for their whizbang modern workloads. Every year you replace an aisle of x86_64 racks with x64-ng racks. Twenty years from now, 25% of your datacenter is still running the latest generation of x86_64, and those racks rent at a premium.
Just as if a datacenter today allocated some of its rackspace to zSeries or ARM or what have you. For workloads that gain advantages on those platforms.
A “64 bit clean” CPU would be nice, but practically you’d also want new 32-bit-compatible CPUs for the markets that need them. Gamers are still going to want to play old games with MORE POWER, businesses will want to throw more CPU at some process that relies on ancient code, and so on. Apple has tried forcing the issue, and it didn’t exactly make everyone happy, and Apple’s view on compatibility is rather different to Microsoft’s or Linux / Linus’s.
So now you need to design and verify two CPU cores instead of one. The most efficient approach, from an engineering-staff and resource-allocation perspective, would be to make the “x64-ng” core just the normal AMD64 core with the legacy support lasered off. So probably not much in the way of performance gains. If you had the designs actually diverge, you’d end up with duplicated work by more people / teams and thus less profit.
With the trend for dedicated low power cores, the companies already have two lines of core design to maintain, they aren’t going to want more.
I’m not saying it’s impossible, and a legacy free x86 core would be nice, but the business case for getting rid of 32 bit support probably isn’t there (yet?).
(You also mention zSeries, which has backward compatibility in some form going back to the System/360 from the ’60s - getting rid of this stuff is hard once it’s entrenched.)
Binary compatibility kept x86 dominant, coupled with competing platforms not offering enough of a performance or price benefit to make them worth the trouble.
That formula has completely changed. With the tremendous improvement in compilers, and the agility of development teams, the move has long been underway. People are firing up their Graviton2 instances at a blistering pace, and my Mac runs on Apple Silicon (already, just months in, with zero x86 apps -- I thought Rosetta 2 would be the life vest, but everyone transitioned so quickly that I could do fine without it).
It didn't already support arm64? Was nobody using it on iOS?
I remember trying to build Mercury on an M1 recently and having problems - some of that was because it had very old-style, probably wrong approaches to atomics written in x86 asm.
Also, a lot of games are still on x86, even constantly updated ones like Minecraft - and that's not even native code.
Both CPUs you mentioned are locked in... I hope for fewer locks in the future. Right now you can't buy Graviton CPUs, only rent them in an Amazon datacenter, and you can't buy an M1 CPU on its own or install Linux on it..
Apple sells those CPUs because they are "buying" users into their locked ecosystem, which will pay "lifetime" subscriptions for their services, probably enriched in the future with Apple search and Apple ads.. 1984.. but it started with a good CPU.. Amazon's pitch is "look how good our CPU is and how cheap", while they probably have 0 margin on those and 50% on the competition.. Look, the argument for alternatives in the CPU space is not bad; it's just the examples you have chosen that are, imo.
Price-wise and performance-wise a CPU can look better if the seller has a secondary, non-monetary interest in selling it to you.. (Take that 3nm tr7990wx at $300.. in a locked system that we sell you for $1500, where we take a lifetime 30% cut of whatever you buy with it anyway ;) if you want a 2TB SSD it's another $2k.. but hey, the CPU is just $300...) I will judge the M1 only when it is compatible with Linux and sold separately from the Apple (eco)system.
Talking about architecture, ARM did a good job.. but under Nvidia.. I don't have much faith in the future..
Sorry for the rant, but some things are not comparable. Looking at M1 and Graviton vs. Intel or AMD CPUs is like saying that, as an image hosting solution, Nextcloud plus a NAS you own is bad and costly because Google Photos is cheaper (free.. until it's not..)! The two are different things: one can be yours.. the other.. not.
No, there are a lot of PCisms that can be removed while still allowing for x86 cores. User code doesn't really care about PC compatibility anymore (see the PS4 Linux port for the specifics of a non-PC x86 platform that runs regular x86 user code like Steam, albeit one arguably worse designed than the PC somehow). Cleaning up ring 0 in a way that ring 3 code can't tell the difference, given a vaguely modern kernel, could be a huge win.
https://en.wikipedia.org/wiki/Intel_i860
Or the Intel Itanium?
https://en.wikipedia.org/wiki/Itanium
Or the AMD Am29000?
https://en.wikipedia.org/wiki/AMD_Am29000
Or the AMD K12 which was a 64-bit ARM?
https://www.anandtech.com/show/7990/amd-announces-k12-core-c...
All of these things were either rejected by the market or didn't even make it to the market.
Binary compatibility is one of the major reasons, if not the major reason, that x86 has hung around so long. In the 1980s and '90s x86 was slower than its RISC workstation competitors, but Intel and AMD really took the performance crown around 2000.