If it were even 20% faster than DRAM, there would be a market for it at the higher price. The post I replied to was asserting that there was a physical limit of 400MHz for DRAM entirely due to the capacitor. If SRAM could run with lower latency, memory-bound workloads would get comparably faster.
This is sort of the role that L3 cache plays already. Your proposal would effectively be an upgradable L4 cache. No idea whether the economics work out versus just buying more DRAM so you have less pressure on the NVMe disk.
Coreboot and some other low-level stuff uses cache-as-RAM during the early stages of boot, before the DRAM controller has even been initialized.
There was briefly a product called vCage that loaded a whole secure hypervisor into cache-as-RAM, with the goal of resisting DRAM-remanence ("cold-boot") attacks, where the DIMMs are fast-chilled to slow charge leakage and then removed from the target system to dump their contents. Since the whole secure perimeter was on-die in the CPU, it could treat the DRAM as untrusted and encrypt everything stored there.
Yeah, you’re basically betting that people will put a lot of effort into trying to out-optimize the hardware and perhaps to some degree the OS. Not a good bet.
When SMP first came out we had one large customer that wanted to handle CPU scheduling manually. That didn’t last long.
Effort? It's not like it's hard to map an SRAM chip to whatever address you want and expose it raw or as a block device. That's a 100 LOC kernel module.
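You don't even strictly need the custom module: Linux's `memmap=` boot parameter can carve a physical address range out of the e820 map, and the legacy pmem driver then exposes it as a block device. A sketch, with illustrative sizes and addresses (you'd substitute wherever the SRAM is actually decoded):

```shell
# Kernel command line (e.g. in GRUB_CMDLINE_LINUX): reserve 2 GiB of
# physical address space starting at the 16 GiB mark as "persistent"
# memory. The '!' marks the range as e820 type 12, which the pmem
# driver picks up and exposes as /dev/pmem0.
#   memmap=2G!16G

# After reboot, use it like any block device:
mkfs.ext4 /dev/pmem0
mount -o dax /dev/pmem0 /mnt/sram   # DAX bypasses the page cache
```

Whether the address range behind it is battery-backed DRAM, NVDIMM, or a hypothetical SRAM DIMM is invisible to the kernel at this layer; the plumbing already exists.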