DDR4 CAS latency is 15 clocks, which is about 9ns at the highest clock frequencies. Add cache latencies and memory controller latencies on top of that and you can easily end up with 25-50 ns of latency or more. To overcome that you have to use stream-friendly algorithms, just as with SSDs and spinning disks.
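To make the arithmetic concrete, here is a minimal sketch in C. It assumes DDR4-3200 (3200 MT/s data rate, 1600 MHz I/O clock) as the "highest clock frequency"; the numbers are illustrative, not vendor specs:

    #include <stdio.h>

    int main(void) {
        double io_clock_mhz = 1600.0; /* assumed: DDR4-3200, 1600 MHz I/O clock */
        int cas_cycles = 15;          /* CL15, as in the comment above */

        double cycle_ns = 1000.0 / io_clock_mhz;         /* one clock = 0.625 ns */
        printf("CAS latency: %.2f ns\n", cas_cycles * cycle_ns); /* ~9.38 ns */
        return 0;
    }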
The technology of persistent memory is no different from NAND or whatever SSD tech du jour is used. This means that wear will be the same as with other SSD drives, and may be even worse due to different ambient temperature conditions.
Persistent memory is also more expensive.
Conflicting transactions must be ordered as if they were processed serially, but the actual processing order can differ, and their commitment to disk can also differ from "commit transaction 1, then commit transaction 2". Having both transactions commit at once, or neither at all, is also permitted by serializable isolation, and that freedom is heavily used in any semi-decent transactional storage engine.
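To illustrate that freedom, here is a minimal group-commit sketch in C (the WAL file name and record format are invented for illustration). The records are appended in serialization order, but both transactions are made durable with a single fsync, so they become durable together or not at all:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        int wal = open("wal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (wal < 0) { perror("open"); return 1; }

        /* Records appended in the serialization order T1 -> T2... */
        const char *records =
            "T1 COMMIT key=a val=1\n"
            "T2 COMMIT key=a val=2\n";
        if (write(wal, records, strlen(records)) < 0) { perror("write"); return 1; }

        /* ...but both commits reach stable storage with one fsync:
         * either the call succeeds (both durable) or the machine dies
         * first (neither is). Real engines add checksums so a torn
         * write is detected on recovery. */
        if (fsync(wal) < 0) { perror("fsync"); return 1; }

        close(wal);
        return 0;
    }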
If you go all-in on persistent memory, you lose the opportunity for cheaper multi-tier storage. For example, you can use heavily read-optimized storage structures on spinning disks by rearranging writes to those disks using SSDs and plain old RAM. If I remember correctly, spinning disks are currently about 5 times cheaper than SSDs, so you can spend the difference on either more bandwidth or more storage volume.
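A back-of-envelope sketch of that trade-off, assuming the roughly 5x price gap above (the per-TB prices here are placeholders, not real quotes):

    #include <stdio.h>

    int main(void) {
        double ssd_per_tb = 100.0;            /* hypothetical $/TB */
        double hdd_per_tb = ssd_per_tb / 5.0; /* ~5x cheaper per TB */
        double budget = 1000.0;

        /* Same spend: ~10 TB of SSD vs ~50 TB of HDD, or split the
         * budget to put SSD/RAM staging in front of bulk HDD capacity. */
        printf("$%.0f buys %.0f TB SSD or %.0f TB HDD\n",
               budget, budget / ssd_per_tb, budget / hdd_per_tb);
        return 0;
    }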
> DDR4 CAS latency is 15 clocks, which is about 9ns at the highest clock frequencies. Add cache latencies and memory controller latencies on top of that and you can easily end up with 25-50 ns of latency or more.
On my 3rd-gen Xeon E5 NUMA machine (two nodes), accessing the adjacent memory bank took roughly 90ns under no load. Accessing non-adjacent memory was around 120ns.
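For anyone who wants to reproduce this kind of number, below is a minimal pointer-chasing sketch in C using libnuma (compile with gcc -O2 -lnuma); the buffer size and iteration count are arbitrary choices of mine. Pin the process to one node with numactl (e.g. numactl --cpunodebind=0 ./a.out 0, then ./a.out 1) and vary the allocation node to compare local vs. remote latency:

    #include <numa.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (64 * 1024 * 1024 / sizeof(size_t)) /* 64 MiB, well past LLC */

    int main(int argc, char **argv) {
        if (numa_available() < 0 || argc < 2) {
            fprintf(stderr, "usage: %s <numa-node>\n", argv[0]);
            return 1;
        }
        int node = atoi(argv[1]);

        size_t *buf = numa_alloc_onnode(N * sizeof(size_t), node);
        if (!buf) { perror("numa_alloc_onnode"); return 1; }

        /* Sattolo's shuffle: builds a single-cycle random permutation,
         * so every load depends on the previous one and the hardware
         * prefetcher can't hide the memory latency. */
        for (size_t i = 0; i < N; i++) buf[i] = i;
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t t = buf[i]; buf[i] = buf[j]; buf[j] = t;
        }

        struct timespec t0, t1;
        size_t idx = 0, iters = 20 * 1000 * 1000;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < iters; i++) idx = buf[idx]; /* dependent loads */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("node %d: %.1f ns/load (idx=%zu)\n", node, ns / iters, idx);
        numa_free(buf, N * sizeof(size_t));
        return 0;
    }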
> 300ns is the latency of PCIe, not DRAM.
I'd figure that this number has improved across PCIe generations, no? Do you have any tool to recommend that can measure this kind of thing? I found https://github.com/andre-richter/pcie-lat but haven't tried it yet.