I disagree that 300 computers was simply too little data to be a useful benchmark. But you can find studies covering far more RAM if you go looking.
Supercomputers of that era had far more than 300 nodes, and you can find studies of their memory error rates. The Roadrunner supercomputer for example had 19,440 compute nodes with 4GB of ram each.
PS: The architecture was odd with 6,480 Opteron processors and 12,960 Cell processors + 216 System x3755 I/O nodes, but it’s still using commodity RAM.
Just one example among many: the Jaguar supercomputer, with 18,866 nodes using DDR2 (https://arch.cs.utah.edu/arch-rd-club/dram-errors.pdf). It logged 250,000 correctable memory errors per month, so there is quite a bit of data to work with.
This is simply a low-effort paper on an important issue, and there are plenty like it out there.