The problem isn't storing or inserting 50M rows, it's querying 50M rows in non-trivial ways. And the difference in performance between doing that 'right' and 'wrong' is orders of magnitude.
Eh, intelligent table design should knock most of that out. If you've got an 8-page query implementing a naïve solution to the knapsack problem (I've seen this in the wild), several mistakes have been made.
I've scaled a system with 300M rows in just one of many similarly sized tables to 1M users... in 2007... on a single box with spinning rust in it. Heck, my laptop could handle 100x the production load.
It amazes me that my comment (while admittedly flippant) got voted down.
It really is true that your phone can update a 50M row table about 10K times per second!
That people are incredulous of this is in itself a stunning admission that developers these days don't have the faintest idea what computers can or cannot actually do.
Just run the numbers: 50M rows with a generous 1 KB per row is 50 GB. My iPhone has 1 TB of flash storage with a random access latency of something like 50 microseconds, which works out to roughly 200K IOPS at a modest queue depth. An ordinary NVMe laptop SSD can now do 2M. Writing even 10K random locations every second is well within mobile device capability, with 20x headroom to "scale". At 1 KB per row, this is just 10 MB/s, which is hilariously low compared to the device peak throughput of easily a few GB/s.
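Back-of-envelope in Python, if you want to check my arithmetic (the 1 KB row size, 50 us latency and queue depth of 10 are assumptions):

    rows = 50_000_000
    row_bytes = 1_000                     # generous 1 KB per row (assumed)
    table_bytes = rows * row_bytes        # 50 GB -- fits many times over on a 1 TB phone

    latency_s = 50e-6                     # assumed flash random-access latency
    queue_depth = 10                      # a handful of outstanding requests (assumed)
    iops = queue_depth / latency_s        # ~200K IOPS

    updates_per_s = 10_000
    write_mb_per_s = updates_per_s * row_bytes / 1e6   # 10 MB/s of random writes

    print(table_bytes / 1e9, iops, write_mb_per_s)     # 50.0, 200000.0, 10.0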
It's usually not that good: PostgreSQL, for example, writes data in pages (8 KB by default), and changing 10K random rows in a 50M-row table can be quite close to the worst case of one dirtied page per changed row, so 8x your estimate. You also need to multiply by 2x to account for WAL writes, and indexes add more on top. It's not hard to hit a throughput limit, especially with HDDs or networked storage. Local SSDs are crazy fast indeed, though.
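Roughly, in the same back-of-envelope style (the one-dirty-page-per-row worst case and the flat 2x WAL multiplier are assumptions; indexes are ignored):

    page_bytes = 8_192                    # PostgreSQL default page size
    updates_per_s = 10_000
    wal_factor = 2                        # assume WAL roughly doubles the writes

    page_mb_per_s = updates_per_s * page_bytes / 1e6   # ~82 MB/s if every update dirties its own page
    total_mb_per_s = page_mb_per_s * wal_factor        # ~164 MB/s worst case, before indexes

    print(page_mb_per_s, total_mb_per_s)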
Agreed: 80 MB/s for the random 8 KB page updates. However, transaction logs in modern databases are committed to disk in batches, and each log entry is smaller than a page. So a nice round number would be 100 MB/s for both.[1]
For comparison, that's about 1 gigabit per second, in an era when 200 Gbps networking is becoming common. It's a small fraction of the SSD write throughput of any modern device, mobile or not. Nobody in their right mind would use HDD storage if scaling were in any way a concern.
[1] Indexes add some overhead to this, obviously, but tend to be smaller than the underlying tables.
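Spelled out, taking the ~100 MB/s round number from above:

    combined_bytes_per_s = 100e6              # ~100 MB/s of page + log writes (assumed round number)
    gbps = combined_bytes_per_s * 8 / 1e9     # ~0.8 Gbit/s
    print(gbps)                               # a rounding error next to a 200 Gbps NIC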