If you look at DB-engines.com/ranking and tally the collective interest in all of the databases listed, you will see that the aggregate "score" of all databases combined is 7105.84. Postgres is indeed popular, but it is only ranked 4th on the list, with a score of 648.96. MySQL is currently still about 50% larger in terms of interest, with a score of 998.15.
Which means interest in Postgres specifically is only 9.13% of overall interest in databases, and MySQL another 14.05%; combined, 23.18%.
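Since the claim is just arithmetic over the published scores, here is a quick sanity check you can run in psql (figures as quoted above; DB-Engines recalculates them monthly, so they will drift):

    -- Shares of the aggregate DB-Engines score, per the figures above
    SELECT round(648.96 / 7105.84 * 100, 2) AS postgres_pct,            -- 9.13
           round(998.15 / 7105.84 * 100, 2) AS mysql_pct,               -- 14.05
           round((648.96 + 998.15) / 7105.84 * 100, 2) AS combined_pct; -- 23.18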
Is that a significant percentage of interest? Yes. Many of the others hold only a fraction of 1% of the mindshare in the market.
Yet the reason there are 423 systems ranked in DB-Engines is that no one size fits all data, or query patterns, or workloads, or SLAs, or use cases.
PostgreSQL and MySQL are, at the end of the day, oriented towards OLTP workloads. You can stretch them to serve OLAP, but these are "unnatural acts." They were both designed long ago, for far smaller datasets than are typical of modern petabyte-scale, real-time (streaming) ingestion, cloud-native deployments. Many engineering teams have cobbled together PostgreSQL and MySQL frankenservers for petabyte-scale workloads, but YMMV on data ingest, p99 latencies, and QPS.
The dynamic at play here is that some projects lend themselves to "general services" databases, where MySQL or PostgreSQL or anything else to hand will do. And then there are specialized databases, purpose-built for certain types of workloads, data models, query patterns, use cases, and so on.
So long as "chaos" fights against "law" in the universe, you will see this desire to have "one" database standard rule them all, versus a Cambrian explosion of options for users and use cases.
While you’re not wrong re: Postgres and MySQL not necessarily being designed for PB scale, IME many shops with huge datasets are just doing it wrong: storing numbers as strings, not using lookup tables for low-cardinality data, massive JSON blobs everywhere, etc.
I’m not saying it fixes everything, but knowing how a DB works and applying proper data modeling and normalization could drastically reduce the size of many datasets.
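As a minimal sketch of the kind of fix I mean (PostgreSQL syntax; the table and column names are made up for illustration):

    -- Anti-pattern: numbers stored as text, low-cardinality strings repeated on every row
    -- CREATE TABLE events (id bigint, country text, amount_cents text, payload json);

    -- Normalized sketch: a lookup table plus native types shrinks every row
    CREATE TABLE countries (
        id   smallint PRIMARY KEY,
        name text NOT NULL UNIQUE
    );

    CREATE TABLE events (
        id           bigint PRIMARY KEY,
        country_id   smallint NOT NULL REFERENCES countries (id), -- 2 bytes vs. a repeated string
        amount_cents bigint NOT NULL                              -- native integer vs. text
    );

Multiply a few dozen saved bytes per row by billions of rows and the savings add up fast, before you ever reach for a specialized engine.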