There is so much ink spilled about going through conniptions to scale MySQL and Postgres. I really wonder whether anyone has actually bothered to check that it isn't better to just buy some big iron and run DB2 or Oracle once you hit scalability problems. There are a ton of corporate data centers running Oracle/DB2 that handle WAY more throughput than your web app ever will, and they're not using memcached or sharding.
You might be surprised about the number of database "experts" who have never heard of Postgres. They know about MySQL because Slashdot uses it, and they vaguely know about Access, tho' they've never used it and don't know what it's good for.
Sharding, for instance, is a hot topic right now, but "big iron" DBAs like myself are completely mystified by the excitement... isn't this just the partitioning/shared-nothing feature that we've had for over a decade? (Answer: yes, of course it is.)
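To make the "it's the same thing" point concrete, here's a minimal sketch of key-based sharding: route each row to one of N nodes by hashing its key. The shared-nothing partitioning in DB2/Oracle does exactly this server-side; web-app sharding just moves the routing into application code. The shard names and the modulo scheme here are illustrative assumptions, not any particular product's API.

```python
# Minimal sketch: application-side sharding is hash partitioning
# done in the client instead of the database engine.
# Shard names below are made up for illustration.

NUM_SHARDS = 4
SHARDS = [f"db-shard-{i}" for i in range(NUM_SHARDS)]

def shard_for(user_id: int) -> str:
    """Pick the shard (i.e., the partition) that owns this key."""
    return SHARDS[user_id % NUM_SHARDS]

# Every lookup for a given user_id deterministically lands on one node:
print(shard_for(42))  # "db-shard-2"
```

Swap the modulo for a range check and you have range partitioning; either way, the routing function is the whole trick, whether it lives in the app or in the engine.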
When you're seven guys sitting in a garage eating pizza, you build it because it makes the most sense.
If you've got a million records an hour going through and you're making the big bucks, get the commodity solution as close to wholesale as you can.
The problem is that if you're giving away your app for free, you're spending dough -- perhaps lots of it. Then you're stuck on the back-end of the curve, scrambling to tweak every little bit you can.
Exactly, but the entry-level DB2 and Oracle offerings aren't prohibitively expensive, and time is money. It only takes a pretty low number of man-hours futzing with MySQL before you'd have been better off calling an IBM rep. If you're running some popular free web app, maybe you can stick a "powered by IBM" icon in the corner and get the stuff for free. Remember those?