Hacker News | Caitin_Chen's comments

This series has 4 episodes. Episodes 1-3:

- Episode 1: The Rust Compilation Model Calamity https://pingcap.com/blog/rust-compilation-model-calamity

- Episode 2: Generics and Compile-Time in Rust https://pingcap.com/blog/generics-and-compile-time-in-rust

- Episode 3: Rust's Huge Compilation Units https://pingcap.com/blog/rust-huge-compilation-units


On database selection, there is a blog post about why they chose TiDB over CockroachDB and other MySQL-based solutions: https://pingcap.com/success-stories/why-we-chose-a-distribut...


For the impatient: they were using MySQL, so TiDB was the easiest migration path while meeting most of their other requirements, since it uses the MySQL protocol.

The other databases mentioned either speak the Postgres protocol or are NoSQL.


It's interesting to see how crazy complex some of these scale-out solutions end up being, especially when they start off with something like MySQL and are forced to maintain compatibility while scaling far beyond what it was designed to do.

You've got to wonder if simply sharding by either player or game would have worked well enough, even with MySQL. Worked well enough for Blizzard for over a decade!
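As a rough illustration of the player-sharding idea, here's a minimal sketch. The shard count, naming scheme, and hash choice are all hypothetical, not anything the article describes:

```python
# Hypothetical player-ID sharding: route each player's rows to one of N
# MySQL shards by hashing the player ID. Shard count and host naming
# are illustrative assumptions.
import hashlib

NUM_SHARDS = 16  # assumption: fixed shard count

def shard_for_player(player_id: int) -> str:
    # Stable hash, so a given player always lands on the same shard
    digest = hashlib.md5(str(player_id).encode()).hexdigest()
    return f"mysql-shard-{int(digest, 16) % NUM_SHARDS:02d}"

# All of one player's rows stay on a single shard, so per-player
# queries never fan out across the cluster.
```

The trade-off, of course, is that cross-player analytics now require scatter-gather queries, which is exactly the pain scale-out databases try to hide.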

Similarly, I wonder if a single active write server paired with a handful of readable secondaries could have taken this query volume in its stride, using something like MS SQL Server on high-spec bare-metal kit. Think in-memory OLTP on dual-socket AMD EPYC 2 servers with 128 cores per box.
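The primary/replica setup above boils down to read/write splitting. A toy sketch of the routing logic (host names and the SELECT-only heuristic are my assumptions, not anything from the article):

```python
# Sketch of primary/replica routing: writes go to the single primary,
# reads are spread round-robin over read-only secondaries.
# Host names are made up for illustration.
import itertools

class ReplicatedPool:
    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)

    def route(self, sql: str) -> str:
        # Naive classification: only SELECTs may hit a replica.
        # Real routers must also pin reads-after-writes to the primary
        # to avoid stale results from replication lag.
        if sql.lstrip().upper().startswith("SELECT"):
            return next(self._replicas)
        return self.primary

pool = ReplicatedPool("sql-primary", ["sql-replica-1", "sql-replica-2"])
```

Replication lag is the catch: this scheme only works cleanly if the workload tolerates slightly stale reads.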

They mention billions of rows and terabytes of data, but I've heard of similar-scale systems over a decade ago. The Australian phone company Telstra did all its billing in a single IBM DB2 database, for example, and the incoming data in that system is just as real-time.

I'm probably wrong, but this smells like the company missed some basic optimisation opportunities somewhere...


> You've got to wonder if simply sharding by either player or game would have worked well enough, even with MySQL. Worked well enough for Blizzard for over a decade!

With all the respect I have for Blizzard, I would not take the Blizzard of old as an example of scalability done well. I remember a time when their authentication system was well known to fall over under load after every update.


The headline performance numbers speak more to a "when not to use sharding" case. 40K QPS is trivial for even a medium-sized MySQL deployment; high-end servers can handle 1M QPS (simple queries).

20 billion rows is also quite trivial: at 100 bytes per row, that's just 2 TB of data.
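The back-of-the-envelope arithmetic behind that claim (the 100-byte average row size is the commenter's assumption):

```python
# Rough sizing: 20 billion rows at ~100 bytes each.
rows = 20_000_000_000
bytes_per_row = 100          # assumed average row size
total_tb = rows * bytes_per_row / 1e12
print(total_tb)              # → 2.0 (terabytes)
```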

They're probably dealing with long rows and complicated queries here, but as written, the numbers are rather unimpressive :)

