To be more specific, it's "don't pay in runtime cost for what you don't use". As the OP says, that courtesy isn't extended to compile times: simply enabling C++17/20 can balloon your compile times even if you don't touch a single new feature, because the standard headers get more bloated with each new version.
This is exacerbated by the leaky #include system, where you can easily end up pulling enormous standard headers into translation units that never reference them directly. The only real reprieve is to ban most of the standard library from your project and write a leaner version from scratch, as many big C++ projects end up doing.
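A contrived two-file illustration of the leak (file and function names hypothetical): the caller never names anything from the heavy headers, but every translation unit that includes the header pays to parse them.

    // math_utils.h: its interface needs only <vector>, but it also pulls
    // in <numeric> as an implementation detail (std::accumulate) plus a
    // leftover <algorithm> nothing uses anymore; all of it leaks
    #pragma once
    #include <vector>
    #include <numeric>      // implementation detail, leaks to every includer
    #include <algorithm>    // leftover include, still paid for by every includer

    inline int sum(const std::vector<int>& v) {
        return std::accumulate(v.begin(), v.end(), 0);
    }

    // caller.cpp: never mentions <numeric> or <algorithm>, yet it
    // re-parses both of them through math_utils.h
    #include "math_utils.h"

    int main() { return sum({1, 2, 3}); }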
There are ways to speed up compilation. It can be done in parallel, and you can get a lot of cores into a developer desktop these days. Plus you can get an effectively unlimited number of distributed cores in the cloud. Beyond throwing hardware at it:

- Precompiled headers cut down repeated parsing, and C++20 modules should eventually supersede them as compiler support matures (a minimal sketch below).

- With incremental compilation, you do not need to recompile the whole project after every change.

- Linking large statically linked binaries is a big bottleneck, but incremental linking is possible in some cases. You can also break your project into multiple shared libraries (DLLs) and link dynamically for debug builds, then rebuild with static linking for a release build.

- Continuous Integration systems can hide the latency of compilation from developers by continuously running build and test jobs in the background on a cluster.
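For modules, a minimal sketch of what that looks like (file names and the module name are hypothetical; build flags vary by compiler, so they're omitted):

    // math.cppm: a C++20 module interface unit
    export module math;

    export int add(int a, int b) {
        return a + b;
    }

    // main.cpp: consumers import instead of #include, so the compiler
    // loads a precompiled binary form of the interface rather than
    // re-parsing header text in every translation unit
    import math;

    int main() {
        return add(2, 3);
    }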
> There are ways to speed up compilation. It can be done in parallel, and you can get a lot of cores into a developer desktop these days.
I'd start with the very basics, such as using forward declarations, encapsulation, and the pimpl idiom to avoid dragging unnecessary #includes into your translation units (a sketch below).
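A minimal pimpl sketch (class and file names hypothetical): the public header forward-declares the implementation, so clients compile without the heavy includes only the .cpp needs.

    // widget.h: the public header only forward-declares the implementation,
    // so clients don't transitively include whatever Impl needs
    #pragma once
    #include <memory>

    class Widget {
    public:
        Widget();
        ~Widget();               // defined out of line, where Impl is complete
        void draw();
    private:
        struct Impl;             // forward declaration only
        std::unique_ptr<Impl> impl_;
    };

    // widget.cpp: heavy includes are confined to this one translation unit
    #include "widget.h"
    #include <vector>            // stand-in for the expensive headers Impl needs

    struct Widget::Impl {
        std::vector<int> data;
        void draw() { /* ... */ }
    };

    Widget::Widget() : impl_(std::make_unique<Impl>()) {}
    Widget::~Widget() = default; // Impl is complete here, so unique_ptr can delete it
    void Widget::draw() { impl_->draw(); }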
Also, the compilation bottleneck sometimes lies in I/O, so moving your build folder to a RAM drive can speed things up significantly with zero effort.
The number two rule is staying close to C and not breaking legacy features. That's the fundamental difference from Rust (although you could argue that as languages grow, you are always bound to become a slave to the language).
But indeed, when the choice is between maintaining backward compatibility and further reducing overhead, the C++ committee usually chooses the former, at least on the library side. See for example the less than ideal unique_ptr constructor, or the invalidation guarantees of the unordered containers.
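On the unordered containers: the standard guarantees that pointers and references to elements survive insertion and rehashing (only erasing that element invalidates them), which effectively forces node-based bucket implementations and rules out faster open-addressing layouts. A small demonstration:

    #include <cassert>
    #include <string>
    #include <unordered_map>

    int main() {
        std::unordered_map<int, std::string> m;
        m[1] = "one";
        std::string* p = &m[1];   // pointer into the container

        // Force one or more rehashes by inserting many elements.
        for (int i = 2; i < 10000; ++i) m[i] = "x";

        // Iterators into m may now be invalid, but the standard says
        // references and pointers to elements are not: this must hold.
        assert(*p == "one");
    }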
No. The number one rule in C++ is "don't pay for what you don't use".