
> I think this is attributable to the fundamental rule of not breaking legacy language features. That is the number one rule

No. The number one rule in C++ is "don't pay for what you don't use".



To be more specific, it's "don't pay in runtime cost for what you don't use". As the OP says, that courtesy isn't extended to compile times: simply enabling C++17/20 can balloon your compile times even if you don't touch a single new feature, because the standard headers get more bloated with each new version.

https://build-bench.com/b/FW3EPgB1t0fmpIr1TB_vJbbb0Vw
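
If you want to reproduce the effect locally, a rough sketch is to take a trivial translation unit that only touches long-standing headers and compile it under different standards (the file name and flags below are just an illustration; actual numbers will vary by compiler and standard library):

    // bench.cpp - trivial TU using only headers that have existed since C++98/11.
    // Compare, e.g.:  g++ -std=c++11 -c bench.cpp   vs.   g++ -std=c++20 -c bench.cpp
    #include <algorithm>
    #include <string>
    #include <vector>

    int main() {
        std::vector<std::string> v{"compile", "time", "test"};
        std::sort(v.begin(), v.end());
        return static_cast<int>(v.size());
    }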

This is exacerbated even further by the leaky #include system, where you can easily end up pulling enormous standard headers into translation units that don't even reference them directly. The only reprieve is to ban most of the standard library from your project and write your own leaner version from scratch, as most big C++ projects end up doing.


There are ways to speed up compilation. It can be done in parallel, and you can get a lot of cores into a developer desktop these days, plus effectively unlimited distributed cores in the cloud. Compilation can also be sped up with pre-compiled headers, which C++20 modules should eventually supersede as compilers improve their support. With incremental compilation, you do not need to recompile the whole project after every change. Linking large statically linked binaries is a big bottleneck, but incremental linking is possible in some cases. You can also break your project into multiple shared libraries (DLLs) and link dynamically at run-time for debugging, then re-build with static linking for a release build. Continuous Integration systems can also help hide the latency of compilation from developers by continuously running build and test processes in the background on a cluster.
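
For the modules point, a minimal sketch of what that looks like (file names and build steps are illustrative; the exact flags still differ between GCC, Clang, and MSVC):

    // math.cppm - module interface unit: parsed once into a compiled
    // module interface instead of being re-parsed textually by every
    // translation unit that uses it.
    export module math;

    export int square(int x) { return x * x; }

    // main.cpp - consumer: 'import' pulls in the precompiled interface.
    import math;

    int main() { return square(7); }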


ccache can considerably cut down compile times. Simple to install and minimal config, no change to tooling or workflow...

https://ccache.dev/


> There are ways to speed up compilation. It can be done in parallel, and you can get a lot of cores into a developer desktop these days.

I'd start with the very basics, such as using forward declarations, encapsulation, and the pimpl idiom to not drag unnecessary #includes into your translation units.
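
A bare-bones pimpl sketch (names are made up): the heavy includes live only in the .cpp, so every translation unit that includes widget.h stays cheap to compile.

    // widget.h - nothing heavy included here; Impl is only forward-declared.
    #pragma once
    #include <memory>

    class Widget {
    public:
        Widget();
        ~Widget();              // must be defined where Impl is a complete type
        void draw();
    private:
        struct Impl;            // forward declaration keeps this header light
        std::unique_ptr<Impl> impl_;
    };

    // widget.cpp - the only TU that pays for the heavy standard headers.
    #include "widget.h"
    #include <string>
    #include <vector>

    struct Widget::Impl {
        std::vector<std::string> state;
    };

    Widget::Widget() : impl_(std::make_unique<Impl>()) {}
    Widget::~Widget() = default;
    void Widget::draw() { impl_->state.push_back("drawn"); }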

Also, the compilation bottleneck sometimes lies in I/O, so moving your build folder to a RAM drive can speed things up significantly with zero effort.


It's possible; I know first-hand that it is. It requires quite a bit of work, though.

It might be easier to just have quick compile times though!


The number two rule is staying similar to C and not breaking legacy features. That's the fundamental difference from Rust (although you can argue that as languages get bigger, you are always bound to become a slave to the language).


I'd argue not breaking old code takes precedence over zero-cost abstraction.


It's not a contest. They're both fundamental requirements. Mom doesn't like one of them best.


But indeed, when the choice is between maintaining backward compatibility and further reducing overhead, the C++ committee usually chooses the former, at least on the library side. See, for example, the less-than-ideal unique_ptr constructor or the invalidation guarantees of the unordered containers.
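
For the unordered-container point, the guarantee in question is roughly this (a small sketch of my reading of it): rehashing invalidates iterators but not pointers or references to elements, which effectively forces a node-per-element layout instead of a flatter, faster open-addressing table.

    // Rehashing an unordered container may invalidate iterators, but the
    // standard requires pointers/references to elements to stay valid.
    #include <cassert>
    #include <string>
    #include <unordered_map>

    int main() {
        std::unordered_map<int, std::string> m{{1, "one"}};
        std::string* p = &m[1];           // pointer to the stored element
        for (int i = 2; i < 10000; ++i)   // force several rehashes
            m[i] = "x";
        assert(p == &m[1]);               // still valid: the node never moved
        return 0;
    }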



