
We used to have such fast compile times with Turbo Pascal and other Pascal dialects, Modula-2, and Oberon dialects, across 16-bit and early 32-bit home computers.

Then everything went south with the languages that took over mainstream computing.



Not to disagree with you, but even C++ is going to great lengths to improve compile times through C++20 modules and C++23 standard library modules (import std;). Although no compiler fully supports both yet, you can get an idea of how they improve compile times with clang and libc++:

    $ # No modules
    $ clang++ -std=c++23 -stdlib=libc++ a.cpp # 4.8s
    $ # With modules
    $ clang++ -std=c++23 -stdlib=libc++ --precompile -o std.pcm /path/to/libc++/v1/std.cppm # 4.6s but this is done once
    $ clang++ -std=c++23 -stdlib=libc++ -fmodule-file=std=std.pcm b.cpp # 1.5s 
a.cpp and b.cpp are equivalent, but b.cpp does `import std;` while a.cpp #includes every standard C++ header (the same set covered by import std; you can find them in libc++'s std.cppm).
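
For illustration, a minimal sketch of what b.cpp might contain (the actual file isn't shown above, so this is an assumption):

    // b.cpp (hypothetical contents) -- pulls in the whole standard library
    // via a single module import instead of dozens of #includes
    import std;

    int main() {
        std::vector<int> v{1, 2, 3};
        for (int x : v)
            std::cout << x << ' ';  // std::cout is reachable through import std
        std::cout << '\n';
    }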

Notice that this is an extreme example, since we're importing the whole standard library, which is actually discouraged [^1]. Instead, you can get through the day with just these flags: `-stdlib=libc++ -fimplicit-modules -fimplicit-module-maps` (and of course -std=c++20 or later), with no extra files/commands required! But you are restricted to header units like import <vector>; there is no import std. See the sketch after the footnote.

[^1]: Non-standard headers like `bits/stdc++.h`, which do the same thing (#including the whole standard library), are what is actually discouraged, because they are (a) non-standard and (b) bad for compile times. But I can see `import std` solving both problems and being encouraged once it's widely available!
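
For illustration, the header-unit route might look like this (c.cpp is a hypothetical file; support for import <header>; varies across clang versions, so treat this as a sketch):

    $ cat c.cpp
    import <vector>;
    import <iostream>;

    int main() {
        std::vector<int> v{4, 5, 6};
        for (int x : v)
            std::cout << x << ' ';
        std::cout << '\n';
    }
    $ # Header units are built implicitly; no separate --precompile step
    $ clang++ -std=c++20 -stdlib=libc++ -fimplicit-modules -fimplicit-module-maps c.cpp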


As a big fan of C++ modules (see my GitHub), we are unfortunately decades away from widespread adoption.

See the regular discussions on the C++ subreddit regarding the state of module support across the ecosystem.


C++ will be a very useful, fast, safe, and productive language in 2070.


Am I wrong about this?

Their algorithms were simpler.

Their output was simpler.

As their complexity grew, the performance of the programs they produced grew proportionately.

Not to mention the addition of language convenience features (generics, closures).


See Ada, released in 1983.

Generics were already present in CLU and ML, having been introduced as early as 1976.

Check their features.


Yeah but--


tinycc is still fast. All the current single-pass compilers are fast.
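
For a feel of the speed, tcc can even compile and execute a C file in one step with its -run mode (hello.c is a hypothetical file; the timing remark is illustrative, not measured):

    $ cat hello.c
    #include <stdio.h>

    int main(void) {
        printf("hello from tcc\n");
        return 0;
    }
    $ # Single pass straight to memory, then run -- typically milliseconds
    $ tcc -run hello.c
    hello from tcc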


Agreed. I would say the main problem is a lack of focus on developer productivity.

OK, the goalposts have moved on what -O0 is expected to deliver in machine-code quality, so let's then have something like a -ffast-compile flag, or an interpreter/JIT as an alternative toolchain in the box.

A practical example from D land: compile with dmd during development, then use gdc or ldc for release builds. A sketch of that workflow is below.
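
For concreteness (app.d is a placeholder name; flags are the usual ones for each compiler):

    $ # Inner dev loop: dmd prioritizes compile speed
    $ dmd -g -unittest app.d && ./app
    $ # Release build: ldc (LLVM backend) or gdc (GCC backend) prioritize code quality
    $ ldc2 -O3 -release app.d
    $ gdc -O3 -frelease app.d

Same language, same source, two toolchains: one tuned for the edit-compile-run loop, one for optimized output.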



