
You too! Yeah I think that was a great call. I took inspiration from D for sure when aiming for this milestone that we reached today.

Some people say you should use an old computer for development to help you write faster code. I say you should use a new computer for development, and write the fastest code you possibly can by exploiting all the new CPU instructions and optimizing for newer caching characteristics.



I'm still in the camp of using computers our users tend to have.

Also, self-compile times are strongly related to how much code there is in the compiler, not just the compile speed.

I also confess to being a bit jaded on this. I've been generating code for processors from the 8086 to the latest. Which instructions and combinations are faster keeps flip-flopping from chip to chip. So I leave the top-shelf speed to the gdc/ldc compilers, and just try to make the code gen bulletproof and do a solid job.

Working on the new AArch64 backend has been quite fun. I'll be doing a presentation on it later in the summer. My target is a Raspberry Pi, which is a great machine.

Having the two code generators side by side also significantly increased the build times, because it's a lot more code being compiled.


Fair enough, and yeah, I hear you on the compilation cost of all the targets. We don't have AArch64 yet, but in addition to x86_64 we do have an LLVM backend, a C backend, a SPIR-V backend, a WebAssembly backend, a RISC-V backend, and a SPARC backend. All that plus the stuff I mentioned earlier in 15s on a modern laptop.


I considered a C backend at one time, but C is not expressive enough. The generated code would be ugly. For example, exception handling. D's heavy reliance on common blocks in the object code is another issue. C doesn't support nested functions (static links). And so on.
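
To illustrate the nested-function point with a sketch of my own (not an example from the thread): in D, an inner function can read the enclosing function's locals through a static link, which plain C cannot express directly; a C backend would have to lower it into an explicit context struct passed by pointer.

    // Hypothetical illustration: `inner` reaches `x` in `outer`'s
    // stack frame via a static link. A C translation would need to
    // package `x` into a struct and pass its address explicitly.
    int outer(int x)
    {
        int inner(int y) { return x + y; } // captures outer's frame
        return inner(10);
    }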

Never found a user who asked for that, either :-/


Some users want a C backend, not for maintainability reasons, but for the ability to compile on platforms that have nothing but a C compiler. The maintainability or aesthetics of the generated C is irrelevant; it's just another intermediate representation.


Can confirm, Zig's generated C is extremely ugly. We literally treat it as an object format [1].

The MSVC limitations are maddening, from how short string literals must be, to the complete lack of inline assembly when targeting x86_64.

[1]: https://ziglang.org/documentation/0.14.1/std/#std.Target.Obj...


I bet it would be easier to write a code generator for such platforms than to wrestle with generated C code and work around the endless problems.

Anyhow, one of the curious features of D is its ability to translate C code to D code. Curious because it was never intentionally designed; it was discovered by one of our users.

D has the ability to create a .di file from a .d file, which is analogous to writing a .h file from a .c file. When D gained the ability to compile C files, it turned out you could just ask it to create a .di file from the C file, and voila! the C code is translated to D!
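
A minimal sketch of the idea, using a hypothetical point.c and simplifying what dmd actually emits:

    // Suppose point.c declares:
    //
    //     struct Point { int x, y; };
    //     int add(struct Point p);
    //
    // Compiling it with the header-generation switch (something like
    // `dmd -c -H point.c`) yields a point.di that is plain D, roughly:
    extern (C)
    {
        struct Point
        {
            int x;
            int y;
        }
        int add(Point p);
    }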


Maybe; I don’t use those platforms, so I don’t know from experience. I just know that’s why people asked us for it in Rust.

I somehow missed that D has that! I try to read the forums now and again, but I should keep more active tabs on how stuff is going :)


I think it would be a good idea to have some kind of "speedbump" tool that makes your software slower, but in such a way that optimizing the slowed-down version also optimizes the normal one.

I don't know whether this is technically feasible; maybe you could run it on CPUs with good power management and force them to underclock or something.


You could use QEMU to emulate an older CPU; you would need to disable KVM with -no-kvm. There's also a throttle option I found while googling this.



