Hacker News

Static linking also produces smaller binaries and lets you do link-time optimisation (LTO).


Static linking doesn't produce smaller binaries. You are literally copying the library's code into your executable, rather than simply referencing its symbols and letting the dynamic linker figure out how to map those symbols at runtime.

The sum size of a dynamically linked binary plus its dynamic libraries may be larger than one statically linked binary, but whether that holds for more binaries (2, 3, or hundreds) depends on how much of those libraries' surface area your application uses. It's relatively common to see certain large libraries only dynamically linked, with the build going to great lengths to produce them as shared objects and have the executables link them using a location-relative RPATH (via the $ORIGIN feature) to avoid repeating that size bloat across large sets of binaries.
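The $ORIGIN trick mentioned above can be sketched as follows. This is a minimal illustration assuming gcc and binutils on Linux; the file names and directory layout are hypothetical:

    # Ship a shared library next to the binary and find it at load time
    # via a location-relative RPATH, so no LD_LIBRARY_PATH is needed.
    cat > libgreet.c <<'EOF'
    const char *greet(void) { return "hi"; }
    EOF
    cat > use.c <<'EOF'
    #include <stdio.h>
    const char *greet(void);
    int main(void) { puts(greet()); return 0; }
    EOF
    mkdir -p dist/bin dist/lib
    gcc -shared -fPIC -o dist/lib/libgreet.so libgreet.c
    # '$ORIGIN' is kept literal (single quotes) and resolved by the
    # dynamic loader relative to the executable's own location.
    gcc -o dist/bin/use use.c -Ldist/lib -lgreet -Wl,-rpath,'$ORIGIN/../lib'
    ./dist/bin/use

Because the RPATH is relative to the executable, the whole dist/ tree can be moved anywhere and the binary still finds its bundled library.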


Static linking does produce smaller binaries when you bundle dependencies. You're conflating two things - static vs dynamic linking, and bundled vs shared dependencies.

They are often conflated because you can't have shared dependencies with static linking, and bundling dynamically linked libraries is uncommon in FOSS Linux software. It's very common on Windows or with commercial software on Linux though.


You know how the page cache works? Static linking makes it not work. So 3000 processes won't share the same pages for the libc but will have to load it 3000 times.


You can still statically link all your own code but dynamically link libc/other system dependencies.
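In C terms, the hybrid scheme described above looks like this. A minimal sketch assuming gcc and binutils; the file names are hypothetical:

    # Your own code goes into a static archive that is copied into the
    # executable; libc stays a runtime dependency.
    cat > mylib.c <<'EOF'
    int answer(void) { return 42; }
    EOF
    cat > main.c <<'EOF'
    #include <stdio.h>
    int answer(void);
    int main(void) { printf("%d\n", answer()); return 0; }
    EOF
    gcc -c mylib.c && ar rcs libmine.a mylib.o
    gcc -o app main.c libmine.a
    ldd app    # lists libc.so, but no trace of libmine.a

This is also what Rust does by default on glibc targets: crates are statically linked, while libc (and anything pulled in via C dependencies) is linked dynamically.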


Not with rust…


I wonder what happens in the minds of people who just flatly contradict reality. Are they expecting others to go "OK, I guess you must be correct and the universe is wrong"? Are they just trying to devalue the entire concept of truth?

[In case anybody is confused by your utterance, yes of course this works in Rust]


Can you run ldd on any binary you currently have on your machine that is written in rust?

I eagerly await the results!


I mean, sure, but what's your point?

Here's nu, a shell in Rust:

    $ ldd ~/.cargo/bin/nu
        linux-vdso.so.1 (0x00007f473ba46000)
        libssl.so.3 => /lib/x86_64-linux-gnu/libssl.so.3 (0x00007f47398f2000)
        libcrypto.so.3 => /lib/x86_64-linux-gnu/libcrypto.so.3 (0x00007f4739200000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f473b9cd000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f4739110000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f4738f1a000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f473ba48000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f473b9ab000)
        libzstd.so.1 => /lib/x86_64-linux-gnu/libzstd.so.1 (0x00007f4738e50000)
And here's the Debian variant of ash, a shell in C:

    $ ldd /bin/sh     
        linux-vdso.so.1 (0x00007f88ae6b0000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f88ae44b000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f88ae6b2000)


Well, it seems I was wrong about linking C libraries from Rust.

The problems of increased RAM requirements and constant rebuilds are still very real, if only slightly smaller because of the dynamically linked C libraries.


That would have been a good post if you'd stopped at the first paragraph.

Your second paragraph is either a meaningless observation on the difference between static and dynamic linking or also incorrect. Not sure what your intent was.


Why do facts offend you?


I’m genuinely curious now, what made you so convinced that it would be completely statically linked?


I think people often talk about Rust only supporting static linking so he probably inferred that it couldn't dynamically link with anything.

Also Go does produce fully static binaries on Linux and so it's at least reasonable to incorrectly guess that Rust does the same.

Definitely shouldn't be so confident though!


Go may or may not do that on Linux depending what you import. If you call things from `os/user` for example, you'll get a dynamically linked binary unless you build with `-tags osusergo`. A similar case exists for `net`.


Go by default links libc


It doesn't. See the sibling comment.


Kind of off-topic, but yeah, it's a good idea for operating systems to guarantee the provision of very commonly used libraries (libc, for example) so that they can be shared.

Mac does this, and Windows pretty much does it too. There was an attempt to do this on Linux with the Linux Standard Base, but it never really worked and they gave up years ago. So on Linux if you want a truly portable application you can pretty much only rely on the system providing very old versions of glibc.
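One quick way to see what a given binary actually demands from glibc is to dump the version tags on its dynamic symbols; the highest tag is roughly the newest glibc feature set it needs, and hence the oldest glibc it will run against. A sketch assuming binutils on a glibc-based Linux:

    # List the GLIBC_x.y symbol versions a dynamically linked binary requires.
    objdump -T /bin/sh | grep -o 'GLIBC_[0-9.]*' | sort -u

Building against an old glibc (or checking this output in CI) is the usual workaround for the portability situation described above.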


The standard library is the whole distro :)

It's hardly a fair comparison with old Linux distros when macOS certainly will not run anything old… remember they dropped Rosetta, Rosetta 2, 32-bit support, OpenGL… (the list continues).

And I don't think you can expect windows xp to run binaries for windows 11 either.

So I don't understand why you think this is perfectly reasonable to expect on Linux, when no other OS has ever supported it.

Care to explain?


Static linking produces huge binaries. It lets you do LTO, but the amount of optimisation you can actually do is limited by your RAM. Static linking also causes the entire archive to need constant rebuilds.


You don't need LTO to trim static binaries (though LTO will do it): `-ffunction-sections -fdata-sections` in your compiler flags combined with `--gc-sections` (or equivalent) in your linker flags will do it.

This way you can get small binaries with readable assembly.
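Those flags can be demonstrated in a few lines. A minimal sketch assuming gcc and binutils; `foo` below is hypothetical dead code that the linker discards:

    # Put each function in its own section, then let the linker
    # garbage-collect sections nothing references.
    cat > trim.c <<'EOF'
    #include <stdio.h>
    void foo(void) { puts("never called"); }   /* dead code */
    int main(void) { puts("ok"); return 0; }
    EOF
    gcc -o trim.full trim.c
    gcc -ffunction-sections -fdata-sections -Wl,--gc-sections -o trim.small trim.c
    nm trim.full  | grep foo    # present in the plain build
    nm trim.small | grep foo    # gone after --gc-sections
    ls -l trim.full trim.small  # compare sizes

The same flags apply to static builds, where the savings across a whole archive of binaries can be much larger.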


> Static linking also causes the entire archive to need constant rebuilds.

Only relinking, which you can make cheap for your non-release builds.

Dynamic linking needs relinking every time you run the program!



