Fun with Gentoo: Why don't we just shuffle those ROP gadgets away? (quitesimple.org)
130 points by crtxcr on Jan 26, 2023 | 80 comments



I remember my Gentoo days freshman year in college. I spent more time compiling updates than actually using the computer.


I used to keep using the boxes whilst steam billowed out the sides until things started crashing.

I recall gcc3 -> 4. The prevailing "wisdom" was emerge --deep (etc) world ... twice! My laptop was left for around a week trundling through 1500 odd packages. I think I did system first, twice too. I left it running on a glass table in an unheated study, propped up to allow some better airflow.

One of the great things about Gentoo is that a completely fragged system doesn't faze you anymore. Screwed glibc? Never mind. Broken python? lol! Scrambled portage? Hold my beer.

I have a VM running in the attic that got a bit behind. OK it was around eight? years out of date. I ended up putting in a new portage tree under git and reverting it into the past and then winding it forwards after getting the thing up to date at that point in time. It took quite a while. I could have started again but it was fun to do as an exercise.


These days my 5950X can get through some of the big scary packages quite rapidly. Firefox is done in about 8 minutes, a new point release of Rust seems to take about 15.

I still haven’t decided whether or not I should be embarrassed that I mainly bought a 16-core CPU to run Gentoo.


Don't be embarrassed, it's what computers are for! I've done the same thing recently too. It honestly feels like a better use of a high-core-count desktop CPU than having it sit idle 99% of the time.


I wonder which is more wasteful - compiling these packages for the nth time vs mining cryptocurrency...


These aren't even close to comparable, and I am very tired of hearing people complain about this!

My current Gentoo system seems to have existed since 03/29/21, so roughly two years now. In that time, the time spent compiling packages has accumulated to 5 days, and my CPU draws ~140 W at max load (Ryzen 3900X).

If I did my math correctly, this comes out to roughly 16 kWh of accumulated energy across two years.

We can compare this to a gamer who spends 1 hour per day gaming, for 2 years, on a system that draws 300 W while running a game; that comes out to about 220 kWh in total. That's roughly 13x as much energy, spent by a fairly lightweight gamer on a very average system.
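
For anyone who wants to check the arithmetic, here is the back-of-the-envelope version (the gamer figures are, of course, just illustrative assumptions):

    # rough energy comparison, using the figures above (all of them estimates)
    compile_kwh = 5 * 24 * 140 / 1000        # ~5 days of compiling at ~140 W
    gaming_kwh  = 2 * 365 * 1 * 300 / 1000   # 1 h/day for 2 years at ~300 W
    print(f"{compile_kwh:.1f} kWh compiling vs {gaming_kwh:.0f} kWh gaming "
          f"({gaming_kwh / compile_kwh:.0f}x)")
    # -> 16.8 kWh compiling vs 219 kWh gaming (13x)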

It's also worth noting that the majority of packages build in under 1 minute on my system; the vast majority of compile time is spent on things like Firefox, Rust, GCC, and a few others.

This is just a very silly thing to be concerned over, and if we are going to be offended at people for being wasteful there are much larger targets than someone building packages from source.


Do you use ccache?


I remember installing from stage1 on a 1 GHz-ish single core. Just something like KDE 2 would take hours, and that's not even counting the dependencies. Anything bigger than a command-line tool was something you'd kick off before going to bed and pray it didn't error. (Spoiler: it almost always did)


I do all world updates overnight for this very reason. But on my R5 3600, the longest emerge is, by far, qtwebengine, which takes just under 1.5 hours. Plus, Gentoo provides -bin versions of many packages notorious for protracted build times, such as Rust, Chromium, Firefox, etc...


-bin seems like a strange thing when you are doing Gentoo, which is all about compiling locally. Gentoo has always been about choice, and -bin is a choice. However, you lose the USE-flag choices with a -bin package.

The possible combinations that Gentoo allows look to me like a sort of Linux immune system in action. Quite a few "unpopular" flags will get used (lol, USEd) somewhere by someone who will, on average, be more motivated to log a bug somewhere.

Gentoo also got the console shell look (colours, fonts etc) right way before any other distro. It's copied widely.


Sure, binary packages don't reduce choice though since they are available in addition to the normal packages (except for stuff that is not open source at all).

Wanting to have control over your system's config via USE flags doesn't mean that there aren't packages where you don't really need that. If you only use LibreOffice a couple of times per year on your aging laptop, do you really care enough about the exact USE config to justify compiling it yourself? Even more so if you need it on short notice. Or if you only use Chromium/whatever to check that your website works with that browser but don't actually use it yourself, why bother compiling it?

IIRC there used to be a Gentoo fork (forgot the name) that extended this concept to all packages, so if you used default USE flags you did not need to compile things yourself.


I still use it and love it. On an i9-13900k, my Kconfig compiles in 1 minute[0] with -j33 and makes barely any noise or heat.

[0] https://www.dropbox.com/s/w1zlftin1cojkhr/kernel_compile.mov...


Same thing for me. 2003 it was... and Gentoo was a very good entry vehicle into Linux.


We're the same age! I remember printing off a ~20 page runbook of instructions to manually build and configure grub and gentoo. Took hours to set up.


Why did I never think of printing it? I'd open it in lynx on a 2nd framebuffer (I forget the proper term... the things that were like Alt+Shift+an Fkey or something)


Console 8)


Virtual terminal


Yea, that’s it


I remember praying before every 'emerge -uDav world' that I won't have to deal with fixing my system for the next 2 hours.


College was some good distcc days, though. My off-campus house all ran Linux, and they were dumb enough to distcc me. Debian, Red Hat 9 (not RHEL), and Slackware were the other popular distros at the time. My school ran on Solaris.


As a student, I've actually put an overheating PowerBook G4 in a fridge just to finish an install


How? Were you watching the compile output? Because you don't need to spend much time when your computer is doing all the work.


I like this idea. I have an idea for something that would be cool, if impractical: Imagine a GCC wrapper that doesn't actually link, but produces a bundle that performs the linking in randomized order in realtime and then runs.

I think that you could do this quite well on NixOS, and I'm now intrigued to try to rig up a proof-of-concept when I can find the time.

Side effect: this does not work for libraries without a significantly more complex wrapper, and even that certainly could not work for all libraries. You could, though, re-order the objects within a static library fairly easily.
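
Roughly what I have in mind, as a minimal sketch: keep the object files around instead of producing a final executable, and have a launcher shuffle and link them right before running. The "objs" directory, the bare cc invocation and the lack of any extra link flags are all placeholders; a real build would need the original libraries and options:

    # launcher sketch: link in a random order at run time, then exec the result
    import os, random, subprocess, sys, tempfile

    objects = sorted(p for p in os.listdir("objs") if p.endswith(".o"))
    random.shuffle(objects)                    # fresh link order on every run

    fd, exe = tempfile.mkstemp()
    os.close(fd)
    subprocess.run(["cc", "-o", exe] + [os.path.join("objs", o) for o in objects],
                   check=True)                 # just-in-time link
    os.chmod(exe, 0o755)
    os.execv(exe, [exe] + sys.argv[1:])        # replace ourselves with the fresh binary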


That'd make process startup EXTREMELY slow


It's pretty much what OpenBSD is doing at bootup.

Truthfully though you're right, using typical linkers, this would be pretty slow; at least a few seconds for large binaries, to minutes for things as large as web browsers. However, for many binaries, linking can be done much faster; mold claims to be only 50% the runtime of using `cp` on the object files, which is fast enough to even re-link Firefox on-the-fly without it being unusable.

You could imagine writing a linker specifically for this case, that encodes information about the object files directly into the resulting bundle.


I thought openbsd did it after boot?


OpenBSD relinks sshd, which is a relatively small thing that is linked from relatively large objects (i.e. it is typical modern C code). Relinking something like glibc on demand is going to be problematic, because glibc is structured to allow small binary sizes for static linking, and thus almost every function that is part of the glibc API is a separate compilation unit and object file. Linking that into a .so is slow, no matter what kind of optimization tricks you implement in the linker.


doesn't matter how long it takes if you don't block the boot process doing it

you can link in the background at idle priority, and if you don't complete before reboot: no big deal


Relinking glibc would block the boot process.


how?

it's a dynamic library, and this isn't windoze with awful mandatory locking

as long as the underlying version is unchanged: there should be no problem whatsoever


glibc is going to get used by everything in userspace, so you’ll need it when you boot.


Yes, but this thread is about doing the linking after boot. It doesn't matter if you link synchronously before you start the program or link asynchronously after you start the program - you will still get a new unique binary for each boot.


yes... it is there at boot

then after boot you relink for next boot


Yeah. That said, I'm suggesting that if it were really too slow, it'd probably be infeasible to relink libc, the kernel, etc. at bootup. It's not a direct comparison, to be sure.


Not that bad if you link with a custom mold fork.


I wonder if just shuffling it on every release (even minor ones) isn't sufficient (and actually even publishing that order). That doesn't give the full security benefit (attackers have a finite set of options), but it keeps reproducible builds and the ability to distribute pre-linked binaries, while raising the attack complexity significantly, since no two machines are likely running the exact same version. That means an exploit has to try several different versions.

Taking this a step further, create N randomly sorted copies per version and randomly distribute those. Now the space to search through is large, and the probability of picking the correct gadget variant goes down as 1/(MN), where M is the number of releases being attacked and N the number of variants per release (a targeted attack on a specific version only gets 1/N). Additionally, deterministic builds maintain your ability to audit binaries and their provenance fairly easily (the work only grows linearly), while the chance of the attempt being noticed without a successful exploit is (N-1)/N.

I’m not saying it’s perfect but it seems like a reasonable defense for binary distribution. As someone who used to run Gentoo, I’d say most people are in favor of the faster times to install a new package.

EDIT: Extending this idea further, I wonder if compilers couldn't offer a random seed that causes a random layout of the sections within a built executable, so that even statically linked binaries benefit from this.
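
To put rough numbers on the odds under that scheme (M and N below are just made-up illustrative values):

    # Chance that a prebuilt ROP chain matches the victim's binary, assuming
    # M releases in the wild and N shuffled variants per release, with victims
    # spread evenly across them.
    M, N = 12, 32
    p_blind    = 1 / (M * N)   # attacker guesses both release and variant
    p_targeted = 1 / N         # attacker already knows the victim's release
    p_noticed  = (N - 1) / N   # a failed attempt likely crashes something visible
    print(p_blind, p_targeted, p_noticed)   # ~0.0026, 0.03125, 0.96875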


For binary distributions, how about shipping object files and linking them on install with mold? This should be faster than compiling from source, just marginally slower than installing pre-linked binaries, and each build will be as unique as it gets.
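
A hand-wavy sketch of what that install step could look like; it assumes the package ships its object files under a hypothetical path, that the toolchain accepts -fuse-ld=mold, and a real link line would also need the package's libraries and flags:

    # post-install hook sketch: relink a package's shipped objects locally
    import random, subprocess
    from pathlib import Path

    objs = [str(p) for p in Path("/usr/lib/pkg-objs/foo").glob("*.o")]  # hypothetical layout
    random.shuffle(objs)                         # unique layout per machine/install
    subprocess.run(["cc", "-fuse-ld=mold", "-o", "/usr/bin/foo"] + objs,
                   check=True)                   # mold keeps the relink cheap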


The size of the distributed package gets very large, because you're shipping a lot of code that ends up getting eliminated by the linker. Also, if you want to do any kind of LTO, I don't see how you do it in your model (and LTO matters for the larger applications like Chrome that have the likely attack surface). Not every binary on the system actually needs this, either.

Finally, the main problem with this idea is that you can't audit for malware, because there's no way to maintain a source of truth about what the binary on a given system should be. Distributing randomly linked copies solves that, because you can have a deterministic mapping based on machine characteristics (you do have to keep this hash secure, but it's feasible). You'd basically be maintaining N copies of your distro with randomly built binaries, with each user being given a random one to install.

And to be clear, my better idea is to do this at the compiler level so that you randomize the relative location of functions. That way it's impossible to find any gadget to grab onto and you have to get information leakage from the machine you're attacking & this information leakage has to be repeated for each machine you want to compromise.
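
In rough terms, that could look like the sketch below: compile with -ffunction-sections so each function lands in its own section, then hand the linker a shuffled ordering. The ordering-file flag here is lld's, the object list is a placeholder, and the exact mechanism is linker-specific, so treat this as a sketch rather than a recipe:

    # Sketch: randomize function order within one binary.
    # Assumes the objects were built with -ffunction-sections and that lld is
    # used for the final link (it accepts --symbol-ordering-file).
    import random, subprocess

    objs = ["a.o", "b.o", "c.o"]                 # placeholder object list
    nm = subprocess.run(["nm", "--defined-only", "-g"] + objs,
                        capture_output=True, text=True, check=True)
    funcs = [line.split()[-1] for line in nm.stdout.splitlines()
             if " T " in line]                   # defined text (function) symbols
    random.shuffle(funcs)
    with open("order.txt", "w") as f:
        f.write("\n".join(funcs) + "\n")
    subprocess.run(["clang", "-fuse-ld=lld", "-Wl,--symbol-ordering-file=order.txt",
                    "-o", "prog"] + objs, check=True)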


Randomizing the link order per release does not solve anything; for this to really work as a mitigation layer, you need to have a few different randomly linked versions and randomly give these to the end users. Just randomizing the build does not solve anything, as there is still exactly one layout that everyone uses.

On another note: automating this on Gentoo is a cool exercise, but almost certainly, if you just build everything locally, the memory layout will be random enough that writing shellcode blindly presents an interesting challenge (different compiler flags, various probabilistic optimization passes... all of that leads to the functions in the same object file having different sizes).


> Randomizing the link order per release does not solve anything; for this to really work as a mitigation layer, you need to have a few different randomly linked versions and randomly give these to the end users. Just randomizing the build does not solve anything, as there is still exactly one layout that everyone uses.

First, it does. At scale, the probability of everyone running the exact same version of every piece of software is 0. If you want, go take a look at how many users are running any given version of Android.

Also, did you miss when I wrote

> Taking this a step further, create N randomly sorted copies per version and randomly distribute those

I agree, doing it per version only gives a small amount of coverage. We're in agreement that generating N randomized copies and distributing those evenly is a stronger position, because it makes the cost MN, where you have M releases that are still running and N variants per release.


This is generally less useful with automatic updates for security patches because then you do want everyone to be running the same, latest, version.


OpenBSD also puts a fair amount of work into removing ROP gadgets.

For example.

https://marc.info/?l=openbsd-cvs&m=152824407931917


Very cool, thank you for sharing! Not only does ROP facilitate traditional binary exploitation, but it’s also used in cutting-edge evasive techniques. By abusing ROP instead of direct calls, red teamers are able to heavily obfuscate activities from endpoint detection and response.


Uh, yeah... The post opens with a mention of being inspired by OpenBSD and goes into some detail on differences between their approach and OpenBSD's throughout.


Though, much less effective than reordering gadgets.


Lack of reproducible builds seems like a big cost here.

I wonder if there's a way to do just-in-time random relinking such that the performance cost is low, but the security benefit is still strong.

Just-in-time gets you reproducible builds, and also addresses the "local attackers who can read the binary or library" problem.

There would be a performance cost in terms of startup time, but since the number of possible permutations grows factorially with the number of units being linked, it seems like even a very coarse-grained random relinking can go a long way.
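
Even a coarse-grained shuffle explodes quickly; just counting permutations of the link units:

    import math
    for n in (10, 50, 300):          # number of shuffled link units
        digits = len(str(math.factorial(n)))
        print(f"{n} units -> a {digits}-digit number of possible orderings")
    # 10 units already give 3,628,800 orderings; 300 units give a 615-digit count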

You could accomplish this by doing static analysis of a binary to generate a file full of hints for ways to rewrite the binary such that its behavior is provably equivalent to the original. Then there could be a wrapper (perhaps at the shell or OS level) which uses the hints to randomly relink on the fly just prior to execution.

Another advantage is that this approach should be feasible on an OS like Ubuntu where everything is precompiled.

However the static analysis part could be a little tricky? I'm not familiar with the state of the art in static analysis of compiled binaries.

Performance-sensitive users could be given a way to turn the feature off, in cases where fast startup time was more important than security.


Do reproducible builds even matter if you're building/linking and executing a binary on the same system?

The biggest benefit seems to be in making it infeasible/dangerous for a malicious actor to distribute binary versions containing different behavior from the published source.

On a local machine, when and with what would you compare your binaries?


Sure, just think of it as a way to get the same benefit on a precompiled system like Ubuntu I guess.


>> As a side-effect, reproducible builds, which this technique breaks, are less of a concern anyway (because you've compiled your system from source).

Reproducible builds verify the source code and build process (including options) were the same. Not sure how important each aspect is.

Also, if for some reason you rebuild a dependency, you'll need to relink everything that depends on that. This could get messy, but it's still interesting.


Isn’t it impossible to have truly from-scratch reproducible builds? IIRC, you have to trust the compiler which can’t be built from scratch.


You can bootstrap the compiler. It's a chore but not impossible. More usefully, you can check that your builds are identical to other people's, so at least your compiler isn't uniquely compromised.


I don’t think it’s possible since you’d need the original compilers from the 70’s and bootstrap other compilers up to a modern one. Otherwise your existing compiler could taint your new one.


Many years ago I wrote a C compiler in assembly language. It wasn't hard, and C hasn't changed that much. The complexity in modern compilers is in the optimisation, which you don't need if you're bootstrapping. It's not impossible.


A pragmatic approach!


There are people who spend time trying to solve this issue!

https://bootstrappable.org/

https://www.gnu.org/software/mes/

The idea here, is that if you can get a very basic C compiler, you can start building TinyCC, and eventually build a pre-C++ version of GCC, and from there build up to modern GCC. This is a lot easier said than done of course, but not quite as bad as needing the original compilers from the 70s!


No, you only need two compilers that have not been subverted by the same adversary.

https://www.schneier.com/blog/archives/2006/01/countering_tr...


That’s a good point


It'd be a fun exercise to write a tiny Forth in machine code (sans assembler) and use it to write enough of a C compiler to build tcc, or something along those lines. From there I think you can chain old (but accessible) gcc versions up to modern gcc.


> You can bootstrap the compiler. It's a chore but not impossible.

And specifically, only one person needs to do this once... I'm surprised there isn't some project doing this...



Why? If the dependencies are dynamically loaded libraries it shouldn't matter?


Control over the RNG seed, and tracking that seed as an 'input', would be a way to get reproducible builds while still having randomization.
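
Something along these lines (a sketch; the helper below is hypothetical, the point is just that recording the seed makes the shuffle repeatable):

    # A shuffle driven by a recorded seed: the same seed reproduces the same
    # link order, so the build stays verifiable while still being randomized.
    import random

    def shuffled_link_order(objects, seed):
        rng = random.Random(seed)    # isolated RNG keyed on the recorded seed
        objs = sorted(objects)       # normalize the input order first
        rng.shuffle(objs)
        return objs

    order = shuffled_link_order(["a.o", "b.o", "c.o", "d.o"], seed="build-1234")
    # record "build-1234" alongside the artifact; anyone can re-derive `order`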


I'm guessing "dev-libs/openssl shuffleld" should go into "/etc/portage/package.env" instead (in the appendix).


Good catch, thx!


> The potential issue comes from the assumption that all .o files will be given continuously in the command line. The assumption appear to hold, but could blow up down the road. But well, it's hack.

Other than this issue (which may well be a large / unsolvable one), I wonder what other disadvantages to this approach there might be. Does this hack have any potential for a Gentoo profile or mainlining?


Don't try this with C++, unless you're certain that there are no interdependencies or side-effects in global variable initialisation. The link order (usually) affects the order in which initialisers are executed.


On the contrary: do do this and if you observe your program crashing due to linking order, fix the damn bug.


Developer PoV vs User/Distro PoV here :P you're not wrong, though...


Fair enough :) I just meant to point out what could go wrong.


Does the C++ spec guarantee initialization order? Or is any application that depends on it relying on undefined behaviour?


There's no mandated order between compilation units. It's a problem significant enough to have its own snarky name: the Static Initialization Order Fiasco https://en.cppreference.com/w/cpp/language/siof


How does this work with dynamic libraries (shared objects)? In Windows land, you get a .lib with a .dll, and AFAIK that has hardcoded function addresses. You statically link the "import library" .lib with your exe, so if you randomize the function addresses and rebuild just the .dll later, it blows up (you need to rebuild all exes as well).

Is dynamic linking in Unix world truly runtime-only (a-la "GetLibrary" / "GetProcAddress")?


Unix/ELF doesn't have separate .lib and .dll files - you link directly against the .so (or a linker script, but those are typically only used for special system libraries). The main thing this does is record the name from the DT_SONAME field of the .so as a required dependency in your binary.

But I also don't think that this would be a problem on Windows. After all, you can generally replace DLLs with entirely different versions and you'll be fine as long as all the required symbols are present and ABI-compatible.

The main difference between ELF and PE dynamic linking is that with PE you have a list of required symbols along with the libraries to load those symbols from, while with ELF you have a list of required libraries and a list of required symbols, but no information recorded about which symbols should come from which libraries.
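
And on the runtime side, yes, the explicit flavour (dlopen/dlsym, the direct analogue of LoadLibrary/GetProcAddress) is there too; implicit linking works the same way underneath, with the dynamic loader resolving symbols by name at load time. For instance, via Python's ctypes (assuming a glibc system):

    # Runtime-only lookup on Linux: load a library and resolve a symbol by name.
    # ctypes uses dlopen/dlsym under the hood.
    import ctypes

    libc = ctypes.CDLL("libc.so.6")      # dlopen
    getpid = libc.getpid                 # dlsym, by name, at run time
    getpid.restype = ctypes.c_int
    print(getpid())                      # no addresses baked into this "caller"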


One gap in this approach: gcc can use argument files (you pass a file that contains the actual arguments). I've only really seen this with build systems that expect to work on large numbers of arguments that will not fit on the command line. Still, something to be aware of.
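
For what it's worth, a wrapper can expand those before doing anything else. A rough sketch (simplified: gcc's @file quoting rules are close to, but not exactly, shell quoting, and nested @files aren't handled):

    # Expand gcc-style @response-file arguments before shuffling the .o list.
    import shlex
    from pathlib import Path

    def expand_argfiles(argv):
        out = []
        for arg in argv:
            if arg.startswith("@") and Path(arg[1:]).is_file():
                out.extend(shlex.split(Path(arg[1:]).read_text()))
            else:
                out.append(arg)
        return out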


I'll keep an eye on that, thx!


Deep feels from that web design. Simple, aesthetic, functional.


Why not prevent control transfer to the ROP gadget?


Because we are unable to do that, and we've tried for decades.

There are all kinds of things we're doing (e.g. rewriting things in memory-safe languages) to make it less likely for an attacker to become able to control a jump to somewhere, however, we don't expect to fully succeed any time soon, and this is defense in depth against cases when attackers once again do find a way to control transfer to some arbitrary gadget.


ROP gadgets?


https://en.wikipedia.org/wiki/Return-oriented_programming

> Return-oriented programming (ROP) is a computer security exploit technique that allows an attacker to execute code in the presence of security defenses[1][2] such as executable space protection and code signing.[3]

> In this technique, an attacker gains control of the call stack to hijack program control flow and then executes carefully chosen machine instruction sequences that are already present in the machine's memory, called "gadgets".[4][nb 1] Each gadget typically ends in a return instruction and is located in a subroutine within the existing program and/or shared library code.[nb 1] Chained together, these gadgets allow an attacker to perform arbitrary operations on a machine employing defenses that thwart simpler attacks.



