Building GCC as a Cross Compiler for Raspberry Pi (solarianprogrammer.com)
99 points by AlexeyBrin on May 12, 2018 | 53 comments



On Gentoo,

    crossdev -t armv7a-hardfloat-linux-gnueabi
crossdev can create Gentoo ebuilds for a complete toolchain targeting almost any platform.

    crossdev -t x86_64-w64-mingw32
would create a GCC cross-compiler for Windows.

* https://wiki.gentoo.org/wiki/Crossdev

* https://wiki.gentoo.org/wiki/Embedded_Handbook/General/Creat...

* https://wiki.gentoo.org/wiki/Raspberry_Pi#Crossdev

* https://wiki.gentoo.org/wiki/Mingw


Crossdev (or alternatives like crosstool-ng) may work, but only if you stick to the most well-tested build and target combinations. Otherwise, there are O(ludicrous) possible configurations, most of which are untested.


Definitely, but Windows and Raspberry Pi are both well-tested I believe.


Why bother?

  apt install gcc-arm-linux-gnueabihf # Debian / Ubuntu
  dnf install gcc-arm-linux-gnu # Fedora
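Once installed, the cross tools are just prefixed gcc binaries. A quick sanity check, guarded so it is a no-op where the Debian/Ubuntu toolchain above is not present:

```shell
# Cross-compile a trivial program and inspect the result's architecture.
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello from ARM"); return 0; }
EOF

if command -v arm-linux-gnueabihf-gcc >/dev/null 2>&1; then
    arm-linux-gnueabihf-gcc -o hello hello.c
    file hello   # expect: ELF 32-bit LSB executable, ARM, EABI5 ...
fi
```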


I recently built a cross-compiler for a used Kindle 3 I got from a friend. Its kernel is from 2008, but you can still read books with it just fine. I was thinking about turning it into a mobile terminal I can use in direct sunlight (e-ink is rad), and using precompiled hacks from the MobileRead forums I got ssh on the device pretty quickly.

But as soon as I wanted to compile my own binaries, things weren't so easy anymore. The glibc on there is too ancient for modern cross-compilers (like the one installed by apt). It can't be replaced because the kernel is too ancient for a modern glibc. Replacing the kernel would probably fail due to lack of compatible drivers.

The sources for the original cross-compiler, the kernel and various libraries were actually quite easy to find on Amazon's website, but they were incredibly difficult to build. Because that GCC used some GNU C extensions that changed semantics in modern GCCs, it didn't even compile without patching the source. The build step for glibc involved filtering the assembly produced by the compiler through a sed script, which of course silently failed when the newer compiler produced different output.

All in all, it took me about two weeks of experimentation until I had a working cross-toolchain.


Did you put this up somewhere?

There are probably at least a couple of other people who might find it useful.


I did something similar for a Kindle 4, and it wasn't that hard: I just had to patch a few of the makefiles and I got GCC, but I was then unable to build it to run on the Kindle itself.


Wouldn't this madness be solved by documenting the versions of the various required tools?


Give me a binary and I hack for a day. Teach me to build a binary and I code for a lifetime.

(Just in case you didn't get the allusion: https://quoteinvestigator.com/2015/08/28/fish/)


You mean:

Teach me how to build a compiler binary, and I'll be building compiler binaries for a lifetime.

which is somewhat less inspirational. Even less so when replacing "compiler binary" by "other people's code".

I think the main problem with your analogy is that in IT, the "fish" are free.


No. I mean: Teach me how to build a compiler binary, and I'll be able to build pretty much whatever binaries I need whenever I need them for a lifetime. (What's the point of having a compiler at all if you're not going to build your own binaries?)

> the "fish" are free

That's true. But if you know how to build them you can make new kinds of fish. That can come in handy if you don't like the free fish.


> (What's the point of having a compiler at all if you're not going to build your own binaries?)

By 1) minimising the number of times you need to compile things yourself and 2) making it easy when you do compile yourself (e.g. easier than autoconf/automake, though even those are examples with that goal in mind).

> That's true. But if you know how to build them you can make new kinds of fish. That can come in handy if you don't like the free fish.

It's not difficult to learn how to make these new fish, but learn it when it's the best or required (i.e. only) solution.

A better analogy is that you have a hammer at your disposal, but you refuse to use it because learning to use your feet instead seems like useful practice. It's not; it's only useful the one time you don't have a hammer (which you usually have around). Instead, try to keep your hammer with you.


Spending all day on a fishing boat when all you want is a fish is an apt analogy to building GNU cross toolchains for embedded targets.


Indeed so. And fishing is not for everyone. But some people find value in it.


So more like teach me how to build a fishing rod?


No. That would be teaching you how to write a compiler.


Yeah but if you are downloading a compiler, then that sort of implies that you know how to build binaries :)


Sure, if you count knowing how to type "make" as "knowing how to build binaries."


Yes, because every project uses the same build tools, right?


Give me a binary and I run it. Teach me to build the binary and I am fixing bugs for a lifetime.


Better than just living with bugs for a lifetime IMHO.


"give a man a lump of sugru..."


Unrelated variant: Give a man fire and he'll be warm for a night. Set a man on fire and he'll be warm the rest of his life.

;)


lol you explained the fishing proverb


I found cross-compiling makes linking against weird stuff hard, for example the Raspberry Pi's proprietary Broadcom libs. You can chroot directly into a Raspberry Pi image using a simple script with qemu-user-static, and that guarantees 100% linking compatibility.
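A minimal sketch of that chroot trick, assuming the qemu-user-static package is installed with binfmt_misc registered, and that the image's rootfs is on its second partition; the image path and mount point are hypothetical:

```shell
#!/bin/sh
# Sketch: enter a Raspberry Pi image's rootfs with qemu-user emulation.
set -e

enter_pi_chroot() {
    img=$1
    mnt=$2
    loop=$(losetup --show -fP "$img")   # map image partitions to loop devices
    mount "${loop}p2" "$mnt"            # p2 is usually the Raspbian root fs
    # The static ARM emulator inside the chroot lets ARM binaries run
    # transparently on the x86 host via binfmt_misc.
    cp /usr/bin/qemu-arm-static "$mnt/usr/bin/"
    chroot "$mnt" /bin/bash
}

# Only attempt this as root and with an image actually present.
if [ "$(id -u)" -eq 0 ] && [ -f raspbian.img ]; then
    enter_pi_chroot raspbian.img /mnt/pi
fi
```

Anything you build and link inside that chroot uses the image's own glibc and Broadcom libs, which is what guarantees the compatibility.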


I found the same for anything remotely complicated. I prefer just using a VM.


If you need configuration options different from what the packages provide, or a newer version than they ship.


The article mentions the need to account for a different version of glibc in Raspbian. Does this approach also solve that problem, or do you need to run the same version of Debian on your main machine?


In my experience, compiling on a machine with an older version of Linux/glibc should work fine. The other way around, not so much.
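One way to check this in advance is to look at the newest glibc symbol version a binary references; the target's glibc must be at least that new to run it. A sketch assuming binutils' objdump is available (the example binary is just whatever is at hand):

```shell
# Print the newest GLIBC_x.y symbol version a dynamically linked
# binary depends on.
glibc_needed() {
    objdump -T "$1" | grep -o 'GLIBC_[0-9.]*' | sort -uV | tail -n 1
}

# Example, guarded so it is a no-op where objdump is unavailable:
if command -v objdump >/dev/null 2>&1 && [ -x /bin/ls ]; then
    glibc_needed /bin/ls
fi
```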


Now I need to install apt and whatever dnf is and whatever dependencies they have and so on...


Do people still fight with dependencies?

Apt or dnf or some other package manager (like yum or pacman) comes preconfigured and preinstalled on all but the most esoteric Linux distros, so unless you're doing something really unusual, if you're fighting with dependencies I would assume something has gone wrong.

Which is to say, it is certainly still possible to fall into dependency hell in this day and age, but the parent response suggests a certain fear of installing things on Linux because of dependencies. I hear Windows still has this problem, decades later, and while Linux has its own share of foibles, installing pre-packaged software is generally not the exciting/frustrating part. (Which Docker has made even easier!)


A recent example for me: I want to install some stuff on an embedded Linux that runs opkg and doesn't have apt. All of the instructions for the thing I want to build just point to apt install commands. They do provide instructions to build from source, but then I am immediately in dependency hell. It isn't clear at all how to get all the dependencies even installed, much less properly, as they in turn have their own dependency-hell instructions.

And as a developer who primarily develops on Windows: Windows itself doesn't really have any "dependency" problem. It's all the people who basically write their software on and for Linux and then haphazardly try to make it work on Windows through a series of hacks while claiming support for Windows.

When you come across maintainers who take both Linux and Windows seriously, you just run an installer and get on with your life.


opkg itself looks like a competent package manager. If the software selection in the repositories for the device you were working with is any good, it might take some work to find equivalent packages, but opkg should still handle dependency resolution.

Then again, the more esoteric the platform, or the less frequently a particular piece of software is wanted on that platform, the more friction you'll have to overcome.

Find a Windows for MIPS (or whatever that embedded Linux is running on) and tell me that getting equivalent software running is easier.


Uh, why would you install a system package manager when you already have one?


Exactly, why bother?


I found the approach taken by Yocto quite powerful: it can be used not only to compile the kernel and all packages for the target system but also to build a matching SDK (a cross-compiler with matching libraries).

The learning curve is quite steep though. For the Raspberry Pi, this is an excellent getting-started guide: http://jumpnowtek.com/rpi/Raspberry-Pi-Systems-with-Yocto.ht...


I really wish I had read this a year ago when I was struggling with cross-compiling Qt for the Raspberry Pi and instead built it natively, which took 3 days.


Well, I've actually spent 3 days trying to get Qt5 to cross-compile for a custom ARM platform. It might be easier for something common like the Raspberry Pi, but still, you may have dodged a bullet there. ;)

The worst thing about cross-compiling Qt is that I always run into different issues with new releases, and it's not even always consistent between builds. I ended up renting a 64-core VPS to build Qt so that I could iterate faster through build failures, but it's still at least half a day's work to get it to build properly.


I hear ya, especially with respect to upgrades. Without fail, the MySQL plugin will be missing, QtWebSockets won't build automatically, and even now on Ubuntu 18.04, which ships with OpenSSL 1.1.0, I discovered all my applications could no longer use HTTPS properly until I downgraded libssl to 1.0.2.

Also: I'm assuming the 64-core machine is ARM? Which provider do you use for that?


I'm cross-compiling, meaning I'm not compiling on ARM. ;)

It might be helpful for 64-bit ARM though, but my target architecture is 32-bit, and because I need WebEngine I suspect it will not compile on a 32-bit platform. The problem is that linking chrome/webengine requires more memory than can be addressed in a 32-bit address space.

I just used google cloud.


There are also crosstool-ng and buildroot for this, as well as pre-built compilers by Linaro and Bootlin (formerly Free Electrons). Sorry for the lack of links; extremely limited connection right now.




Damn GCC, which you have to build for each single architecture and OS combo. Clang, being intrinsically a cross-compiler, is much better and far less of a hassle.


Agreed.

I'm actively trying to replace GCC+binutils with LLVM-based tools in Android (and further, at Google). Currently working through some issues for the kernel, more so with LLVM's assembler and linker than with Clang.


Building a cross-compiling toolchain is a good exercise, but if you need a whole suite of tools/libs, check out LLNL's Spack. Spack automates building compilers/toolchains/libs from source, has hundreds of package definitions, and supports cross-compiling. Kind of like Gentoo, but usable on any Linux/Mac platform.

http://spack.readthedocs.io/en/latest/basic_usage.html
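A sketch of the basic Spack flow (package names are examples, and it is guarded since Spack may not be installed):

```shell
# Typical Spack usage: build a compiler from source, then build
# libraries with it.
SPACK_COMPILER=gcc
if command -v spack >/dev/null 2>&1; then
    spack install "$SPACK_COMPILER"        # build GCC itself from source
    spack compilers                        # list the compilers Spack knows about
    spack install zlib %"$SPACK_COMPILER"  # build zlib with that compiler
fi
```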


Wish I had this when I was building gstreamer for the raspberry pi, it took ages!

I tried various solutions like QEMU, but the performance was about the same as on a real Pi.


Not all software supports being cross-compiled. I can't remember if the problems I had with gstreamer were due to it not supporting cross-compilation or because I was trying to build Raspbian-compatible .deb files.


Could you at least use distcc and scale it out horizontally?


I've built a GCC-based cross toolchain for m68k (m68k-linux-gnu, IIRC) and it was very easy with crosstool-ng (ct-ng).

I would recommend building a cross-toolchain by hand just a couple of times to understand the whole process, but then just use ct-ng or something similar: it gets the job done and it's very configurable (ABI, target architecture, libraries, etc.).
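The usual ct-ng flow looks roughly like this; the sample name follows the parent comment and may differ from current ct-ng samples, and the whole thing is guarded since ct-ng may not be installed:

```shell
# crosstool-ng builds a full cross toolchain from a sample configuration.
CT_SAMPLE=m68k-unknown-linux-gnu
if command -v ct-ng >/dev/null 2>&1; then
    ct-ng list-samples    # show the pre-tested target configurations
    ct-ng "$CT_SAMPLE"    # load that sample's config
    ct-ng menuconfig      # optional: tweak ABI, libc, kernel headers
    ct-ng build           # toolchain lands under ~/x-tools/ by default
fi
```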



There's also the Arch Linux ARM way, where you control the build/linking from the target device, but the compilation of object code is done by a cross-compiler over the network.


Linaro has really good premade cross-compilers for ARM.



