It really is just a bad idea. Or at least one that is working against ideas central to the way Linux is currently used.
Universal/fat binaries made sense on the Macintosh because there is no concept of program installation on that platform. While I think that eschewing installation is generally a better design, one drawback is that if you want to support multiple architectures in one application, you have to do the architecture check when the program is loaded.
Central to Linux and Windows is the idea of program installation, either through packages or installer programs. No one is interested in making it so that you can drag and drop items in Program Files or /usr/bin between systems and expect them to run, which is the only thing that using fat binaries really gets you over other solutions.
Nearly all of the commercial binary-only software I have seen on Linux (and other Unixes) uses an installer program, just like Windows. There is no technical reason why such an installer couldn't determine which architecture to install.
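To illustrate: here is a minimal sketch of that idea in Python (the payload layout and the name "myapp" are made up), since all the installer really needs is the equivalent of `uname -m`:

    #!/usr/bin/env python
    # Sketch of an installer choosing which prebuilt binary to copy.
    # The directory layout (bin-i386/, bin-x86_64/, bin-ppc/) is hypothetical.
    import platform
    import shutil
    import sys

    # Map the kernel's machine string to the directory shipped in the package.
    ARCH_DIRS = {
        "i386": "bin-i386",
        "i486": "bin-i386",
        "i586": "bin-i386",
        "i686": "bin-i386",
        "x86_64": "bin-x86_64",
        "ppc": "bin-ppc",
    }

    def install(dest="/usr/local/bin/myapp"):
        arch = platform.machine()          # e.g. "x86_64", same as `uname -m`
        try:
            src = ARCH_DIRS[arch] + "/myapp"
        except KeyError:
            sys.exit("Unsupported architecture: %s" % arch)
        shutil.copy2(src, dest)            # copy only the matching binary
        print("Installed %s build to %s" % (arch, dest))

    if __name__ == "__main__":
        install()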
Not quite. Current Linux packaging formats encourage the developer to build one package per architecture. This means presenting several download choices to the user, which can be confusing: the user doesn't always know what his architecture is.
The problem can be solved in two ways:
1) Distribute through a repository and have the package manager auto-select the architecture. However, this is highly distribution-specific; if you want to build a single tar.gz that works on the majority of Linux distros, you're out of luck.
2) Compile binaries for multiple architectures, bundle everything into the same package, and have a shell script or whatever select the correct binary.
While (2) is entirely doable and does not confuse the end user, it does make the developer's job harder. He has to compile binaries multiple times and spend a lot of effort on packaging. Having support for universal binaries throughout the entire system, including the developer toolchain, eliminates not only the confusion for the end user but also the hassle for the developer. On OS X I can type "gcc -arch i386 -arch ppc" and it'll generate a universal binary with two architectures. I don't have to spend time setting up a PPC virtual machine with a development environment in it, or set up a cross compiler, just to compile binaries for PPC; everything is right there.
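For completeness, a sketch of the launcher half of approach (2), again in Python and with made-up file names (myapp.i386, myapp.x86_64, myapp.ppc shipped side by side): the wrapper simply execs whichever binary matches the running machine, which fakes a fat binary with no kernel support at all:

    #!/usr/bin/env python
    # Sketch of the option (2) launcher: the package ships one binary per
    # architecture next to this wrapper, which execs the matching one.
    import os
    import platform
    import sys

    APP_DIR = os.path.dirname(os.path.abspath(__file__))

    def normalize(machine):
        # Collapse the ix86 family to a single label.
        if machine in ("i386", "i486", "i586", "i686"):
            return "i386"
        return machine

    def main():
        arch = normalize(platform.machine())
        binary = os.path.join(APP_DIR, "myapp." + arch)
        if not os.path.exists(binary):
            sys.exit("No build of myapp for this architecture (%s)" % arch)
        # Replace this process with the real binary, forwarding arguments.
        os.execv(binary, [binary] + sys.argv[1:])

    if __name__ == "__main__":
        main()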
I think the ultimate point is not to make impossible things possible, but to make already possible things easier, for both end users and app developers.
Somebody has to test and debug your app on actual PPC hardware.
Our Xcode-supported unit tests transparently run three times -- once for x86_32, once for x86_64, and once for PPC. The PPC run occurs within Rosetta (i.e., emulated).
If the tests pass, we can be reasonably sure everything is A-OK. In addition, we can do local regression testing under Rosetta (but it's rarely necessary -- usually everything just works).
The only native PPC testing we do is integration testing once we reach the end of the development cycle.
I doubt anyone would have a problem with toolchain support being added. But you don't need a kernel patch to fix the user end of the equation. It doesn't add anything that can't be provided just as conveniently (or more so - the shell script approach doesn't require any changes on the user's side) without it.
To your point one, I agree. I also fail to see how universal binaries help. The problem isn't with supporting multiple architectures, it's with supporting multiple distributions.
Regarding point two, the developer is stuck with the packaging hassle regardless. The binary goes in one place, config files and man pages in others, and maybe you want a launcher in the GNOME and KDE menus... You are stuck with writing an install script anyway.
Yes, universal binaries do not help when it comes to supporting multiple distributions. However, I have a problem with the fact that Linux people reject the entire idea outright as "useless". This same attitude is the reason why inter-distro binary compatibility issues still aren't solved. Whenever someone comes up with a solution for making inter-distro compatible packages or binaries, the same knee-jerk reaction happens.
And yes, the developer must take care of packaging anyway. But that doesn't mean packaging can't be made easier. If I can skip the "set up a cross compiler for x86_64" step, then all the better.