This was a good idea when it was part of the Application Bundle spec (or even 68k/ppc fat Mac applications), and it's still a good idea.
That's why I doubt it'll get much traction[0]. ELFs still don't even have an accepted embedded icon standard FFS.
Anyways, imagine what the world would be like if fat binaries were the norm and your OS guaranteed support for a "virtual architecture" that you could also compile to, that had the same interface to the OS as any native application. Then you could publish a binary containing native versions for all existing architectures and be assured that it would also work for all future ports of the OS to other architectures without the need for recompilation.
You could basically do this today with Linux, actually. Just pick one of QEMU's many targets to standardize on (and of course also standardize on a base platform of libraries with a stable ABI) and use it with the dynamic syscall translation.
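To make that idea concrete, here's a minimal sketch (Python, purely illustrative) of how a loader shim for a fat container could pick a slice. The on-disk format below (a magic string plus fixed-size (arch, offset, size) records) is invented for this example and is not FatELF's actual layout; the "virtual" slice stands in for the guaranteed virtual architecture, which a real loader would hand to something like qemu-user instead of running directly.

```python
# Hypothetical fat-container loader sketch. The format (magic + record
# count + fixed-size records) is invented for illustration; it is NOT
# FatELF's real on-disk layout.
import os
import platform
import struct
import sys

MAGIC = b"FATB"
RECORD = struct.Struct("<16sQQ")  # arch name (NUL-padded), offset, size

def pick_slice(path):
    """Return (arch, (offset, size)) for the slice matching this machine,
    falling back to a 'virtual' slice meant for emulation."""
    with open(path, "rb") as f:
        if f.read(4) != MAGIC:
            raise ValueError("not a fat container")
        (count,) = struct.unpack("<I", f.read(4))
        slices = {}
        for _ in range(count):
            name, offset, size = RECORD.unpack(f.read(RECORD.size))
            slices[name.rstrip(b"\0").decode()] = (offset, size)
    native = platform.machine()          # e.g. "x86_64", "aarch64"
    if native in slices:
        return native, slices[native]
    if "virtual" in slices:              # the arch every port must support
        return "virtual", slices["virtual"]
    raise RuntimeError("no usable slice for %s" % native)

def run(path, argv):
    arch, (offset, size) = pick_slice(path)
    # Copy the chosen slice out and exec it. A real loader would map it
    # directly; for the "virtual" slice it would exec an emulator
    # (e.g. qemu-user) with the extracted binary as its argument.
    out = "/tmp/slice-%s" % arch
    with open(path, "rb") as f, open(out, "wb") as o:
        f.seek(offset)
        o.write(f.read(size))
    os.chmod(out, 0o755)
    os.execv(out, [out] + argv)

if __name__ == "__main__":
    run(sys.argv[1], sys.argv[2:])
```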
[0] Turns out I was right, as this is an abandoned project. I wish I wasn't able to predict such things via pure pessimism.
> ELFs still don't even have an accepted embedded icon standard FFS
Also Apple does not embed icons in their binaries. Their app bundles are not binaries, they are a directory structure. The icon is just another file, just like the _actual_ executable(s).
> you could publish a binary containing native versions for all existing architectures
This sort of ignores the hardest part of shipping binaries: linked libraries. Dynamically linking everything is simply not always feasible. Not to mention libc.
Also I don't really understand why anyone on Linux would want this. The fact that I can recompile all of the software I use, is a really important feature to me and not a distribution problem. I can see why Apple wanted this to simplify distribution via their Appstore, but IMO that's mostly to work-around their specific distribution problems. I don't see any of those problems on Linux.
> Their app bundles are not binaries, they are a directory structure.
Yes, but on Linux no file manager understands directories as bundles (except perhaps GNUstep's GWorkspace).
> Also I don't really understand why anyone on Linux would want this.
Because they want to distribute binaries themselves or via a 3rd party distribution site (ie. not part of a linux distribution) without having the user compile the code themselves (either out of convenience or because they do not want or cannot distribute the source code).
Having said that, this is mainly useful when you want to distribute a single binary that supports multiple architectures. Almost everything is distributed in archives (even self-extracting archives can be shell scripts - although, annoyingly, software like GNOME's file manager makes this harder), so you can use a shell script to launch the proper binary without kernel support (see the sketch below).
Which wouldn't really be much of a problem for its users if it weren't for GTK breaking backwards compatibility between 2 and 3 (but that is another painful topic).
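A minimal sketch of the launcher idea mentioned above (shipping per-architecture binaries in an archive plus a small dispatcher). The naming convention (`myapp.x86_64`, `myapp.aarch64`, ...) is an assumption for the example; in practice this would be a few lines of shell, but it's written in Python here for consistency with the other sketches.

```python
# Poor man's fat binary: a launcher that picks the matching per-arch
# binary from its own directory. The naming convention is hypothetical.
import os
import platform
import sys

def main():
    here = os.path.dirname(os.path.abspath(__file__))
    target = os.path.join(here, "myapp." + platform.machine())
    if not os.path.exists(target):
        sys.exit("no binary for architecture %s" % platform.machine())
    os.execv(target, [target] + sys.argv[1:])

if __name__ == "__main__":
    main()
```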
> I can see why Apple wanted this to simplify distribution via their Appstore, but IMO that's mostly to work-around their specific distribution problems. I don't see any of those problems on Linux.
This predates the Appstore by a huge margin. They added universal binaries to make the transition between 32bit and 64bit seamless. And it worked really well actually.
Soon after, tools popped up to reduce binary sizes by stripping out the 64-bit or 32-bit part.
The other part that's a bit special is that Apple has the special variables @executable_path, @loader_path and @rpath in its linker options, along with install_name_tool, which allowed(s?) you to rewrite a library's system path to an application-specific one. That let you bundle the necessary libraries with a linker path relative to the executable or app resource path. I think this has gotten better recently, but pretty much everyone struggled with it at the beginning.
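For readers who haven't dealt with this, here's a sketch of what that bundling step typically looks like, driven from Python purely for illustration. The bundle layout and library names are made up; the install_name_tool and otool invocations are the standard ones, but check `otool -L` output on your own binary rather than trusting the paths shown here.

```python
# Illustrative only: rewrite a bundled dylib reference so the app finds
# the library relative to the executable instead of a system path.
# All paths and library names here are hypothetical.
import subprocess

APP_EXE = "MyApp.app/Contents/MacOS/MyApp"
BUNDLED = "MyApp.app/Contents/Frameworks/libfoo.1.dylib"
OLD_REF = "/usr/local/lib/libfoo.1.dylib"        # what the linker recorded
NEW_REF = "@executable_path/../Frameworks/libfoo.1.dylib"

# Give the bundled copy an install name relative to the executable.
subprocess.check_call(["install_name_tool", "-id", NEW_REF, BUNDLED])

# Point the app's load command at the bundled copy instead of the system one.
subprocess.check_call(["install_name_tool", "-change", OLD_REF, NEW_REF, APP_EXE])

# Verify: the app should now list the @executable_path reference.
print(subprocess.check_output(["otool", "-L", APP_EXE]).decode())
```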
On Linux this was basically outsourced to system packaging. Developers left it to the distro, whereas in the Mac environment, because of the lack of such packaging, the burden was placed on whoever is distributing the software - making people think twice about what they link.
> They added universal binaries to make the transition between 32bit and 64bit seamless.
Not just 32 vs 64-bit, but entire architectures. Mach-O universal binaries originated at NeXT, where at one time a binary could (and many did) run on SPARC, PA-RISC, x86 and 68K. On http://www.nextcomputers.org/NeXTfiles/Software you can see this in their filename convention: the "NIHS" tag tells you which architectures (NeXT 68K, Intel, HP, SPARC). The binary format carried over into OS X, where it was secretly leveraged as part of Marklar for many years.
In fact, Universal even on OS X really meant PowerPC and i386 at the beginning of the Intel age. It eventually morphed into the present meaning. I even maintained a fat binary with ppc750, ppc7400 (that is, non-AltiVec and AltiVec) and i386 versions.
>Also I don't really understand why anyone on Linux would want this. The fact that I can recompile all of the software I use, is a really important feature to me and not a distribution problem. I can see why Apple wanted this to simplify distribution via their Appstore, but IMO that's mostly to work-around their specific distribution problems. I don't see any of those problems on Linux.
Couldn't agree more, and yet Snap and Flatpak exist. It's probably so that third parties can package closed-source stuff for all distros easily. These days one of the first things I do on a fresh Ubuntu install is get rid of snapd, because they use it for things where it's useless (e.g., gnome-calculator). If someday they stop packaging the apps directly I'll probably finally go back to Debian.
It isn't just for closed source stuff. Some developers actually care about the user experience and don't want to have to tell people "sorry, you have to wait until someone comes along and decides to package that for your distro, or compile it from source!".
> Also Apple does not embed icons in their binaries. Their app bundles are not binaries, they are a directory structure. The icon is just another file, just like the _actual_ executable(s)
You actually can put an icon into the resource fork of a Mach-O binary, and it’ll show up in the Finder and Dock (assuming the executable turns itself into a GUI app).
It’s an uncommon thing to do, but QEMU uses it, and unfortunately I don’t think there’s another way to embed an icon in a bare Mach-O binary.
The current Apple application structure (the .app directories that originated in NeXTSTEP) isn't what the previous poster was referring to -- the binaries in traditional (pre-OS X) Mac OS weren't like this; they were single files that could run on either Motorola 68000-series chips or (in the 1990s) IBM's PowerPC chips.
> The fact that I can recompile all of the software I use, is a really important feature to me and not a distribution problem.
I've always found this to be an interesting observation about free software. So many complicated things like FatELF, dll-hell are just straight up _not_ an issue when you're working in a source code world where you just compile the software for the machine you're using it on.
Most of the efforts around FatELF, Flatpak, etc. seem to be driven by the desires of corporations who want to ship proprietary software on Linux, and as such need better standardization at the binary level rather than the software level.
It's a win for Free Software in my mind, that we shouldn't typically have to worry about this added complexity. Just ship source code, and distributions can ship binaries compiled for each specific configuration that they choose to support.
Note that source code access and FOSS are orthogonal. AFAIK on older Unix systems, software you'd buy would often come in source code form. In fact, in the past several Linux distributions shipped a lot of such software.
As an example Slackware distributes a shareware image viewer/manipulator called xv (which was very popular once upon a time): http://www.trilon.com/xv/
It is the license that makes something FOSS, not being able to compile/modify the source code.
Well, except I work on a large open source project and we have to blacklist random versions of gmp and gcc that our code doesn't work with due to bugs.
And, we can't reasonably test with every version of the compiler and libraries, so we just have to wait for bug reports, then try to find out what's wrong.
Whereas I pick one set of all the tools, make a docker image, and then run 60 cpu days of tests. No Linux distro is going to do that much testing.
> So many complicated things like FatELF, dll-hell are just straight up _not_ an issue when you're working in a source code world where you just compile the software for the machine you're using it on.
Said like someone who has never actually had to compile someone else's software. Why do you think so many projects these days have started shipping Docker containers of their build environment? Why are there things like autoconf?
> Also Apple does not embed icons in their binaries. Their app bundles are not binaries, they are a directory structure. The icon is just another file, just like the _actual_ executable(s).
Pedantry. You could mount an ELF as a filesystem if you had any desire to. Structures are just structures.
> This sort of ignores the hardest part of shipping binaries: linked libraries.
Time has shown that dynamic linking all the things is a terrible idea on many fronts anyway. Why do you think there's all this Docker around and compiling statically is on an upward trend?
The solution is simple: DLLs for base platform stuff that provides interfaces to the OS and common stuff, statically compile everything else. Then the OS just ships a "virtual arch" version of the platform DLLs in addition to native on every arch.
The reason the Linux Community don't want this sort of thing is that, frankly, they just hate stability. I mean, the Kernel is stable (driver ABI excepted), but basically nothing outside of that is.
> The reason the Linux Community don't want this sort of thing is that, frankly, they just hate stability.
I'd argue that the reason the Linux Community doesn't want this is that it introduces maintenance burdens on the community that only really serves to support corporations shipping proprietary software.
I really don't care about making proprietary software easier on Linux, but I do care about Linux having to carry the baggage of backwards compatibility like Windows has had to handle, just so that Google can deliver Chrome as a binary more reliably.
> reason the Linux Community don't want this sort of thing is that, frankly, they just hate stability
And yet I fearlessly upgrade my Linux system at any time. With OSX you first have to check if the software you use is at all compatible, especially if you use proprietary software.
> And yet I fearlessly upgrade my Linux system at any time
Only because you've never encountered an issue due to an upgrade, probably because your use cases are so mainstream and minimal that you've never had to use applications that aren't in the repo and well tested before release. If you look around and aren't wearing blinders, you'll notice that a lot of people do have problems from upgrading.
> your OS guaranteed support for a "virtual architecture"
IBM i (the OS formerly known as OS/400) and TIMI say hi. Under the hood it gets transparently recompiled to the target architecture. "Customers were able to save programs off their CISC AS/400s, restore them on their new RISC AS/400s, and the programs would run. Not only did they run, but they were fully 64-bit programs."
>Then you could publish a binary containing native versions for all existing architectures and be assured that it would also work for all future ports of the OS to other architectures without the need for recompilation.
This may be a closed vs open-source applications split which would then explain the Windows/OSX vs Linux split that seems to happen. On Windows/OSX the property that you can run the same build forever because everything is backwards compatible is appreciated. On Linux most people are downloading a distro that includes most or all the software they will be running. So you don't care if the old app build still works in the new OS because there's not even that distinction between what are apps and what is the OS.
Once you're in that frame of reference, having an icon standard in ELF doesn't add anything. What I really care about is that the whole Debian archive continues to be available and maintained so stuff doesn't bitrot. Other people have other tastes of course (hence Snap/Flatpak), but apparently not enough have wanted and worked on it for it to happen.
> On Windows/OSX the property that you can run the same build forever because everything is backwards compatible is appreciated. On Linux most people are downloading a distro that includes most or all the software they will be running.
One of these two methodologies requires an army of middlemen compiling and packaging applications for people. And if a middleman didn't compile it, well, fuck you I guess. It's a nightmare for users who want to run things no one has packaged yet, and it's a nightmare for developers who just want to ship their product to a user. It's the reason Flatpak, Snap, and AppImage exist.
> Other people have other tastes of course (hence Snap/Flatpak), but apparently not enough have wanted and worked on it for it to happen.
Not enough people in the Linux Desktop Community, you mean. Which might be why that community is so tiny.
I could argue the other side, as there's definitely value in a well assembled distro. But I won't because you're essentially complaining about what other people do with their own time. If you don't like it don't use it. Complaining about it with insults is a crappy thing to do and one of the reasons open-source maintainers burn out.
I don't use it. That's one of the big reasons I don't use it. And just as they are free to do with their time as they like, I'm free to say "my but that's an inefficient way to do things".
You're free not to like me for calling it out, and you're free to downvote me for disagreement. Freedom.
I didn't insult anybody, the "fuck you" here is what the community says to people who want to install things that aren't in the repo.
The community says no such thing, that's the insult. The community even has multiple options current and past to reduce frictions on using things that aren't packaged, or make packaging easier and updates faster. Your characterization of what a distro is and how it treats its users is insulting exactly because it completely discounts the huge effort that's put into solving the complex issues and the advantages it brings. You're not offering an argument for how things work today, their tradeoffs and what could be done better. You're not offering time or other resources to help. You're just attributing vile behaviors to other people while offering nothing of value.
> The community says no such thing, that's the insult. The community even has multiple options current and past to reduce frictions on using things that aren't packaged
Some parts of the community do, namely the ones that came up with AppDirs, AppImages, and to a lesser extent Flatpak and Snap. The fact that these have not been widely adopted by the community over the past decade of their existence says all that needs saying about the community at large.
Instead, what happens? Libraries routinely break ABIs, paths are hardcoded at compile time. Etc. If you complain about such things what do you get? An earful about how you should just use the package manager.
> Your characterization of what a distro is and how it treats its users is insulting
My characterization of distro's treatment of users is insulting because it characterizes their treatment of users as insulting... ok, sure. Then I guess it is an insult, and I feel no need to apologize for that.
> exactly because it completely discounts the huge effort that's put into solving the complex issues and the advantages it brings.
That's mostly because I can't really think of any benefits. I suppose a centralized location to obtain software? That's... something I guess. Otherwise it is a giant, over-engineered mess of a system that largely only serves to solve problems it has created for itself.
> You're not offering an argument for how things work today
These things exist today, all that has to happen is for the community to start using them.
> You're not offering time or other resources to help.
See project this comment thread is about. They tried to help, the community told them to shove it. That's a story I've seen repeated many times, from embedded ELF icons to the many variations of AppDirs. There is nothing left to do but call the community out on why their crap is crap.
Maybe I should just keep quiet about it? Yeah, maybe. I'm either preaching to the choir or committing horrific sacrilege depending on which perspective you're reading this from. But I guess I take it personally that there are so many people so intent on computing being awful.
> The fact that these have not been widely adopted by the community over the past several decades of their existence says all that needs saying about the community at large.
The fact that someone hasn't adopted your favorite solution is not evidence of evil intentions. Maybe, just maybe, it's evidence people don't like that solution.
>My characterization of distro's treatment of users is insulting because it characterizes their treatment of users as insulting... ok, sure.
The insult is taking something that's a debatable point (we'd be better off with Flatpaks) and turning that into "if you haven't yet turned all your efforts into doing what I think is best, you're obviously user hostile". That's not an argument.
>That's mostly because I can't really think of any benefits.
Maybe you should have asked someone what those were before you concluded this was an evil plot to screw over users.
> there are so many people so intent on computing being awful
You keep making accusations about people's intentions and find it strange that no one is willing to have a technical argument or work for free on what you'd like to happen. If you want a distro that's just a base and has everything as Flatpaks, go ahead. Fork/replace/remove anything you like. But you're not owed anything from people who have decided to do things in their free time, and accusing them of evil intentions does nothing to help your case.
> The fact that someone hasn't adopted your favorite solution is not evidence of evil intentions. Maybe, just maybe, it's evidence people don't like that solution.
It's evidence that the Linux Desktop community doesn't like that solution. That Linux still has a paltry share on the desktop could be taken as evidence that no one else likes their solution.
> The insult is taking something that's a debatable point (we'd be better off with flatpacks) and turning that into "if you haven't yet turned all your efforts into doing what I think is best you're obviously user hostile". That's not an argument.
I don't need to point to package management to show evidence of user hostility, I just need to visit anywhere people ask questions about Linux. Even people who actually like Linux encounter the user-hostile attitude, just check out Gnome's github issues and related comments.
Or you could consider every time you've heard "users don't need that", "users just want <simple lazy strawman usecase>", "users can't be trusted to ...", "users don't like it because it isn't shiny enough", etc. The Linux Desktop community has a cult of user hostility.
> Maybe you should have asked someone what those were before you concluded this was an evil plot to screw over users.
Oh believe me, I know the arguments. I get to hear them literally every time I say "I don't use Linux because I don't like package management". I just disagree with those arguments.
> You keep making accusations about people's intentions and find it strange no one is willing to have a technical argument
Even when you're nice about it people generally aren't willing to have a technical argument about it. I'm passionate about this topic, and having to watch for 2 decades now as people have tried to fix these issues only to be rejected and shunned by the community has made me a bit bitter.
> or work for free on what you'd like to happen.
This is always trotted out when all the other arguments fail. It's a fair point that I have no right to demand that people work for free, but I don't believe I've done that. What I've done is criticize the work they've done for free because it isn't what I want, and they keep asking things like "why isn't it the year of the Linux Desktop yet?". The people who have worked for free towards the things I want have been roundly shunned by the community. Case in point, the article this is all about.
No, Java is a language and virtual machine coupled together that abstracts away the entire underlying platform. The point of the "virtual architecture" I describe is to keep the OS un-abstracted and only abstract away the hardware architecture underneath it.
Seems they were a little ahead of their time. Now no one cares about binaries that ship on multiple distros because that was fixed with containerisation / snappy / flatpak / docker. But now people actually care about arm and want their docker to work there.
And no one really cares about the CD image size either. :)
Why doesn't everyone just move toward a standard where all executables can be a ZIP file with some glue, and every execution results in the OS unzipping into a cache folder matching the hash of the zip? The OS could also scan a set of directories for executables and pre-extract all of them, and then you could run a command which reverse-matches hashes to executables, so that for example, if you have an app and it's built against a dependency with a specific hash, it can find that hash on the system regardless of the file name.
So you could have executables just sprawled out everywhere on disk, but executing one program would only result in the exact dependencies being loaded, no conflicts. At the same time, you could reference an executable name instead of a hash to use the latest installed version. So you could run an app with "as-originally-compiled-dependencies" or "latest-installed-dependencies". And if you supported recursively resolving zips inside other zips, you could deliver an entire app stack with one file.
As far as avoiding a cache directory, it should be possible to mmap() each specific file in the [uncompressed] zip into memory, have a filesystem driver translate a zip index into a virtual filesystem, and then reading from the virtual filesystem would just read from those mmap()ed sections of memory. That may be stupid, though, so perhaps the zip file can just be a delivery mechanism and a flat-file standard along with kernel modules can provide the rest of the utility?
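A rough sketch of the cache-by-hash part of that idea. It assumes a made-up convention that the zip carries its entry point at bin/main; everything else (the manifest format, resolving dependencies by hash, the virtual-filesystem variant) is left out.

```python
# Sketch of "executable = zip, unpacked into a cache keyed by its hash".
# The bin/main entry-point convention is an assumption for this example.
import hashlib
import os
import subprocess
import sys
import zipfile

CACHE = os.path.expanduser("~/.cache/zipexec")

def run_zip(path, argv):
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    target = os.path.join(CACHE, digest)
    if not os.path.isdir(target):
        # First run of this exact build: unpack it once, keyed by content
        # hash, so identical zips anywhere on disk share one extracted copy.
        os.makedirs(target, exist_ok=True)
        with zipfile.ZipFile(path) as z:
            z.extractall(target)
    entry = os.path.join(target, "bin", "main")
    os.chmod(entry, 0o755)
    return subprocess.call([entry] + argv)

if __name__ == "__main__":
    sys.exit(run_zip(sys.argv[1], sys.argv[2:]))
```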
One of the big problems here being that some libraries will ship different header files for each platform, rather than doing everything with #if. For example, LibJPEG.
I'm glad this was rejected as it was a terrible idea. If it wasn't clear back then, it certainly is obvious today.
I have to agree with Drepper: It tried to solve an easy problem that didn't really matter - in a very messy way - without addressing the hard part, library dependencies.
I agree that it's not really needed in the Linux world, but I disagree with your conclusion about the reason.
It addresses libraries the same as anything else, make the libraries also fat binaries and there you go.
The bigger issue I see with it is that fat binaries really only make sense when you only have the binary and are giving that directly to untrained users. It was great for Apple because that's exactly how their platform was used, and through their various architecture transitions fat binaries significantly eased the pain because users didn't have to care which kind of Mac they had.
When you have a package manager style infrastructure that builds from source like every meaningful Linux distribution, suddenly it doesn't really offer anything in most cases. Users just ask the package manager to install something and it deals with the architecture stuff behind the scenes. Unless you're trying to create a single disk image that boots on multiple architectures it's just needless bloat.
From a technical perspective I love the thought that it'd be possible to build a single disk that could boot on any platform anyone cares about, but from a practical perspective I can't see any real purpose for such a thing to exist beyond "because we can".