Yep, agreed. Though it does expose an option here, which would be "have glibc provide a mechanism for other libraries (likely more core/widely used libs) to support both ABIs simultaneously."
Presumably if that had been done in glibc when 64-bit time_t support was added, we could have had multi-size-time ABI support in things like zlib by now. Seems like a mistake on glibc's part not to create that initially (years ago).
Though if other distros have already switched, I'd posit that perhaps Gentoo needs to rethink its design a bit so it doesn't run into this issue instead.
> have glibc provide a mechanism for other libraries (likely more core/widely used libs) to support both ABIs simultaneously
The linked article is wrong to imply this isn't possible, and it really doesn't depend on glibc providing a mechanism. All you have to do is the following (sketched in the header example below):
* Compile the library itself with traditional time_t (32-bit or 64-bit depending on platform), but convert and call time64 APIs internally.
* Make a copy of every public structure that embeds a time_t (directly or indirectly), using time64_t (or whatever contains it) instead.
* Make a copy of every function that takes any time-using type, to use the time64 types.
* In the public headers, check if everything is compiled with 64-bit time_t, and if so make all the traditional types/functions aliases for the time64 versions.
* Disable most of this on platforms that already used 64-bit time_t. Instead (for convenience of external callers) make all the time64 names aliases for the traditional names.
It's just that this is a lot of work, with little benefit to any particular library. The GCC 5 std::string transition is probably a better story than LFS, in particular the compiler-supported `abi_tag` to help detect errors (but I think that's only for C++, ugh; languages without room for automatic mangling suck).
(minor note: using typedefs rather than struct tags for your API makes this easier)
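A minimal sketch of what that header-side pattern could look like. All names here (`foo_stamp`, `foo_touch`, `FOO_NATIVE_TIME64`) are invented for illustration; `_TIME_BITS` is glibc's actual "caller built with 64-bit time_t" macro:

```c
/* foo.h -- hypothetical public header for a library "foo".
 * The library itself is compiled with the platform's traditional
 * time_t; the *64 copies are always 64-bit. */
#ifndef FOO_H
#define FOO_H

#include <stdint.h>
#include <time.h>

typedef int64_t foo_time64_t;

/* Traditional type/function, using whatever time_t the caller has. */
typedef struct { time_t when; int flags; } foo_stamp;
int foo_touch(foo_stamp *s);

#ifndef FOO_NATIVE_TIME64  /* hypothetical: set where time_t is already 64-bit */
/* time64 copies of every public time-using type and function. */
typedef struct { foo_time64_t when; int flags; } foo_stamp64;
int foo_touch64(foo_stamp64 *s);
/* Caller built with 64-bit time_t: redirect the traditional names
 * to the time64 versions (typedefs make this easier than struct tags). */
#  if defined(_TIME_BITS) && _TIME_BITS == 64
#    define foo_stamp foo_stamp64
#    define foo_touch foo_touch64
#  endif
#else
/* time_t was always 64-bit here: the time64 names are just aliases,
 * for the convenience of external callers. */
#  define foo_stamp64 foo_stamp
#  define foo_touch64 foo_touch
#endif

#endif /* FOO_H */
```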
This is a nightmare to implement (unless you go library by library and do it by hand, or you have significant compiler help), since e.g. a function may not directly use a struct containing a time_t but may still indirectly assume something about that struct's size (see the example below).
Note that for example the std::string transition did not do this.
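To make the indirect case concrete, here's a hypothetical snippet where no function signature mentions time_t, yet the compiled code still bakes in its size:

```c
#include <string.h>
#include <time.h>

struct event { time_t when; int id; };   /* size depends on time_t    */
struct batch { struct event ev[16]; };   /* ...so this size does too  */

/* No time_t in the signature, but sizeof(*src) differs between a
 * 32-bit-time_t caller and a 64-bit-time_t library (or vice versa),
 * so the copy silently moves the wrong number of bytes. */
void batch_store(void *dst, const struct batch *src)
{
    memcpy(dst, src, sizeof *src);
}
```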
Fat binaries (packing together two entirely different objects and picking one of them) could potentially work here, though I suspect we'd run into issues anywhere 32-bit and 64-bit time_t code has to interact.
Another option, closer to how things worked with LFS (large file support, mostly in the past now), is to create separate interfaces to support 64-bit time_t and pick the default time_t interface at compile time with a macro that selects the right implementation (a sketch follows below).
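A rough sketch of that LFS-like pattern, with a hypothetical `FOO_TIME_BITS` playing the role `_FILE_OFFSET_BITS` plays for off_t (all names invented):

```c
#include <stdint.h>

/* Both interfaces always exist in the library... */
int foo_sleep_until32(int32_t deadline);
int foo_sleep_until64(int64_t deadline);

/* ...and a compile-time macro picks which one the plain name means. */
#if defined(FOO_TIME_BITS) && FOO_TIME_BITS == 64
#  define foo_sleep_until foo_sleep_until64
#else
#  define foo_sleep_until foo_sleep_until32
#endif
```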
Also possible would be something like a version script (https://sourceware.org/binutils/docs/ld/VERSION.html) telling the linker which symbols to use when 64-bit time_t is enabled. While this might have some benefits, folks generally avoid version scripts when possible, and it would require changes in glibc, making it unlikely.
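With the GNU toolchain, a sketch of that could look something like the following (names hypothetical; `.symver` binds two implementations to the same exported name at different symbol versions):

```c
/* foo.c */
int foo_touch_time32(void *s) { return 0; }  /* 32-bit time_t impl */
int foo_touch_time64(void *s) { return 0; }  /* 64-bit time_t impl */

/* Export both under the single name "foo_touch"; "@@" marks the
 * default version that new links resolve to. */
__asm__(".symver foo_touch_time32, foo_touch@FOO_TIME32");
__asm__(".symver foo_touch_time64, foo_touch@@FOO_TIME64");
```

plus a version script declaring the two version nodes:

```
/* foo.map */
FOO_TIME32 { local: *; };
FOO_TIME64 { } FOO_TIME32;
```

Linking with `-Wl,--version-script=foo.map` then exports both versions under the one name.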
Both of those options (the version-script extension and the LFS-like pattern) could allow reusing the same binary (i.e. smaller file size, no need to build code twice in general), and could potentially enable mixing 32-bit time_t and 64-bit time_t code together in a single executable (not desirable, but it does remove weird link issues).
I was thinking along the lines of a simple extension to the Mach-O fat binary concept to allow each member of the fat binary archive to be associated with one or more ABIs rather than just a single architecture.
Then all the time_t size-independent code would go into a member associated with both 32-bit and 64-bit time_t ABIs, and all the time_t size-dependent code would be separately compiled into single-ABI members, one for each ABI.
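A back-of-the-envelope sketch of what such a container's header might look like, loosely modeled on Mach-O's `fat_arch` (everything here is invented):

```c
#include <stdint.h>

#define ABI_TIME32  0x1u  /* member usable under 32-bit time_t */
#define ABI_TIME64  0x2u  /* member usable under 64-bit time_t */

struct fatabi_member {
    uint32_t cputype;    /* architecture, as in Mach-O fat_arch  */
    uint32_t abi_mask;   /* bitmask of ABIs this member supports */
    uint64_t offset;     /* file offset of the member object     */
    uint64_t size;       /* size of the member in bytes          */
};

/* time_t-independent code ships once with
 * abi_mask = ABI_TIME32 | ABI_TIME64; time_t-dependent code ships
 * as two members, one per ABI. */
struct fatabi_header {
    uint32_t magic;
    uint32_t nmembers;
    /* followed by nmembers struct fatabi_member entries */
};
```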
For both executables and libraries, name conflicts between members associated with the same ABI would be prohibited at compile/link time.
The effective symbol table for both compile time and runtime linking would then be the union of symbol tables defined by all members associated with the active ABI.
Language-level mechanisms that allow compilers to recognize and separately compile time_t-dependent symbols would be required, ideally implemented in ways that allow such dependence to be inferred in most if not all cases.
While compilers would be free to size-optimize code by splitting functions into dependent and independent subparts, I see no immediate reason why this needs to be explicitly supported at link level.
Finally, as I imagine it, mixing 32- and 64-bit time_t ABIs would be prohibited (implicitly, as 64-bit time_t symbols aren't ever visible to 32-bit time_t code and vice versa), with code that needs to support non-native time_t values (for I/O, IPC, etc.) left to its own devices, just like code dealing with, e.g., non-native integer and floating-point formats today.
Admittedly this sounds like a lot of unnecessary work vs. similar alternatives built on existing multi-arch mechanisms, but it's still an interesting idea to contemplate.
> Finally, as I imagine it, mixing 32- and 64-bit time_t ABIs would be prohibited (implicitly, as 64-bit time_t symbols aren't ever visible to 32-bit time_t code and vice versa), with code that needs to support non-native time_t values (for I/O, IPC, etc.) left to its own devices, just like code dealing with, e.g., non-native integer and floating-point formats today.
It's pretty easy to imagine having both 32-bit time_t and 64-bit time_t in a single "executable" as long as the interfaces actually in use between 32-bit-time and 64-bit-time components don't use `time_t` (or derived types).
In other words: if the fact that `time_t` is 32 bits is kept entirely internal to some library A used by some library B (by virtue of library A not exposing any time_t-containing types that library B uses, and not having any functions that accept time_t or derived types), there's nothing preventing 32-bit-time_t and 64-bit-time_t code from mixing in a single executable/process (in this theoretical case where we use an LFS-style scheme, à la _FILE_OFFSET_BITS, for time_t).
LFS had/has the same capability for mixing (with off_t being the type in question there).
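A toy illustration of that isolation, assuming a hypothetical library A whose public header never mentions time_t:

```c
/* a.h -- public interface: fixed-width type only, no time_t, so
 * callers built with either time_t size can use the same binary. */
#include <stdint.h>
int64_t a_last_modified(const char *path);  /* seconds since epoch, -1 on error */

/* a.c -- internal to library A; whatever time_t size A was built
 * with stays its own business, converted at the boundary. */
#include <sys/stat.h>

int64_t a_last_modified(const char *path)
{
    struct stat st;
    if (stat(path, &st) != 0)
        return -1;
    return (int64_t)st.st_mtime;
}
```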