
Standardise alias in order to unstick library evolution


What is currently not possible, which alias will fix? That’s what I couldn’t get out of the article.

If the answer is “the ability to change the types of library functions without changing their name” (which is what his first few examples were showing to be the “problem”), then of course you can’t do that, and I’m not convinced we should waste any time or effort trying to get that to work. If you want to change the types accepted/returned by a function, make a new function with a new name.

Of course, if there’s something I’m missing here I’d love to know… the article has done a terrible job outlining it if that’s the case.


The main issue is that even though C has typedefs like intmax_t that are supposed to let implementations vary them depending on compiler and platform support, in practice ABI requirements mean these typedefs can never change once programs link against external functions that use them. This can also be seen in time_t, which could not easily be switched to 64 bits because of existing interfaces that expect 32-bit time_t values.

For how transparent aliases help solve this, suppose that there is some ancient library function called get_year:

  typedef int32_t time_t;
  int get_year(time_t time);
  // document time_t and get_year
Then, library users will call get_year with 32-bit time_t values. But suppose that eventually, as Y2038 draws near, the library writers want get_year (and related functions) to instead take 64-bit time_t for all newly compiled programs that use get_year. Previously, this was impossible: users would have to modify their programs to call a new version of every function, or link to a new version of the library. But with transparent aliases, library writers can simply replace the headers with:

  typedef int64_t time_t;
  int get_year_v2(time_t time);
  _Alias get_year = get_year_v2;
  // document time_t and get_year
Now, existing compiled programs continue to call get_year in the library, which remains implemented for compatibility. Meanwhile, newly compiled programs instead call get_year_v2, without having to modify their source code at all! This enables types such as time_t and intmax_t to be transitioned without breaking any code.
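
A minimal caller-side sketch (the wrapper function below is made up; _Alias is the proposed syntax used above):

  /* Hypothetical user code, unchanged across both header versions: */
  int year_of(time_t when)
  {
      /* Compiled against the old header, this calls get_year directly.
         Recompiled against the new header, the _Alias resolves this same
         call to get_year_v2 (64-bit time_t) with no source changes. */
      return get_year(when);
  }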


The problem I see with that is what happens if you have dynamic library C (global) and application A that's rebuilt, but library B (which sits between A and C) hasn't been. Now you have a linking error. What happens if you rebuild B and C but A hasn't been rebuilt? That means B still needs to know to set up aliases for backwards compatibility (which you won't find out until runtime). What if library B is returning the changed ABI type and A is feeding it to C?

I think it’s a useful tool but I don’t think it’s an ABI versioning panacea (even C++’s adoption of inline namespaces within libraries is limited, with the standard library being the only place I’ve seen it used in a meaningful way).


IMHO, the real solution is to avoid exposing opaque types from library C in the interface of library B. Obviously, some libraries do this anyway, so your criticism is valid. Library C's headers would likely need some #define to link to the old functions and types for compatibility. But transparent aliases are still very effective in the case of shallow dynamic-library dependency graphs (e.g., most open-source projects), even if, as you note, they aren't a panacea. (I'm not even sure there is one: there'll always be a Y2038 bug or TLS deprecation or whatever that library C needs to make a breaking change to fix.)
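
One way that compatibility #define might look, as a rough sketch (LIBC_COMPAT_32BIT_TIME and get_year_v1 are made-up names; _Alias is the proposed syntax from above; declarations of the versioned functions omitted):

  /* Hypothetical compatibility switch in library C's header: */
  #ifdef LIBC_COMPAT_32BIT_TIME
    typedef int32_t time_t;
    _Alias get_year = get_year_v1;   /* old callers keep the 32-bit ABI */
  #else
    typedef int64_t time_t;
    _Alias get_year = get_year_v2;   /* new builds pick up the 64-bit ABI */
  #endif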


You should mention that libraries already do this! That is a major argument for this proposal. Some libraries already have these levels of indirection; this just adds it to the language instead of having to use pragmas or __attribute__((alias("_blah"))) everywhere.
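
For comparison, a rough sketch of the existing GCC/Clang attribute idiom the comment refers to (my_time_t and the year math are made up for illustration):

  #include <stdint.h>

  typedef int64_t my_time_t;

  /* The "real" implementation lives under the new name... */
  int get_year_v2(my_time_t t)
  {
      return (int)(1970 + t / 31556952);   /* rough illustration only */
  }

  /* ...and a GNU extension exports the old name as an alias for it,
     so old and new symbol names resolve to the same definition. */
  int get_year(my_time_t t) __attribute__((alias("get_year_v2")));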


the solution is to stop linking libraries globally


Then you raise the number of libraries in your program to some power > 1, due to the proliferation of versions. Hope you're using compile-time LTO with code deduplication!


Most libraries are not shared, even when they are ostensibly "shared" libraries. Plus, disk is cheap; it seems dumb to optimize for it in the 21st century.

You don't need LTO for dead code elimination.


A number of libraries are in fact shared. And RAM is not cheap when considered across the entire OS, and if every process has their own copy of the library, that’s a lot of RAM eaten up by needless duplication.


It's negligible, for example: https://drewdevault.com/dynlib


Is that necessarily true in a world of zram?


CPU cycles are still quite expensive, I assure you. It adds up.


It's not nearly enough to cover scalar arguments. You've got to cover composite arguments: time_t, fileno_t, fpos_t, etc. may appear in structs, arrays, etc.
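
A small sketch of that composite-argument problem (struct and typedef names are made up), assuming the 32- to 64-bit time_t transition from the earlier example:

  #include <stdint.h>

  typedef int32_t old_time_t;   /* pre-transition width */
  typedef int64_t new_time_t;   /* post-transition width */

  /* The same logical struct has different sizes and field offsets under
     the two typedefs (typically 8 vs. 16 bytes), so a function alias
     alone cannot keep old and new callers compatible once the type is
     embedded in structs or arrays that cross the library ABI. */
  struct log_entry_old { old_time_t stamp; uint32_t event; };
  struct log_entry_new { new_time_t stamp; uint32_t event; };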


  typedef int64_t time64_t;

And the new code adopts get_year_64 directly and you don’t blow up system complexity with features that you don’t really need.
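
A sketch of that plain versioned-name approach (declarations only; get_year and time_t are from the earlier example, get_year_64 and time64_t follow this comment):

  #include <stdint.h>

  typedef int32_t time_t;          /* legacy 32-bit type, left alone */
  typedef int64_t time64_t;        /* new 64-bit type */

  int get_year(time_t time);       /* kept so existing callers still build */
  int get_year_64(time64_t time);  /* new code calls this directly */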


Backwards compatibility is holding the evolution of C and C++ back. Design improvements can't get approved if they affect ABI. Many of those improvements affect performance. The ecosystem needs a way to move forward without breaking ABI.


C++ shouldn't be a problem? Just add a new namespace:

std::move_fast_and_break_things::vector

Or maybe the other way around so you can keep using std::vector: move_fast_and_break_things::std::vector

Then you can add a 'using namespace move_fast_and_break_things;' at the top of your files if you know you don't care about the ABI in your program.


You can do that with boost:: or absl:: right now. No need to involve the standards committee at all.


This doesn't just apply to the STL; it also means breaking Boost or whatever, and you're stuck with the same problem: users are mad and pitted against library writers.


Boost doesn't even have a stable API commitment, let alone a stable ABI. Every release may require involvement from users to keep up to date.

Abseil doesn't even have releases any more. They expect you to live at head like GOOG does internally.


In C++ you do this with inline namespaces (change the default inline namespace version, keep the old one around, and everything continues to work correctly, in theory).


It's not just new and breaking things, it also applies to fixing bugs.


I guess I’ll have to take your word for it… this article has failed to make that case IMO.


The point is to get rid of the old crappy function, replacing it with the new better function, in new code while allowing old code to keep working.

Old functions should not have a first-mover advantage on names; nobody wants to riddle their function calls with *_v2 all over the place, and neither do library maintainers want people to keep using *_v2 when *_v3 fixes issues present in *_v2.


You're willing to type a multi-paragraph complaint about the article being too long-winded and yet balk at the TL;DR. A little patience could help; Ctrl-F'ing for alias and skimming a little more to find out why it's needed might be enough.

I agree there's a lot of fluff in the article but complaining even more when someone goes out of their way to appease you is just too much.



