
The article downplays the value of the autotools a little. It's not just about building for 30-year-old hosts; it's also about providing a lot of convenient tools and giving the user a known interface. If you write your own configure, people may expect to be able to pass additional CFLAGS (without overriding the existing ones), set the paths to various components, install into a different root, have certain targets in the generated makefiles (like dist, clean, install, uninstall), have cross-compilation work (including cross-compiling to Windows with mingw64, which solves the toolchain problem of releasing binaries for Windows), ... It's a tedious task and easy to get wrong.

On the other hand, using the autotools in a modern way is dead easy. You don't need to add many, many tests if you don't intend to support old stuff. You get access to automake, which is a fantastic tool on its own.

Don't read old tutorials, don't look at what big established projects are doing; look at the Autotools Mythbuster instead (autotools.io) and start with a minimal configure.ac.



> If you write your own configure, people may expect to be able to [...] It's a tedious task and easy to get wrong.

The author's point is to do <em>without</em> a configure step entirely, not write your own.

A standard Makefile is perfectly capable of implementing these things in a manner which is clean, reasonably portable and without the level of indirection that makes things hard to debug.

My own experience is that autotools has evolved to feel less standard than the modern platforms whose differences it purports to smooth over; I frequently find that I can't generate a 'configure' for third-party software or libraries from Git repositories because I have the wrong autotools versions. Efforts to investigate this by unpicking the various macros etc. provided in the build have almost always been unsuccessful, leaving me building from distributed .tar.gz files. With something that feels like such a moving target after 30 years, I'm glad to see people realising the benefit of a simple Makefile that's easy to customise for the edge cases.


There's no way you can do what autotools does in a Makefile without implementing half of autotools yourself. How do you check for headers? How do you check for functions? How do you test if qsort_r expects

    (*)(void*, const void*, const void*) 
or

    (*)(const void*, const void*, void*)
as the comparator function pointer? If people are forced to detect that kind of stuff in Makefiles, they get the urge to match `uname` against some hard-coded platforms, and that's a terrible solution.
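To make the shape of such a test concrete, here is a sketch (mine, not from the thread) of the kind of one-shot compile probe a configure script generates for exactly this question: it compiles only where qsort_r follows the glibc convention, and the compile failure elsewhere is precisely the signal the script records.

    // Hypothetical configure-style probe for the glibc qsort_r convention,
    // where the comparator is int (*)(const void *, const void *, void *)
    // and the context pointer is qsort_r's last argument. On a BSD-style
    // libc the argument types don't line up and this fails to compile,
    // which is the result a configure script would cache. Assumes qsort_r
    // is declared at all (g++ defines _GNU_SOURCE by default on glibc; a C
    // build would need -D_GNU_SOURCE).
    #include <cstdlib>

    static int cmp(const void *a, const void *b, void *ctx) {
        (void)ctx;
        const int x = *static_cast<const int *>(a);
        const int y = *static_cast<const int *>(b);
        return (x > y) - (x < y);
    }

    int main() {
        int v[3] = {3, 1, 2};
        qsort_r(v, 3, sizeof v[0], cmp, nullptr);
        return v[0] == 1 ? 0 : 1;
    }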


>There's no way you can do what autotools does in a Makefile without implementing half of autotools yourself.

That's part of the argument: with standards-compliant code, you don't need to do "what autotools does".


ENOTSUP is part of the standard, and I do have autotools macros checking whether some function calls produce that error value on the host system.


> no way you can do what autotools does in a Makefile

Well, you're responding to something that wasn't actually said; "these things" refers specifically to the list of requirements given by the previous poster -- CFLAGS etc.

Your example is indeed valid; it's an ugly case, and I agree that testing for the feature specifically (rather than for the arch or compiler) is always preferable. But let's look at the practicalities -- how many of these cases do I actually need in a codebase to tip the balance and justify the use of autoconf? In your example, a simple #ifdef against the platform (Windows/BSD/Linux) and it's gone. qsort_r is off limits for a massively portable program anyway, so autoconf's ability to help is limited.
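A hedged sketch of what that #ifdef might look like (mine, not from the thread; the macro list is illustrative, not exhaustive):

    // Platform dispatch instead of a configure-time feature test. Note that
    // the comparator signature differs per branch as well, so callers still
    // need per-platform comparators -- which is exactly where this approach
    // starts to get ugly.
    #include <cstdlib>

    #if defined(__GLIBC__)
      // glibc: context pointer last, for qsort_r and its comparator.
      #define SORT_R(base, n, sz, cmp, ctx) qsort_r((base), (n), (sz), (cmp), (ctx))
    #elif defined(__FreeBSD__) || defined(__APPLE__)
      // BSD-style: the context ("thunk") comes before the comparator.
      #define SORT_R(base, n, sz, cmp, ctx) qsort_r((base), (n), (sz), (ctx), (cmp))
    #else
      #error "unsupported platform: extend the #ifdef forest"
    #endif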


> a simple #ifdef against the platform (Windows/BSD/Linux) and it's gone.

That's equivalent to the uname trick the GP was complaining about. That means your software won't compile on a lot of platforms.

On a small scale, I don't think that's a big problem. Somebody on those platforms only has to add the correct conditions to your #ifdef forest, and if they send the update back, other people on the same platform won't even have the problem. It's not much different from software not working on untested platforms.

It starts becoming a problem when it's done often, or in high-level code (high up the call stack).


> That means your software won't compile on a lot of platforms.

It won't anyway; qsort_r has already limited me to a tiny number of platforms. If I care about further portability, autotools can't do anything to help me; my next step is not autotools, it's "don't use qsort_r".


> How do you test if qsort_r expects

    > (*)(void*, const void*, const void*) 
> or

    > (*)(const void*, const void*, void*)
You can write C++ code that uses templates and SFINAE to do it. I have. I will admit, it looks like garbage. But it is possible.
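For illustration, a hedged sketch of what such a probe can look like (my own example, not the poster's actual code): a C++17 detection-idiom check for whether the glibc-style call is well-formed.

    // C++17 detection idiom probing which qsort_r convention the libc
    // headers declare. Assumes qsort_r is declared at all (g++ defines
    // _GNU_SOURCE by default on glibc; other setups may need it defined).
    #include <cstddef>
    #include <cstdlib>
    #include <type_traits>
    #include <utility>

    using gnu_cmp = int (*)(const void *, const void *, void *);

    template <typename Base, typename = void>
    struct has_gnu_qsort_r : std::false_type {};

    // Matches only when qsort_r(base, n, size, cmp, ctx) is a valid call,
    // i.e. the glibc argument order and comparator signature are in effect;
    // on a BSD-style libc the substitution fails quietly (SFINAE) and the
    // primary template's false_type stays in force.
    template <typename Base>
    struct has_gnu_qsort_r<Base, std::void_t<decltype(
        qsort_r(std::declval<Base *>(), std::size_t{}, std::size_t{},
                std::declval<gnu_cmp>(), std::declval<void *>()))>>
        : std::true_type {};

    // Usage: branch at compile time on has_gnu_qsort_r<void>::value,
    // e.g. with if constexpr.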


Who uses the latter form? I don't think I've ever seen that before.

Also, isn't the first supposed to be:

  void*, size_t, size_t, int (*cmp)(const void*, const void*)
Or are you using some wtf version that hides the element size and count behind a void*?


The signatures came from another answer. I have no idea what the signatures are, or should be, or why there's a difference.

My SFINAE code was to handle a similar problem: picking between a GNU-specific strerror_r and a POSIX strerror_r that have different signatures (both are available on Linux, determined by whether certain macros are set: https://linux.die.net/man/3/strerror_r ; I couldn't just rely on those macros because I wanted my code to compile on any POSIX platform).
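As an illustration of the same family of tricks (my sketch, not the poster's code), the strerror_r split can even be handled without SFINAE, by overloading on the return type of the call: the POSIX variant returns int, the GNU variant returns char *, and overload resolution picks the right handler on either libc.

    // Bridges POSIX strerror_r (returns int, fills the buffer) and GNU
    // strerror_r (returns char *, which may or may not point into the
    // buffer). Whichever variant the headers declare, exactly one of the
    // two pick() overloads matches the call's return type.
    #include <string.h>
    #include <string>

    namespace detail {
    // Chosen when strerror_r is the POSIX variant.
    inline std::string pick(int rc, const char *buf) {
        return rc == 0 ? std::string(buf) : std::string("unknown error");
    }
    // Chosen when strerror_r is the GNU variant.
    inline std::string pick(const char *msg, const char * /*buf*/) {
        return std::string(msg);
    }
    } // namespace detail

    std::string errno_to_string(int err) {
        char buf[256] = {};
        return detail::pick(strerror_r(err, buf, sizeof buf), buf);
    }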


You're commenting on the prototype for qsort() itself, whereas the original point was about the function pointer that is an argument to qsort_r().

Both Linux and BSD chose to add a non-standard qsort_r(), each choosing different ways to do it.
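For reference, the two declarations being contrasted (glibc with _GNU_SOURCE vs. FreeBSD's libc; quoted from memory, so check your system headers):

    #include <stddef.h>  /* size_t */

    /* glibc (Linux): the context pointer comes last, both in qsort_r's
       parameter list and in the comparator's. */
    void qsort_r(void *base, size_t nmemb, size_t size,
                 int (*compar)(const void *, const void *, void *),
                 void *arg);

    /* FreeBSD (and macOS): the context ("thunk") precedes the comparator,
       and the comparator receives it as its first argument. */
    void qsort_r(void *base, size_t nmemb, size_t size, void *thunk,
                 int (*compar)(void *, const void *, const void *));

(In C the two declarations would conflict within one translation unit; they are shown together here only for comparison.)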


I wish standard (POSIX) Make were sufficient, but it frequently isn't. While some Make features from GNU and BSD Make made it into the POSIX spec (such as "include"), I frequently find myself wanting a more orthogonal facility for integrating the output of external programs into Make builds. That is, using POSIX Make's "VAR=`cmd`" syntax with backtick-quoted commands (or other shell evaluation syntax), you can define macros programmatically for incorporation into build rules as lazily evaluated text-substitution variables, but you can use those only in build rules, not in prerequisites or targets (where they are interpreted verbatim). A partial solution would be adopting GNU Make's eager evaluation syntax (VAR:=`...`) into POSIX.

OTOH, while I'm not a big fan of the autotools internals (especially libtool), being able to run "./configure && make install" on tens of thousands of F/OSS packages is something I'd hate to lose. In fact, the extreme consistency in installation procedures (including the use of pkg-config etc.) and the discipline in directory layout across so many F/OSS packages is something I very much admire, given that it's an unexpected outcome of a "Bazaar"-style development model.


>"./configure && make install"

You forgot 'make'!

I'm tempted to turn this into a cheap shot about how the consistency obviously isn't worth much, but I won't.


Explicitly doing 'make' is redundant for makefiles generated by any version of automake I've ever used. Dependencies are set up correctly so that 'install' also builds everything that it installs and isn't already up to date (I'm not sure whether 'install' also explicitly depends on 'all').


The usual reason to separate them is that if you're installing to a prefix that needs root privileges to write to, then you'll be doing something like:

    ./configure && make && sudo make install
If you do this instead:

    ./configure && sudo make install
...then the compiler toolchain gets invoked as root and all the intermediate build products end up owned by root.


There is another reason: `make -j4 && make install` would be faster than just `make install` (parallel build vs. single-process).


You can do `make -j4 install`. Works as intended.


He also says later in the post that you can write your own configure with a bit of Bourne shell. If you expect your software to be widely distributed, a custom build script is a pain for distributions, as most of them rely heavily on ./configure && make with the appropriate arguments. Distributions also rely on the ability to override certain flags, something easy to do with autotools and quite hard with plain Makefiles (compare ./configure CFLAGS=... and make CFLAGS=...; the former doesn't override upstream CFLAGS).

I don't dispute that many projects have convoluted and dated autotools scripts, notably because they don't age well. It's easy to accumulate a lot of crap.

Most projects don't adopt autotools because they don't know how to use a plain Makefile; they adopt it because autotools bring much more than that.


>The article downplays the value of the autotools a little. It's not just about building for 30-year-old hosts; it's also about providing a lot of convenient tools and giving the user a known interface. If you write your own configure, people may expect to be able to pass additional CFLAGS (without overriding the existing ones), set the paths to various components, install into a different root, have certain targets in the generated makefiles (like dist, clean, install, uninstall), have cross-compilation work (including cross-compiling to Windows with mingw64, which solves the toolchain problem of releasing binaries for Windows), ... It's a tedious task and easy to get wrong.

So why not have a tool with 1/10th the complexity and 1/100th the legacy stuff of autotools, but otherwise the same interface?

People who genuinely need the old checks could continue to use the old autotools; people who don't care for such BS could use the new tool.


I don't understand what you mean: if it has the same interface as autotools, how do you remove the legacy stuff without breaking anything?

Autotools is a huge pain in the ass for the dev, unfortunately I haven't found any alternative that didn't end up being an even bigger annoyance.

A decent, simple, easily customizable and portable C/C++ build system is still very much an unsolved problem as far as I'm concerned (and I've tried quite a few of them). At least autotools are supported basically everywhere.


>I don't understand what you mean: if it has the same interface as autotools, how do you remove the legacy stuff without breaking anything?

By having autotools for the projects that need "everything" (legacy BS checks) and this leaner version for the projects that don't need them.


I'm confused. Are you suggesting this leaner version should exist and it does not? Or are you advocating for a leaner version that I have not heard of?


I am suggesting a leaner version should exist.

The same way there's neovim alongside vim, without the legacy stuff.


Have you tried cmake?


This whole discussion and article had me thinking about that.

CMake is faster than autotools and works fine with all the compilers discussed so far and all the Windows compilers I know of.

Creating a CMakeLists.txt covering a moderately complex build that links against a few libraries (but doesn't need any custom logic for moving files or other uncommon stuff) is normally just a few lines of code. Usually just one line per source file and per library (depending on how you feel about automatically including files, this can be shortened further), plus a little bit declaring the language and other settings. There are plenty of 10-line CMakeLists files that can build large and seemingly complex projects.


You don't have to use all the legacy stuff. See: https://autotools.io/whosafraid.html. You choose what tests you want to run. No need to check if you have unistd.h if you don't care about that.

As a rewrite using the same user interface but a completely different developer interface, there is mklove (https://github.com/edenhill/mklove). However, this just covers autoconf: you don't get the flexibility of automake. Also, I don't know how complete it is.

A rewrite just to speed things up a little is a bit useless. Autotools are not that slow. And if a rewrite were done, it would be a shame to keep the horrible syntax. What needs to be kept is the user-facing interface.


If you use a simpler configure solution, then the user probably won't have much trouble figuring out how to deal with niche edge cases that they expect to work with autotools.

On the other hand, when something goes wrong with autotools, the first step is to pour yourself a drink.


It is true that Autotools offer some nice standard options to use when building and installing software. However, for whatever reason, the ability to figure out portability differences seems to be the selling point to many programmers ( http://queue.acm.org/detail.cfm?id=2349257 , https://varnish-cache.org/docs/4.0/phk/autocrap.html ).


The m4 macros (autoreconf) and ./configure are slower than they need to be. Of course, './configure -C' may speed things up a bit by caching results, but it's still not fast enough.

I really feel the need for something like AC_TRUST_HOST or AC_TRUST_DISTRO that would not check for each header or each function in some library. Instead it would check the versions of GCC and libc, and compile and run some (hopefully just one) test program.

Of course, the common tests should be skipped only on popular distributions: Debian, Arch, Fedora (and their derivatives), to name a few.



