
Soon, somebody will convert the build system to CMake, move it to github, clean up the insulting directory structure, then nobody will look at their google code page ever again.


People act like CMake is an improvement, but it's not. The language is not very good (lists as semicolon-separated strings, seriously?). For example: its pkg-config support is completely broken. It takes the output of pkg-config, parses it into a list (liberally sprinkling semicolons where the spaces should be) and then the semicolons make it into the compiler command line, causing all manner of cryptic errors.
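To make that concrete, here's a minimal sketch of the failure mode (the gtk+-2.0 dependency is just a hypothetical example):

    find_package(PkgConfig REQUIRED)
    pkg_check_modules(GTK gtk+-2.0)

    # GTK_CFLAGS comes back as a CMake list, i.e. a semicolon-separated
    # string such as "-I/usr/include/gtk-2.0;-I/usr/include/glib-2.0".
    # Splice it into the flags as-is and the semicolons land verbatim on
    # the compiler command line:
    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${GTK_CFLAGS}")        # broken

    # The usual workaround is to flatten the list back into spaces:
    string(REPLACE ";" " " GTK_CFLAGS_STR "${GTK_CFLAGS}")
    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${GTK_CFLAGS_STR}")    # works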

Stick with automake. Seriously.


CMake is also very difficult to debug (e.g. to find out why a library test is failing), harder to fix once you've debugged it, has strange ways of accepting extra compiler/linker flags from the environment, has poor --help, tries to allow creating Xcode projects but mostly produces nonsense, etc...

I think it might actually be worse than autoconf in every way, which is surprising considering how bad autoconf is. The handwritten, non-macro-expanding, not-much-autogenerating configure/Makefile setup in ffmpeg/libav/x264/vp8 is easier to deal with than either.


I disagree with you that autoconf is bad. Its design came from a lot of locally-optimal choices that don't look so good in 2011, and there's a lot of legacy code being copied around in people's configure.ac files.

To me, it's not perfect, but it's pretty good. Then again, I'm known to my friends as "that guy who knows automake" :-).


On Windows, building a project with automake is an order of magnitude slower than building the equivalent project generated by CMake, and compiling with MSVC is very difficult to make work at all with autotools. automake just isn't a viable option if your project needs to be portable to Windows.


Speed differences like this are often down to process creation.

A lot of automation routines designed on Unix-like systems create short-lived processes with reckless abandon, because creating and tearing down a process in most Unix environments is relatively cheap. When you transplant these procedures to Windows, you are hit by the fact that creating or forking a process there is relatively expensive. IIRC the minimum per-process memory footprint is higher under Windows too, though this doesn't affect build scripts much, since each process (or group of processes, if chained together with pipes and redirection) is generally created and finished with in turn rather than many running concurrently.

This is why a lot of Unix services out there are process-based rather than thread-based, while a Windows programmer would almost never consider a process-based arrangement over a threaded one. Under most Unix-like OSs the difference in cost between thread creation and process creation is pretty small, so (unless you need very efficient communication between components once they are created, or might be running in a very small amount of RAM) a process-based model can be easier to create, debug and maintain, which is worth the small CPU efficiency difference. Under Windows the balance is different: creating processes and tearing them down after use is much more work for the OS than operating with threads, so a threaded approach is generally preferred.
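A quick way to see this for yourself (a hypothetical micro-benchmark; assumes a POSIX shell, e.g. MSYS or Cygwin on the Windows side):

    # 1000 trivial external processes. Near-instant on Linux; dramatically
    # slower under Cygwin/MSYS, where each iteration is a full CreateProcess.
    time sh -c 'i=0; while [ $i -lt 1000 ]; do /bin/true; i=$((i+1)); done'

    # The same loop with a shell builtin spawns no processes at all and
    # runs at essentially the same speed on both systems.
    time sh -c 'i=0; while [ $i -lt 1000 ]; do :; i=$((i+1)); done'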


I always used MinGW on windows.


OT: If I understand correctly, CMake was purpose-built to support building Kitware's visualization application. Their app uses Tcl as an embedded language; why, when they were already shipping Tcl in their app, they insisted on building an ad-hoc language into CMake (versus using Tcl, which already supports looping, conditionals, variable setting/getting, etc.) is an occasional wonder to me.


I hate CMake. I hate autotools, too. But, if there's going to be a replacement for autotools, it's gotta be better than CMake. At least autotools is standard on every Linux/UNIX system these days. CMake is just another build dependency for very little gain.


Don't use them. Makefiles are fine. So far I haven't seen the need for configure magic. On the other hand, I don't work on software that needs to compile on archaic AIX systems.

For a Win/OS X/Linux portable build, a Makefile should suffice. Example: https://github.com/MatzeB/cparser/blob/master/Makefile
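For the platform differences that do come up, a few lines of uname-based switching in GNU make usually cover it; a minimal sketch (target and flags hypothetical; recipe lines need real tabs):

    UNAME := $(shell uname -s)

    CFLAGS ?= -O2 -Wall

    ifeq ($(UNAME),Darwin)
        LDFLAGS += -framework CoreFoundation
    else ifeq ($(UNAME),Linux)
        LDFLAGS += -ldl
    else
        EXE := .exe          # assume Windows/MinGW
    endif

    myprog$(EXE): main.o util.o
            $(CC) -o $@ $^ $(LDFLAGS)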


That's probably fine as long as you're just compiling .c to .o. I suspect it breaks down in the face of anything more complex. For instance, I've got a problem right now where I need to use objcopy to turn arbitrary binaries into .o files, and that requires knowledge about the toolchain on the user's machine which, as far as I can tell, can only be gathered by compiling a throwaway file and sniffing its output with objdump. That's exactly the sort of task which autotools is good at, but I'm desperately trying to find a different way to get at the information so I don't have to introduce autotools to what is otherwise an already... erm... interesting build chain.
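For reference, the dance looks roughly like this (file names hypothetical; the format/architecture strings are exactly what varies per toolchain):

    # 1. Compile a throwaway file and sniff the toolchain's object format:
    echo 'int x;' > conftest.c
    cc -c conftest.c
    objdump -f conftest.o    # prints e.g. "file format elf64-x86-64"
                             # and "architecture: i386:x86-64"

    # 2. Feed those strings back into objcopy to wrap an arbitrary blob:
    objcopy -I binary -O elf64-x86-64 -B i386:x86-64 blob.bin blob.o

    # 3. The resulting object exports _binary_blob_bin_start/_end/_size:
    nm blob.o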


Why not just convert the binary into an array literal in a C source file and compile that?
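That sidesteps the format sniffing entirely. With xxd (ships with vim) it's a one-liner, and a short generator script works where xxd isn't available:

    xxd -i blob.bin > blob.c
    # blob.c now contains something like:
    #   unsigned char blob_bin[] = { 0x7f, 0x45, ... };
    #   unsigned int blob_bin_len = 1234;
    # which any C compiler can build, no objcopy or objdump required.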


Whoah! Hold on, now. autotools (and CMake) exist for a reason. In many cases they make your life easier than maintaining makefiles by hand would. I do think make is much easier to work with (warts and all) than autotools, and it's certainly more comprehensible, since it's so much smaller and contains far fewer bits of magic. But to just throw out everything autotools was designed to deal with doesn't make sense.

Sure, if your build is simple, make works fine. But for the projects I've worked on where autotools was used, simply using make would have been a horrible experience. And in most cases, those projects started out using make by itself and then moved to autotools when the number of platform-specific makefiles became too big to maintain.


Configuration magic aside, have fun supporting everything in http://www.gnu.org/prep/standards/html_node/Makefile-Convent... . You never know which features your eventual users will rely on. That's one of the main problems automake solves.


Custom-made build systems tend to break conventions that are useful to packagers (DESTDIR), home-directory installs (prefix), or developers (ccache; though your example isn't guilty of that last one). I'd rather build something that handles all of this and integrates well everywhere than see variations that need custom patching.
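Even a hand-rolled Makefile can honour the two conventions packagers lean on hardest; a minimal sketch (program name hypothetical; recipe lines need real tabs):

    prefix ?= /usr/local
    bindir  = $(prefix)/bin

    install: myprog
            install -d $(DESTDIR)$(bindir)
            install -m 0755 myprog $(DESTDIR)$(bindir)/

    # `make install prefix=$HOME/.local` handles home-directory installs;
    # `make install DESTDIR=/tmp/pkgroot` handles staged packaging.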


What are your thoughts instead on djb's redo, which is being implemented and so far working nicely at https://github.com/apenwarr/redo ?


redo and ninja are replacements for make. They have a dependency graph and build it. Tool integration, configuration, feature detection, lifecycle (install, release, deploy…) have to be handled by something else.
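For flavour, a redo rule is just a shell script; a default.o.do along the lines of the one in apenwarr's documentation:

    # default.o.do: builds any foo.o from the matching foo.c
    # ($1 = target, $2 = target minus extension, $3 = temp output file)
    redo-ifchange $2.c
    gcc -MD -MF $2.d -c -o $3 $2.c
    read DEPS <$2.d
    redo-ifchange ${DEPS#*:}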


I think the tup build tool is a wa-a-ay better solution than CMake. http://gittup.org/tup/
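For comparison, a complete Tupfile for a small C program looks like this (%f, %o and %B are tup's input, output and basename placeholders; program name hypothetical):

    : foreach *.c |> gcc -Wall -c %f -o %o |> %B.o
    : *.o |> gcc %f -o %o |> hello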



