Why Switch a Big Software Project to autoconf?
woggo queries: "I'm a CS grad student working on a large research project (over 1 million lines of code, supported on many platforms). The project has been under development for several years, and the build system is nontrivial for end-users. We'd like to make it easier to build our software, and I'm investigating the feasibility of migrating to GNU autoconf. I need to demonstrate that the benefits of autoconf outweigh the costs of migrating a large system of makefiles with a lot of ad-hoc kludge-enabling logic. Has anyone made a similar case to their advisor/manager? Does anyone have some good 'autoconfiscation' war stories to share? (I've already seen the Berkeley amd story and the obvious links from a google search....)"
Depending on the intricacies of the build process, such a conversion might take an awful lot of work. It might be easier to put a nicer face on the "nontrivial build process", although there is something to be said for the ease of "./configure; make; make install".
Low costs (Score:4, Informative)
You don't have to migrate the whole thing at once. Just start with a nice simple configure.in that copies Makefile.in to Makefile and config.h.in to config.h, and add an #include for config.h everywhere.
Make sure new tests go in configure.in and you can slowly move across with little trouble. It should therefore be pretty easy to show that the costs are very low...
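A minimal starter configure.in along those lines can be very small indeed; a hedged sketch in autoconf 2.13 style (the source file name is only an example):

dnl configure.in - hypothetical starting point for a gradual migration
AC_INIT(src/main.c)
AC_CONFIG_HEADER(config.h)
AC_PROG_CC
dnl add new feature tests here as you migrate them out of the old makefiles
AC_OUTPUT(Makefile)

autoheader can then generate config.h.in, and ./configure produces Makefile and config.h from the corresponding .in files.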
Even more (Score:5, Insightful)
In other words, it's better to have one makefile, and fix the problems in the makefile, than to have to fix problems in configure or your configure specification.
If you want an example of how this can work, look at Dan Bernstein's djbdns, or qmail. On both of these, "make" Just Works(tm).
-russ
Re:Even more (Score:1)
We are using includes in our makefiles to set up for different environments (not just different OSs). They are tremendously complex and it takes a while to get a project set up, but it provides a good deal of relief for the average developer on day-to-day builds.
We also have a dedicated build and release group who maintain the includes. They are a good resource when we have questions either in general or specific to our system.
Using autoconf adds another set of software to an already complex build system. Invest the time to standardize and fix your Makefiles. It will pay off as you add more systems.
Good luck,
Allen
Re:Even more (Score:2, Informative)
autoconf normally only changes the Makefile to handle things like compiler options and library options. DJB handles those by creating little files which the Makefile uses. It is entirely possible to use autoconf in this fashion, and it requires no work beyond what DJB's system requires.
The autoconf system is certainly more complex than DJB's system. I think it is also more powerful and more consistent. It's far from perfect, but I don't think you've clearly identified the problems it has.
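For readers who haven't looked at those packages, the "little files" approach is roughly this (a hedged sketch, not DJB's actual makefile; the recipe line needs a leading tab):

# conf-cc contains a single line such as:   gcc -O2 -Wall
# and the Makefile reads it whenever it compiles something:
foo.o: foo.c conf-cc
	`head -1 conf-cc` -c foo.c

Changing the compiler or its flags then means editing conf-cc, never the Makefile.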
As Pro Linux as I am.... (Score:1, Offtopic)
Windows troll, was: Re:As Pro Linux as I am.... (Score:1)
Re:Windows troll, was: Re:As Pro Linux as I am.... (Score:2, Interesting)
Why would that be so difficult?
Re:Windows troll, was: Re:As Pro Linux as I am.... (Score:1)
/Erik
Re:Windows troll, was: Re:As Pro Linux as I am.... (Score:1)
And as far as making it easy for your average user, most software (for linux at least) is released as
Cool, but far from universal (Score:1)
No, it won't be the universal solution...
Just you try with the Nvidia driver: you'll have to install the rpms, then modify the XFree86 config, then restart the whole shebang. It "could" have been easier.
Re:Cool, but far from universal (Score:1)
Re:Windows troll, was: Re:As Pro Linux as I am.... (Score:1)
As far as $PORTABLE_LANGUAGE, in large part that's what configure is.
Source versus binaries (Score:3, Insightful)
I think the important difference you're forgetting is that Windows installers are installing binaries, while configure and make are building source code. A better analog in the Linux world is rpm or apt-get. There are very few programs these days that don't come with prebuilt binaries for several Linux configurations. And, no, I don't think rpm/apt-get (graphical, if you like) is much harder than setup.exe (modulo dependency satisfaction in rpm, but I would hardly consider the fact that most Windows programs are distributed with (possibly out of date) versions of a lot of libraries you already have to be advantageous).
I would argue that one of the functions of configure (aside from detecting hella shit about your build environment) is to "ask a few questions." Just do
Re:Source versus binaries (Score:1)
There is no reason why 99% of applications that rely on the current configure && make && make install could not be wrapped with a graphical front end. The same "questions" you need to "answer" with the configure script can easily be represented by a multi-stage graphical front end. For example, choosing the installation directory just becomes a --prefix= argument to the configure script. Do you want to compile with SDL or use X overlays directly? That's a radio button which becomes a --with-sdl or --without-sdl (for example) argument.
Why would it be overkill? If the actual installer was part of your system anyway, based on say Python and Tk (as examples), then the actual installer script for each application would be a few lines of code. Applications that don't require a whole bunch of questions can just get the default installer (e.g. it just asks you for the installation path) and that can simply run configure for you.
The whole point of my idea is to reinforce a previous poster's comment that expecting users to open a terminal and run configure etc. from a CLI is actually a high barrier to entry for most users who are migrating from Windows. Why should we make it tough on them? Why can't we try to focus on making Linux as easy to use as possible?
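To make the point concrete, a hedged sketch of how thin such a wrapper could be, reusing the --prefix and --with-sdl/--without-sdl options from the example above (command-line arguments stand in for the hypothetical GUI fields):

#!/bin/sh
# Translate two "installer questions" into configure options.
prefix=${1:-/usr/local}        # "Where should it be installed?"
use_sdl=${2:-yes}              # "Compile with SDL?" (radio button)
opts="--prefix=$prefix"
if [ "$use_sdl" = yes ]; then
  opts="$opts --with-sdl"
else
  opts="$opts --without-sdl"
fi
./configure $opts && make && make install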
Re:Source versus binaries (Score:3, Interesting)
This is why so much development effort has been put into creating nice, user friendly graphical binary installers, and why so little effort has been put into more general Tk configure frontends. Development energy can be better spent elsewhere.
Besides, the problem of build failure is a serious one. The other day I compiled an (autoconf-based) application that was supposed to build out of the box on my system. The developer, however, had not tested the application with gcc 3, and I had to make half a dozen changes to the source to get it to compile. What do you propose your graphical newbie-friendly builder does at this point? You're pretty much screwed, because there's no way to recover from "error X on line Y" aside from telling the user that there was an insurmountable failure. Binary installers work much better for newbies, because your installer can detect and report (and possibly fetch and install) missing libraries on which the new application is dependent. Then, with very high probability, the installation will succeed. (At that point, the only stumbling block should be things like the locations of configuration files.)
.Net may prove otherwise (Score:1)
Take a look at 10 Tips for Great .NET Programming [fawcette.com]. Tip #10 states that:
This seems to allow one to write a compiler/installer solution on the fly.
Gokhan Altinoren
Re:Windows troll, was: Re:As Pro Linux as I am.... (Score:1)
#!/bin/sh
./configure && make && make install
Now, how difficult was that? Surely simpler than clicking on many buttons...
--
Re:As Pro Linux as I am.... (Score:2, Insightful)
Re:As Pro Linux as I am.... (Score:1)
however, i have to agree that if people are taught how to do things, this becomes a non-issue. i guess everyone just gets caught up in the "easy-as-possible" way of thinking and the teaching part of things goes to the wayside.
Re:As Pro Linux as I am.... (Score:2, Informative)
Well first, this isn't the issue here. We're talking about compiling from source which isn't the same as deciding how to install a binary once compiled.
As for rpm, etc you just want a graphical front-end, I think.
That said, Windows' setup.exe is actually unnecessarily complicated for users. With ROX, we're using application directories [sf.net]. There is no setup program because the program can just run in-place. See the example at the bottom of this page [sf.net]. As a bonus, running a source package is done in exactly the same way as running a binary (it just takes a bit longer).
The whole business of installing is terribly arcane if you think about it (hint: the computer already has everything it needs to run the application... why the extra steps?)
Re:As Pro Linux as I am.... (Score:4, Funny)
#!/bin/sh
./configure
make
make install
Re:As Pro Linux as I am.... (Score:1)
Re:As Pro Linux as I am.... (Score:2, Insightful)
Apple and Oranges (Score:1)
setup.exe is a way of installing software to your system. Actually, it would be better to compare with Debian 'deb' files or RedHat 'rpm' files. Using apt-get should be equal to running setup.exe, assuming you have access to all deb files a certain application is dependent on.
/Jonas U
Re:As Pro Linux as I am.... (Score:2, Informative)
apt-get --compile source packagename
and go have a cup of coffee...
Re:As Pro Linux as I am.... (Score:2)
By default, apt-get only grabs/compiles sources from sites which host only sources signed by Debian-trusted developers. Someone blatantly abuses their power? Their key gets yanked from the trusted ring, and apt-get will no longer fetch their packages -- done. apt-get can also verify signatures on the client machine.
Unless you add extra package sources by hand, in which case you've made a conscious decision to trust someone else, apt-get will indeed only fetch packages from trustworthy folks w/ valid agendas.
Re:As Pro Linux as I am.... (Score:2)
they type rpm -i supercoolprogram.rpm
or better yet, I tell KDE for them to do that when they double click on an RPM file.
It works great, until you get to developers' apps that require 60-70 bleeding-edge libs that don't come with that super old Red Hat 7.2 (gawd, running a production release? what are you in the stone ages?)
and they refuse to statically link the program so it will install anyway for the newbies.
Basically most productivity software out there is specifically packaged to discourage newbies from running it. Who wants people other than glibc and GTK experts running their software?
Re:As Pro Linux as I am.... (Score:1)
Re:As Pro Linux as I am.... (Score:2)
The only people who have a hard time with a command line are the people who have been trained on nothing but Windows and have no desire to learn anything new. Any brand spanking new computer user who has no experience will be happy learning how to use a computer. Last time I checked, most people are not afraid to type.
Sharp stick in a sore spot (Score:1)
My current nightmare install has a hard-coded db sid (ORCL), $ORACLE_HOME, and other asst. gems all wrapped up in a NT setup.exe.
The $DOLLARS/day service engineer 'just happens' to have the exploded setup files on cd (that I am normally not allowed to see). What are they? A vb script that copies a few files and executes a few sql scripts (and not very well, mind you).
Feh. Even the oddest of Makefile setups allows me to get in and figure things out if a make fails.
--
Re:As Pro Linux as I am.... (Score:2)
Re:As Pro Linux as I am.... (Score:2)
I do have to say that the gmake file for NT is nice and clean due to the uniformity between the Windows platforms. A lot of pain would be saved if the Unix people started refusing to support some of the older and non-standard Unix versions so that we could use a single makefile.
One comment: if at all possible, the biggest win in readability and maintainability is to put explicit #ifdefs into the source, and use gmake and the output of uname to put 'if's into the makefile. This stuff is immeasurably easier for a programmer to figure out and debug than autoconf. It would be nice if somebody made a make replacement that explicitly compiled the dependency rules from a sequential language with normal if and looping constructs and obvious built-in tests for all the stuff in autoconf.
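A hedged sketch of that uname-driven gmake style (the flags are illustrative; the Solaris socket libraries are the usual -lsocket -lnsl pair; the recipe line needs a leading tab):

UNAME := $(shell uname -s)
ifeq ($(UNAME),SunOS)
  LDLIBS += -lsocket -lnsl     # Solaris needs these for sockets
endif
ifeq ($(UNAME),Linux)
  CFLAGS += -DHAVE_STRSEP      # hypothetical feature flag
endif

prog: prog.o
	$(CC) $(CFLAGS) -o prog prog.o $(LDLIBS)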
Why autoconf, automake and libtool fail (Score:5, Informative)
Here's an article on the subject, written by Uwe Ohse (you can read the original article here [www.ohse.de]). Many of the problems were fixed in the meantime, but it makes an interesting read nevertheless.
autoconf

config.guess, config.sub
There was no source from which one could get up-to-date versions of these scripts, which are used to determine the operating system type. This often caused pain: ask the OpenBSD port maintainers about it. (By the way, there is now a canonical source for them: ftp.gnu.org/gnu/config.)

autoconf takes the wrong approach
The autoconf approach is, in short:
No, it doesn't work well enough. This approach has led to an incredible amount of bad code around the world.
Studying the autoconf documentation, one learns what kinds of incompatibilities exist. Using autoconf one can work around them. But autoconf doesn't provide a known working solution to such problems. Examples:
Yes, this solves a problem. But it's overused, and it's dangerous. In many cases unportable.c doesn't work on the developer's system, so she can't test it. In other cases unportable.c only works correctly on _one_ kind of system, but will be used on others, too.
Yes, the often-used packages _have_ been tested almost everywhere. But what about the less often used? ...
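(For reference, the replacement-function mechanism being criticised here looks roughly like this - AC_REPLACE_FUNCS and the LIBOBJS substitution are standard autoconf; the function name and file layout are just an example, and the recipe line needs a leading tab:)

dnl in configure.in: if strsep() is missing, compile strsep.c and
dnl add strsep.o to the LIBOBJS substitution
AC_REPLACE_FUNCS(strsep)

# in Makefile.in:
prog: main.o @LIBOBJS@
	$(CC) -o prog main.o @LIBOBJS@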
Keep in mind that there is no central repository of replacement functions anywhere.
This is surely nothing which can be avoided completely, but it's something which has to be avoided wherever possible.
There's a solution to this problem, but it is completely different from what's used now: instead of providing bare workarounds, autoconf (or a wrapper around it) ought to provide an abstraction layer above them, and a central repository for such things.
That way a programmer wouldn't use opendir, readdir, closedir directly but would call them through wrap_opendir, wrap_readdir and wrap_closedir functions (I'm aware of the fact that the GNU C library is this kind of wrapper, but it hasn't been ported to lots of systems, and you can't rip only a few functions out of it).
autoconf macros are inconsistent
For example: AC_FUNC_FNMATCH checks whether fnmatch is available and usable, and defines HAVE_FNMATCH in that case. AC_FUNC_MEMCMP checks for the availability and usability of memcmp, and adds memcmp.o to LIBOBJS if that's not the case. Other examples exist.

autoconf is not namespace-clean
autoconf doesn't stick to a small set of prefixes for macro names. For example it defines CLOSEDIR_VOID, STDC_HEADERS, MAJOR_IN_MKDEV, WORDS_BIGENDIAN, in addition to a number of HAVE_somethings. I really dislike that, and it seems to get worse with every new release.

My absolutely best-loved macro in this regard is AC_FUNC_GETLOADAVG, which might define the following symbols: SVR4, DGUX, UMAX, UMAX4_3, NLIST_STRUCT, NLIST_NAME_UNION, GETLOADAVG_PRIVILEGED.

autoconf is large
I'm feeling uneasy about the sheer size of autoconf. I'm not impressed: autoconf-2.13.tar.gz has a size of 440 KB. Add automake to that (350 KB for version 1.4). Does it _really_ have to be that large? I don't think so.

The size has a meaning - for me it means autoconf is very complicated. It didn't use to be so, back in the "good old days". And it accomplished its task. I really don't see that it can do so much more today (I don't mean "so much more for me").

configure is large
Even trivial configure scripts amount to 70 KB in size. Not much?
Compressed with gzip it's still 16 KB. Multiply that by millions of copies and millions of downloads.
No, I don't object to the size as such. It's perfectly OK if you get something for it. But you don't: about half or more of each configure script could be thrown away without any loss.
The configure scripts are an utter horror to debug. Please just try _once_ to debug 4000 lines of automatically generated shell script.
Note to the autoconf maintainers: the road you're on is going to end soon.

autoconf is badly maintained
Let me clarify this first: I don't think badly of the development; what I'm missing is maintenance of the already released versions. Now, at the end of 2000, almost two years have passed without a maintenance release of autoconf. Nine months have passed since a security problem was found (in the handling of temporary files). More bugs have been found, of course. ...

I know that nobody likes to dig in old code, but two years is a little bit much.

automake
My primary objection to automake is that automake forces me to use the same version of autoconf everywhere. Autoconf has a large number of inter-version conflicts anyway, but automake makes that situation worse, much worse.

I'd need the same version of both tools on all machines on which I happen to touch the Makefile.am or configure.in or any other autoconf input file. There are a number of reasons for that; one of them is that automake used to provide some autoconf macros during the years autoconf wasn't developed at all, and these macros have now moved to autoconf, where they belong. But if you happen to use, say, AM_PROG_INSTALL, and later versions declare that macro obsolete
That doesn't sound too bad? Ok, but suppose
I found that hard to deal with. Things were even worse since every CVS checkout tends to change time stamps, which can mean that autoreconf is run even if no change has been made to any autoconf input file.
Don't misunderstand me: I don't attribute that to automake. I attribute it to the internal instability of autoconf. Unfortunately you can't have automake without autoconf.

libtool
Libtool adds an additional layer of complexity. I never had any reason to look at the insides of libtool (which proves that it worked for me). But having one project which used autoconf, automake and libtool together was enough - never again. I got exactly the same problems as I got with automake, described above, but they were worse and happened more often.
One problem with libtool is that releases don't happen very often. Libtool rarely is up to date with regards to changes on some operating systems. Which makes it difficult to use in packages meant to be really portable (to put it mildly).
A libtool script and a number of support files are distributed with every package making use of libtool, which ensures that any user can compile the package without having to install libtool before. Sounds good? But it isn't.
Another problem is the size of the libtool script. 4200 lines ...
summary
Autoconf is the weak link of the three tools, without any doubt. Version 1 wasn't really designed, and version 2 made a half-hearted attempt at dealing with the design problems. I'm not sure about the next version.
Re:Why autoconf, automake and libtool fail (Score:1)
Re:Why autoconf, automake and libtool fail (Score:1)
Re:Why autoconf, automake and libtool fail (Score:1)
Re:Why autoconf, automake and libtool fail (Score:1)
An idea: fork the libxml code, and make your own library, only supporting the forked one. And on your website and in your documentation, explain why you did that (incompatible releases of the same library).
Re:Why autoconf, automake and libtool fail (Score:1)
For 2.0, they've changed the signal handling, menu short-cuts, text rendering, image handling, etc.
Do I require everyone to install Gtk+-2.0 (when it's officially released, of course), thus annoying everyone who hasn't upgraded their distro in the last few months? Or should I stick to 1.2, and not let anyone benefit from the new features, even if they have it installed?
Or shall I just use autoconf and support both?
Re:Why autoconf, automake and libtool fail (Score:1)
I think this is a problem with autoconf/automake/etc. They can be a little too convenient and will turn good design and code into #ifdef spaghetti in no time. Trust me. I wrote a program which supported GNOME, QT, _and_ plain GTK+. While the QT code was very separate, the GTK+ and GNOME code overlapped a lot, so I thought "hey, code reuse!" Bad idea. Once a new GTK+ version was out (no matter if it was a minor release or not) _everything_ broke. And there is always the case of relying on this one special "feature" of GTK+ only to have it gone in the next bugfix.
Re:Why autoconf, automake and libtool fail (Score:1)
Re:Why autoconf, automake and libtool fail (Score:1)
autoconf/automake is not a silver bullet, but it does provide a uniform way for users to compile and install your package. Virtually every Unix user knows to do
libiberty (Score:2)
There's already a library which does some of this, and chances are you already have it installed: libiberty [gnu.org]. Perhaps its role could be expanded a bit?
Re:Why autoconf, automake and libtool fail (Score:2)
checking for Cygwin environment... (cached) no
checking for mingw32 environment... (cached) no
Autotools also work fine on SunOS, HP-UX and SCO. libltdl (libtools wrapper library for dynamically loadable modules) supports
And the makefiles output by autoconf/automake should work on every known version of make. They don't rely on any vendor specific features.
Make the switch but be prepared... (Score:5, Informative)
Autoconf and friends make it infinitely easier to compile our code. However, you will have to put in a fair bit of work writing the variety of tests required to handle the idiosyncrasies of each build. You are probably already doing something similar if you can build on multiple platforms.
Autoconf has been well worth the initial effort. Occasionally new compile problems crop up, but they are usually solved by the addition of another one-line check in configure.in.
Selling autoconf should be easy. Wrestle with each compile problem once while getting autoconf working, or have users repeatedly wrestle with the problems without autoconf.
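A hedged example of the kind of one-line check meant here: a new platform is missing a header or a function, so you add a check and the result shows up in config.h for the code to test.

dnl one-line additions to configure.in; configure then defines
dnl HAVE_SYS_SELECT_H, HAVE_GETTIMEOFDAY, etc. in config.h
AC_CHECK_HEADERS(sys/select.h)
AC_CHECK_FUNCS(gettimeofday)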
Read the book! (Score:4, Informative)
The maintainers of the autotools (autoconf, automake, libtool) wrote a book [redhat.com] to help explain the approach used by the tools. (Yes, it's called the goat book. Read the page to find out why.)
I've seen an amazing amount of crap posted in these comments; the parent article by jvl001 is one of the few good exceptions. NO tool can get it all; the autotools get you about 90%, and you have to help them the rest of the way. There are solutions for just about all of the problems and red herrings I've seen posted here, but you need to look a little farther than /. to find them.
Ant (Score:2, Informative)
Re:Ant (Score:2, Informative)
Re:Ant (Score:2, Informative)
Just for the record: libtool and AIX (Score:3, Informative)
I spent quite a bit of time this summer trying to use autoconf'd stuff on AIX (Gtk+). I played with a pile of recent and not-so-recent versions of libtool. It was a pain in the butt. (Granted, linking on AIX is
I think the power of the autoconf suite to make things work across "all Unices" is a bit exaggerated. Check whether it really does a good job of supporting all the platforms you need, first.
Re:Just for the record: libtool and AIX (Score:1)
Portability (Score:1)
Autoconf does help portablity to "normal" OS's. (Score:3, Informative)
Thanks to autoconf, some really nasty #ifs in the code have been replaced by a single include line; it's also able to simplify code by removing 50 trillion OS-specific checks from the source files and only inserting them when needed.
Once you get over the basic learning curve hurdles, then you should be fine.
It would be nice though if there was a makefile -> autoconf converter (but that is just me).
just do it (Score:2, Insightful)
The talking about takes longer than doing it.
Then you'll really know.
Beware autoconf if you are cross-compiling!! (Score:5, Interesting)
A few macros (particularly AC_TRY_RUN, and anything that calls it) compile and run a program on the host to test for functionality. Unfortunately, there is no way to tell autoconf that you aren't going to be able to run the compiled program, because it's for your target platform.
Many of the AC_FUNC_ macros simply DIE when running in a cross-compiling environment. For example, let's cross-compile the ncftp package. The configure.in uses the AC_FUNC_SETPGRP macro. Running autoconf results in:
configure.in:146: warning: AC_TRY_RUN called without default to allow cross compiling
That's right. This macro (defined elsewhere in the autoconf package) uses AC_TRY_RUN without a default clause. So instead of recovering nicely, the configure script dies when it tries to do this test. The only way to get around this is to modify /usr/share/autoconf/acspecific.m4 or not do the test!
Re:Beware autoconf if you are cross-compiling!! (Score:5, Informative)
Define CONFIG_SITE in configure's environment to point to a config.site file. Then put into your config.site file a line like:
ac_cv_func_getpgrp_void=yes
Yes, you do have to look at the configure code to find that name, but it lets you give the software the answers it needs.
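A minimal sketch of that setup (the cache variable is the one named above; the path and target triplet are only examples):

# config.site for the target, e.g. /opt/cross/arm/share/config.site
ac_cv_func_getpgrp_void=yes

# then point configure at it:
CONFIG_SITE=/opt/cross/arm/share/config.site ./configure --host=arm-linux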
HTH,
Eli
Re:Beware autoconf if you are cross-compiling!! (Score:1)
Ahh I admit my ignorance!
That's what I get for spouting off like that. You're right--config.site is the answer for many of these situations.
Re:Beware autoconf if you are cross-compiling!! (Score:1)
One of the added benefits to that approach is that you can accumulate a config.site file that answers questions for most of what you cross compile. That way, you only have to say that a word is 4 bytes once.
HTH,
Eli
Re:Beware autoconf if you are cross-compiling!! (Score:1)
Moreover, you have to ask yourself -- what's the alternative? Most other configuration frameworks I've seen simply fold up and die when presented with a cross compilation environment (especially weird non-standard ones). Autoconf, with just a smidgen of forethought, works like a champ.
Re:Beware autoconf if you are cross-compiling!! (Score:1)
That's right. This macro (defined elsewhere in the autoconf package) uses AC_TRY_RUN without a default clause. So instead of recovering nicely, the configure script dies when it tries to do this test. The only way to get around this is to modify /usr/share/autoconf/acspecific.m4 or not do the test!
Those tests have been greatly improved in 2.5x, but there are of course lots of tests that you just can't do on the host machine. Solution: run those tests once on your target machine and add the results to your site-specific file. You can't expect autoconf to solve something that is impossible to solve.
Re:Beware autoconf if you are cross-compiling!! (Score:2, Informative)
I've used autoconf extensively when building with a cross-compiler. Your advice to ``stay far away from autoconf'' is unwarranted.
It's certainly true that building with a cross-compiler requires some extra care. It's also true that autoconf provides some tests which give errors when building with a cross-compiler. However, it's not arbitrary. Those tests can not be run correctly if you can not run an executable. The tests are provided as a convenience for the common case of a program which is never built with a cross-compiler.
For a program which is built with a cross-compiler, there are various ways to handle these tests. I usually write the configure.in script to check the cross_compiling variable, and, if set, perform the check at run time rather than at compile time. For example, if you have the source code to GNU/Taylor UUCP around, look at how the ftime test is handled.
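A hedged sketch of that pattern, expressed through AC_TRY_RUN's cross-compiling argument (this is not the actual UUCP test, and HAVE_WORKING_FTIME / FTIME_CHECK_AT_RUNTIME are made-up symbols):

dnl Try the feature when we can run programs; otherwise defer the
dnl decision to run time on the target.
AC_TRY_RUN([
#include <sys/timeb.h>
int main() { struct timeb s; ftime(&s); return 0; }
],
[AC_DEFINE(HAVE_WORKING_FTIME)],
[AC_MSG_WARN(ftime does not work; not using it)],
[AC_DEFINE(FTIME_CHECK_AT_RUNTIME)])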
There is a chapter [redhat.com] in the book I cowrote which discusses building with a cross-compiler.
Why not try Jam? (Score:4, Informative)
We're using a build tool called "Jam", which can be gotten from http://www.perforce.com/jam/jam.html. It does a very good job of cross-platform builds and is faster than make at determining include dependencies.
An open source project that uses its own version of Jam is the boost libraries at http://www.boost.org/
Enjoy!
Re:Why not try Jam? (Score:2, Informative)
Ahh, you are a knowledgeable one. :) I was hoping someone would mention jam. It not only handles platform-dependent tasks, it's a full replacement for make and actually generates correct dependencies. It might be a little tough to convert a make-based project to jam, but it's the way I would go starting out.
boost.build [boost.org] is the build system for the Boost [boost.org] libraries which, as mentioned, uses jam. In fact it uses an advanced version of jam with many new features. I'm not sure if those will be rolled back into the "official" jam sources (boost jam is actually a derivative of FT Jam [freetype.org], from the FreeType [freetype.org] project).
Jamfiles (analogous to Makefiles) are platform-independent. A "Jamrules" file holds all the configuration-specific information. Some systems use autoconf to generate this. Boost does not, and their build system is very flexible, allowing one to not only define platform-dependent things but also specify build features such as whether to make static or shared libraries (or both!), optimization levels, etc. A single build run can build shared and static libraries at several optimization levels, for example.
About to do the same (Score:4, Insightful)
We have a monstrously large software project with home-grown kludges for building on multiple platforms that is a mess of shell scripts and makefiles. It's ugly, and the time required to keep fixing and extending this system is a heavy burden. Plus, it's only barely tolerable to use.
I've used autoconf a little on a different smaller project and I know this much about it.
When autoconf works, it works great. As a user, there's great delight in typing those 3 magic commands and having a whole series of feature tests fire off and configure the build process. And, as a developer, every time the system works (and it works more often than kludge systems), it saves me from the hassles of porting to the platform du jour. If you look at some fairly complicated systems such as the Octave or teTeX builds, it's nothing short of amazing.
However, that said, autoconf is complicated. And, you really have no choice but to learn a fair amount of the complications to deal with the issues that will inevitably come up during cross platform testing. I've mostly learned autoconf, a little of automake and almost none of libtool. The m4 macro language for defining the autoconf macros has syntax that always gives me nausea.
If you implement autoconf for your project, great. But, expect a substantial fraction of your time-savings from it to be dedicated to becoming more of an expert at autoconf, m4, sh, etc.
autoconf is better than imake, IMHO, but it sure is short of being as good as it should be. Large projects can justify the time for a person learning the intricacies of autoconf; small projects cannot, unless you happen to know it already. Learning autoconf is kind of like learning nroff just to write a man page - the time investment is more justifiable if you use your expertise to write more than one of them, or to write a really important man page.
P.S. Is there any chance that the software carpentry project will come out with something soon? Some of their early submissions for improved build tools really sounded intriguing, like Tan, an XML based description.
External output/temp directories? (Score:2)
Writing them somewhere else makes doing backups much easier, plus the directories don't get overcrowded. You can even make source distros just with a simple tar command.
However, using Makefiles leads to really ugly bungles. Autoconf doesn't, to my knowledge, have any kind of support for this.
Any ideas about how to do it easily and cleanly?
Re:External output/temp directories? (Score:2)
autoconf doesn't, but automake (or any other GNU Makefile standards-compliant makefile generator) can.
Try this:
% tar zxf package-version.tar.gz
% mkdir build-package
% cd build-package
% ../package-version/configure
% make
% make install
All your makefiles and such will be built in the build-package directory instead of the source tree.
thats the way you do it (Score:1)
in your Makefile.in, put:
VPATH = @srcdir@
in configure.in, you need:
AC_SUBST(srcdir)
the VPATH variable is documented in the GNU make manual.
Divide & conquer and other advice (Score:5, Informative)
Don't forget the Autoconf macro library [sourceforge.net] and also the fact that there are thousands of free packages out there which will have configure scripts from which you can borrow - try to find packages in a similar domain to your own.
The difficulty (complexity, time taken) of maintaining a package which works on N platforms is usually proportional to N - or, if you are unlucky, to some small power of N such as N^2. What your code is really doing is trying to understand the properties of the target system, and so the #ifdef __hpux__ in, for example, line 1249 of blurfl.c is actually trying to determine whether the quux is used like this or like that. Autoconf, on the other hand, will produce a single preprocessor macro for driving the quux. This means that you don't need an extra 15 lines of #ifs to handle quuxes in other operating systems. Hence you may not have fewer #ifdeffed bits, but each of the #ifdeffed bits will be shorter.
Autoconf works differently from the standard approach - it allows your code to work with each feature independently, and so while the normal approach is O(N) or O(N^2) in the number of supported OSes, the complexity of maintaining an autoconfiscated program is O(N) in the number of different features supported by the operating systems between them. The great thing about this is that Autoconf keeps these orthogonal and prevents these things from interacting too heavily, and it turns out to be the case that the total number of different features basically flattens out to some constant even when you continue to add more supported OSes (i.e. there are only a certain total number of different ways of doing things even though each of the N operating systems can choose among X ways of doing Y things).
So, what this means is that you should do the feasibility study as discussed above, but naturally autoconfiscating the whole system will take a while. The ideal time to do this is either
Another option to explore in combination with Autoconf is the Apache Portable Runtime [apache.org]. There is also Autoproject [gnu.org] but I suspect that is a little lightweight for your needs.
Taking all the above together I suggest that you
software carpentry (Score:1)
software carpentry [codesourcery.com].
SC Config [codesourcery.com] is the part of interest as an autoconf alternative, and contains links to 4 alternatives (BuildConf, ConfBase, SapCat, Tan).
Autoconf is not the only piece of the puzzle (Score:2)
With Libtool, you can be sure that shared libraries can be created, even on architectures/OS you don't have access to. That's a very important point.
Automake greatly eases the process of building clean packages for end users, with all the standard targets for 'make'. It also builds Makefiles that can automatically generate
Also, Autoconf, Automake and Libtool are aware of operating-system bugs that you probably don't know about if you've never worked on those systems. So they are your best friends for producing portable and reliable software.
A War Story (Score:2, Interesting)
When I started to work for my last employer, they were using RCS and static Makefiles plus a hand-crafted make and version control system that a prior contractor had introduced. The code base for the large integrated system was about 250,000 LoC of C/C++ code that ran on a number of Solaris 2.5, Linux and Windows machines. It used GNU tools and libraries quite liberally and thus was dependent on the regular introduction of external code.
But since a number of developers didn't really grasp what make was doing, they would simply copy the makefile of one of the larger subprojects and tweak it until they got it working. This led to each of the roughly 20 sub-projects having Makefiles of about 2000 lines each. Quite unmaintainable.
I started to use auto* for my own library that, in the beginning, was fairly separate from the other projects. More from curiosity than actual need (I know how to use make B-) ), I put first autoconf, then automake and libtool into the code, which was at that time about 20,000 LoC. I also got everybody to switch to CVS, since it supports multiple developers on the same project much better.
It worked quite well I must say. For the following reasons:
Could this be done with only some make magic? Yes, if everybody knew make well. But in the real world people who really grok make are few and far between.
would your own solution handle shared libs on multiple systems? Maybe you are that good. I am not. I use libtool.
Like any software, these tools have their problems. For instance the handling of languages like Java is suboptimal in automake. But this is open source: if you have a problem, scratch your own itch and trust that others will scratch theirs.
Mozilla (Score:2, Informative)
nikel
autoconf/automake good; libtool bad (Score:3, Insightful)
This is ridiculous (Score:2)
Building shared libraries on multiple systems is a task that many people face. Any time you expect to write a shared library and have it ported across various unix-like platforms, you have this problem. Getting shared libraries working across even a few platforms is difficult enough. Linux, Solaris, and FreeBSD? Each have different requirements and quirks and (especially in the latter case) brokenness.
If you're just writing something on Linux with zero foresight as to portability, you might take the approach that "this doesn't matter," but that would be pretty naive.
Also, libtool does more than just implement a layer of portability for building shared libraries (which is its main goal). It has library dependency tracking, version maintenance, and a module loading API. These, together with the extreme ease of building a shared library when coupled with automake and autoconf, make it an indispensable tool.
Get a clue (Score:2, Insightful)
When a
OT: language choice (Score:3, Informative)
Other languages take different approaches. Java has a very large set of libraries that are specified to be part of the language and so must be included. The Java language is also constant between platforms. Not every platform has a conforming Java environment, but the most popular ones do. Common Lisp also has huge functionality in its standard library that is part of the language specification. OCaml has a nice standard library, and is open source. If you want your program to work in OCaml on some unsupported architecture, you can compile it yourself. This still leaves porting the library. If the target architecture has POSIX, this is easy.
You need to understand portability, first. (Score:5, Informative)
Autoconf is a tool that in the end can only make portability choices for you. In order for those choices to mean anything you have to have a need for your software to be portable (to a wide number of platforms, really), and you need to understand the real issues with writing portable software.
If you're writing for FreeBSD, Solaris, and Linux 98% of the time for application software you can write it so there are no portability issues. Why have the autoconf step when you can "make;make install"? Modern systems are not all that different for high level stuff, and are converging for medium level stuff. It's really only the low level details keeping them apart.
If, on the other hand, running on an old Ultrix box, on your SCO Unix box, or on that PDP-11 in your garage is important, autoconf can give you the mechanisms to make all that work, but only if you know the differences between the platforms and what changes need to happen to your code to make it work. It does no good to have autoconf check to see if bzero exists if you don't know to use memset as an alternative, or vice versa. A check without an alternative is just a way of bombing a little sooner than the compile stage.
The other thing autoconf can help with is optional packages. These are not portability issues per se, but rather choices that need to be made but often aren't worth bothering a user about. Consider the application that's all command-line based except for a single X app that's not really needed, just nifty. Well, if the system doesn't have X, you don't build it, and if there's no X it's unlikely the user wanted to run X apps anyway.
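A hedged configure.in sketch of such an optional component (AC_PATH_XTRA and the no_x variable are standard autoconf; the program name is made up):

dnl Build the optional X front end only when X is actually there.
AC_PATH_XTRA
if test "$no_x" = yes; then
  OPTIONAL_X_PROGS=
else
  OPTIONAL_X_PROGS=niftyxapp
fi
AC_SUBST(OPTIONAL_X_PROGS)
dnl Makefile.in can then say something like:  all: mainprog @OPTIONAL_X_PROGS@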
As far as the mechanics go, autoconf is fairly easy. Once you understand the changes that need to happen making autoconf make them for you is trivial.
dissenting view (Score:5, Interesting)
basically, the autoconf/configure school of portability says "forget about actually writing portable code, just write it for each variant and let the build process pick". that's whacked. you'll continually be running across new variants, new ways in which systems are different, or just new systems. for example, i use Plan 9, which has a very good ANSI/POSIX compatibility system (better than many *nix systems). despite being near-fully ANSI compliant, pretty much every autoconf-generated configure fails because it doesn't recognize the output of uname or something stupid like that (of course, that says nothing about when programs that claim to be ANSI or POSIX aren't, like including non-ANSI/POSIX headers, typically BSD stuff).
this school of portability also typically makes your code far less readable - littering it with ifdefs every third line - and much larger. it ends up taking much more time than just slowing down and disciplining yourself to write real portable code.
the argument that 'configure ; make ; make install' is easy is stupid, as well. y'know why? 'cause 'make install' is easier. build your makefiles well. the 'install' target should have prerequisites, obviously; make will build them for install. and 'configure' is slower and more prone to failure than 'cp makefile.linux.386 makefile', which is light years more reliable. and editing an example makefile is way easier than putzing with autoconf/configure if something doesn't work. not to mention easier to debug (uh, 'echo' anyone?).
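a hedged sketch of the style being advocated - one plain makefile per platform, with install depending on the build (recipe lines need a leading tab):

# makefile.linux.386 (the user copies this to 'makefile')
CC = cc
CFLAGS = -O
prog: prog.o
	$(CC) $(CFLAGS) -o prog prog.o
install: prog
	cp prog /usr/local/bin/prog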
on a personal note, having done portability work a good bunch, i'd offer just a little extra advice: do not require GNU-specific stuff. don't use any of the gmake-specific features or gcc language extensions. GNU propaganda aside, that'll just kill your portability. and if you need something both simpler and more powerful than make, check out mk [vitanuova.com] from the Plan 9 and Inferno distributions. Vita Nuova's got a version that runs on various unixes and win32 systems. it's very similar to make, but a bit more general and flexible. but, given that you've already got all the makefiles, i'd suggest your best bet is just sticking with plain makefiles and cleaning them up.
Re:dissenting view (Score:2, Insightful)
This is not true at all. There's nothing in autoconf that mandates sloppy programming.
the argument 'configure ; make ; make install' is easy is stupid, as well. y'know why? 'cause 'make install' is easier.
OK, but what if I want to set a different prefix, turn on optimisation, enable debugging, and tell the build where Qt is installed ? I've seen straight make builds where this information is distributed across several Makefiles, and that's a pain.
i'd offer just a little extra advice: do not require GNU-specific stuff. don't use any of the gmake-specific features or gcc language extentions.
This I agree with.
Re:dissenting view (Score:2)
Re:ohhhh is plan9 awesome or what (Score:2)
There are a lot of cool things about plan9, but I just can't get over the mouse-centric GUI. Even in the terminals you have to use the mouse all the time and I freaking hate it.
And the Inferno GUI just seems like a buggy and half-assed Windows 95 clone.
The lack of a web browser for plan9 pretty much sucks, too.
Oh well, I hope that some of their better ideas make it into more mainstream OS's (yeah, right.)
Re:dissenting view (Score:2)
more practically, i've admined many unix (primarily Solaris, but others as well) boxes which didn't have gmake installed. when i want package foo, i don't want to have to install gmake to get it, particularly on sensitive systems or in situations where time and/or space is a concern. i've also been a user on boxes other people administer. disk quotas often make building and storing my own build tools impractical when what i really want is some other package.
i've written makefiles for several (admittedly small) projects, and not found it to be a problem. mk is better, but less commonly available. i still think if you're aiming at portability or generality of your build process, use straight make.
What project is it? (Score:4, Funny)
Linus is that you? Or maybe those whacky XFree86 guys.
Autoconf/configure to the rescue! (Score:2)
I'll even go for xmkmf before resorting to editing Makefiles by hand ala xanim. I've done editing of Makefiles but... a trip to the dentist is easy as pie in comparison.
Re:Autoconf/configure to the rescue! (Score:2)
Also, I have manually upgraded many libraries; but when I'm told I must upgrade gcc, I balk. At that point, I download the newest version of Red Hat and its associated errata and go to work, heh.
Book (Score:1)
Re:Book (Score:2)
GNU autoconf/automake/libtool are Open Source, too (Score:1)
A lot of people are complaining that autoconf doesn't do this or libtool doesn't work on this platform. Please remember that all of these tools are Open Source, so you may fix them yourself and contribute the changes back to the respective projects!
Granted, not everyone has the time or the desire to fix such complex tools, but the author of the Berkeley amd story [columbia.edu] contributed many of his new tests back to these projects.
Why not improve the status quo for everyone by contributing or fixing these tools when you decide to use them? You may learn a new skill or two that you may add to your résumé in the process!
Be careful... (Score:3, Insightful)
Current versions of configure seem to sometimes produce makefiles which can't be used with classic Berkeley make, and depend on GNU extensions...
bugs and version differences... (Score:2, Interesting)
Look at the PHP [php.net] project. They use autoconf. Yes, it (autoconf) works pretty well for PHP most of the time.
However, you can use only certain versions of it, because the older versions do not have the necessary features, and the new ones break backwards compatibility or just plain have bugs that prevent PHP from building with that new version. So you are often locked in to a certain couple of versions of autoconf, which is one of the main problems autoconf is supposed to fix in the first place.
It does contribute a bit to developer frustration, but, in the long run, it is much better than most other things I have seen so far, so it's probably not that bad of a choice after all...
:)
the best in build configuration mgmt (Score:4, Funny)
Use tmake (Score:2, Interesting)
Autoconf/Automake is a mixed blessing (Score:3, Informative)
Following these simple rules has made it very easy for me to create sane makefiles across projects with a very large number of subdirectories and sources.
Autoconf is just a tool (Score:3, Interesting)
In my experience with software projects, a straightforward build system is essential. It permits consistent builds, which is essential for debugging complex problems. It permits easy builds, which is essential for developer testing. The hardest part of programming is debugging, and good debugging requires speeding up the edit/compile/debug cycle, while ensuring that everybody still gets the same result for each build.
So I would say that no matter what, you should improve your build system.
Given that, should you use the GNU build system, which is autoconf/automake/libtool? Well, it depends. There is a definite learning curve to these tools. They are written in several different languages (sh, m4, perl) and dealing with certain problems can require understanding each of those languages and how they interact. Using these tools will not automatically make your code portable or easy to build; you have to use them correctly, and you have to understand what you are doing.
On the other hand, every other system which supports a complex build process also has a steep learning curve. There's no easy solution--if there were one, everybody would be using it.
The GNU build system is widely used, so there are a lot of people who can help you with it. The mailing lists are active and helpful. There is also a book [redhat.com], but I'm one of the co-authors so you'd better take that with a grain of salt.
I've converted large projects from ad hoc build systems to using the GNU build system. It's usually pretty straightforward--a matter of a week or so--but then I have a lot of experience with the tools.
I've never used most of the other build systems (e.g., Odin, Jam, ANT) for a serious project, so I can't honestly evaluate how they stack up. I can recommend against imake.
'portability' vs. 'optimization' (Score:2, Informative)
My experience with autoconf- and libtool-based build processes is that they tend to either a) require using a gcc-based compiler or b) only kick in optimization flags if the end user sets CFLAGS manually (and even then, the CFLAGS may not get carried over into all parts of the project).
So, depending upon your needs and just how portable you need to make your project, you might want to look at imake. While imake isn't 'simple' by any stretch of the imagination, one can take advantage of the fact that any system that ships with X11 developer packages has a working imake setup that includes a good set of optimization switches. The only big problem with imake is that a lot of folks don't set the site and/or host configuration files to change the compiler settings if they aren't using the manufacturer's compiler. [A simple #define HasGcc2 YES is usually all it takes!]
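With 2.13-era autoconf the usual workaround is to hand the compiler and flags to configure through the environment, for example (the flags shown are only illustrative):

CC=cc CFLAGS="-fast" ./configure --prefix=/opt/foo
make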
Re:Rational Clearcase is the only answer (Score:1)
Re:Rational Clearcase is the only answer (Score:2)
Been there, done that, used it, learned it, liked it, but geez the learning curve was high for newbies.
As for ClearCase config files: I suppose it is useful to have the source control equivalent of a gun that can simultaneously shoot only the deer of the right age, with antlers in excess of the legal minimum length, at the right time of season, during daylight hours; but having to trust that it won't go off when you point it at your own head in order to use it is, er, kind of unnerving. The KISS principle strongly applies here.
Re:Rational Clearcase is the only answer (Score:1)
The effectiveness of ClearCase depends on the quality of the implementation. The tool is just too flexible to evaluate it en masse. Successfully implementing ClearCase for your project takes a ClearCase guru. Setting up the project poorly can make it a "productivity virus." If the project is set up well it shouldn't be any more cumbersome than CVS, but it will offer more features.
The sticker price is pretty high though.
You want to support linux, forget imake (Score:1)
On a typical Linux system, the imake configuration files do not know how to handle OpenGL or Motif, let alone Qt or GTK, or non-X stuff like SSL. I have never seen an imake template that handles shared libraries in a portable way. If a software package does not support imake templates (few open source projects do), imake will not help you at all.
On the other hand, autoconf uses compile time tests to decide system characteristics. It will try to compile/link/execute little test programs, or execute shell scripts. (By the way, autoconf uses imake to find out X specific stuff
The basic problem is: autoconf will help you to check for system-specific stuff, but it will not write portable code for you.
Re:autoconf limits you to gnu make (Score:2)
Re:autoconf limits you to gnu make (Score:2)
This is nonsense. Autoconf generates a configure script, which makes substitutions in a user supplied Makefile template, Makefile.in. If you write a portable Makefile.in, the configure script will write a portable Makefile.
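For instance, a hedged sketch of a Makefile.in that sticks to portable make and lets configure fill in the blanks (recipe lines need a leading tab):

CC = @CC@
CFLAGS = @CFLAGS@
prefix = @prefix@
srcdir = @srcdir@

prog: prog.o
	$(CC) $(CFLAGS) -o prog prog.o
install: prog
	cp prog $(prefix)/bin/prog
clean:
	rm -f prog prog.o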