Why Switch a Big Software Project to autoconf?

woggo queries: "I'm a CS grad student working on a large research project (over 1 million lines of code, supported on many platforms). The project has been under development for several years, and the build system is nontrivial for end-users. We'd like to make it easier to build our software, and I'm investigating the feasibility of migrating to GNU autoconf. I need to demonstrate that the benefits of autoconf outweigh the costs of migrating a large system of makefiles with a lot of ad-hoc kludge-enabling logic. Has anyone made a similar case to their advisor/manager? Does anyone have some good 'autoconfiscation' war stories to share? (I've already seen the Berkeley amd story and the obvious links from a Google search....)" Depending on the intricacies of the build process, such a conversion might take an awful lot of work. It might be easier to put a nicer face on the "nontrivial build process", although there is something to be said for the ease of "./configure; make; make install".
  • Low costs (Score:4, Informative)

    by tal197 ( 144614 ) on Wednesday November 28, 2001 @10:05AM (#2624277) Homepage Journal
    I need to demonstrate that the benefits of autoconf outweigh the costs

    You don't have to migrate the whole thing at once. Just start with a nice simple configure.in that copies Makefile.in to Makefile and config.h.in to config.h and add #includes for config.h everywhere.

    Make sure new tests go in configure.in and you can slowly move across with little trouble. It should therefore be pretty easy to show that the costs are very low...
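    A minimal sketch of that kind of starter configure.in (autoconf 2.13-era syntax; the file names are only illustrative of the usual convention):

      dnl configure.in -- just enough to generate config.h and Makefile
      AC_INIT(main.c)                 dnl any file that must exist in the source directory
      AC_CONFIG_HEADER(config.h)      dnl turns config.h.in into config.h
      AC_PROG_CC
      AC_OUTPUT(Makefile)             dnl turns Makefile.in into Makefile

    Running autoconf (and autoheader, if you want config.h.in generated for you) produces the configure script; new feature tests can then be added to this file one at a time.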

  • Even more (Score:5, Insightful)

    by Russ Nelson ( 33911 ) <slashdot@russnelson.com> on Wednesday November 28, 2001 @10:07AM (#2624280) Homepage
    There's even more to be said for running short test programs which discern the same things that configure does, and then writing the correct .h file to disk. That way, you can write a makefile which ALWAYS works. The trouble that I have with configure is that it creates a makefile which differs from machine to machine, so that debugging it is quite difficult. Plus, if configure gets something wrong, you can make and make and make all day long, and you'll never fix it. So even though configure, in theory, saves you from having to maintain a makefile, in reality it means that everyone who runs configure may have to debug the makefile.

    In other words, it's better to have one makefile, and fix the problems in the makefile, than to have to fix problems in configure or your configure specification.

    If you want an example of how this can work, look at Dan Bernstein's djbdns, or qmail. On both of these, "make" Just Works(tm).
    -russ
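    For illustration, the kind of short, self-contained test russ describes might look like this (the probe and header names are made up); the Makefile just depends on the generated header and never changes from machine to machine:

      #!/bin/sh
      # trysnprintf.sh -- probe for snprintf() and record the result in a header
      echo '#include <stdio.h>' > probe.c
      echo 'int main(void){char b[8];return snprintf(b,sizeof b,"x")<0;}' >> probe.c
      if cc -o probe probe.c >/dev/null 2>&1 && ./probe; then
          echo '#define HAVE_SNPRINTF 1' > hassnprintf.h
      else
          echo '#undef HAVE_SNPRINTF' > hassnprintf.h
      fi
      rm -f probe probe.c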
    • In general, (controlled, scheduled) work on makefiles pays off in developer time. Sell it as an improvement right after a release, as that is when a kludge is most obvious.

      We are using includes in our makefiles to set up for different environments (not just different OSs). They are tremendously complex and it takes a while to get a project set up, but it provides a good deal of relief for the average developer on day to day builds.

      We also have a dedicated build and release group who maintain the includes. They are a good resource when we have questions either in general or specific to our system.

      Using autoconf adds another set of software to an already complex build system. Invest the time to standardize and fix your Makefiles instead. It will pay off as you add more systems.

      Good luck,
      Allen
    • Re:Even more (Score:2, Informative)

      When you use the AC_CONFIG_HEADER macro, as almost everybody does, autoconf does pretty much what you suggest: the main difference is that it writes a single header file to disk, rather than many small header files.

      autoconf normally only changes the Makefile to handle things like compiler options and library options. DJB handles those by creating little files which the Makefile uses. It is entirely possible to use autoconf in this fashion, and it requires no work beyond what DJB's system requires.

      The autoconf system is certainly more complex than DJB's system. I think it is also more powerful and more consistent. It's far from perfect, but I don't think you've clearly identified the problems it has.
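      For what it's worth, configure can be made to write DJB-style option files instead of touching the Makefile at all; a sketch (the conf-cc/conf-ld names echo the qmail/djbdns convention, and the .in contents shown are only illustrative):

        dnl configure.in fragment
        AC_INIT(foo.c)
        AC_PROG_CC
        AC_OUTPUT(conf-cc conf-ld)

        # conf-cc.in contains the single line:   @CC@ @CFLAGS@ -c
        # conf-ld.in contains the single line:   @CC@ @LDFLAGS@

      A fixed Makefile rule such as `cat conf-cc` foo.c then works unchanged on every machine, which is essentially what the DJB packages do with their conf-* files.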
  • ...there is also something to be said for double clicking 'setup.exe', then clicking 'Next' through an installer. One of the places I have always thought linux was way behind windows (especially for the new user) is with software installation. rpm -ivh?? ./configure, make, make install?? come on, you think a beginner home user will figure those out?
    • Of course, this assumes you have a single target platform (or a very limited set, like half a dozen Windows targets). Since the question was asked for a project with multiple target platforms, double-clicking setup.exe is irrelevant.
      • Why can't we have an equivalent of setup.exe that untars the package and runs ./configure && make && make install? The process the user sees would be comparable to an InstallShield installer. Open, ask a few questions, pass the answers on to configure, such as --prefix for example.

        Why would that be so difficult?
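        As a rough sketch, such a wrapper could be nothing more than a shell script (the package name and the single question here are made up); a graphical version would just collect the same answers with dialog boxes:

          #!/bin/sh
          # setup.sh -- ask a question or two, then do the usual dance
          printf "Install prefix [/usr/local]: "
          read prefix
          [ -z "$prefix" ] && prefix=/usr/local
          tar zxf myprogram-1.0.tar.gz &&
          cd myprogram-1.0 &&
          ./configure --prefix="$prefix" &&
          make &&
          make install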
        • I'm pretty sure there is such an application called installfest

          /Erik
        • Great, if you're only targeting one platform. But it doesn't work for building on multiple platforms, because your "setup.exe" binary will only run on the platform it was built for.

          And as far as making it easy for your average user, most software (for Linux at least) is released as .rpm or .deb for your distro, which is usually just as easy as a setup.exe - double click the .rpm in Konqueror (or whatever file manager your distro uses by default) and away you go.
          • Yep, RPM click in Konqueror is Cool !

            No, it won't be the universal solution...
            Just you try with Nvidia driver.
            Just you try ...

            for you'll have to install the rpms, then modify the Xfree conf, then restart the whole shebang.

            It "could" have been easier.
            Could have ...
        • Why can't we have an equivalent of setup.exe that untars the package and runs ./configure && make && make install? The process the user sees would be comparable to an InstallShield installer.
          When was the last time an InstallShield installer compiled your new program from source for you based on a couple configuration questions?!!

          I think the important difference you're forgetting is that Windows installers are installing binaries, while configure and make are building source code. A better analog in the Linux world is rpm or apt-get. There are very few programs these days that don't come with prebuilt binaries for several Linux configurations. And, no, I don't think rpm/apt-get (graphical, if you like) is much harder than setup.exe (modulo dependency satisfaction in rpm, but I would hardly consider the fact that most Windows programs are distributed with (possibly out of date) versions of a lot of libraries you already have to be advantageous).

          The process the user sees would be comparable to an InstallShield installer. Open, ask a few questions, pass the answers onto configure, ...
          I would argue that one of the functions of configure (aside from detecting hella shit about your build environment) is to "ask a few questions." Just do ./configure --help or read the documentation, and it'll tell you about all sorts of "questions" you can answer (--prefix=XXX, --with-option-X, --without-option-X, --with-db=db3, whatever). If you're upset that there aren't any little radio buttons to help you answer these "questions", then, yeah, I'm sorry. It is possible for applications to support graphical configuration; the Linux kernel (make xconfig) illustrates this quite nicely. I think having a Tk frontend to configure is major overkill for nearly all applications, though.
          • Is there a difference between installing from source and straight binaries? Not really. A source install is a two-step process. The first step is to compile the binaries. That's it.

            There is no reason why 99% of applications that rely on the current configure && make && make install could not be wrapped with a graphical front end. The same "questions" you need to "answer" with the configure script can easily be represented by a multi-stage graphical front end. For example, choosing the installation directory just becomes a -prefix= argument to the configure script. Do you want to compile with SDL or use X overlays directly? That's a radio button which becomes a --with-sdl or --without-sdl (for example) argument.

            Why would it be overkill? If the actual installer was part of your system anyway, based on say Python and Tk (as examples), then the actual installer script for each application would be a few lines of code. Applications that don't require a whole bunch of questions can just get the default installer (e.g. it just asks you for the installation path) and that can simply run configure for you.

            The whole point of my idea is to reinforce a previous poster's comment that expecting users to open a terminal and run configure etc. from a CLI is actually a high barrier to entry for most users who are migrating from Windows. Why should we make it tough on them? Why can't we try and focus on making Linux as easy to use as possible?
            • The whole point of my idea is to reinforce a previous poster's comment that expecting users to open a terminal and run configure etc. from a CLI is actually a high barrier to entry for most users who are migrating from Windows. Why should we make it tough on them?
              I totally agree in principle, but in practice, if you're a person who considers a build system that requires you to type --prefix=/usr/local on a command line (rather than typing /usr/local in a textbox) to be a "high barrier," then you probably have no special need for a build from source and would prefer a precompiled binary.
              Is there a difference between installing from source and straight binaries? Not really. A source install is a two-step process. The first step is to compile the binaries. That's it.
              Again, I agree that there is no real difference in principle (except source builds fail and cause problems a lot more than binaries because build environments are a lot more variable than dynamic linking environments). However, I think there is a major practical difference in terms of usage. 99% of users install binaries. Personally, when new versions of popular software come out, I almost never install them from source or non-packaged binary--no matter how painless the build process--and I've been using Linux for 5+ years.

              This is why so much development effort has been put into creating nice, user friendly graphical binary installers, and why so little effort has been put into more general Tk configure frontends. Development energy can be better spent elsewhere.

              Besides, the problem of build failure is a serious one. The other day I compiled an (autoconf-based) application that was supposed to build out of the box on my system. The developer, however, had not tested the application with gcc 3, and I had to make half a dozen changes to the source to get it to compile. What do you propose your graphical newbie-friendly builder does at this point? You're pretty much screwed, because there's no way to recover from "error X on line Y" aside from telling the user that there was an insurmountable failure. Binary installers work much better for newbies, because your installer can detect and report (and possibly fetch and install) missing libraries on which the new application is dependent. Then, with very high probability, the installation will succeed. (At that point, the only stumbling block should be things like the locations of configuration files.)

          • Take a look at 10 Tips for Great .NET Programming [fawcette.com]. Tip #10 states that:

            The .NET Framework exposes classes that let you compile code in a specified language. These classes live in the System.CodeDom.Compiler namespace. This code snippet shows how to obtain an in-memory running instance of the C# compiler:(...sample code follows)

            This seems to allow one to write a compiler/installer solution on the fly.

            Gokhan Altinoren

      • install.exe:
        #!/bin/sh

        ./configure && make && make install

        Now, how difficult was that? Surely simpler than clicking on many buttons...

        --
    • Users had to be taught about setup.exe (or install.exe or whatever) once. And these days, autorunning CDs are removing the need to do that again. Remember, most things that you claim are easier in Windows are only easier because you've already been taught to do them. Teach someone how to use rpm/dpkg/apt and installing software with them will become just as easy.
      • but i think there is something to be said for intuitiveness. let's say that there isn't someone there to teach about either setup.exe or rpm/dpkg/apt/etc. i think you find that most people (yes, even the not-so-smart ones) could eventually figure out that you need to double-click on setup.exe and then hit "Next" a whole bunch of times. conversely, giving a user a .rpm, .deb, or .tgz package doesn't necessarily mean that they will ever be able to figure out what to do with it. in this case you'd be assuming that user would 1) find the binary responsible for package management, and 2) figure out how to use all of the command-line flags to install the package. quite a far cry from double-click... clicky, clicky, clicky.

        however, i have to agree that if people are taught how to do things, this becomes a non-issue. i guess everyone just gets caught up in the "easy-as-possible" way of thinking and the teaching part of things goes to the wayside.
    • there is also something to be said for double clicking 'setup.exe'

      Well first, this isn't the issue here. We're talking about compiling from source which isn't the same as deciding how to install a binary once compiled.

      As for rpm, etc you just want a graphical front-end, I think.

      That said, Windows' setup.exe is actually unnecessarily complicated for users. With ROX, we're using application directories [sf.net]. There is no setup program because the program can just run in-place. See the bottom of this page [sf.net] for an example. As a bonus, running a source package is done in exactly the same way as running a binary (it just takes a bit longer).

      The whole business of installing is terribly arcane if you think about it (hint: the computer already has everything it needs to run the application... why the extra steps?)

    • by zCyl ( 14362 ) on Wednesday November 28, 2001 @10:22AM (#2624329)
      So you're tired of ./configure, etc... and you want an easy installer script for Linux that you can double click on to install programs? Here, I'll write you a general purpose installer program. Put this with any program you distribute as source and it will automatically install it by simply double clicking. You don't even have to click Next!:

      #!/bin/sh
      ./configure
      make
      make install
      • that wasn't my point. i am very comfortable with rpm, and compiling and installing, because i am at least intermediately tech savvy. i was referring to a much wider audience that doesn't want to type commands or write shell scripts.
    • As a fairly newbie Linux user it is not doing rpm or make that is the problem. I mean telling someone that rpm is the Linux version of setup is not the problem. The problem is: 1. Dependencies. 2. When the make doesn't work. My suggestion for (1) is to just include all the files you need in the one download. Sure it'll make the rpm much bigger (which is why you include alternative rpms for power users) but I'm telling you, confront a newbie Linux user (even one who isn't totally computer illiterate) with a list of 20 dependencies and you'll lose them. I mean just putting all the rpms in a .tar.gz with a warning that you need certain minimum requirements (e.g. Red Hat 6.2) and a readme saying which order to install the rpms in would make life much easier. Give them all the files needed and step-by-step instructions (as in they can cut and paste the command line instructions) and they'll manage. I mean Windows install programs don't ask you to download 20 or so .dlls. And yes, sure Debian has a wonderful system around this but hey try to get newbie Linux users to actually work out how to install and run Debian in the first place...
    • You are comparing apples and oranges here. autoconf is a build system to support building on different platforms. And I like autoconf very much, since development libraries, includes, etc. live in very different places even on different Linux systems.

      setup.exe is a way of installing software to your system. Actually, it would be better to compare with Debian 'deb' files or RedHat 'rpm' files. Using apt-get should be equal to running setup.exe, assuming you have access to all deb files a certain application is dependent on.

      /Jonas U
    • How about this instead?

      apt-get --compile source packagename

      and go have a cup of coffee...
    • actually yes. the beginning home user does figure it out nicely.

      they type rpm -i supercoolprogram.rpm
      or better yet, I tell KDE for them to do that when they double click on an RPM file.

      It works great, until you get to developers' apps that need 60-70 bleeding edge libs installed that don't come with that super old redhat 7.2 (gawd, running a production release? what are you in the stone ages?)

      and they refuse to statically link the program so it will install anyway for the newbies.

      basically most productivity software out there is specifically packaged to discourage newbies from running it. who wants people other than glibc and GTK experts running their software.
      • actually, you're totally wrong. my mom is a beginning home user. initially, she had problems learning how to double click without the mouse moving and messing it up. i said 'close the window' and she looked at me like i was speaking greek, so i backed up and said 'these are all windows, click here to close them'. there is no way she would ever figure out to type 'rpm -ivh program.rpm'. she, and thousands of other home users, will never type a command at a terminal ever, nor will they want to. i know this poster has a different audience in mind, but my point was different.
        • The people you are talking about don't know what a command line is nor will they care if they use it. My mother (who when I went home for Thanksgiving told me about how she discovered how the arrow keys work) has no problem using the Linux box I set up for her. She is perfectly capable of using the 10 or 15 command line commands I taught her to do things like install or uninstall a program. She has yet to complain once about needing to type things into a window instead of clicking. I even asked her if she'd mind and she asked how it was different from putting the command in an email to tell it who to go to (aka email addresses).

          The only people who have a hard time with a command line are the people who have been trained on nothing but Windows and have no desire to learn anything new. Any brand spanking new computer user who has no experience will be happy learning how to use a computer. Last time I checked, most people are not afraid to type.
    • Pointy-clicky setups.

      My current nightmare install has a hard-coded db sid (ORCL), $ORACLE_HOME, and other asst. gems all wrapped up in a NT setup.exe.

      The $DOLLARS/day service engineer 'just happens' to have the exploded setup files on cd (that I am normally not allowed to see). What are they? A vb script that copies a few files and executes a few sql scripts (and not very well, mind you).

      Feh. Even the oddest of Makefile setups allows me to get in and figure things out if a make fails.

      --
    • No, I expect them to double click the RPM file in their file manager. As for configure/make ... when I get sources on Windows, I get a project file that may or may not have all the dependencies it needs, and may or may not detect them. Linux isn't easy to use ... and from watching my mother struggle with her computer, neither is windows.
      • Correct. The Windows equivalent of this is VC++ project files. On the software I am working on, these have been a pain to maintain and debug, and they fail mysteriously on other people's machines (though the pain is probably less than the pain of configure, to be honest). Also annoying is the fact that the files are totally unreadable and are about a dozen times larger than all the NT-specific source code in our project (fltk). I always compile on NT using a batch file or gmake, as these project files require launching the VC++ IDE, which I otherwise don't use. I have also deleted my local CVS copies of these, as the tiniest change causes an extremely long CVS update time (over a 56K link); this makes CVS updates tolerable but also means I am always forgetting to update them when the set of files in the project changes.

        I do have to say that the gmake file for NT is nice and clean due to the uniformity between the Windows platforms. A lot of pain would be saved if the Unix people started refusing to support some of the older and non-standard Unix versions so that we could use a single makefile.

        One comment: if at all possible, the biggest win in readability and maintainability is to put explicit #ifdefs into the source, and use gmake and the output of uname to put 'if's into the makefile. This stuff is immeasurably easier for a programmer to figure out and debug than autoconf. It would be nice if somebody made a make replacement that explicitly compiled the dependency rules from a sequential language with normal if and looping constructs and obvious built-in tests for all the stuff in autoconf.

  • by ppetru ( 24677 ) on Wednesday November 28, 2001 @10:14AM (#2624295) Homepage

    Here's an article on the subject, written by Uwe Ohse (you can read the original article here [www.ohse.de]). Many of the problems have been fixed in the meantime, but it makes an interesting read nevertheless.

    config.guess, config.sub: There was no source from which one could get up-to-date versions of these scripts, which autoconf uses to determine the operating system type. This often caused pain: ask the OpenBSD port maintainers about it. (btw: there is now a canonical source for them, "ftp.gnu.org/gnu/config")

    autoconf takes the wrong approach: The autoconf approach is, in short:

    • check for headers and define some preprocessor variables based on the result.
    • check for functions and define some preprocessor variables.
    • replace functions not already found in the installed libraries.
    Yes, it works, although some details are discouraging, to put it mildly.
    No, it doesn't work well enough. This approach has led to an incredible amount of bad code around the world.

    Studying the autoconf documentation, one learns what kinds of incompatibilities exist. Using autoconf one can work around them. But autoconf doesn't provide a known working solution to such problems. Examples:

    1. AC_REPLACE_FUNCS, a macro to replace a function not in the system's libraries, leads to the inclusion of rarely used code in the package - which is a recipe for disaster. On the developer's system the function unportable() is available, on another system it isn't? Oh well, just compile unportable.c there and link it into the programs ...
      Yes, this solves a problem. But it's overused, and it's dangerous. In many cases unportable.c doesn't work on the developer's system, so she can't test it. In other cases unportable.c only works correctly on _one_ kind of system, but will be used on others, too.

      Yes, the often-used packages _have_ been tested almost everywhere. But what about the less often used ones?
      Keep in mind that there is no central repository of replacement functions anywhere ...

    2. The same is true for AC_CHECK_FUNC(S). In this case there isn't a replacement source file, but even worse, there's an #ifdef inside the main sources, unless the programmers are careful to use wrappers, which they often aren't, because compatibility problems are often discovered very late in the testing process (or even after release) and people are usually trying to make the smallest possible changes.
      This is surely nothing which can be avoided completely, but it's something which has to be avoided wherever possible.
    In both cases you end up with rarely, if ever, used code in your programs. It's not dead code, it's zombie code - one day, somewhere, it will come alive again.
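    For reference, the two macros criticized above are typically used like this in a configure.in (the function names are just examples); the replacement strsep.c and the #ifdef'd fallbacks are exactly the rarely-exercised code paths the article is worried about:

      AC_CHECK_FUNCS(setenv strerror)   dnl defines HAVE_SETENV / HAVE_STRERROR for use in #ifdefs
      AC_REPLACE_FUNCS(strsep)          dnl links the package's own strsep.c where strsep() is missing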

    There's a solution to this problem, but it is completely different from what's used now: Instead of providing bare workarounds, autoconf (or a wrapper around it) ought to provide an abstraction layer above them, and a central repository for such things.
    That way a programmer wouldn't use opendir, readdir, closedir directly but would call them through the wrap_opendir, wrap_readdir and wrap_closedir functions (i'm aware of the fact that the GNU C library is this kind of wrapper, but it hasn't been ported to lots of systems, and you can't rip only a few functions out of it).
    autoconf macros are inconsistent: For example, AC_FUNC_FNMATCH checks whether fnmatch is available and usable, and defines HAVE_FNMATCH in that case. AC_FUNC_MEMCMP checks for availability and usability of memcmp, and adds memcmp.o to LIBOBJS if that's not the case. Other examples exist.

    autoconf is not namespace-clean: autoconf doesn't stick to a small set of prefixes for macro names. For example it defines CLOSEDIR_VOID, STDC_HEADERS, MAJOR_IN_MKDEV, WORDS_BIGENDIAN, in addition to a number of HAVE_somethings. I really dislike that, and it seems to get worse with every new release.
    My absolutely best-loved macro in this regard is AC_FUNC_GETLOADAVG, which might define the following symbols: SVR4, DGUX, UMAX, UMAX4_3, NLIST_STRUCT, NLIST_NAME_UNION, GETLOADAVG_PRIVILEGED.

    autoconf is large: I'm feeling uneasy about the sheer size of autoconf. I'm not impressed: autoconf-2.13.tar.gz has a size of 440 KB. Add automake to that (350 KB for version 1.4). Does it _really_ have to be that large? I don't think so.
    The size has a meaning - for me it means autoconf is very complicated. It didn't use to be so, back in the "good old days", and it accomplished its task. I really don't see that it can do so much more today (i don't mean "so much more for me").

    configure is large: Even trivial configure scripts amount to 70 KB. Not much?
    Compressed with gzip it's still 16 KB. Multiply that by millions of copies and millions of downloads.
    No, i don't object to the size. It's perfectly ok if you get something for it. But you don't: about one half or more of each configure script can be thrown away without any lossage.

    • Large parts of it just deal with caching, which wouldn't be needed if configure wasn't so slow.
    • Other parts of it are the --help output, which looks so good ... but doesn't help usually (try it and find out what to do, without reading README or INSTALL).
    • Then there is the most bloated command line argument parser i've ever seen in any shell script.
    • Then there are many, many comments, but they aren't meant to help you see what's going on inside configure; they are the documentation for the macro maintainers (some might actually prove to be useful, but the vast majority don't).
    The configure scripts are an utter horror to read. There's a reason for this: configure doesn't use any "advanced" feature of the shell. But i wonder - are shell functions really unportable? And if the answer is yes: do you really expect anything to work on that system? The problem is that a shell that old is unlikely to handle much of anything, for example large here documents.

    The configure scripts are an utter horror to debug. Please just try _once_ to debug 4000 lines of automatically generated shell script.

    A note to the autoconf maintainers: the road you're on is going to end soon.

    autoconf is badly maintained: Let me clarify this first: I don't think badly of the development. What I miss is maintenance of the already released versions. Now, at the end of 2000, almost two years have passed without a maintenance release of autoconf. Nine months have passed since a security problem was found (in the handling of temporary files). More bugs have been found since, of course.
    I know that nobody likes to dig in old code, but two years is a little bit much.

    automake: My primary objection to automake is that it forces me to use the same version of autoconf everywhere. Autoconf has a large number of inter-version conflicts anyway, but automake makes that situation worse, much worse.
    I'd need the same version of both tools on all machines on which i happen to touch the Makefile.am or configure.in or any other autoconf input file. There are a number of reasons for that; one of them is that automake used to provide some autoconf macros during the years autoconf wasn't developed at all, and these macros have now moved to autoconf, where they belong. But if you happen to use, say, AM_PROG_INSTALL, and later versions declare that macro obsolete ...
    That doesn't sound too bad? Ok, but suppose

    1. i update all those machines regularly (i'm not really going to do that, i'd rather stick to what's installed, but anyway)
    2. i didn't touch a project for, say, 2 years, and then i need to change something and release a new version. This involves changing the version number in configure.in.
    In more cases than not this will need considerable changes to configure.in. Some major, most minor - but even the minor ones need attention.
    I found that hard to deal with. Things were even worse since every CVS checkout tends to change time stamps, which can mean that autoreconf is run even if no change has been made to any autoconf input file.

    Don't misunderstand me: i don't attribute that to automake. I attribute it to the internal instability of autoconf. Unfortunately you can't have automake without autoconf.

    libtool: Libtool adds an additional layer of complexity. I never had any reason to look at the insides of libtool (which proves that it worked for me). But having one project which used autoconf, automake and libtool together was enough - never again. I got exactly the same problems as i got with automake, described above, but they were worse and happened more often.

    One problem with libtool is that releases don't happen very often. Libtool is rarely up to date with regard to changes on some operating systems, which makes it difficult to use in packages meant to be really portable (to put it mildly).

    A libtool script and a number of support files are distributed with every package making use of libtool, which ensures that any user can compile the package without having to install libtool first. Sounds good? But it isn't.

    • This approach means that you can't replace all old libtool stuff on your system easily.
    • It also means that every package you try to compile can have a different version of libtool. And since alpha versions of libtool are often used it's not unlikely that you happen to meet one of these versions.

    Another problem is the size of the libtool script: 4200 lines ...

    Summary: Autoconf is the weak link of the three tools, without any doubt. Version 1 wasn't really designed; version 2 made a half-hearted attempt at dealing with design problems. I'm not sure about the next version.

    • Do you have any alternatives?
      • plain makefiles and portable programming. At least this works for me(tm).
        • So how would you cope with supporting three different releases of libxml, each with different functions and prototypes for saving out an XML document?
          • I'd support only one release of libxml. If they change function names at will, they should be shot, anyway. You should only add, but you must never remove symbols from a library. This guarantees compatibility (also, the behaviour must stay the same).

            An idea: fork the libxml code, and make your own library, only supporting the forked one. And on your website and in your documentation, explain why you did that (incompatible releases of the same library).
            • What about Gtk+? Should I fork that too?

              For 2.0, they've changed the signal handling, menu short-cuts, text rendering, image handling, etc.

              Do I require everyone to install Gtk+-2.0 (when it's officially released, of course), thus annoying everyone who hasn't upgraded their distro in the last few months? Or should I stick to 1.2, and not let anyone benefit from the new features, even if they have it installed?

              Or shall I just use autoconf and support both?
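              For what it's worth, the configure-side half of "support both" can be a fairly small check along these lines (a sketch only, assuming pkg-config is available for 2.0 and gtk-config for 1.2; the real cost is the #ifdef work in the source, as the replies below point out):

                if pkg-config --exists gtk+-2.0 2>/dev/null; then
                  GTK_CFLAGS=`pkg-config --cflags gtk+-2.0`
                  GTK_LIBS=`pkg-config --libs gtk+-2.0`
                  AC_DEFINE(HAVE_GTK2)
                else
                  GTK_CFLAGS=`gtk-config --cflags`
                  GTK_LIBS=`gtk-config --libs`
                fi
                AC_SUBST(GTK_CFLAGS)
                AC_SUBST(GTK_LIBS)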

              • You fork your own program. Make one release that works with GTK+ 1.2 and only apply bug fixes to it. The other release you make with GTK+ 2.0 and put the new stuff in there. Otherwise you might be supporting GTK+ 1.2 for a long long time. This lets your users know that you plan to move to GTK+ 2.0.

                I think this is a problem with autoconf/automake/etc. They can be a little too convenient and will turn good design and code into #ifdef spaghetti in no time. Trust me. I wrote a program which supported GNOME, QT, _and_ plain GTK+. While the QT code was very separate, the GTK+ and GNOME code overlapped a lot, so I thought "hey, code reuse!" Bad idea. Once a new GTK+ version was out (no matter if it was a minor release or not) _everything_ broke. And there is always the case of relying on this one special "feature" of GTK+ only to have it gone in the next bugfix.
              • Considering that GTK 2 doesn't have source compatibility with GTK 1.2 and you'd have to have two completely different versions of your app, I don't think that is a problem autoconf could solve.
    • Not to argue with any of your points, but I am using autoconf 2.52 on my Debian system. It was released on July 18, 2001. I imagine there might be a more up to date version, but I'm using the testing branch of the Debian distribution.

      autoconf/automake is not a silver bullet, but it does provide a uniform way for users to compile and install your package. Virtually every Unix user knows to do ./configure; make; make install to have the package installed.
    • There's already a library which does some of this, and chances are you already have it installed: libiberty [gnu.org]. Perhaps its role could be expanded a bit?

  • by jvl001 ( 229079 ) on Wednesday November 28, 2001 @10:17AM (#2624310) Homepage
    I am an engineering grad student working on a similarly sized project. Our project is compiled on a variety of Unix platforms using automake, autoconf and libtool. As you are already compiling for multiple platforms you are 90% of the way there in determining the different needs for each compile. If you haven't already organized your build process, now would be a good time before it becomes 10M lines of code.

    Autoconf and friends make it infinitely easier to compile our code. However, you will have to put in a fair bit of work determining the variety of tests required to handle the idiosyncrasies of each build. You are probably already doing something similar if you can build on multiple platforms.

    Autoconf has been well worth the initial effort. Occasionally new compile problems crop up, but they are usually solved by the addition of another one-line check in configure.in.

    Selling autoconf should be easy: wrestle with compile problems once while getting autoconf working, or have users repeatedly wrestle with them without autoconf.
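    Those one-line additions really are usually that small; typical examples (header and function names chosen only for illustration):

      AC_CHECK_HEADERS(sys/select.h)
      AC_CHECK_FUNCS(snprintf strerror)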
    • Read the book! (Score:4, Informative)

      by devphil ( 51341 ) on Wednesday November 28, 2001 @02:54PM (#2625814) Homepage


      The maintainers of the autotools (autoconf, automake, libtool) wrote a book [redhat.com] to help explain the approach used by the tools. (Yes, it's called the goat book. Read the page to find out why.)

      I've seen an amazing amount of crap posted in these comments; the parent article by jvl001 is one of the few good exceptions. NO tool can get it all; the autotools get you about 90%, and you have to help them the rest of the way. There are solutions for just about all of the problems and red herrings I've seen posted here, but you need to look a little further than /. to find them.

  • Ant (Score:2, Informative)

    by ericrath ( 133758 )
    I'm a novice when it comes to automated build tools, but I've been impressed by Ant, from the Jakarta project by the Apache Group. From what I've read, it seems that Ant can do almost everything autoconf can, but because it's written in Java and uses XML to store its configuration, there are no cross-platform issues. I should add that I have *very* limited experience with autoconf; I've really only *used* it, not developed with it, so my opinion is a fairly uneducated one. Has anyone else used Ant and autoconf enough to make a good argument for or against Ant?
    • Re:Ant (Score:2, Informative)

      by sqrt529 ( 145430 )
      Ant is a build tool for Java. autoconf is a tool that generates makefiles, mainly for C projects. You can't compare the two.
    • Re:Ant (Score:2, Informative)

      by Andy ( 2990 )
      I have experience with both. If you are developing a Java app, or specifically one that employs JSP, servlets, EJB, etc., your best choice is Ant. Its built-in rules are very nice. Your project should build routinely on any Unix or M$ Windows. Autoconf should be used for C/C++ builds on Unix, where the system libraries and includes must be closely scrutinized.
  • by aheitner ( 3273 ) on Wednesday November 28, 2001 @10:27AM (#2624341)
    For all that the libtool tag line is "Do you really want to worry about linking on AIX yourself?"...

    I spent quite a bit of time this summer trying to use autoconf'd stuff on AIX (Gtk+). I played with a pile of recent and not-so-recent versions of libtool. It was a pain in the butt. (Granted, linking on AIX is ... abnormal ... but you'd think libtool would know how to do it).

    I think the power of the autoconf suite to make things work across "all Unices" is a bit exaggerated. Check whether it really does a good job of supporting all the platforms you need, first.
  • We (anon) switched to using GNU Autoconf about 2 years ago; there is nothing like it to make it easier to build on multiple architectures. All the info you need to #ifdef your heart away is easily accessible.
  • by SirGeek ( 120712 ) <sirgeek-slashdot.mrsucko@org> on Wednesday November 28, 2001 @10:36AM (#2624367) Homepage
    I'm working on Linux HA [linux-ha.org], on their porting effort (porting to Solaris and *BSD).

    Thanks to autoconf, some really nasty #if's in the code have been removed by a single include line; it's also able to simplify code by removing 50 trillion OS-specific checks from the source files and only inserting them when it needs to.

    Once you get over the basic learning curve hurdles, then you should be fine.

    It would be nice though if there was a makefile -> autoconf converter (but that is just me).

  • just do it (Score:2, Insightful)

    by oogoody ( 302342 )
    Just do the conversion and see what it's like.
    Talking about it takes longer than doing it.
    Then you'll really know.
  • by Stiletto ( 12066 ) on Wednesday November 28, 2001 @10:45AM (#2624404)
    I have found that autoconf has problems when you are cross-compiling. If you cross-compile or plan to cross-compile your project, stay far away from autoconf.

    A few macros (particularly AC_TRY_RUN, and anything that calls it) compile and run a program on the host to test for functionality. Unfortunately, there is no way to tell autoconf that you aren't going to be able to run the compiled program, because it's for your target platform.

    Many of the AC_FUNC_ macros simply DIE when running in a cross-compiling environment. For example, let's cross-compile the ncftp package. The configure.in uses the AC_FUNC_SETPGRP macro. Running autoconf results in:


    configure.in:146: warning: AC_TRY_RUN called without default to allow cross compiling


    That's right. This macro (defined elsewhere in the autoconf package) uses AC_TRY_RUN without a default clause. So instead of recovering nicely, the configure script dies when it tries to do this test. The only way to get around this is to modify /usr/share/autoconf/acspecific.m4 or not do the test!
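    For comparison, a test written with cross-compiling in mind passes AC_TRY_RUN's fourth argument, the cross-compiling default, so configure falls back to a conservative guess instead of dying (the test program and cache variable here are placeholders):

      AC_TRY_RUN([int main() { return 0; /* a real feature test would go here */ }],
                 ac_cv_my_feature=yes,
                 ac_cv_my_feature=no,
                 ac_cv_my_feature=no)   dnl the last value is used when cross-compiling

    The guess can then be overridden from a site file, as described in the replies below.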
    • by eli173 ( 125690 ) on Wednesday November 28, 2001 @11:10AM (#2624512)
      When you are cross compiling, configure may not be able to run those tests, but you can help it out.
      Define CONFIG_SITE in configure's environment to point to a config.site file. Then put into your config.site file a line like:
      ac_cv_func_getpgrp_void=yes
      Yes, you do have to look at the configure code to find that name, but it lets you give the software the answers it needs.

      HTH,

      Eli
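      Concretely, for the ncftp example above this might look like the following (the target triplet is only an example, and the cache variable name is whatever grepping the generated configure turns up, as described above):

        echo 'ac_cv_func_setpgrp_void=yes' > config.site
        CONFIG_SITE=`pwd`/config.site ./configure --host=arm-linux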
    • I've cross-compiled huge amounts of stuff using autoconf, and I think you're quite wrong. There are a few (and they are very few) autoconf tests that don't deal well when cross-compiling, but it's easy for the maintainer to provide a default for those cases (of course, most maintainers never cross-compile, so they don't even bother to do that). Most autoconf tests explicitly avoid running test programs precisely so they work well in a cross-compilation environment.

      Moreover, you have to ask yourself -- what's the alternative? Most other configuration frameworks I've seen simply fold up and die when presented with a cross compilation environment (especially weird non-standard ones). Autoconf, with just a smidgen of forethought, works like a champ.
    • I don't know why this is rated funny...

      That's right. This macro (defined elsewhere in the autoconf package) uses AC_TRY_RUN without a default clause. So instead of recovering nicely, the configure script dies when it tries to do this test. The only way to get around this is to modify /usr/share/autoconf/acspecific.m4 or not do the test!

      Those tests have been greatly improved in 2.5x, but there are of course lots of tests that you just can't do on the host machine. Solution: run those tests once on your target machine and add the results to your site-specific file. You can't expect autoconf to solve something that is impossible to solve.

    • I've used autoconf extensively when building with a cross-compiler. Your advice to ``stay far away from autoconf'' is unwarranted.

      It's certainly true that building with a cross-compiler requires some extra care. It's also true that autoconf provides some tests which give errors when building with a cross-compiler. However, it's not arbitrary. Those tests cannot be run correctly if you cannot run an executable. The tests are provided as a convenience for the common case of a program which is never built with a cross-compiler.

      For a program which is built with a cross-compiler, there are various ways to handle these tests. I usually write the configure.in script to check the cross_compiling variable, and, if set, perform the check at run time rather than at compile time. For example, if you have the source code to GNU/Taylor UUCP around, look at how the ftime test is handled.

      There is a chapter [redhat.com] in the book I cowrote which discusses building with a cross-compiler.

  • Why not try Jam? (Score:4, Informative)

    by Anonymous Coward on Wednesday November 28, 2001 @10:46AM (#2624411)
    I'm working on a relatively large project myself.
    We're using a build tool called "Jam", which can be obtained from http://www.perforce.com/jam/jam.html. It does a very good job at cross-platform builds and is faster than make at determining include dependencies.
    An open source project that uses its own version of Jam is the Boost libraries, at http://www.boost.org/

    Enjoy!
    • Re:Why not try Jam? (Score:2, Informative)

      by David Greene ( 463 )
      An open source project that uses its own version of Jam is the Boost libraries, at http://www.boost.org/

      Ahh, you are a knowledgeable one. :) I was hoping someone would mention jam. It not only handles platform-dependent tasks, it's a full replacement for make and actually generates correct dependencies. It might be a little tough to convert a make-based project to jam, but it's the way I would go starting out.

      boost.build [boost.org] is the build system for the Boost [boost.org] libraries which, as mentioned, uses jam. In fact it uses an advanced version of jam with many new features. I'm not sure if those will be rolled back into the "official" jam sources (boost jam is actually a derivative of FT Jam [freetype.org], from the FreeType [freetype.org] project).

      Jamfiles (analogous to Makefiles) are platform-independent. A "Jamrules" file holds all the configuration-specific information. Some systems use autoconf to generate this. Boost does not, and their build system is very flexible, allowing one to not only define platform-dependent things but also specify build features such as whether to make static or shared libraries (or both!), optimization levels, etc. A single build run can build shared and static libraries at several optimization levels, for example.

  • by 4of12 ( 97621 ) on Wednesday November 28, 2001 @11:02AM (#2624474) Homepage Journal

    We have a monstrously large software project with home-grown kludges for building on multiple platforms that is a mess of shell scripts and makefiles. It's ugly, and the time required to keep fixing and extending this system is a heavy burden. Plus, it's only tolerable to use.

    I've used autoconf a little on a different smaller project and I know this much about it.

    When autoconf works, it works great. As a user, there's great delight in typing those 3 magic commands and having a whole series of feature tests fire off and configure the build process. And, as a developer, every time the system works (and it works more often than kludge systems do), it saves me from the hassles of porting to the platform du jour. If you look at some fairly complicated builds such as Octave or teTeX, it's nothing short of amazing.

    However, that said, autoconf is complicated. And, you really have no choice but to learn a fair amount of the complications to deal with the issues that will inevitably come up during cross platform testing. I've mostly learned autoconf, a little of automake and almost none of libtool. The m4 macro language for defining the autoconf macros has syntax that always gives me nausea.

    If you implement autoconf for your project, great. But, expect a substantial fraction of your time-savings from it to be dedicated to becoming more of an expert at autoconf, m4, sh, etc.

    autoconf is better than imake, IMHO, but it sure is short of being as good as it should be. Large projects can justify the time for a person learning the intricacies of autoconf; small projects cannot, unless you happen to know it already. Learning autoconf is kind of like learning nroff just to write a man page - the time investment is more justifiable if you use your expertise to write more than one of them, or to write a really important man page.

    P.S. Is there any chance that the software carpentry project will come out with something soon? Some of their early submissions for improved build tools really sounded intriguing, like Tan, an XML based description.

  • One feature which I like in one Makefile system that I use, is not writing the output files, i.e., object and target files, to the source directory but somewhere else.

    Writing them somewhere else makes doing backups much easier, plus the directories don't get overcrowded. You can even make source distros just with a simple tar command.

    However, using Makefiles leads to really ugly bungles. Autoconf doesn't, to my knowledge, have any kind of support for this.

    Any ideas about how to do it easily and cleanly?
    • However, using Makefiles leads to really ugly bungles. Autoconf doesn't, to my knowledge, have any kind of support for this.

      autoconf doesn't, but automake (or any other GNU Makefile standards-compliant makefile generator) can.

      Try this:

      % tar zxf package-version.tar.gz
      % mkdir build-package
      % cd build-package
      % ../package-version/configure
      % make
      % make install

      All your makefiles and such will be built in the build-package directory instead of the source tree.
  • by James Youngman ( 3732 ) <jay.gnu@org> on Wednesday November 28, 2001 @11:05AM (#2624489) Homepage
    I'm assuming that since your system is a large project, it has subsystems in subdirectories. If this is the case, you can do a "pilot". Pick a very small subsystem and convert it to autoconf. Then pick a real target (for example, the most awkward system which is not gigantic) for a proof-of-principle. This process will provide you with enough evidence to back you up - or it will provide you with evidence that you shouldn't do the conversion after all. If you end up converting, you will have a top-level configure script that cascades down and calls the other ones.

    Don't forget the Autoconf macro library [sourceforge.net] and also the fact that there are thousands of free packages out there which will have configure scripts from which you can borrow - try to find packages in a similar domain to your own.

    The difficulty (complexity, time taken) in maintaining a package which works on N platforms is usually proportional to N - or if you are unlucky, some small power of N like 2. What your code is really doing is trying to understand the properties of the target system, and so the #ifdef __hpux__ for example in line 1249 of blurfl.c is actually trying to determine if the quux is used like this or like that. Autoconf on the other hand will produce a single preprocessor macro for driving the quux. This means that you don't have an extra 15 lines of #ifs to handle quuxes in other operating systems. Hence you may not have fewer #ifdeffed bits, but each of the #ifdeffed bits will be shorter.

    Autoconf works differently from the standard approach - it allows your code to work with each feature independently, and so while the normal approach is O(N) or O(N^2) in the number of supported OSes, the complexity of maintaining an autoconfiscated program is linear in the number of different features supported by the operating systems between them. The great thing about this is that Autoconf keeps these orthogonal and prevents these things from interacting too heavily, and it turns out to be the case that the total number of different features basically flattens out to some constant even when you continue to add more supported OSes (i.e. there are only a certain total number of different ways of doing things even though each of the N operating systems can choose among X ways of doing Y things).

    So, what this means is that you should do the feasibility study as discussed above, but naturally autoconfiscating the whole system will take a while. The ideal time to do this is either

    1. when you are in any case adding support for an extra platform or two or
    2. on a piecemeal basis for each subsystem.

    Another option to explore in combination with Autoconf is the Apache Portable Runtime [apache.org]. There is also Autoproject [gnu.org] but I suspect that is a little lightweight for your needs.

    Taking all the above together I suggest that you

    1. Propose a feasibility study, using a couple of sections of the project
    2. Identify a way of making the newly autoconf-ed section fit in with your existing build mechanism (so that you can roll out autoconfiscation gradually across the project).
    3. Identify any related open-source projects which might have useful autoconf macros (e.g. Gnome has lots).
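    A sketch of the top-level configure.in for the cascading arrangement mentioned above (the subdirectory names are placeholders); each subdirectory keeps its own configure script and can be converted on its own schedule:

      dnl top-level configure.in
      AC_INIT(README)
      AC_CONFIG_SUBDIRS(libcore tools)   dnl each listed directory has its own configure
      AC_OUTPUT(Makefile)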
  • The software carpentry project/competition had a lot of useful information and implementations of build tools. You may want to read about them to weigh the pros and cons of current build systems. You may even want to use some of their implementations.

    software carpentry [codesourcery.com].
    SC Config [codesourcery.com] is the one of interest as an autoconf alternative, and contains links to 4 alternatives (BuildConf, ConfBase, SapCat, Tan).
  • Autoconf is just one piece of the Cygnus development tools. Two other tools complement it nicely:
    • Libtool
    • Automake

    With Libtool, you can be sure that shared libraries can be created, even on architectures/OSes you don't have access to. That's a very important point.
    Automake greatly eases the process of building clean packages for end users, with all the standard targets for 'make'. It also builds Makefiles that can automatically generate .tar.gz files containing only the needed files, and that build dependencies before compilation.
    Also, Autoconf, Automake and Libtool are aware of operating-system bugs that you probably don't know about if you have never worked on those systems. So they are your best friends for producing portable and reliable software.
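    To make the division of labor concrete, a small Makefile.am using automake and libtool together might look like this (all names are invented); automake and libtool then supply the platform-specific shared-library flags and the standard targets:

      lib_LTLIBRARIES = libfrob.la
      libfrob_la_SOURCES = frob.c frobutil.c
      libfrob_la_LDFLAGS = -version-info 1:0:0

      bin_PROGRAMS = frobclient
      frobclient_SOURCES = main.c
      frobclient_LDADD = libfrob.la

    (The matching configure.in needs AM_INIT_AUTOMAKE and AM_PROG_LIBTOOL.)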

  • A War Story (Score:2, Interesting)

    by 12dec0de ( 26853 )
    I am a great fan of auto* (the name that the company that I used to work for has given the whole suite, all parts of which are used extensively), and here is my story on how I came to be such:

    When I started to work for my last employer, they were using RCS and static Makefiles plus a hand-crufted make and version control system that a prior contractor had introduced. The code base for the large integrated system was about 250,000 LoC of C/C++ code that ran on a number of Solaris 2.5, Linux and Windows machines. It used GNU tools and libraries quite liberally and thus was dependent on the regular introduction of external code.

    But since a number of developers didn't really grasp what make was doing, they would simply copy the makefile of one of the larger subprojects and tweak until they got it working. This led to each of the about 20 sub-projects having Makefiles of about 2000 lines each. Quite unmaintainable.

    I started to use auto* for my own library that, in the beginning, was fairly separate from the other projects. More from curiosity than actual need (I know how to use make B-) ), I first put autoconf, then automake and libtool into the code, which was at that time about 20,000 LoC. I also got everybody to switch to CVS, since it supports multiple developers on the same project much better.

    It worked quite well I must say. For the following reasons:

    • not everybody is on an intimate level with make. If at least one person knows what they are doing, the automake macros allow that knowledge to be encoded for others to use in new Makefiles.
    • Specific problems with one of the platforms may be detected, together with the activation of workarounds, in a single place, which is configure.in. One earlier poster was saying that he was having problems with changing Makefiles. I always found that if you take the log of what happened and debug from there, changing the behavior on a given platform as needed becomes trivial.
    • You have a single place in which to configure the inclusion of features, glue to other packages, and the paths/names of libraries, config files, etc. If you write your Makefiles and helper utils to do the same, it will just be a duplication of effort with a smaller user base. YMMV
    • with the config.status file you can save what you have configured in a separate place and reproduce the configuration just by copying it.
    The war story ends with a system that has about 2 MLoC now, and with the help of a script that one of my fellow developers wrote, will automatically pull itself out of CVS (using the supplied branch) and configure and build each of the sub-packages in the proper order and install the packages on the proper machines (or at least schedule them for installation. Due to security concerns only 2 humans working together may actually install a component). My sub-project has grown to 140KLoC and the public part is on sourceforge now.

    Could this be done with only some make magic? Yes, if everybody knew make well. But in the real world, people who really grok make are few and far between.

    Would your own solution handle shared libs on multiple systems? Maybe you are that good. I am not; I use libtool.

    Like any software, these tools have problems. For instance, the handling of languages like Java is suboptimal in automake. But this is open source: if you have a problem, scratch your own itch and trust that others will scratch theirs.

  • Mozilla (Score:2, Informative)

    by nikel ( 112884 )
    The Mozilla [mozilla.org] project has a page on its website about migrating its build system to autoconf. No idea how far this got, but it fits your requirement of a huge project.
    nikel
  • by paulbd ( 118132 ) on Wednesday November 28, 2001 @11:27AM (#2624596) Homepage
    I've been using these tools for about 2-1/2 years. Autoconf and automake have saved me a lot of hassle, I think. But I cannot recommend that anyone in their right mind use libtool.

    Libtool starts from a reasonable premise: the creation of shared libraries is done in very different ways on different platforms. It goes on to try to provide a standard method for developers to use when they need to do this. The problem is that, for one reason or another, it has gone way too far in a direction that, with a little bit of objectivity, is really quite bizarre. Libraries end up being built in hidden directories and are initially replaced by text files of dependencies. You have to run "libtool -gdb" if you want to debug your program without installing the libraries first. There are many weird and obscure consequences of this design.

    And don't get me started on "libltdl", the libtool "portable dlopen() wrapper". Recent versions of this library (which were full of really stupid bugs) now have hooks to replace malloc/realloc/free, for reasons that nobody on the libtool mailing list understood (probably some Windows bullshit to do with calling malloc at some point during runtime linking, but who knows?). libltdl goes as far as "emulating support for run-time linking even on platforms that don't support it". They are proud of this bizarre "accomplishment".

    Libtool's basic problem is that it starts out trying to solve a goal that very few people face: building a shared library on multiple platforms. Because this is a very complicated task, libtool is necessarily complex. If you are writing a shared library for one or two platforms, particularly where they are fairly similar, using libtool rather than relying on your own understanding of the process can provide hours of frustration, confusion and despair. I don't see any way forward for libtool. The sensible way to solve shared-library problems is to wrap the actual linker in a new executable that operates in a standard way across platforms. That would hide the complexities and oddities of the process from autoconf/automake and from you.
    • Building shared libraries on multiple systems is a task that many people face. Any time you expect to write a shared library and have it ported across various Unix-like platforms, you have this problem. Getting shared libraries working across even a few platforms is difficult enough. Linux, Solaris, and FreeBSD? Each has different requirements and quirks and (especially in the latter case) brokenness.

      If you're just writing something on Linux with zero foresight as to portability, you might take the approach that "this doesn't matter," but that would be pretty naive.

      Also, libtool does more than just implement a layer of portability for building shared libraries (which is its main goal). It has library dependency tracking, version maintenance, and a module-loading API. These, together with how extremely easy building a shared library becomes when libtool is coupled with automake and autoconf, make it an indispensable tool.
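      As a rough illustration of how little is needed once automake and autoconf are in place (the names here are invented, and the macro is the AM_PROG_LIBTOOL of that era):

          dnl configure.in
          AM_PROG_LIBTOOL

          # Makefile.am
          lib_LTLIBRARIES   = libfoo.la
          libfoo_la_SOURCES = foo.c foo.h
          libfoo_la_LDFLAGS = -version-info 2:1:0  # current:revision:age, not a release number
          libfoo_la_LIBADD  = -lm                  # dependencies get recorded in the .la file

      Libtool then picks the right compiler and linker flags for shared objects on each platform, which is exactly the part that differs wildly between Linux, Solaris, and the BSDs.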

  • OT: language choice (Score:3, Informative)

    by YoJ ( 20860 ) on Wednesday November 28, 2001 @11:55AM (#2624719) Journal
    This is not a solution to the current question, but is something to think about when choosing which tools to use for any project. Different languages handle portability in different ways. The C approach is a mixed blessing; the core language is portable to about any platform in existence. But almost every non-trivial program uses many many library calls, and libraries can have many inconsistencies across platforms. Even if the functionality is the same, the functions may be called different names. In the case of more advanced features such as threads or IPC, different platforms can have entirely different systems. The way I see it (I may be wrong), automake and autoconf solve the stupid inconsistencies between library functions, and tell the user if a needed library is not installed.

    Other languages take different approaches. Java has a very large set of libraries that are specified to be part of the language and so must be included. The Java language is also constant between platforms. Not every platform has a conforming Java environment, but the most popular ones do. Common Lisp also has huge functionality in its standard library that is part of the language specification. OCaml has a nice standard library, and is open source. If you want your program to work in OCaml on some unsupported architecture, you can compile it yourself. This still leaves porting the library; if the target architecture has POSIX, this is easy.

  • by Above ( 100351 ) on Wednesday November 28, 2001 @11:57AM (#2624733)

    Autoconf is a tool that, in the end, can only make portability choices for you. For those choices to mean anything, you have to need your software to be portable (to a wide range of platforms, really), and you need to understand the real issues involved in writing portable software.

    If you're writing application software for FreeBSD, Solaris, and Linux, 98% of the time you can write it so there are no portability issues. Why have the autoconf step when you can just "make; make install"? Modern systems are not all that different for high-level stuff, and are converging for medium-level stuff. It's really only the low-level details keeping them apart.

    If, on the other hand, running on an old Ultrix box, on your SCO Unix box, or on that PDP-11 in your garage is important, autoconf can give you the mechanisms to make all that work, but only if you know the differences between the platforms and what changes need to happen to your code to make it work. It does no good to have autoconf check whether bzero exists if you don't know to use memset as an alternative, or vice versa. A check without an alternative is just a way of bombing a little sooner than the compile stage.
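    To make that concrete, here is a sketch of the kind of check that is only worth writing when a fallback exists (autoconf-2.13-style configure.in; the fallback itself would live in a project header):

        dnl Defines HAVE_BZERO in config.h when bzero is available; the source is
        dnl expected to carry something like
        dnl   #ifndef HAVE_BZERO
        dnl   #  define bzero(p, n) memset((p), 0, (n))
        dnl   #endif
        AC_CHECK_FUNCS(bzero)
        dnl With no alternative at all, at least fail with a clear message at
        dnl configure time instead of an obscure compile error later:
        AC_CHECK_FUNC(memset, , AC_MSG_ERROR([no usable memset found]))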

    The other thing autoconf can help with is optional packages. These are not portability issues per se, but rather choices that need to be made, but often aren't worth bothering a user about. Consider the application that's all command line based except for a single X app that's not really needed, just nifty. Well, if the system doesn't have X, you don't build it, and if there's no X it's unlikely the user wanted to run X apps anyway.
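    Something along these lines handles the optional-X case; this is a sketch with invented program names, using the stock AC_PATH_XTRA test and an automake conditional:

        dnl configure.in
        AC_PATH_XTRA
        AM_CONDITIONAL(BUILD_XAPP, test "x$no_x" != xyes)

        # Makefile.am
        if BUILD_XAPP
        X_PROGS = xnifty
        else
        X_PROGS =
        endif
        bin_PROGRAMS = mytool $(X_PROGS)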

    As far as the mechanics go, autoconf is fairly easy. Once you understand the changes that need to happen, making autoconf apply them for you is trivial.

  • dissenting view (Score:5, Interesting)

    by anothy ( 83176 ) on Wednesday November 28, 2001 @12:11PM (#2624809) Homepage
    I may well be in the minority among this crowd, but I think you should avoid autoconf like the plague. People's most common reason for using autoconf/configure is portability, but that's a cop-out.
    Basically, the autoconf/configure school of portability says "forget about actually writing portable code, just write it for each variant and let the build process pick". That's whacked. You'll continually be running across new variants, new ways in which systems are different, or just new systems. For example, I use Plan 9, which has a very good ANSI/POSIX compatibility system (better than many *nix systems). Despite its being near-fully ANSI compliant, pretty much every autoconf-generated configure script fails because it doesn't recognize the output of uname or something stupid like that (of course, that says nothing about programs that claim to be ANSI or POSIX but aren't, like those including non-ANSI/POSIX headers, typically BSD stuff).
    This school of portability also typically makes your code far less readable, littering it with ifdefs every third line, and much larger. It ends up taking much more time than just slowing down and disciplining yourself to write genuinely portable code.
    The argument that 'configure ; make ; make install' is easy is stupid as well. Y'know why? 'Cause 'make install' is easier. Build your makefiles well; the 'install' target should have prerequisites, obviously, and make will build them for install. And 'configure' is slower and more prone to failure than 'cp makefile.linux.386 makefile', which is light-years more reliable. And editing an example makefile is way easier than putzing with autoconf/configure if something doesn't work, not to mention easier to debug (uh, 'echo' anyone?).

    On a personal note, having done a good bunch of portability work, I'd offer just a little extra advice: do not require GNU-specific stuff. Don't use any of the gmake-specific features or gcc language extensions; whatever the GNU propaganda says, that'll just kill your portability. And if you need something both simpler and more powerful than make, check out mk [vitanuova.com] from the Plan 9 and Inferno distributions. Vita Nuova has a version that runs on various Unixes and Win32 systems. It's very similar to make, but a bit more general and flexible. But, given that you've already got all the makefiles, I'd suggest your best bet is just sticking with plain makefiles and cleaning them up.
    • Re:dissenting view (Score:2, Insightful)

      by elflord ( 9269 )
      Basically, the autoconf/configure school of portability says "forget about actually writing portable code, just write it for each variant and let the build process pick".

      This is not true at all. There's nothing in autoconf that mandates sloppy programming.

      The argument that 'configure ; make ; make install' is easy is stupid as well. Y'know why? 'Cause 'make install' is easier.

      OK, but what if I want to set a different prefix, turn on optimisation, enable debugging, and tell the build where Qt is installed? I've seen straight make builds where this information is distributed across several Makefiles, and that's a pain.
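      For example, with a configure-based build all of that usually ends up on one command line (the --enable-debug and --with-qt-dir options are hypothetical; they only exist if the package defines them via AC_ARG_ENABLE/AC_ARG_WITH):

          CXXFLAGS="-O2 -g" ./configure \
              --prefix=/opt/myapp \
              --enable-debug \
              --with-qt-dir=/usr/lib/qt-2.3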

      I'd offer just a little extra advice: do not require GNU-specific stuff. Don't use any of the gmake-specific features or gcc language extensions.

      This I agree with.

      • There's nothing in autoconf that mandates sloppy programming.
        True, but that's not what I claimed. Rather, I've found (by observing code quality in numerous commercial and Free Software projects) that autoconf/configure encourages sloppy coding by eliminating many of the things which would otherwise discourage it, such as portability concerns.
        ...what if I want to set a different prefix...
        Um, "prefix=/my/path make install"? Environment variables work well for many such things (see the sketch at the end of this comment). Further, editing a well-written makefile for such things is simple; about as difficult, in my experience, as finding the desired options to different configures. This is even more true if any debugging is needed.
        I've seen straight make builds where this information is distributed across several Makefiles...
        True, and that does suck. You can build awful makefiles, and you can write truly elegant, portable code and use autoconf. They're tools. My point is about what these tools encourage, about what is generally true of projects in which they are used.
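        (The sketch promised above: with most makes, a macro set on the command line overrides a definition inside the Makefile, so this works even when the Makefile hard-codes prefix, unlike the plain environment-variable form.)

            make prefix=/my/path install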
  • by Lozzer ( 141543 ) on Wednesday November 28, 2001 @12:24PM (#2624887) Journal

    I'm a CS grad student working on a large research project (over 1 million lines of code, supported on many platforms). The project has been under development for several years, and the build system is nontrivial for end-users.

    Linus, is that you? Or maybe those wacky XFree86 guys.

  • I have to say this about configure/autoconf: 99 times out of 100, everything works perfectly. I feel a tremendous weight lifted from my shoulders when I see the configure script in the output of my tar -zxvf command.

    I'll even go for xmkmf before resorting to editing Makefiles by hand, à la xanim. I've edited Makefiles, but... a trip to the dentist is easy as pie in comparison.
  • There's a book [amazon.com] on this issue (autoconf, automake, libtool). I haven't read it. Maybe someone else can give a review.
  • A lot of people are complaining that autoconf doesn't do this or libtool doesn't work on this platform. Please remember that all of these tools are Open Source, so you may fix them yourself and contribute the changes back to the respective projects!

    Granted, not everyone has the time or the desire to fix such complex tools, but the author of the Berkeley amd story [columbia.edu] contributed many of his new tests back to these projects.

    Why not improve the status quo for everyone by contributing to or fixing these tools when you decide to use them? You might learn a new skill or two to add to your résumé in the process!

  • Be careful... (Score:3, Insightful)

    by seebs ( 15766 ) on Wednesday November 28, 2001 @12:58PM (#2625086) Homepage
    This is not a no-brainer, guaranteed win. It is easy to make autoconf do horrible, silly things which make your code less portable. As always, the very first thing you should do is make your code as portable as possible to begin with; this is much, much more efficient than any kind of configuration system.

    Current versions of configure also sometimes seem to produce makefiles which can't be used with classic Berkeley make and which depend on GNU extensions...
  • ...is what breaks autoconf and makes it often unusable.

    Look at the PHP [php.net] project. They use autoconf. Yes, autoconf works pretty well for PHP most of the time.

    However, you can use only certain versions of it, because the older versions do not have the necessary features, and the newer ones break backward compatibility or just plain have bugs that prevent PHP from building with them. So you are often locked into a particular couple of versions of autoconf, which is one of the main problems autoconf is supposed to fix in the first place.

    It does contribute a bit to developer frustration, but, in the long run, it is much better than most other things I have seen so far, so it's probably not that bad of a choice after all...

    :)
  • by FernandoValenzuela ( 523533 ) on Wednesday November 28, 2001 @01:42PM (#2625291)
    You need to move your stuff to Visual C++ dsp and dsw files, man.
  • Use tmake (Score:2, Interesting)

    From Trolltech [trolltech.com]. The version that comes with Qt 2 is written in Perl and is thus as cross-platform as you can probably hope for. tmake project files are significantly easier to maintain directly than makefiles, configure scripts, or any other beast I've looked at.
  • by graveyhead ( 210996 ) <fletch@@@fletchtronics...net> on Wednesday November 28, 2001 @04:24PM (#2626435)
    There are three reasons in my mind why autoconf absolutely rocks.
    • The library dependency checking via AC_CHECK_LIB rocks. Why should I have to deal with the infinite possible locations for shared libraries? Autoconf deals with this nicely, and builds your -l and -L parameters for you.
    • ./configure --prefix=[somewhere] makes it very easy for your users to customize the installation directory.
    • AC_ARG_WITH is an excellent macro that lets you create compilation options with ease. Two of my favorites are creating debugging and profiling builds that can be turned off in a production compile (see the sketch after this list).
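    A rough sketch of those two macros in an autoconf-2.13-style configure.in (the library and option names here are examples, not taken from the parent post):

        dnl Adds -lz to LIBS and defines HAVE_LIBZ if zlib's inflate() is found,
        dnl otherwise stops with a clear error:
        AC_CHECK_LIB(z, inflate, , AC_MSG_ERROR([zlib is required]))

        dnl ./configure --with-debug switches the flags; the default is an optimized build:
        AC_ARG_WITH(debug,
        [  --with-debug            build with debugging info and without optimisation],
        [CFLAGS="-g -O0"],
        [CFLAGS="-O2"])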
    The only truly horrible feature of autoconf/automake is the function-detection mechanism, as other posters have complained. When there is no viable replacement for the function being checked, the check is just plain dumb. I suggest not using this feature at all, and instead writing code that (a) is strictly ANSI, or (b) only uses library functions that you have checked with AC_CHECK_LIB.

    Following these simple rules has made it very easy for me to create sane makefiles across projects with a very large number of subdirectories and sources.

  • by Ian Lance Taylor ( 18693 ) <ian@airs.com> on Wednesday November 28, 2001 @04:32PM (#2626517) Homepage

    In my experience with software projects, a straightforward build system is essential. It permits consistent builds, which is essential for debugging complex problems. It permits easy builds, which is essential for developer testing. The hardest part of programming is debugging, and good debugging requires speeding up the edit/compile/debug cycle, while ensuring that everybody still gets the same result for each build.

    So I would say that no matter what, you should improve your build system.

    Given that, should you use the GNU build system, which is autoconf/automake/libtool? Well, it depends. There is a definite learning curve to these tools. They are written in several different languages (sh, m4, perl) and dealing with certain problems can require understanding each of those languages and how they interact. Using these tools will not automatically make your code portable or easy to build; you have to use them correctly, and you have to understand what you are doing.

    On the other hand, every other system which supports a complex build process also has a steep learning curve. There's no easy solution--if there were one, everybody would be using it.

    The GNU build system is widely used, so there are a lot of people who can help you with it. The mailing lists are active and helpful. There is also a book [redhat.com], but I'm one of the co-authors so you'd better take that with a grain of salt.

    I've converted large projects from ad hoc build systems to using the GNU build system. It's usually pretty straightforward--a matter of a week or so--but then I have a lot of experience with the tools.

    I've never used most of the other build systems (e.g., Odin, Jam, ANT) for a serious project, so I can't honestly evaluate how they stack up. I can recommend against imake.


  • My experience with autoconf- and libtool-based build processes is that they tend either (a) to require a gcc-based compiler or (b) to kick in optimization flags only if the end user sets CFLAGS manually (and even then, the CFLAGS may not get carried over into all parts of the project).

    So, depending upon your needs and just how portable your project needs to be, you might want to look at imake. While imake isn't 'simple' by any stretch of the imagination, one can take advantage of the fact that any system that ships with X11 developer packages has a working imake setup with a good set of optimization switches already configured. The only big problem with imake is that a lot of folks don't set up the site and/or host configuration files to change the compiler settings when they aren't using the manufacturer's compiler. [A simple #define HasGcc2 YES is usually all it takes!]
