A Distributed Front-end for GCC

format writes "distcc is a distributed front-end for GCC, meaning you can compile that big project across n number of machines and get it done almost n times as fast. The machines don't have to be identical or be running the exact same GCC version, but having the same OS is helpful." With the advent of faster hardware, I can't complain about kernel compile times anymore, but larger source trees could definitely benefit from this.
  • by Anonymous Coward
    That doesn't make too much sense. What if I had 50% 2.9 machines and 25% 3.2 machines, and a bunch mixed in-between? How would it know which version I wanted my program compiled with?
    • by Angry White Guy ( 521337 ) <CaptainBurly[AT]goodbadmovies.com> on Saturday October 12, 2002 @12:04PM (#4437052)
      From the FAQ:

      distcc doesn't care. However, in some circumstances, particularly for C++, gcc object files compiled with one version of gcc are not compatible with those compiled by another. This is true even if they are built on the same machine.
      It is usually best to make sure that every compiler name maps to a reasonably similar version on every machine. You can either make sure that gcc is the same everywhere, or use a version-qualified compiler name, such as gcc-3.2 or gcc-3.2-x86-linux.


      So in other words, keep them close, especially across gcc versions that break backwards compatibility.
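
      A minimal sketch of what the FAQ's suggestion looks like in practice, assuming every host actually has a binary named gcc-3.2:

        # every machine resolves 'gcc-3.2' to the same compiler release
        make -j8 CC='distcc gcc-3.2' CXX='distcc g++-3.2'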
    • gecc (http://gecc.sf.net) takes care of that and finds a compile node that uses the same compiler. Think of something like Mozilla, which uses both C and C++: gcc and c++ could invoke different compilers (or at least different versions).
  • GCC version (Score:5, Insightful)

    by csnydermvpsoft ( 596111 ) on Saturday October 12, 2002 @11:38AM (#4436968)
    The machines don't have to be identical or be running the exact same GCC version

    Well, to some extent they probably do. If you're running GCC 3.2 on one, you wouldn't be able to run 3.0 on another because of binary incompatibility.
    • Re:GCC version (Score:2, Informative)

      by Anonymous Coward
      The binary incompatibility issue only exists for C++, but it is still very important. I took a brief look at the distcc manual, and it did mention (see this section [samba.org]) that you may want to use the same version of g++ on all machines if you are compiling C++. I would change "may" to "must"; you really don't want to take the risk of having incompatibility bugs.

      Regardless of the language, if some of the machines are running a different OS, or have a different architecture, they will have to do cross-compilation and things will get more complicated.

      Colin Howell
    • I thought that the binary incompatibility was only a C++ thing. So for some projects, that's an issue, but not for all. Of course, the idea of being careless about which compiler version you're using for building a large project is rather strange.
    • Only true for C++ (Score:4, Informative)

      by FooBarWidget ( 556006 ) on Saturday October 12, 2002 @12:11PM (#4437077)
      The C ABI is compatible between *all* GCC versions (and probably other compilers too). You can compile libgnome using GCC 2.95.2 and Nautilus using GCC 3.2 and not have any problems at all.
      • On all *official* GCC versions, perhaps (if you ignore #ifdefs based on the GCC version, for the purpose of using new features or avoiding bugs in certain versions). However, Red Hat's 2.96 abomination is *not* compatible with other GCC versions, even for C. It ignores the "aligned" attribute when laying out structs, so if you have a struct containing types with specified alignment (I'm not sure what it does if you specify the alignment for the datum itself), that struct will have a different layout with 2.96 than with other GCC versions.
        • > However, Red Hat's 2.96 abomination is *not*
          > compatible with other GCC versions, even for C.

          That's plain wrong. My GNOME 1 libraries are compiled by RedHat's gcc 2.96. All the GNOME 1 apps I have compiled myself are compiled using GCC 3.0, and they work just fine.

          Maybe that "aligned" attribute can cause problems, but it isn't used by default, and I have yet to see any desktop app that compiles with it by default.
    • Source code (Score:4, Insightful)

      by ucblockhead ( 63650 ) on Saturday October 12, 2002 @12:25PM (#4437135) Homepage Journal
      Running different versions could cause really nasty problems if different versions of gcc support different levels of C (like C99 or older C) or if one version has a compiler bug that another doesn't.

      Can you imagine code compiling or failing to compile randomly depending on which machine happens to compile it? Yikes! Debugging nightmare...
      • Re:Source code (Score:5, Insightful)

        by wik ( 10258 ) on Saturday October 12, 2002 @02:15PM (#4437580) Homepage Journal
        I've had bad experiences with this in a Condor cluster [wisc.edu] of linux machines which had different versions of glibc. Seemingly randomly, my jobs would blow up into the netherworld without running and without an error message. Until the administrators matched all of the glibc's (but not the linux distributions, for some reason), I had to compile everything with -static on one machine and pray.

        I wonder how much of a problem network bandwidth is in this system. With Condor, moving large datasets between machines is a problem. Object files can be pretty big and if you have a lot of them, you might risk pushing the compile bottleneck to the network. Even worse might be the link step, where all of the objects have to be visible to one machine (gcc doesn't have incremental linking yet, does it?).
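
        A rough sketch of that -static workaround (the program and file names are made up):

          # link against static libraries so the binary doesn't care which glibc each node has
          gcc -static -o myjob myjob.c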
    • If all the machines _are_ identical and have a common filesystem, then it might be a bit quicker to use something simple like Doozer [venge.net] or PVM-enabled GNU make [sourceforge.net]. But if compile time is dominated by just crunching the code then it might not make too much difference.
  • hmmm (Score:5, Funny)

    by Anonymous Coward on Saturday October 12, 2002 @11:39AM (#4436971)
    Yay! My 133 doesn't have to take 25 billion years to compile anymore! Uhm, wait, I don't have any other computers... Shoot.
    • Re:hmmm (Score:5, Informative)

      by stevey ( 64018 ) on Saturday October 12, 2002 @11:53AM (#4437018) Homepage

      In that case you might like to look at ccache [samba.org] which is a compiler cache for a single machine.

      It will cache the compiler options for each source file and the resultant object file generated. I use it a lot when I'm building software packages, which requires multiple compilations. It works very nicely - I'd love to see how well it would integrate with distcc....
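      A minimal sketch of dropping ccache in, assuming a stock setup (the cache directory shown is just the usual default):

        export CCACHE_DIR=$HOME/.ccache   # where the cached hashes and object files live
        make CC='ccache gcc'              # gcc only actually runs on cache misses
        ccache -s                         # show hit/miss statistics afterwards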

      • Re:hmmm (Score:3, Insightful)

        by Blkdeath ( 530393 )
        In that case you might like to look at ccache
        Isn't the default cache size somewhere to the tune of 2-4GB?

        I recall that all of my lower-powered machines were lucky to see a 6GB drive, let alone have 2-4GB to spare.

      • One of the most useful things about ClearCase is its ability to "wink in" object files from other developers' views. That means that if one developer on the team has built a version with a certain time stamp, that particular object file never has to be built by anyone else in the group unless some dependency changes. It would kick ass if CVS had that capability.

        Magnus.
  • Interesting approach (Score:5, Interesting)

    by PineGreen ( 446635 ) on Saturday October 12, 2002 @11:39AM (#4436972) Homepage
    The Sun compiler suite comes with dmake, which does the same thing at the level of make rather than cc, but it's essentially the same idea.
    It would definitely make Beowulf clusters interesting for compilation as well as hard-core numerics (no joke intended).
  • Not N (Score:5, Informative)

    by Nashirak ( 533418 ) on Saturday October 12, 2002 @11:44AM (#4436987) Homepage
    You can almost never achieve a speedup of N. You can achieve S(N) = T(1)/(T(1)*alpha + ((1-alpha)*T(1))/N + T0), where T(1) is the time it takes to run the task with 1 computer, alpha is the part of the task that cannot be parallelised (startup, registers, etc.) and T0 is the communications overhead of the task.

    Just to clarify. :)
    • Re:Not N (Score:2, Funny)

      by Anonymous Coward
      That sounds like how they calculate my mortgage payments!
    • YES! N! Re:Not N (Score:5, Informative)

      by angel'o'sphere ( 80593 ) <{ed.rotnemoo} {ta} {redienhcs.olegna}> on Saturday October 12, 2002 @11:55AM (#4437023) Journal

      You can almost never achieve a speedup of N. You can achieve S(N) = T(1)/(T(1)*alpha + ((1-alpha)*T(1))/N + T0), where T(1) is the time it takes to run the task with 1 computer, alpha is the part of the task that cannot be parallelised (startup, registers, etc.) and T0 is the communications overhead of the task.


      This is the textbook formula: Amdahl's law, IIRC.

      In reality, and also in most textbooks, there are exceptions where the solution scales with the number of processes.

      And it should be easy enough to see: 5 machines compiling one source file each are 5 times as fast as one machine compiling 5 source files.

      As long as you start gcc 5 times in a row you have the same initialization overhead for EACH instance of gcc, one after the other.

      If you manage to start gcc with a couple of source files as arguments to compile, you at least save the loading time of the binary. That would correspond roughly to the alpha value.

      Amdahl's law is useful for a single program/problem: try to parallelize gcc itself and you'll find that compiling a source file can't be sped up very much. So 5 processors running several threads of one gcc instance do not scale by 5.

      However it says nothing about just solving the same problem multiple times in parallel.

      Regards,
      angel'o'sphere
      • Re:YES! N! Re:Not N (Score:4, Informative)

        by joib ( 70841 ) on Saturday October 12, 2002 @12:04PM (#4437049)
        That assumes you can divide the work equally. Consider that the number of source files probably isn't an integer multiple of N, and that different source files take varying times to compile. Of course, as the number of source files approaches infinity, and if you have some load balancing scheme, this becomes a non-issue. Of course, in Real Life (TM) most projects don't have an infinite number of source files.
        • Unless you have a very small project with files of vastly different sizes, the size of each file won't matter that much. What will be more problematic is lots of dependencies. For example, if 7 machines are frequently waiting for one machine to compile a file on which their compilation task depends, then you'll lose a lot of the benefit of parallel compilation.
      • No, NOT N (Score:3, Interesting)

        by HisMother ( 413313 )

        Ahem. Amdahl's law still operates, and you even say so yourself. There's a constant part that cannot be removed. Let's say it takes 50 msec to initialize gcc and 500 msec to compile the average source file. Then it takes 5.05 sec to compile ten files with one copy of gcc. Ignoring communications, it takes 0.550 seconds to compile them on ten machines. Is 5.05/0.550 == 10? No, it's about 9.2. Therefore, the speedup is LESS THAN N. Note that the faster the actual compile time, the lower the speedup would be!
        • 10? No, it's about 9.2. Therefore, the speedup is LESS THAN N. Note that the faster the actual compile time, the lower the speedup would be!

          What about a project with

        • oops... take 2 (Score:3, Interesting)

          by yerricde ( 125198 )

          Amdahl's law still operates, and you even say so yourself. There's a constant part that cannot be removed. Let's say it takes 50 msec to initialize gcc and 500 msec to compile the average source file. Then it takes 5.05 sec to compile ten files with one copy of gcc.

          Then you go on to tell how using ten machines provides only a 9.2-fold speedup. But what about a project with 100 files? It would take 50.05 seconds to build everything on one machine, and it takes 5.050 seconds to build ten files on each machine. Now we have a 50.05/5.050 == 9.92 fold speedup. In practice, can you notice the difference between 9-fold and 10-fold speedup?

          Does the speedup factor not approach the number of machines asymptotically?

          (How can I "Use the Preview Button!" when an accidental Enter keypress in the Subject invokes the Submit button? Scoop gets it right by setting Preview as the default button.)

          • Yes, it does asymptotically approach N. The OP said you couldn't actually get N in practice, and the original reply said "yes you can." If the reply had been "right, but you can get close" I wouldn't have bothered, but the person was implying that Amdahl's law didn't apply, which is nonsense. It applies perfectly well -- you'll never reach precisely N, plain and simple.
        • It takes (0.5+0.05)*10=5.5 seconds to compile 10 source files. When did you ever see a project compile 10 source files with a single invocation of gcc?
        • Re:No, NOT N (Score:4, Informative)

          by angel'o'sphere ( 80593 ) <{ed.rotnemoo} {ta} {redienhcs.olegna}> on Saturday October 12, 2002 @04:10PM (#4438018) Journal

          Let's say it takes 50 msec to initialize gcc and 500 msec to compile the average source file. Then it takes 5.05 sec to compile ten files with one copy of gcc


          Your calculation is wrong:

          I explicitly said: you start gcc N times for N files.

          So a call like gcc 1.c 2.c 3.c ..... 10.c is not allowed.

          Because that call falls under Amdahl's law (in so far as a common initialization time is needed which is divided among the ten compile tasks).

          However 10 calls:
          gcc 1.c
          gcc 2.c
          gcc 3.c ...
          gcc 10.c

          Those ten calls scale with 10! Running those ten calls one after the other on 1 machine takes exactly ten times as long as running each of them on its own machine.

          I repeat: Amdahl's law is about parallelizing one algorithm. It is not about starting the same algorithm on different problem sets (different .c files) in parallel.

          Whereas the first one does not scale infinitely and does not scale with N, the second one does (of course with some limitations in real life, e.g. if all compilers use the same file server via NFS).

          The interesting difference is this: under Amdahl's law you have a maximum number of processors up to which the solution scales. Adding more processors does not make the problem solving faster; very often it actually makes it slower because of communication overhead. OTOH, by just duplicating the hardware and distributing the problem "identically" rather than "divided and parallelized", you get a nearly unlimited scale-up. You scale up to the point where the distributing and the gathering get too expensive (distributing C sources from a CVS repository to compile-farm machines and gathering the *.o files, or better the *.so files, back for linking).

          angel'o'sphere
    • Can you greet Mr. Amdahl from me? :)

      And actually, this is also not quite correct. In practice, it is possible to achieve super-linear speedup, although it is unlikely. (N times as much memory is more the culprit than the additional processing power.)

      While speaking of distcc, one could also mention the Group Enabled Cluster Compiler [sourceforge.net]. Never used one of those, I have to admit.
    • Another reason why you won't receive N speedups (but rather compilation problems) is that not all source packages write their Makefiles properly so that you can do make -j2 (run 2 jobs in parallel).

      E.g. make will try running two sub-makes at the same depth level at the same time, even though the makefile hasn't specified a dependency between them. For example:

      SRC_DIRS = ha_fsd \
                 fsd_lib \
                 ... \

      Unless you put "ha_fsd: fsd_lib" somewhere, you will run into compilation problems with sub-makes and make -j. So my question is: how does distcc happen to fix such dependency problems?

      AFAIK, samba has written its makefiles properly so you can do make -j2, so maybe it was designed for samba (distcc is part of the samba.org domain... go figure).
    • S(N) = T(1)/(T(1)*alpha+((1-alpha)*T(1))/N+T0)

      That's o(N) - the asymptotic speed of N.

      • Assuming you meant the expression was o(1/N) (time, or a "speed" of o(N)), you are incorrect. In fact, the speedup is not merely O(N) but theta(N) (holding the other factors constant, of course).

        God gave you a shift key for a reason. Learn to master it. Or, alternatively, don't use terminology you don't understand.
    • Anyone else read the article summary and just know there was going to be a comment saying something about 'n' not being accurate?
  • Sure, kernel compiles are fairly speedy but large projects still take forever to compile. Even on my AthlonXP 1900+. I downloaded the latest CVS snapshots of KDE in an attempt to de-bluecurve my new RH 8.0 machine and it took a couple hours to compile everything. Depressing.
  • by IronTek ( 153138 ) on Saturday October 12, 2002 @11:50AM (#4437005)
    This could really spur the development of OpenOffice.

    With 50, 100 machines or so hooked up, OpenOffice's compile time could be reduced to as little as 1 or 2 days!!!

    • Actually, it would be kind of cool if this was part of the default install in Gentoo - along with some P2P program for finding others online who are running the same app. Then you could download the source code and distributedly (is that a word) compile it... As long as your network is fast enough, you could significantly reduce the amount of time compiling, etc. on slow machines.

      Just a thought... maybe I'm off base.

      -Russ
      • Actually, it would be kind of cool if this was part of the default install in Gentoo - along with some P2P program for finding others online who are running the same app. Then you could download the source code and distributedly (is that a word) compile it... As long as your network is fast enough, you could significantly reduce the amount of time compiling, etc. on slow machines.

        Wonderful. Then you can get rooted when others running the P2P app have modified the compiler to insert trojans into the generated binaries.

        • Ahh... that would be a problem, wouldn't it.

          Doh!

          Okay, how about a super-cool, digital-signature based P2P system based on an unbreakable trust matrix? Yeah, right. OK forget it.

          -Russ
    • Yeah, now all we need is a distributed method to launch OpenOffice.
  • So now we need to make a Distributed coder software...
  • by selderrr ( 523988 ) on Saturday October 12, 2002 @11:58AM (#4437028) Journal
    I sincerely hope Apple builds this feature into Project Builder, which compiles insanely slowly compared to CodeWarrior. If it weren't for the superior interface and integration with Interface Builder, I'd swap back to CodeWarrior right away.

    Does anyone here know how good the speed increase is when compiling on dual G4s versus a single proc?
    • I haven't used CodeWarrior for years (got a closet full of shirts though), but I believe you are looking at a compiler difference, not the IDE difference.

      gcc is not a fast C compiler. It is a portable C compiler and it makes pretty good code, but it is not fast. g++ is a slow C++ compiler.

      The Codewarrior compilers were fast compilers that made pretty good code.

      It's all in the tradeoffs.
      • Engineering time
      • target portability
      • source language selection
      • code quality
      • compilation speed

      You can't have it all. Apple has been adding engineering hours to improve speed with gcc, but gcc will always value source language selection and target portability above all else.
      • I agree that gcc has different goals, but come on, the speed difference is really flabbergasting! CodeWarrior is twice as fast! I can understand a speed loss due to portability, but that much??

        Anyway, it doesn't all matter that much if there were to be a speed gain from dual procs or distributed compiling. So to restate the Q: do you have any idea if PB is faster on a dual 1GHz than on a single 1GHz?
  • So, is it better? (Score:5, Interesting)

    by FreeLinux ( 555387 ) on Saturday October 12, 2002 @11:58AM (#4437032)
    Is this better than say, Group Compiler [freshmeat.net]?
    • From the Group Compiler [sic] (gecc) site:

      gecc is a proof of concept. It is heavily inspired by ccache and distcc. You could chain these tools to achieve the same goals gecc tries to reach. Both tools are much more mature and work in production environments. gecc just started with a little different concept. gecc has a central component (distcc has not).

      My idea is that gecc could better handle a varying set of compile nodes: if you have some machines that only from time to time could help in distributed compiling, then this is nice.

      With a central component it might be easier to monitor the compilation and distribute the compile jobs.

      Right now gecc is only useful if you read the source.

      Emphasis is mine.

      I guess it all depends on whether or not you want to work with production-quality code.

  • Could someone please point out the difference between this and a parallel and/or distributed make, like pmake?

    It doesn't sound really reasonable to put the coding work into gcc when you'd like to have yacc/bison, a bunch of perl scripts, and whatever else you have in your makefile sped up as well.

    Regards,
    angel'o'sphere
    • The way the words "parallel" and "distributed" are used here, they mean practically the same thing, as far as I could see in the distcc docs. However, the systems being described in the /. comments may have some fundamental differences in their internals.

      It has been a long time since I tried pmake, but frankly I didn't like the thing. It was always segfaulting for some reason, while the cluster did much more complex jobs without a hassle. Unfortunately the cluster lived out its life, and now I don't know how things stand.

      It is not reasonable to assume you can get by with just some yacc/bison/perl and a well-made Makefile. Here, things are much more complex. The problem is not compiling every file on a parallel machine but compiling the whole code base in small pieces. And that's hard. Some things are capable of going parallel, others are not. There are several algorithms to determine what may go parallel or not. There are also some general and analytical methods that adapt data to go parallel right from the start. Besides, every distributed/parallel system needs to exchange information about intermediate steps that every process needs to continue its work.

      Frankly, I don't know the processes behind parallel compilation in much depth, but I can guess at them from my limited practice with parallel computers. Imagine some photo or matrix that is divided into small pieces and sent over several machines. Every time calculations touch the edges of a small piece, the system needs data that is located on the other machines. There are a few methods to pass this information; the most popular is MPI - Message Passing Interface [anl.gov]. However, it is not a universal solution. In cases where the data is too heterogeneous and the calculations don't fit a common method, MPI and its cousins are a hassle to deal with. Compilation is one of these cases, as we are always dealing with different files, different compilers and tons of different interactions among the data. To create systems capable of doing parallel compilation, we would need other approaches. At the very least, such systems should give the developer complete transparency about the fact that compilation is being done in parallel, or else the system will be completely useless. Imagine that the developer, while creating a new application, were forced to take into account whether his code is capable of being compiled in parallel or not. That would be a huge burden on development if the app grows beyond the size of Mozilla.
  • Big benefit (Score:5, Interesting)

    by LoudMusic ( 199347 ) on Saturday October 12, 2002 @12:07PM (#4437059)
    I think the biggest plus is that you can have one hella fast machine on your network running distcc that basically does all your compiling for all your other machines. I can see this being a big bonus for server farms like rackspace.com. The customers would be getting compile speeds from a big ass server, rather than just their little dinky Duron.

    ~LoudMusic
  • Perfect for Gentoo (Score:3, Insightful)

    by waffle zero ( 322430 ) on Saturday October 12, 2002 @12:11PM (#4437076) Journal
    Whether you're looking to install Gentoo on an old Pentium to use as a router or sacrificing your first-born to compile KDE, it should make things go quite a bit faster.

    Well, that is unless every computer you own runs Gentoo and wants to emerge world.
  • Could you imagine a...

    nevermind.
  • Distributed Front End may be a bit of a misnomer. It appears this is a distributed preprocessor (article says it farms out preprocessed source to different copies of gcc.) Gcc presumably puts the preprocessed source through its own front end (parser), back end (optimizer), and produces binaries which are then linked on the main machine.

    What's the problem? Optimizations are probably limited to each separately compiled module. Most optimizations will perform better across a larger code base. (Read the Dragon Book chapter 10 before telling me I'm wrong.) This method may produce valid code optimized by module but the code is nowhere as good as it could be. Making debuggable code is another challenge.

    (If they ARE doing multiprocess optimization then I'm duly impressed. I doubt it, though.)
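
    For reference, the split being described is just the classic two-step compile; a rough sketch with made-up file names:

      gcc -E foo.c -o foo.i        # local machine: run only the preprocessor
      gcc -c foo.i -o foo.o        # remote machine: compile the already-preprocessed source
      gcc -o myprog foo.o bar.o    # back on the local machine: link the returned objects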
    • This is true. Unfortunately, traditional makefiles tend to encourage compiling each file separately, so you have to use workarounds like ICC's .il file mechanism to do global optimizations. However, for developers, this distributed processing is a big boost. When you're working on code, you have to recompile a project repeatedly, and distributing the workload pays off in decreased frustration. For those intermediate builds, optimizations don't really matter anyway.
    • Unfortunately GCC is really poor at optimizing in any case, so it's pretty much moot.

      Just compare GCC at max optimization and Sun cc at minimum or first level. Sun cc beats GCC even there.

      So while it's true that more data -> better optimization, I don't think it's a very pressing issue for GCC front ends.
  • by ucblockhead ( 63650 ) on Saturday October 12, 2002 @12:19PM (#4437110) Homepage Journal
    If your system is well designed, compiling the entire thing should be a rare event. In a well designed project, most changes occur in c files or in headers only included in a few c files, so most changes only require compiling a very few files.

    Compiling the whole source tree should be the sort of thing you do fairly rarely (for a big project), perhaps once a night, perhaps automated so no one has to watch it.

    If compile time is something that is a significant problem for you, you really need to look at your code design.
    • That's BS. Software design won't save you when you are modifying the makefiles, when you are touching a header file that is used everywhere, or when you have to do a clean build just to ensure that you won't screw up everybody simply because make is such a crappy/unreliable build tool. A good deal of the blame is also shared by C/C++ header files, which are a pretty antiquated way to specify an API.

      Granted good software design should ensure that you don't have to do this often. However the typical build tools make it way too costly to fix such issues that force big recompiles. As a result they often remain unfixed.

      I wish some vendor had the guts to promote a replacement for makefiles that would allow higher-level semantics to be exploited to reliably speed-up builds. Note: Apple is already doing good by promoting jam, even if jam is "just" a better make.

      I had great hopes to see such a tool emerge from CodeSourcery's software carpentry contest. Too bad they didn't have the time to get to it.
      • If you have headers included everywhere that have to be changed often, you've got a bad design.

        You'd be amazed at how often it is easy to get rid of those stupid "include everywhere" headers.
  • by GGarand ( 577082 ) on Saturday October 12, 2002 @12:21PM (#4437117)

    From the ccache homepage [samba.org], which is also a Samba hosted project :

    ccache is a compiler cache.
    It acts as a caching pre-processor to C/C++ compilers, using the -E compiler switch and a hash to detect when a compilation can be satisfied from cache.
    This often results in a 5 to 10 times speedup in common compilations.

    • using distcc and ccache [samba.org]

      from the above link:


      distcc & ccache
      Has anybody yet thought of integrating distcc & ccache?

      Yes, of course. They work pretty well as separate programs that call each other: you can just write CC='ccache distcc gcc'. (libtool sometimes has trouble with that, but that's a problem that we will fix shortly and it is a separate question.)

      This is very nearly as efficient as making them both part of a single program. The preprocessor will only run once and the preprocessed source will be passed from ccache straight to distcc. Having them separate allows them to be tested, debugged and released separately.

      Unless there's a really strong argument to do otherwise, having two smaller programs is better practice, or at least more to my taste, than making one monolithic one.
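
      Concretely, chaining them looks something like this (the host names are placeholders):

        export DISTCC_HOSTS='localhost build1 build2'
        make -j6 CC='ccache distcc gcc'   # ccache answers repeats from its cache; misses are farmed out by distcc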
  • Slightly off topic, but I've found ccache [samba.org] to be amazing at speeding up compiles when developing code.

    It basically hashes (after a cpp pass) and caches. A lot of the time one has to make clean, tweak a Makefile.am, change a preprocessor variable, or work with multiple different branches, such that most source files are still exactly the same. In that case, huge speedups.

    -- p
  • Wow multiple computers compiling one thing, imagine a beowulf clus.... errr nevermind
  • by skroz ( 7870 )
    Oh well, I've been trying to get a distributed kernel compilation system working using Platform's LSF. Guess I can throw that project away. ;)
  • I've recently written a distributed compile tool using a different approach, after using distcc for half an hour. The big problem with distcc is that it does all the preprocessing on one machine, which is really overkill in some situations (and limits the total speed increase one can gain).

    What I've done is first run a modified version of make and then distribute all the objects which have to be compiled to the machines. Everything is done on those machines (including preprocessing), so the number of compile machines is only limited by the speed of the network (I've been compiling on 60 hosts, with an almost linear speed increase). The only problems with this approach are that the same compiler has to be installed on all the machines, and that they have to write to some sort of network-shared filesystem (for the objects). The same compiler is an easy thing, since I've been using the Intel compiler version 6 and common system includes (I've put them into a shared include dir).

    Any thoughts about this thing? (Or some folks willing to help me create a version based on GPL stuff so I could release this one?)
    • If you have a large cluster of distcc compile computers it is often better to remove the host machine (localhost) from the environment variable DISTCC_HOSTS so distcc does not actually compile files locally, but instead only does preprocessing, remote machine file delegation and final linking. You might also consider reducing the "make -j" number to reduce network traffic if your network is slow.
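
      For example (the host names are placeholders):

        # leave localhost out so this box only preprocesses, distributes and links
        export DISTCC_HOSTS='node1 node2 node3 node4'
        make -j4 CC=distcc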
      • I've tried that, but in the project I'm working on the preprocessing takes WAY too long, so my machine can only serve, say, 6 to 8 clients. This is a severe limitation and limits scalability. Kind regards, -ph-
  • Distcc has a problem [samba.org] with gdb.
    It appears that when a file is pre-processed on the host machine, gcc does not record the original directory in the pre-processed output file. When the pre-processed file is farmed out to a remote gcc build machine, the remote gcc compiler (not knowing any better) compiles the file and records the remote machine's directory in the object file. Now when a user tries to debug the program, gdb cannot find the source directories.
    Unfortunately, this "debugging" bug has to be fixed within GCC itself. This thread [samba.org] describes how GCC might be patched to allow gdb to work correctly with distcc, but so far no action has been taken.
    This is not a huge problem - distcc is still great for production builds.
    • This is not a huge problem - distcc is still great for production builds.

      Uh, yeah, how often do you do production builds? Every 3 months? You can let that one run overnight. It's the time the test builds take that really hurts productivity, and without debugging, they're worthless. So could it be this is a non-solution to an existing problem?

      • Why so negative? They are aware of the gdb issue and are working to fix it. This does not make distcc 'worthless' by any stretch.
        Even if you do have a crash, gdb will still give you the stack trace of a distcc-compiled -g program, complete with function names and line numbers - you just don't get the source code without setting a directory directive within a gdb session. Big deal. It's perfectly usable.
        Sometimes I'll launch production builds mid-day to correct - wait for it - mid-day production problems. This has to be done as quickly as possible, and distcc is very useful.
        The gdb thing is by no means a showstopper.
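
        A rough sketch of that gdb workaround (the paths and file names are made up):

          gdb ./myprog core.1234                # hypothetical core file from a crash
          (gdb) bt                              # backtrace with function names and line numbers still works
          (gdb) directory /home/me/src/myproj   # point gdb at where the sources actually live
          (gdb) list                            # source listing now resolves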
  • security? (Score:5, Informative)

    by gooofy ( 548515 ) on Saturday October 12, 2002 @12:48PM (#4437207) Homepage

    looks like this one is not necessarily a good idea to run on a university workstation cluster...

    1.4 Security Considerations

    distcc should only be used on networks where all machines and all users are trusted.

    The distcc daemon, distccd, allows other machines on the network to run arbitrary commands on the volunteer machine. Anyone that can make a connection to the volunteer machine can run essentially any command as the user running distccd.

    distcc is suitable for use on a small to medium network of friendly developers. It's certainly not suitable for use on a machine connected to the Internet or a large (e.g. university campus) network without firewalling in place.

    inetd or tcpwrappers can be used to impose access control rules, but this should be done with an eye to the possibility of address spoofing.

    In summary, the security level is similar to that of old-style network protocols like X11-over-TCP, NFS or RSH.
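
    If you do go the inetd/tcpwrappers route the docs mention, the access rule is the usual hosts.allow/hosts.deny pair; a sketch with a made-up trusted subnet (the spoofing caveat above still applies):

      # /etc/hosts.allow -- only the trusted build subnet may talk to distccd
      distccd : 192.168.10.
      # /etc/hosts.deny
      distccd : ALL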

  • by GeckoFood ( 585211 ) <geckofood@nosPAM.gmail.com> on Saturday October 12, 2002 @12:48PM (#4437210) Journal

    Once upon a time, Symantec had a C++ compiler, and with version 7.5 (1996), the build process could be spread all over a network. This did speed up compilation times, as machines running the build service that were more or less idle would be sent files to compile, passing back the objects and binaries as appropriate.

    Oh, by the way, that compiler is now called Digital Mars C++.

    That said, all the machines on the network had to be running Windows (and at that time, I think Windows 95 and NT were the only choices available for that compiler). Further, all had to have the same version of the compiler.

    For those of us that are running Linux boxes on a primarily Windows network, this system, whether GCC or something else, would be rather hard to implement without a cross-compiler. Additionally, even if all were Linux workstations (or BSD, or Solaris, etc etc etc) wouldn't binary compatibility be driven by not just the version of the compiler but the target OS as well?

    It's a noble undertaking. I hope that the developers are putting thought into all the little things like this that will make it tough to pull off.

  • Live boot CD? (Score:5, Insightful)

    by no_such_user ( 196771 ) <jd-slashdot-2007 ... ay.com minus bsd> on Saturday October 12, 2002 @01:22PM (#4437364)
    I'd love a speedup, but the time I'd save compiling would be wasted on having to fully install another linux box. Being able to boot a CD with a live linux distro and this software, and then connect to these slave machines to help compile would be immensely helpful. My linux box is a Cyrix 200MHz PC. Being able to stick a CD into my Athlon 1800 to help the compile would be fabulous.
    • Take a look at gecc (http://gecc.sf.net); it is designed to handle a changing set of compilation nodes. It is a work in progress, but maybe you'd like to take a look at it. It does not require the same header files or all the libs to be installed on all the compilation nodes, only the compiler (gcc).
    • Looking at my post, I'm afraid that admitting to running linux on a 200MHz machine, Cyrix no less, makes me sound a little less than serious about my setup. I should point out that this is a thinknic PC with a HDD added on, running sendmail, imapd, apache, sshd, and a few other general linux thingies. I don't run X. It's been running for about 2 years, downed only a handful of times when I felt like updating the kernel. It's small and sits in the corner with no complaints.

      Perhaps my biggest complaint is how long it takes to compile. Thus, this project is right up my alley. Good job, folks!
    • Run a gcc cross compiler on the remote machine that will make Linux object files. distcc is operating system agnostic. For distcc's purposes you don't even need the target operating system's header files or runtime libraries since the files are already pre-processed on the host machine, and only compilation of .o files takes place remotely.
      Here [wisc.edu], for example, is how to build a Linux cross compiler hosted on Cygwin.
  • gecc (Score:3, Interesting)

    by j.beyer ( 209770 ) on Saturday October 12, 2002 @01:31PM (#4437412)
    There is an alternative (http://gecc.sf.net). gecc has a slightly different approach: it has a central component that distributes the compilation to a number of compile nodes. The set of compile nodes may change over time; that is, compile nodes may come and go.

    gecc is a work in progress, and distcc is much more mature, but maybe you'd like to take a look at gecc as well.

    (yes, gecc is my baby)
  • You could combine the distribution with object-file caching. gecc (http://gecc.sf.net) shows this. It is a work in progress, but you could take a look at it if you are interested.
  • I followed the 30-second install overview, and although it took 20 minutes to get working, that 20 minutes included a distributed kernel compile across 2 of my systems here: a P3-500 and a P3-700, 512 MB of RAM each. Very nice. I like it!
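
    For anyone curious, a kernel build along those lines is roughly the following (the host names are placeholders, and the details depend on your kernel tree):

      export DISTCC_HOSTS='p3-500 p3-700'
      make -j4 CC='distcc gcc' bzImage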
  • Even working on dual-processor machines and using 'make -j4' to allow multiple jobs, the nature of how makefiles calculate dependencies can make it very difficult to get much work done in parallel. Other 'make'-like systems, such as Jam/MR [perforce.com], do a better job of evenly building large, multi-directory projects.
  • This was on Sweetcode [sweetcode.org] months ago.
  • My most common problem these days is *not* compile time, but *link* time. Would be nice if there was a way to speed that up somehow.

    -Bill
  • I did a simpler version of this many years ago. We had a 68K unix system from Motorola. The pcc compiler was really really bad, so we installed gcc. (I've heard a lot of complaints about how gcc optimizer is no good, but it beat the pants off of pcc.)

    Later, we upgraded to Motorola 88k. The new system came with a GreenHills compiler - which was so buggy it was unusable. The bugs were acknowledged, but never fixed. It was too buggy to compile gcc - I wasted many days simplifying expressions by hand to no avail.

    My solution was to build a cross compiler on the 68k, and run it with a stub on the 88k for cc1 that fed the preprocessed source to the compiler on the 68k, and got back the assembler source. (There was no 88k support yet in gcc, so I had to write my own machine model. It was incompatible with Greenhills in passing and returning structs.) The preprocessor, assembler, and linker would run on the 88k - only the actual compiler pass ran on the 68k. This worked beautifully to build gcc on the 88k! And our 33Mhz 88k was so much faster, that I built a 68k cross compiler for the 88k and speeded up compiles on the 68k a great deal.

    Next we upgraded to Power PC. AIX comes with no compiler at all! (IBM's compiler is very good - but expensive.) Fortunately, it comes with an assembler and linker. By dint of copying headers from AIX, and hand preprocessing and compiling to assembler, I got the preprocessor running on AIX. Then it was simple to run the cc1 stub again to compile up a native gcc for AIX. The new PowerPC was so much faster than the old 88k, that a cross compiler to speed up 88k compiles was in order also. (I contributed AIX shared lib support for gcc.)

    Recently, I fired up a PowerPC cross compiler on our 600Mhz PII running Linux to speed up compiles on our old 100Mhz PowerPC AIX using the same simple technique. By using the -pipe option to gcc, the compiler on Linux runs in parallel with the preprocessor and assembler on AIX - truly efficient.

    In conclusion, I want to thank Richard Stallman and the FSF for making it possible to rise above the stupid problems caused by closed source. Before the 68k, we had SCO Xenix. Basic utilities like tar and cpio were broken, reported, and never fixed - but GNU software was there with solid and reliable replacements. (Yes, I made donations and even ordered a tape.)

  • I spent a couple of weeks tinkering with the idea a while back and wrote a server script in Perl that can run on any Win32 box (no compilers or anything necessary to be installed), and wrote a client script that plugs into MSVC++'s IDE. The client parses the build commands and contacts all servers through a UDP broadcast, then connects to each server, preprocesses a file at a time, transfers the processed C/C++ file and the compiler and its related binaries to the server, then sends the build commands to the server as well. All output is captured on the server and sent back to the client.

    It worked pretty well, except I had a lot of problems getting fork to work right in Perl on my Win32 box without crashing, which I needed for good parallelism. It even fell back to using the local machine in the event of a failure remotely at any point, so on a multiprocessor development machine with no servers to connect to, it would actually use all the processors to compile--something MSVC doesn't do normally.

    The nice thing about the server script is it only has three functions: accept a file, send a requested file, and execute an arbitrary command. So it doesn't really care about what it's doing. At work, we're planning to use it to leech some cycles off the receptionist machines and managers' boxen (we all know they don't do anything all day but email anyway!).

    The real limiting factor is that preprocessing is relatively slow due to seeks on the hard drive, where not all the headers fit in the disk cache at once. This is a bigger bottleneck than you'd believe.

    For what it's worth, I've learned of another company that concatenated all their .cpp files together and only compiled that one file every time, so their build was a fixed (short) cost, and never hit any header twice. Dirty code that has module-local statics would choke on that technique, but for good code, it's prolly smarter than distributing it.
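
    A crude sketch of that single-file trick (the file names are made up); as noted, file-local statics and other name clashes can break it:

      cat src/*.cpp > all.cpp    # glue every translation unit into one file
      g++ -c all.cpp -o all.o    # one compile, every header read only once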
