Reduce C/C++ Compile Time With distcc

An anonymous reader writes "Some people prefer the convenience of pre-compiled binaries in the form of RPMs or other such installer methods. But this can be a false economy, especially with programs that are used frequently: precompiled binaries will never run as quickly as those compiled with the right optimizations for your own machine. If you use a distributed compiler, you get the best of both worlds: fast compiles and faster apps. This article shows you the benefits of using distcc, a distributed C compiler based on gcc, that gives you significant productivity gains."
  • by PacketCollision ( 633042 ) * on Monday July 05, 2004 @04:59PM (#9616197) Homepage

    While distCC is a great tool, there are a couple of things to mention. First, the article blurb states that distCC is "a distributed compiler based on GCC." It is actually a method of passing files to GCC on a remote computer in such a way that the build scripts think it was done locally (a minimal usage sketch follows at the end of this comment).

    The article also says that other than distCC, the computers need not have anything in common; this is not strictly true. Different major versions of GCC can cause problems if you are trying to compile with optimization flags that exist only in the newer version. I have run into this on my Gentoo box, trying to use an outdated version of GCC on a Red Hat box.

    Another thing is that some very large packages have trouble with distributed building of any sort (either multiple threads on the same machine, or over a network like with distCC). As far as I know, at least parts of xfree86, KDE and the kernel turn off distributed compiling during the build. Some of this might just be in the gentoo ebuilds, but I think some of it is in the actual Makefiles. If a program has trouble compiling, it's always worth a shot to turn off distCC.

    A good resource for setting up distCC on a Gentoo system (and since compiling is such a large part of Gentoo, this is particularly important) is gentoo.org's own distCC guide [gentoo.org].
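
    For anyone who hasn't tried it, the basic client-side setup is just an environment variable plus a parallel make; this is only a minimal sketch, and the hostnames are placeholders:

    # box1 and box2 are placeholder hosts running distccd with the same major GCC version
    export DISTCC_HOSTS='localhost box1 box2'
    make -j6 CC='distcc gcc' CXX='distcc g++'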

    • Um...Can I mod myself down -1 Idiot?

      The article clearly states that you have to have the same version of GCC. For some reason I read it as distCC, hence the comment about different versions of GCC causing problems.

      I stand by my other points.

    • Even if the major versions are the same, Gentoo applies patches to gcc 3.3.3 that are not present in Debian's 3.3.4 (the major version is 3.3). For example, Debian's gcc doesn't recognise -fno-stack-protector which Gentoo's does, and distcc fails.
    • > As far as I know, at least parts of xfree86, KDE and the kernel turn off distributed compiling during the build.

      You are misinformed.

      However, kernels can't be compiled with make -j; you have to set CONCURRENCY_LEVEL instead.
      The first part of the compile only uses one thread, but 90% or so of the kernel compile will take advantage of distcc.
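
      For reference, a Debian-flavoured sketch of that (hostnames are placeholders, and the exact masquerade directory varies by distribution) looks something like:

      # put distcc's gcc masquerade links first in PATH so every gcc call is distributed
      export PATH=/usr/lib/distcc:$PATH
      export DISTCC_HOSTS='localhost fastbox1 fastbox2'
      export CONCURRENCY_LEVEL=10   # make-kpkg's parallelism knob, as mentioned above
      make-kpkg --initrd kernel_image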
      • No, there are parts of KDE, Gnome, X and others, for instance Net-SNMP, that have problems with anything other than make -j1. The make process fails to find a library that it hasn't built yet; passing make -j1 fixes everything, well, these problems anyway.
        • Well, I'm using Debian unstable on my laptops and workstations, and have been compiling my kernels with CONCURRENCY_LEVEL=10 for the last half year or so with no problems.
          When I look in distccmon, I clearly see parallel compiles, and the compile times clearly show that it's much faster than a single compile would be.

          I never tried compiling KDE / Gnome / X with distcc so I wouldn't know about them.

          What I have noticed is that on FreeBSD all the ports seem to fail when compiling with -j > 1.
    • Which makes it a pain in the ass if you ask me.

      I have tried to use distcc for a lot of stuff, but it doesn't work on some packages, and that's enough to make me not use it.

      I don't want to have to hand-pick which packages to use it with and which ones to not use it with. Fortunately, a lot of Gentoo packages have a rule built in to not use distcc automatically, but it's not always the case.

      The other thing about distcc is that it won't increase the speed of the compile by any large magnitude with each ma
    • Also of Note. (Score:3, Informative)

      by temojen ( 678985 )

      Many programs that can use the SSE or MMX extensions (such as video codecs or Software OpenGL) will fail to compile with DistCC, reporting "Couldn't find a register of class BREG".

      If the failing package is one of many, you can emerge just the one package by changing make.conf FEATURES to not include distcc.

      FEATURES="ccache" emerge xine-libs
      emerge whateveryouwerebefore

      Note here I've used the ccache feature. It's really handy because it won't re-compile any parts that've already compiled successfully. When

    • The article also says that other than distCC, the computers need not have anything in common; this is not strictly true. Different major versions of GCC can cause problems if you are trying to compile with optimization flags that are only on the newer version.

      Heh. No. GCC pretty regularly breaks binary compatibility for the C++ ABI. Breaks were at GCC 2.95 -> 2.96 (though 2.96 was just a RH/Mandrake thing), 2.x -> 3.0, 3.1 -> 3.2 and 3.3 -> 3.4.

      You can't mix C++ compiled with any of the c

  • by Anonymous Coward on Monday July 05, 2004 @04:59PM (#9616199)
    It's also been discussed here on Slashdot (two years ago!) in "A Distributed Front-end for GCC [slashdot.org]" and earlier this year in "Optimizing distcc [slashdot.org]."

    Distcc is great for installing Gentoo on an older computer because you can have other (faster) computers help with the compile, and if you like distcc, you may also like ccache [samba.org].
    • by cbreaker ( 561297 ) on Monday July 05, 2004 @05:19PM (#9616323) Journal
      It's news to people that don't read slashdot every day.

      I don't mind revisiting older topics once in a while - it's only annoying when it's two days in a row. And even then, it's not that big of a deal, I simply pass over it.

      Posts like this are more of a waste of space than a duplicate article post, and we get a lot more posts like yours than we get dupes. It's especially annoying when people say "We talked about this TWO YEARS AGO!!!" Well, here's some news for you: I don't memorize every slashdot story since the beginning, and there have been a lot of new members since then.
      • It was discussed earlier this year.. let's repeat all the stories every few months just because some people don't read slashdot every day.

        When I miss slashdot, I just browse the old stories bit for anything interesting. I don't expect the stories to be repeated just for me.
  • D00d! (Score:5, Funny)

    by 0xdeadbeef ( 28836 ) on Monday July 05, 2004 @05:00PM (#9616208) Homepage Journal
    That's why I use Gentoo! [funroll-loops.org]
  • Awesome! (Score:3, Funny)

    by Anonymous Coward on Monday July 05, 2004 @05:00PM (#9616209)
    Now one can install Gentoo in _only_ 5 days!
  • by Eric Smith ( 4379 ) * on Monday July 05, 2004 @05:02PM (#9616216) Homepage Journal
    Instead of distcc, I use nc [brouhaha.com] by Steven Ellis. It seems to be more flexible, though I'm not an expert on distcc, so I'm not certain.

    I think nc can be used like distcc by redefining CC="nc gcc". However, more commonly it is done by putting $(NC) at the beginning of the build rules. Then you can use nc for any build rules, not just C compiles.

    In addition to use with make, nc works well with SCons [scons.org].
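
    Purely as an illustration, and assuming nc wraps the compiler exactly as described above (NC is just a hypothetical make variable), a build rule might look like:

    NC = nc
    # prefixing the recipe with $(NC) ships the whole command off to a build server
    %.o: %.c
    	$(NC) gcc -c $< -o $@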

  • Really does help (Score:2, Informative)

    by Mazaev2 ( 128002 )
    I just installed a second Gentoo box, a lowly Pentium 2 400mhz with 128mb RAM and read up on distcc. It certainly makes compiling Gentoo a LOT faster when the otherwise poor underpowered box can ask my AthlonXP 2400 for some compiling help.

    It also requires rather minimal configuration on my part and for the most part "Just Works[tm]".

    Hehe.. now if only I had a Beowulf cluster....
    • So, do you really think there is a significant difference between the optimum gcc-compiled output on a P2 and an AthlonXP? Why don't you just save yourself a LOT of time and electricity by transferring the binaries from the fast machine to the slower one?
      • It's not that simple. If he/she optimized the binaries enough, they will contain instructions the P2 isn't capable of running. The main one being -march=athlon-xp.

        What you can do (and I have before) is use a faster box to build a gentoo userland inside a chroot, and then transfer that to a slower machine.

    • Try a sparcstation 5 @ 80mhz. You have no idea what "lowly" is
  • My family can't afford more than one computer, you insensitive clod!

    But seriously, is there a way to make use of the concepts embodied in distcc in a home computing environment? Or is distcc designed for use by businesses and schools?

    • You would only see a performance improvement if the participants in a distcc cluster are all on the same LAN. Unless you have a really slow computer, the time it takes to upload the code and download the results over a broadband connection exceeds the time savings in shipping the job out.

      In my home environment, I have my slow lil' K6 400 ship its compile jobs to my Athlon XP with 512MB of RAM. Both are on the same local network. In a home environment, if you have a brawny box and a scrawny box, DISTCC is

    • And this is why we throw Linux on the toaster, microwave oven, coffee maker, food processor, game consoles, you name it. Anything with a processor helps. And I bet all this time you just thought we were crazy zealots.
    • Try ccache (Score:3, Informative)

      by AT ( 21754 )
      While it is slightly different in concept, check out ccache [samba.org]. It only uses a single computer but it can significantly speed up your compiles. It works by caching the results of each compilation; it will only help if you compile the same code over and over.
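
      A minimal way to try it, assuming ccache is installed and you're driving plain make:

      export CCACHE_DIR=$HOME/.ccache        # cache location (this is the default anyway)
      make CC='ccache gcc' CXX='ccache g++'
      ccache -s                              # show cache hit/miss statistics afterwards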
    • by boots@work ( 17305 ) on Monday July 05, 2004 @08:38PM (#9617478)
      Absolutely, I use it at home all the time. It's great for sofa computing [samba.org]: sit on the sofa with a modest laptop, and send your compile jobs across a wireless network to a faster machine in the study.

      If you use the LZO compression [samba.org] option then it's quite useful even on 5Mbps wireless.

      You can also tell distcc to run as many jobs remotely as possible to keep the laptop from scorching your lap.

      It's really nice to be able to build the kernel from source in a reasonable time on an 800MHz machine.
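
      Concretely, something along these lines keeps the work off the laptop and compresses traffic over the wireless link (the hostname is a placeholder):

      # study-box/8 allows up to 8 jobs on the fast machine; ',lzo' turns on compression
      export DISTCC_HOSTS='study-box/8,lzo'
      make -j8 CC='distcc gcc'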
  • by Anonymous Coward
    Any application I have to compile before using with the 'correct optimizations for my machine' will take more time to get up and running than any 'productivity gains' it might produce. This is why Linux is still not accepted by mainstream computer users. They don't care how it works, just that it does.
  • false savings (Score:5, Insightful)

    by Yojimbo-San ( 131431 ) on Monday July 05, 2004 @05:08PM (#9616265)
    Compare the speed cost of loading a "generic" binary to an "optimised" one, multiply by the number of times you load that binary.

    Then look at the time required to compile the optimised copy.

    How often, in the lifetime of a particular version of a binary, do you really need to reload it?

    The promise of distcc is closely related to source distributions like Gentoo. The benefit is overstated. Don't waste your time.

    • The promise of distcc is closely related to source distributions like Gentoo. The benefit is overstated. Don't waste your time.

      Like many code optimizations, this sounds like one of those things that you spend a lot of time on when really, you could spend a few dollars and buy the next processor up and get the same gain. Not really worth it, especially if the amount of money you're getting paid (assuming you're getting paid) is greater than or equal to the cost of the upgrade.
    • Re:false savings (Score:2, Interesting)

      by petabyte ( 238821 )
      Yes, you're trolling but I feel the urge to bite. Gentoo really isn't about CFLAGS. It's about USE flags. I can build programs as I want with whatever options I want. Portage takes care of the dependencies and my systems are _exactly_ as I want them to be.

      If you like binaries, fine, there's no shortage. You use what you want, I'll use what I want. Keep your insults to yourself.
    • I would have supposed that 'boosting up productivity' would have referred to using distcc to make faster builds of something that you're actively developing and thus need to compile dozens of times per day perhaps.

      In this kind of scenario distcc can be really useful: cheap clustering power.
    • Re:false savings (Score:3, Insightful)

      by boots@work ( 17305 )
      Insightful, or Troll? The eternal question....

      The promise of distcc is closely related to source distributions like Gentoo. The benefit is overstated.

      The promise has nothing to do with Gentoo. distcc was released before Gentoo was well-known, and is useful to people who never run Gentoo.

      You see, some of us actually develop our own software, rather than building distributions from source. If you write your own code in C or C++, you need to compile it, probably many times per day. If you work on larg
  • by Mnemia ( 218659 ) on Monday July 05, 2004 @05:10PM (#9616283)

    Personally, I think that distcc will become more and more useless as computers get faster. My new machine (P4 2.8, 1 GB RAM, SATA drives) can compile a complete Gentoo desktop system in just about two hours. That's pretty damn cool considering that it used to take like 24 on my old laptop when I first started using Gentoo several years ago. It would probably only take about an hour to setup a server system on Gentoo on my same machine since the biggest component, X, would not need to be compiled.

    Computing power is outstripping the size of source code that needs to be compiled. Soon there will be little difference in install time between the source and binary distros, and all the jokes about Gentoo's compile time will be pretty much obsolete. Already, once you have your system installed, the time required to keep current/install new apps is minimal. My system can compile any new program (except maybe OpenOffice) in under 25 minutes. Even Mozilla can be compiled in that time.

    • There are still enterprise uses where coders need to compile huge projects from scratch that take too long on a single workstation. Instead of that build taking 15 minutes on a single workstation, they can tap the power of all the workstations and build it in a few minutes or perhaps even seconds.
    • Phht. Just wait until Microsoft releases Gentoo/Windows.

      Seriously though -- Microsoft is the one company that can guarantee that their source code sizes will continually outstrip computational power. I wonder what kind of clustering solution they use to get their Windows builds to compile in a reasonable timeframe?

    • Astounded, think I'm lying, then you were wrong.

      Not that it doesn't take me two seconds, but that 'Computing power is outstripping the size of source code that needs to be compiled.'
    • by evilviper ( 135110 ) on Monday July 05, 2004 @05:54PM (#9616494) Journal
      Soon there will be little difference in install time between the source and binary distros

      As your CPU gets faster, installing the binary will be quicker too.

      But most of all, you will see programs getting a Mozilla complex... Lots and lots of bloat, with no effort going into optimizing anything. KDE and GNOME have that problem. Even formerly lightweight programs like XFce are now heavy programs (thanks in no small part to the bloat of GTK2).

      If processing power continues to rise, pretty soon you'll see programming becoming far sloppier and wasting a lot more time. Sure, you can compile Mozilla in under 25 minutes now, but you could do the same with other browsers before Mozilla, when slower CPUs were king. When Mozilla 2 comes along, it'll be massive, and we'll be back where we started: telling people to waste tons of money on new hardware, rather than paying a bit larger salary for a better programmer who can make a full-featured browser that will run on a 100MHz processor. Think about it: is there anything fundamental that Mozilla can do that Netscape 3 couldn't?
    • Take a closer look--you're probably kidding yourself. Mozilla in binary form is a few megabytes, which is no more than a minute or two download. From source, add in the source download to the compile time, and you're way beyond that.

      In my work environment, we like to be productive. Taking a thirty minute break while new software compiles does not fit that criteria, when you could have spent five minutes downloading the software, and be configuring it for the twenty-five minutes left.
    • I remember compiling 2.0.x kernels on my own 100MHz Pentium. It took -forever-.

      Later on, I built a 350MHz K6-2 machine for a customer, and it was a screamer compiling its 2.0.x kernel, taking just a few minutes.

      Fast forward: I've got a very similar K6-2 350 as a miscellaneous server and firewall here. Compiling its 2.6 kernel takes -forever-.

      But the new 2.4GHz HT 800MHz FSB P4 box I built recently for work is again a screamer, compiling its 2.6 kernel in a few minutes. This box is in roughly the same
    • It's more likely that programmers will just use languages [paulgraham.com] that waste machine time but save programmer time.

      On the other hand the death of C or C++ has been predicted for about 20 years now, and it's still pretty popular. So I don't think large C packages will go out of style any time soon.
  • by Anonymous Coward
    Well, as someone who recompiles FreeBSD/DragonFly quite frequently, I've got to say that the best way to reduce the time it takes is to build everything in a ramdisk. I've cut 100 minute compile times down to about half an hour by mounting /usr/obj in a ramdisk instead of on my hard drive.

    http://bsdvault.net/sections.php?op=viewarticle&artid=53 [bsdvault.net]
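
    On FreeBSD 5.x that can be as simple as the following sketch (the size is only an example, and anything already in /usr/obj is hidden by the mount):

    mdmfs -s 1024m md /usr/obj    # memory-backed filesystem for the object tree
    cd /usr/src && make buildworld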
  • by brucehoult ( 148138 ) on Monday July 05, 2004 @05:12PM (#9616293)
    We use the distcc that Apple distributes with XCode even though we don't use XCode itself. It really helps to get a few dual-CPU G5's working!

    The cool thing about Apple's version is that by default it uses Rendezvous to determine which machines are available to distribute work to.
  • No, no, no! (Score:2, Interesting)

    by JustinXB ( 756624 )
    Reducing compile time by distributing the load isn't reducing it at all, it's just distributing it. Try using a compiler that compiles fast -- such as Plan 9's compilers.
    • Re:No, no, no! (Score:4, Insightful)

      by evilviper ( 135110 ) on Monday July 05, 2004 @05:47PM (#9616452) Journal
      Try using a compiler that compiles fast -- such as Plan 9's compilers.

      Well, that won't help anyone, since almost nobody runs Plan 9.

      Also, advising people to use faster compilers is bad advice. The point is to make the application faster, and the slower the compile, the faster the application is likely to run. eg. GCC2 vs GCC3

      Reducing compile time by distributing the load isn't reducing it at all

      Yes, you are reducing the time it takes to complete the process. It doesn't reduce CPU-time, it reduces real-time. You know, the real world, in which we live... The only thing that really matters.
  • Right, whatever (Score:3, Insightful)

    by Anonymous Coward on Monday July 05, 2004 @05:16PM (#9616306)
    Precompiled binaries will never run as quickly as those compiled with the right optimizations for your own machine

    And there are maybe about ten to fifteen people on all of slashdot who actually know how to go about setting the right optimizations for their own machine.
    • Re:Right, whatever (Score:2, Interesting)

      by Helamonster ( 778370 )
      If you have a pretty sticker on the front of your computer that says "Intel inside," "AMD Athlon XP," or whatever, and you can click a button corresponding to that, then you know enough to optimize your binaries ;)
  • by Kjella ( 173770 ) on Monday July 05, 2004 @05:18PM (#9616317) Homepage
    "(...) a distributed C compiler based on gcc, that gives you significant productivity gains."

    Assuming
    a) That compiling will give you any significant performance increase (which I kinda doubt, it's not like the defaults are braindead either)
    b) You don't spend more time mucking about with distCC / compiling than you'll actually use the software
    c) Your software is actually code bound (and not "What do I type/click now?" human bound, or bandwidth bound or whatever)

    I can't think of a single thing I do that's code bound. And I actually do a bit of compiling, but I spend those seconds thinking about what to code next. Either that, or it is bandwidth bound or non-time-critical (i.e., does it take 6.5 hours or 7 hours? Who cares. The difference is half an hour's work for my computer, 0 for me. So the time I'd spend to improve it is - gasp - 0.)

    Kjella
    • by evilviper ( 135110 ) on Monday July 05, 2004 @05:44PM (#9616437) Journal
      I can't think of a single thing I do that's code bound.

      The obvious and most popular answer is encoding video. I think a great many people do a lot of this. Since no processor is fast enough to encode DVD-res video at 16X, it isn't bound by IO speeds either. I can start videos encoding in far less time than it takes to complete the process as well. Pure CPU number-crunching.

      Other applications are any form of crypto. Reduce the time you have to wait for PGP to encrypt. Reduce the delay on your SSH sessions.

      Then there are databases. Sure, they're often IO bound, but it is commonly a CPU limitation.

      Also any heavy-load service. If apache is serving lots of threads, especially PHP/Perl compiled pages, you are going to be maxing out your CPU.

      Then there are the programs that are just bloated. Mozilla/Firefox is still quite slow, and I can open pages far, far faster than they can be rendered. Anything that makes it even 1% faster is very welcome, as those savings eventually add up to large amounts of time.

      If you really never use any of those, hooray for you, but most people certainly do.
      • While I've come across your argument a lot, regarding lots of little amounts of saved time, I struggle to see any actual value in these "micro" time savings.

        How do you collect up all these micro time savings, making up an "new" hour, in which you can usefully do something else ?

        The only application you've mentioned where I think these optimisations matter is DVD encoding. More broadly, useful time savings can be gained as these jobs tend to take a lot of time eg. I would consider saving 5 minutes on an

  • I have 2 machines with K6-2 processors. Currently, I use "-Os" because I have read that reducing the code size can improve performance better than "-O3" on machines with small caches.

    Does anyone have any advice about this? Are there any objective comparisons that relate to my configurations?
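
    Not a definitive answer, but the usual experiment is just to switch the flags in /etc/make.conf and time a rebuild of something CPU-heavy, e.g. (a sketch; -march=k6-2 needs gcc 3.x):

    CFLAGS="-Os -march=k6-2 -pipe"    # swap in "-O2 -march=k6-2 -pipe" for comparison
    CXXFLAGS="${CFLAGS}"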
    • Fagetaboutit

      O2 is your best all-around setting. Os does make smaller code, but the stuff it outputs is slower. It also causes weird problems with certain apps. It could be useful to condense the memory footprint of properly designed code (like glibc). But remember, a decreased memory footprint means more hoops the computer has to go through. Think of it like employing fold-out tables: sure, it saves space, but you spend time folding and unfolding them.

      O3 is a waste of time, except for certain scientific computi

  • by SmileeTiger ( 312547 ) on Monday July 05, 2004 @05:24PM (#9616341)
    Other than distributed compiler tools like distcc and nc, are there any other ways of speeding up a Linux compile with gcc?

    I was blown away when my project group compiled a Qt app that we developed on the Linux platform with the MS VC++ compiler. The compilation took 1/10th the time! We were using Makefiles generated by QMake in both cases.

    Should I just switch compilers? If so does anyone have any suggestions?
    • Precompiled headers can give more than a 50% performance improvement, though they don't work for a lot of things; it kinda works with Qt [trolltech.com].

      In the real world pre-compiled headers, offset caching and all that kinda crap would be seamless, quick and easy, but in the GCC (portability) world they haven't quite made it yet.
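
      As a rough illustration of the idea, here is GCC 3.4's precompiled-header mechanism with a hypothetical header name:

      g++ -O2 -x c++-header common.h   # writes common.h.gch next to the header
      g++ -O2 -c widget.cpp            # any file that #includes "common.h" picks up the .gch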
  • by Rolman ( 120909 ) on Monday July 05, 2004 @05:25PM (#9616350)
    The problem with compiling your own binaries is that you are effectively forking code from the original distribution at the low level. To do this you really must know what you're doing, and that can be a very difficult thing when working with applications you didn't write yourself.

    Just look at the Linux Kernel Mailing List and how many errors can be traced to a specific GCC version. That's why Linus enforces a standardized compiler environment: developers can't be wasting their time fixing compiler-induced errors.

    I know it's attractive to just recompile your whole distribution to your specific hardware combination because there are real world performance gains, but sometimes there are weird bugs caused by it and you'll probably be out of luck trying to find some documentation on them. What are the chances of somebody having the same hardware configuration? And remember we're not talking about branded components and specific models, we must throw in firmware, drivers, BIOS settings and whatnot into the mix.

    As long as PC components are not standardized, this problem is never going to go away. I seriously considered Mandrake and Gentoo a couple of times in the past and they had very different bugs on each version every time I tried them. Even though they have gotten better with each release, I'd still refuse to put them on a production machine; there's a reason why every distro ships with a precompiled i386 kernel.

    I, for one, just recompile the most important parts of a system that do require most of the CPU time, like the kernel, Apache, and other runtime libraries whenever I do need that extra punch, not a second before. distcc is a geek tool and has that coolness-factor and all, but I'm not on a frenzy to use it to recompile all my servers' software, I care about stability first.
    • "The problem with compiling your own binaries is that you are effectively forking code from the original distribution at the low level."

      Wow. That is blatantly false and misleading. "Forking code" refers to one piece of code diverging into two separate projects, ala X.org and Xfree86. All binaries have to be compiled somewhere...when the guys at Red Hat or Suse do it for you, they generally do it in a very generic, compatible way, and when you do it yourself, you can take some more risk or tailor it to y
  • Wow! My company has been doing distributed compiles for about fifteen years now (with gcc, nonetheless). It's old hat. But along comes some guy telling Gentoo users to use distributed compiles, and suddenly it's the next best thing to sliced bread! This is like the sixth or seventh distcc article I've seen in the last month.

    It's really nice that you guys have discovered this, but don't act like it's something new and amazing, or even that it's something unique to Linux.
  • Honest question: gcc has the reputation of not producing the fastest code for x86, so why should I bother compiling Gentoo with gcc or distcc?
    Does anyone know if there are distros compiled with, say, the Intel compiler?
  • by 21mhz ( 443080 ) on Monday July 05, 2004 @05:27PM (#9616364) Journal
    While I haven't RTFA yet, I find the premise stated in the posting somewhat far-fetched. If I need binaries tuned finer than those provided by binary .rpm's, I can take their respective .src.rpm's and rebuild them. The RPM build system, in the distributions I know, provides a convenient way to override optimization flags via system- or user-settable macros. As for compilation time, it's not an issue for most packages these days, as many Gentoo users here can testify.
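
    For instance, overriding the optimization flags and rebuilding locally looks roughly like this (a sketch; the package name is a placeholder and macro details vary between RPM versions):

    echo '%optflags -O2 -march=athlon-xp -pipe' >> ~/.rpmmacros   # what $RPM_OPT_FLAGS expands to
    rpmbuild --rebuild some-package-1.0-1.src.rpm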
  • Today, if the boss catches me reading /. I can say "I only do it during long compiles, honest!"
    This could ruin EVERYthing!

  • I'd be interested to see what speed improvements would be found if, say, a Gentoo system was built using the Intel compiler (with an Intel CPU, obviously ;-) instead of GCC. Anyone tried creating RPMs or a whole distro using another compiler?
  • Works well! (Score:2, Interesting)

    by Sansavarous ( 78763 )
    I've got two Gentoo systems that run distccd.

    I did some non-scientific testing with distcc.

    System one:
    Athlon XP 1400
    512 Meg RAM

    System two:
    Pentium III 450
    512 Meg RAM

    I compiled GAIM on System one with distccd running on system one and two, also compiled with just distccd running on system one.

    I found that with both systems running distccd I got about a two-minute-faster compile than with just distccd running on system one.

    With distccd running on just system one I found that it would process many of th
  • Wait a sec (Score:3, Insightful)

    by EvilStein ( 414640 ) <spamNO@SPAMpbp.net> on Monday July 05, 2004 @05:35PM (#9616400)
    The Anti-Gentoo zealots always say stuff like "It's not going to be faster.." etc etc..

    Now someone is saying: "precompiled binaries will never run as quickly as those compiled with the right optimizations for your own machine."

    So the Gentoo users who claim that stuff compiled with the right optimizations *is* faster are right after all?

    I'm confused. Which is it supposed to be? Are Gentoo users full of crap, or are they correct?

    I use Gentoo and have found things to be a hell of a lot easier to deal with than RPM based binary distros anyway.
    I just want the scoop. :P

    Oh, distcc has been in Gentoo for a while.. surprised to see it listed like it's a new thing.
    • Oh, distcc has been in Gentoo for a while.. surprised to see it listed like it's a new thing.

      I'm not surprised it's been there--heck, if anything could benefit from a compile farm, it's Gentoo!

      • While Gentoo helps you once, the people who get a benefit from a compile farm are developers.

        A typical developer will go through many cycles of:

        Change code,
        Compile
        Test
        Repeat

        Sometimes hundreds of times a day. If you can make your compile times drop from 2 minutes to 1 minute, you will save over an hour a day.

    • Re:Wait a sec (Score:4, Interesting)

      by caluml ( 551744 ) <slashdotNO@SPAMspamgoeshere.calum.org> on Monday July 05, 2004 @05:44PM (#9616443) Homepage
      From what I understand, it is the fact that you configure your system so that packages are always built without support for a, b, and c, and with support for x, y, and z.
      No point building mozilla with GTK2 support if you don't need it, is there? Or Samba with any of the following with question-marks:
      _COMMON_DEPS="dev-libs/popt
      readline? sys-libs/readline
      ldap? ( kerberos? ( virtual/krb5 ) )
      mysql? ( dev-db/mysql sys-libs/zlib )
      postgres? ( dev-db/postgresql sys-libs/zlib )
      xml? ( dev-libs/libxml2 sys-libs/zlib )
      xml2? ( dev-libs/libxml2 sys-libs/zlib )
      acl? sys-apps/acl
      cups? net-print/cups
      ldap? net-nds/openldap
      pam? sys-libs/pam
      python? dev-lang/python"
      It's the fact that you get packages on your system that match the settings you set before, not the fact you can compile every package with -fomit-frame-pointer that gives Gentoo its strength.
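
      In make.conf terms that just means something like the following (the flags are only examples):

      # /etc/make.conf: everything emerged from now on respects these USE flags
      USE="-gtk2 -ldap -kerberos cups pam readline"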
    • Re:Wait a sec (Score:3, Insightful)

      by cas2000 ( 148703 )
      > I'm confused. Which is it supposed to be? Are Gentoo users full of crap, or are they correct?

      for almost all programs, it's not going to be *noticeably* any faster. on average, you can get maybe 1% to 5% performance improvement from CPU-optimised binaries. this generally isn't worth the time and effort it takes to do the custom compile.

      for heavy graphics processing or number crunching, you'll probably notice that. you almost certainly wouldn't notice it on anything else.

      so, yes, the gentoo user
  • by meowsqueak ( 599208 ) on Monday July 05, 2004 @05:37PM (#9616407)
    I've spent the last week setting up a Gentoo cluster with distcc and I've noticed a few things:

    1. when *recompiling*, the advantage due to ccache far outweighs the performance of distcc on the first compile. If you're testing distcc you need to be aware of this and disable ccache.

    2. most large packages either disable distcc (e.g. xfree by limiting make -jX) or compile small sets of files in bursts and spend the majority of time performing non-compilation and linking. Distcc helps with the compilation but because it's only a small part of the total build time, the overall improvement isn't as great as you might have hoped.

    3. distccmon-gnome is very cool.

    4. using distcc with Gentoo transparently involves modifying your path and this can make non-root compilations troublesome (permissions on distcc lock files). I haven't figured this one out yet other than to specify the full path to the compiler: make CC=/usr/bin/gcc rather than CC=gcc.

    5. the returns from adding an extra distcc server to the pool drop considerably after the first few machines. Even on a 1 gigabit LAN the costs of distcc catch up with the benefits after a while. This is more of a concern when compiling lots of small files.

    6. it can handle cross-compilation with a bit of configuration.

    So although distcc can often reduce build time, it's not quite as effective as you might assume or hope at first.
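
    On points 1 and 4, the workarounds I ended up with look roughly like this (a sketch; CCACHE_DISABLE and DISTCC_DIR are the standard ccache/distcc environment variables):

    CCACHE_DISABLE=1 make -j8 CC='distcc gcc'   # benchmark distcc alone, with ccache out of the way
    export DISTCC_DIR=$HOME/.distcc             # per-user state dir avoids shared lock-file permission problems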
  • Obligatory "No" (Score:5, Insightful)

    by timotten ( 5411 ) on Monday July 05, 2004 @05:42PM (#9616429) Homepage
    precompiled binaries will never run as quickly as those compiled with the right optimizations for your own machine

    A straw man. Precompiled binaries may have been compiled with the optimal settings for your machine, and binaries which you compile may not have the optimal settings. Identifying the optimal settings can actually be non-trivial. Source-based distributions are not necessarily the best fix for the 'one-size-fits-all' approach used by some distros.
    • Identifying the optimal settings can actually be non-trivial.

      True. No task for a human. Tedious work.

      But have you heard of this [coyotegulch.com], a genetic optimizer for the compile time optimization parameters to GCC?

      Oh and to start dreaming about new features: Maybe GCC can implement such a feature to find the best optimization parameters for each function which is being compiled? :)
    • The claim also contains the assumption that applications are CPU-bound. All the recompiling in the world won't make something go faster if it's waiting on a disk or a UART or a NIC. Many applications are fast enough anyway -- who cares if /bin/cat gets a 2% improvement in its CPU use? I bet I could add a 20 microsecond gratuitous delay in the main loop of cat and not noticeably affect its performance!

      That said, the kinds of things I would like to have extra-optimized for speed are generally big, huge,
  • by inflex ( 123318 ) on Monday July 05, 2004 @05:42PM (#9616432) Homepage Journal
    I came across distcc by chance about 4 months ago, and I must say, it has utterly improved things around here.

    We regularly develop/compile/debug a small-to-moderate-sized software package, typically taking about 1 minute per compile. Now, while 1 minute doesn't sound like a long time, it starts adding up when you find yourself recompiling 100+ times a day.

    With the inclusion of distcc into the whole situation, we're able to reduce that 1-minute compile down to a little less than 20 seconds; highly appreciated (although now we have fewer excuses to go get a coffee :-( ).

    Distcc is a great package which can be extremely useful.

    PLD.
  • by Flexagon ( 740643 ) on Monday July 05, 2004 @05:48PM (#9616459)

    But this can be a false economy...

    Every time something that is distributed in binary is rebuilt from source for local use, by definition it's to change some assumption that was inherent in the testing of the original binary (or else the binary distribution would suffice). And with that, some non-zero confidence that was built into the binary release by that testing is wiped out and must be recovered by local analysis and testing (i.e., time and effort) or reduced expectations. Otherwise, it's running on blind faith. This is particularly true with programs that are used frequently, i.e., ones you expect to depend on repeatedly. So in my mind, "the best of both worlds" is more meaningful if it refers to fast and reliable apps. I don't care how fast the compiler is if I can't trust the results anymore. That is a different economy equation, and it completely justifies the "convenience" of pre-compiled binaries in many applications.

  • I use it for OSX (Score:3, Interesting)

    by streak ( 23336 ) on Monday July 05, 2004 @06:11PM (#9616618) Journal
    My main use for distcc currently is building software for my powerbook.
    I do a lot of work with Qt on both Linux and Mac, and let's just say Qt compiles very slowly on my powerbook (which is an older 800 MHz G4).
    Also, I've had to build all of Qt on this machine because the fink packages are old and don't even use the Mac version (they use the X11 version which really sucks and makes apps on Macs look like crap).

    So at work we have a couple of dual G5s I use, and also a few Linux machines which I've built Darwin cross-compilers for (yes, it's a pain in the ass).
  • by kidlinux ( 2550 ) <duke.spacebox@net> on Monday July 05, 2004 @06:27PM (#9616718) Homepage
    At the moment there's a bug in Linux kernel 2.4.26 that causes the remote compiling systems to encounter a kernel panic (and crash.)

    It's a known bug and has been discussed on the lkml [google.com]. The bug is also discussed on the gentoo bugzilla [gentoo.org]. A patch [gentoo.org] is also available, though the patch program didn't work for me so I had to apply it manually.

    The patch seems to be holding up, too. If you're using distcc on systems with vanilla 2.4.26 kernels, I'd suggest patching them.
  • by Anonymous Coward on Monday July 05, 2004 @06:46PM (#9616814)
    When building large Macintosh applications, we've been using Objective-C and this distributed compiler.

    It's so PAINFULLY SLOW to build anything for the Mac, with this inefficient Objective-C compiler and large linking requirements for Carbon, that without these distributed tools and some G5 servers, it would be hard for us to develop.

    Interestingly, our Windows version of this product, built in C#, compiles extremely fast with no distributed trickery needed.

  • Incredibuild (Score:5, Informative)

    by kingos ( 530288 ) on Monday July 05, 2004 @07:18PM (#9617021)
    For those using Visual Studio on Windows, I highly recommend a tool called Incredibuild [xoreax.com] to do the same job. It is not free like distcc, but is very effective and integrates nicely with Visual Studio. It cut my build time for a project at work from 15 minutes to 1 minute 20 seconds. Nice!

    kingos
  • by eddy ( 18759 ) on Monday July 05, 2004 @08:16PM (#9617361) Homepage Journal

    ... was just released [gnu.org].

    Only available on mirrors [gnu.org], currently.

  • Security? (Score:3, Interesting)

    by Ed Avis ( 5917 ) <ed@membled.com> on Tuesday July 06, 2004 @08:09AM (#9620465) Homepage
    Worryingly the article does not mention *at all* the obvious security questions. If you run a distcc service on a host then who is authorized to connect to it and compile programs? How do they authenticate? What about protection against man-in-the-middle attacks (you may not be paranoid enough to worry about people fiddling with the object code before it is sent back, but at least you ought to know if it's possible). I hope it's not another case of 'ignore security in the service, but it's okay, we'll just put it behind a firewall'.

    FWIW, distributed compilation programs like distcc are a good reason to check for buffer overruns and other memory trampling in the compiler. If you've ever managed to segfault gcc by feeding it a bad piece of code, there is a potential exploit via distcc if you can craft a C program that makes the compiler misbehave in the way you want.
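
    For what it's worth, distccd does offer basic IP-based access control, and distcc can also tunnel jobs over SSH; a minimal locked-down setup is sketched below (addresses and hostnames are placeholders):

    # only listen on the LAN interface and only accept jobs from the local subnet
    distccd --daemon --listen 192.168.1.5 --allow 192.168.1.0/24
    # or skip the TCP daemon entirely and let distcc invoke the compiler over SSH
    export DISTCC_HOSTS='@buildbox1 @buildbox2'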
  • by OrangeTide ( 124937 ) on Tuesday July 06, 2004 @11:03AM (#9622360) Homepage Journal
    Xcode build system: Distributing Builds Among Multiple Computers [apple.com]

    Yes, Apple has shipped distcc as standard for quite some time.

"Conversion, fastidious Goddess, loves blood better than brick, and feasts most subtly on the human will." -- Virginia Woolf, "Mrs. Dalloway"

Working...