Reduce C/C++ Compile Time With distcc
An anonymous reader writes "Some people prefer the convenience of pre-compiled binaries in the form of RPMs or other such installer methods. But this can be a false economy, especially with programs that are used frequently: precompiled binaries will never run as quickly as those compiled with the right optimizations for your own machine. If you use a distributed compiler, you get the best of both worlds: fast compiles and faster apps. This article shows you the benefits of using distcc, a distributed C compiler based on gcc that gives you significant productivity gains."
DistCC is good, but there's some info missing (Score:5, Informative)
While distCC is a great tool, there are a couple of things to mention. First, the article blurb states that distCC is "a distributed compiler based on GCC." It is actually a method of passing files to GCC on a remote computer in such a way that the build scripts think the compilation was done locally.
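In practice that looks something like this (hostnames here are made up; distcc preprocesses locally and ships the result to gcc on the listed machines):

    # Tell distcc which machines may take jobs, then let make invoke it
    export DISTCC_HOSTS="localhost fastbox1 fastbox2"
    make -j6 CC=distcc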
The article also says that other than distCC, the computers need not have anything in common; this is not strictly true. Different major versions of GCC can cause problems if you are trying to compile with optimization flags that exist only in the newer version. I have run into this on my Gentoo box when trying to use an outdated version of GCC on a Red Hat box.
Another thing is that some very large packages have trouble with distributed building of any sort (either multiple threads on the same machine, or over a network as with distCC). As far as I know, at least parts of XFree86, KDE and the kernel turn off distributed compiling during the build. Some of this might just be in the Gentoo ebuilds, but I think some of it is in the actual Makefiles. If a program has trouble compiling, it's always worth a shot to turn off distCC.
A good resource for setting up distCC on a Gentoo system (since compiling is such a large part of Gentoo, this is particularly important) is gentoo.org's own distCC guide [gentoo.org].
Re:DistCC is good, but there's some info missing (Score:2, Informative)
Um...Can I mod myself down -1 Idiot?
The article clearly states that you have to have the same version of GCC. For some reason I read it as distCC, hence the comment about different versions of GCC causing problems.
I stand by my other points.
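For what it's worth, here's a quick way to check the same-GCC-version requirement before adding hosts (hostnames are made up):

    # Every volunteer box should report the same compiler version
    for h in fastbox1 fastbox2; do
        echo -n "$h: "; ssh "$h" gcc -dumpversion
    done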
Re:DistCC is good, but there's some info missing (Score:5, Informative)
Re:DistCC is good, but there's some info missing (Score:5, Informative)
No problems that I noticed. I wouldn't compile a production system's kernel this way, though.
Re:DistCC is good, but there's some info missing (Score:5, Informative)
Re:DistCC is good, but there's some info missing (Score:3, Informative)
Re:DistCC is good, but there's some info missing (Score:2)
major.minor.??? (revision?).
Re:DistCC is good, but there's some info missing (Score:2)
You are misinformed.
However, kernels can't be compiled with make -j.
You have to set CONCURRENCY_LEVEL instead.
The first part of the compile only uses one thread, but 90% or so of the kernel compile will take advantage of distcc.
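To be clear, CONCURRENCY_LEVEL is what Debian's make-kpkg reads, so assuming that kind of build (the masquerade directory path varies by distro), it looks roughly like:

    # make-kpkg reads CONCURRENCY_LEVEL rather than honoring make -j
    export PATH=/usr/lib/distcc/bin:$PATH
    export CONCURRENCY_LEVEL=8
    make-kpkg --initrd kernel_image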
Re:DistCC is good, but there's some info missing (Score:3, Interesting)
Re:DistCC is good, but there's some info missing (Score:3, Informative)
When I look in distccmon, I clearly see parallel compiles, and the compile times clearly show that it's much faster than a single-machine compile would be.
I never tried compiling KDE / GNOME / X with distcc, so I wouldn't know about them.
What I have noticed is that on FreeBSD all the ports seem to fail when compiling with -j > 1.
Yes, it doesn't work with everything... (Score:3, Insightful)
I have tried to use distcc for a lot of stuff, but it doesn't work on some packages, and that's enough to make me not use it.
I don't want to have to hand-pick which packages to use it with and which ones not to. Fortunately, a lot of Gentoo packages have a built-in rule that disables distcc automatically, but that's not always the case.
The other thing about distcc is that it won't increase the speed of the compile by any large magnitude with each ma
Also of Note. (Score:3, Informative)
Many programs that can use the SSE or MMX extensions (such as video codecs or Software OpenGL) will fail to compile with DistCC, reporting "Couldn't find a register of class BREG".
If the failing package is one of many, you can emerge just the one package by overriding make.conf's FEATURES so it doesn't include distcc.
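Something like this works for a one-off override (the package name is just an example):

    # Build one troublesome package without distcc, leaving
    # /etc/make.conf untouched
    FEATURES="-distcc" emerge app-office/openoffice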
Note here I've used the ccache feature. It's really handy because it won't re-compile any parts that've already compiled successfully. When
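If you want to try the ccache feature too, here's a rough make.conf sketch (the cache size is arbitrary):

    # Enable both in /etc/make.conf; ccache skips recompiling
    # unchanged files on later builds
    FEATURES="distcc ccache"
    CCACHE_SIZE="2G"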
Close, but still wrong... (Score:3, Informative)
Heh. No. GCC pretty regularly breaks binary compatibility for the C++ ABI. Breaks were at GCC 2.95 -> 2.96 (though 2.96 was just a RH/Mandrake thing), 2.x -> 3.0, 3.1 -> 3.2 and 3.3 -> 3.4.
You can't mix C++ compiled with any of the c
Re:DistCC is good, but there's some info missing (Score:2, Funny)
I'm not sure how this is news (Score:5, Informative)
Distcc is great for installing Gentoo on an older computer because you can have other (faster) computers help with the compile, and if you like distcc, you may also like ccache [samba.org].
It's news to some people (Score:5, Insightful)
I don't mind revisiting older topics once in a while - it's only annoying when it's two days in a row. And even then, it's not that big of a deal; I simply pass over it.
Posts like this are more a waste of space than a duplicate article post, and there are a lot more posts like yours than there are dupes. It's especially annoying when people say "We talked about this TWO YEARS AGO!!!" Well, here's some news for you: I don't memorize every Slashdot story since the beginning, and there have been a lot of new members since then.
Re:It's news to some people (Score:2)
When I miss slashdot, I just browse the old stories bit for anything interesting. I don't expect the stories to be repeated just for me.
D00d! (Score:5, Funny)
Re:D00d! (Score:2, Funny)
Awesome! (Score:3, Funny)
nc: a better tool for distributed builds (Score:5, Interesting)
I think nc can be used like distcc by redefining CC="nc gcc". However, it is more commonly done by putting $(NC) at the beginning of the build rules. That way you can use nc for any build rule, not just C compiles.
In addition to its use with make, nc works well with SCons [scons.org].
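A hedged sketch of the first approach (the $(NC) variant lives in the Makefile rules instead):

    # Prefix the compiler so the build scripts invoke gcc through nc
    make -j8 CC="nc gcc"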
Re:nc: a better tool for distributed builds (Score:5, Informative)
Re:nc: a better tool for distributed builds (Score:2)
regards,
the management
Re:nc: a better tool for distributed builds (Score:2)
I suggest changing to the name 'Phoenix'. No, wait...
Really does help (Score:2, Informative)
It also requires rather minimal configuration on my part and for the most part "Just Works[tm]".
Hehe.. now if only I had a Beowulf cluster....
Re:Really does help (Score:3, Interesting)
Re:Really does help (Score:2)
What you can do (and I have before) is use a faster box to build a gentoo userland inside a chroot, and then transfer that to a slower machine.
Re:Really does help (Score:2)
I have only one computer (Score:2, Interesting)
My family can't afford more than one computer, you insensitive clod!
But seriously, is there a way to make use of the concepts embodied in distcc in a home computing environment? Or is distcc designed for use by businesses and schools?
Re:I have only one computer (Score:3, Insightful)
In my home environment, I have my slow lil' K6 400 ship its compile jobs to my Athlon XP with 512MB of RAM. Both are on the same local network. In a home environment, if you have a brawny box and a scrawny box, DISTCC is
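The setup for that kind of pair is only a couple of lines (the addresses and hostnames are illustrative):

    # On the fast box, run the daemon and allow the home LAN
    distccd --daemon --allow 192.168.1.0/24

    # On the slow box, point distcc at the fast box and build
    export DISTCC_HOSTS="athlon localhost"
    make -j4 CC=distcc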
Re:I have only one computer (Score:2, Funny)
Try ccache (Score:3, Informative)
Re:I have only one computer (Score:5, Interesting)
If you use the LZO compression [samba.org] option, it's quite useful even on 5Mbps wireless.
You can also tell distcc to run as many jobs remotely as possible to keep the laptop from scorching your lap.
It's really nice to be able to build the kernel from source in a reasonable time on a 800MHz machine.
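Both tricks together look something like this (the hostname is made up; the ",lzo" host option compresses the traffic, and leaving localhost out of the list pushes every job to the remote box):

    export DISTCC_HOSTS="desktop,lzo"
    make -j4 CC=distcc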
Productivity Gains My Arse (Score:2, Insightful)
Re:Productivity Gains My Arse (Score:2)
Re:Productivity Gains My Arse (Score:2)
Re:Productivity Gains My Arse (Score:2)
If 'productivity' is dependent on compile time, then anything that speeds it up is useful.
It's quite hardcore to develop in C or C++ if you never compile anything.
false savings (Score:5, Insightful)
Then look at the time required to compile the optimised copy.
How often, in the lifetime of a particular version of a binary, do you really need to reload it?
The promise of distcc is closely related to source distributions like Gentoo. The benefit is overstated. Don't waste your time.
Re:false savings (Score:2)
Like many code optimizations, this sounds like one of those things you spend a lot of time on when, really, you could spend a few dollars and buy the next processor up to get the same gain. Not really worth it, especially if the amount of money you're getting paid (assuming you're getting paid) is greater than or equal to the cost of the upgrade.
Re:false savings (Score:2, Interesting)
If you like binaries, fine, there's no shortage. You use what you want, I'll use what I want. Keep your insults to yourself.
Re:false savings (Score:3)
In this kind of scenario distcc can be really useful: cheapo clustering power.
Re:false savings (Score:3, Insightful)
The promise of distcc is closely related to source distributions like Gentoo. The benefit is overstated.
The promise has nothing to do with Gentoo. distcc was released before Gentoo was well-known, and is useful to people who never run Gentoo.
You see, some of us actually develop our own software, rather than building distributions from source. If you write your own code in C or C++, you need to compile it, probably many times per day. If you work on larg
Less and less necessary in the future (Score:3, Interesting)
Personally, I think that distcc will become more and more useless as computers get faster. My new machine (P4 2.8, 1 GB RAM, SATA drives) can compile a complete Gentoo desktop system in just about two hours. That's pretty damn cool considering that it used to take like 24 hours on my old laptop when I first started using Gentoo several years ago. It would probably take only about an hour to set up a server system on Gentoo on the same machine, since the biggest component, X, would not need to be compiled.
Computing power is outstripping the size of source code that needs to be compiled. Soon there will be little difference in install time between the source and binary distros, and all the jokes about Gentoo's compile time will be pretty much obsolete. Already, once you have your system installed, the time required to keep current/install new apps is minimal. My system can compile any new program (except maybe OpenOffice) in under 25 minutes. Even Mozilla can be compiled in that time.
Re:Less and less necessary in the future (Score:5, Insightful)
Phht. (Score:2)
Seriously though -- Microsoft is the one company that can guarantee that their source code sizes will continually outstrip computational power. I wonder what kind of clustering solution they use to get their Windows builds to compile in a reasonable timeframe?
2 hours!!! it only takes me 2 seconds (Score:2)
Not that it doesn't take me two seconds, but that 'Computing power is outstripping the size of source code that needs to be compiled.'
Re:Less and less necessary in the future (Score:4, Interesting)
As your CPU gets faster, installing the binary will be quicker too.
But most of all, you will see programs getting a Mozilla complex... Lots and lots of bloat, with no effort going into optimizing anything. KDE and GNOME have that problem. Even formerly lightweight programs like XFce are now heavy programs (thanks in no small part to the bloat of GTK2).
If processing power continues to rise, pretty soon you'll see programming becoming far sloppier and wasting a lot more time. Sure, you can compile Mozilla in under 25 minutes now, but you could do the same with other browsers before Mozilla, when slower CPUs were king. When Mozilla 2 comes along, it'll be massive, and we'll be back where we started. Telling people to waste tons of money on new hardware, rather than paying a bit larger salary for a better programmer who can make a full-featured browser that will run on a 100MHz processor. Think about it: is there anything fundamental that Mozilla can do that Netscape 3 couldn't?
Re:Less and less necessary in the future (Score:2)
I can see that this argument is going to hinge on a redefinition of "fundamental." Nevertheless, Netscape 3 is closed source, and doesn't support CSS.
Re:Less and less necessary in the future (Score:2)
In my work environment, we like to be productive. Taking a thirty-minute break while new software compiles does not fit that criterion, when you could have spent five minutes downloading the software and be configuring it for the twenty-five minutes left.
Time is a constant. distcc helps. (Score:3, Interesting)
Later on, I built a 350MHz K6-2 machine for a customer, and it was a screamer compiling its 2.0.x kernel, taking just a few minutes.
Fast forward: I've got a very similar K6-2 350 as a miscellaneous server and firewall here. Compiling its 2.6 kernel takes -forever-.
But the new 2.4GHz HT 800MHz FSB P4 box I built recently for work is again a screamer, compiling its 2.6 kernel in a few minutes. This box is in roughly the same
Re:Less and less necessary in the future (Score:3, Interesting)
On the other hand the death of C or C++ has been predicted for about 20 years now, and it's still pretty popular. So I don't think large C packages will go out of style any time soon.
Speeding up compile times (Score:2, Interesting)
http://bsdvault.net/sections.php?op=viewarticle&a
distcc and rendezvous (Score:5, Interesting)
The cool thing about Apple's version is that by default it uses Rendezvous to determine which machines are available to distribute work to.
No, no, no! (Score:2, Interesting)
Re:No, no, no! (Score:4, Insightful)
Well, that won't help anyone, since almost nobody runs Plan 9.
Also, advising people to use faster compilers is bad advice. The point is to make the application faster, and the slower the compile, the faster the application is likely to run, e.g. GCC 2 vs. GCC 3.
Yes, you are reducing the time it takes to complete the process. It doesn't reduce CPU time; it reduces real time. You know, the real world, in which we live... The only thing that really matters.
Re:No, no, no! (Score:4, Insightful)
I wouldn't. I compile my programs once, then use them for MONTHS on end. Even a tiny speed improvement in something like Mozilla will save a HUGE amount of time overall, and by far make up for the compile time.
When I am programming, I'll disable optimizations so I can test my changes quicker, but that's not what we are talking about here...
Re:No, no, no! (Score:2)
And then they spend most of their time waiting for IO.
Only in HPC do you ever look for ultra-optimized code and even then you end up running your code twice (or more) to verify your answer.
Most machines (including yours) are running at load levels way below 1 (unless they're dedicating spare cycles to SETI or some such, which itself confirms they have cycles to waste).
Plan 9's compilers may produce 20% slower code, but when a (true) develo
Right, whatever (Score:3, Insightful)
And there are maybe about ten to fifteen people on all of slashdot who actually know how to go about setting the right optimizations for their own machine.
Re:Right, whatever (Score:2, Interesting)
Productivity gains if and only if.... (Score:4, Insightful)
Assuming
a) That compiling will give you any significant performance increase (which I kinda doubt, it's not like the defaults are braindead either)
b) You don't spend more time mucking about with distCC / compiling than you'll actually use the software
c) Your software is actually code bound (and not "What do I type/click now?" human bound, or bandwidth bound or whatever)
I can't think of a single thing I do that's code bound. And I actually do a bit of compiling, but I spend those seconds thinking about what to code next. Either that, or it is bandwidth bound or non-time-critical (i.e. does it take 6.5 hours or 7 hours? Who cares. The difference is half an hour's work for my computer, 0 for me. So the time I'd spend to improve it is - gasp - 0.)
Kjella
Re:Productivity gains if and only if.... (Score:5, Interesting)
The obvious and most popular answer is encoding video. I think a great many people do a lot of this. Since no processor is fast enough to encode DVD-res video at 16X, it isn't bound by IO speeds either. I can start videos encoding in far less time than it takes to complete the process as well. Pure CPU number-crunching.
Other applications are any form of crypto. Reduce the time you have to wait for PGP to encrypt. Reduce the delay on your SSH sessions.
Then there are databases. Sure, they're often IO bound, but the CPU is commonly the limitation too.
Also any heavy-load service. If Apache is serving lots of threads, especially PHP/Perl pages, you are going to be maxing out your CPU.
Then there are the programs that are just bloated. Mozilla/Firefox is still quite slow, and I can open pages far, far faster than they can be rendered. Anything that makes it even 1% faster is very welcome, as those savings eventually add up to large amounts of time.
If you really never use any of those, hooray for you, but most people certainly do.
Sadly, you can't put saved time in the bank. (Score:3, Informative)
While I've come across your argument a lot, regarding lots of little amounts of saved time, I struggle to see any actual value in these "micro" time savings.
How do you collect up all these micro time savings into a "new" hour in which you can usefully do something else?
The only application you've mentioned where I think these optimisations matter is DVD encoding. More broadly, useful time savings can be gained as these jobs tend to take a lot of time eg. I would consider saving 5 minutes on an
Any advice on flags for K6-2 CPUs? (Score:2)
Does anyone have any advice about this? Are there any objective comparisons that relate to my configurations?
Re:Any advice on flags for K6-2 CPUs? (Score:3, Informative)
O2 is your best all-around setting. Os does make smaller code, but the code it outputs is slower, and it also causes weird problems with certain apps. It could be useful to condense the memory footprint of properly designed code (like glibc). But remember, a decreased memory footprint means more hoops the computer has to jump through. Think of it like employing fold-out tables: sure, it saves space, but you spend time folding and unfolding them.
O3 is a waste of time, except for certain scientific computi
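For the K6-2 specifically, a hedged example for /etc/make.conf (gcc 3.x flag names; -fomit-frame-pointer is a common x86 extra when you can live without debugger backtraces):

    CFLAGS="-O2 -march=k6-2 -fomit-frame-pointer"
    CXXFLAGS="${CFLAGS}"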
Re:Any advice on flags for K6-2 CPUs? (Score:2)
Linux (GCC) Compile Times VS MS Compile times? (Score:3, Interesting)
I was blown away when my project group compiled a Qt app that we developed on the Linux platform with the MS VC++ compiler. The compilation took 1/10th the time! We were using Makefiles generated by QMake in both cases.
Should I just switch compilers? If so does anyone have any suggestions?
precompiled headers (Score:2)
In the real world, pre-compiled headers, offset caching and all that kind of crap would be seamless, quick and easy, but in the GCC (portability) world they haven't quite made it yet.
distcc is only workaround to an unsolved problem (Score:3, Informative)
Just look at the Linux Kernel Mailing List and how many errors can be traced to a specific GCC version. That's why Linus enforces a standardized compiler environment: developers can't be wasting their time fixing compiler-induced errors.
I know it's attractive to just recompile your whole distribution to your specific hardware combination because there are real world performance gains, but sometimes there are weird bugs caused by it and you'll probably be out of luck trying to find some documentation on them. What are the chances of somebody having the same hardware configuration? And remember we're not talking about branded components and specific models, we must throw in firmware, drivers, BIOS settings and whatnot into the mix.
As long as PC components are not standardized, this problem is never going to go away. I seriously considered Mandrake and Gentoo a couple of times in the past, and they had very different bugs in each version every time I tried them. Even though they have gotten better with each release, I'd still refuse to put them on a production machine; there's a reason why every distro ships with a precompiled i386 kernel.
I, for one, just recompile the most important parts of a system that do require most of the CPU time, like the kernel, Apache, and other runtime libraries whenever I do need that extra punch, not a second before. distcc is a geek tool and has that coolness-factor and all, but I'm not on a frenzy to use it to recompile all my servers' software, I care about stability first.
Re:distcc is only workaround to an unsolved proble (Score:2)
Wow. That is blatantly false and misleading. "Forking code" refers to one piece of code diverging into two separate projects, ala X.org and Xfree86. All binaries have to be compiled somewhere...when the guys at Red Hat or Suse do it for you, they generally do it in a very generic, compatible way, and when you do it yourself, you can take some more risk or tailor it to y
Wow! (Score:2)
It's really nice that you guys have discovered this, but don't act like it's something new and amazing, or even that it's something unique to Linux.
Compiling Gentoo for speed.... with gcc? (Score:2, Insightful)
Does anyone know if there are distros compiled with, say, the Intel compiler?
Re:Compiling Gentoo for speed.... with gcc? (Score:2)
I don't know of such a distro, but at least according to the Gentoo documentation [gentoo.org], Gentoo has some kind of support for icc. I don't know how widely used it is or how well it works. (I've never used Gentoo.)
I'm content with .src.rpm, thank you (Score:3, Informative)
Say it ain't so! (Score:2)
This could ruin EVERYthing!
Testing a distro built with the Intel compiler? (Score:2)
Works well! (Score:2, Interesting)
I did some non-scientific testing with distcc.
System one:
Athlon 1400 XP
512 Meg RAM
System two:
Pentium III 450
512 Meg RAM
I compiled GAIM on system one, once with distccd running on both systems and once with distccd running on just system one.
I found that with both systems running distccd I got about a two-minute-faster compile than with distccd running on just system one.
With distccd running on just system one I found that it would process many of th
Wait a sec (Score:3, Insightful)
Now someone is saying: "precompiled binaries will never run as quickly as those compiled with the right optimizations for your own machine."
So the Gentoo users claiming that stuff compiled with the right optimizations *is* faster were right?
I'm confused. Which is it supposed to be? Are Gentoo users full of crap, or are they correct?
I use Gentoo and have found things to be a hell of a lot easier to deal with than RPM based binary distros anyway.
I just want the scoop.
Oh, and distcc has been in Gentoo for a while... I'm surprised to see it listed like it's a new thing.
Re:Wait a sec (Score:2)
I'm not surprised it's been there--heck, if anything could benefit from a compile farm, it's Gentoo!
Re:Wait a sec (Score:2)
A typical developer will go through many iterations of:
Change code,
Compile
Test
Repeat
Sometimes hundreds of times a day. If you can make your compile times drop from 2 minutes to 1 minute, you will save over an hour a day.
Re:Wait a sec (Score:4, Interesting)
No point building Mozilla with GTK2 support if you don't need it, is there? Or Samba with only the optional features you actually want? It's the fact that you get packages on your system that match the settings you chose, not the fact that you can compile every package with -fomit-frame-pointer, that gives Gentoo its strength.
Re:Wait a sec (Score:3, Insightful)
> Gentoo users full of crap, or are they correct?
For almost all programs, it's not going to be *noticeably* any faster. On average, you can get maybe 1% to 5% performance improvement from CPU-optimised binaries. This generally isn't worth the time and effort it takes to do the custom compile.
For heavy graphics processing or number crunching, you'll probably notice it. You almost certainly wouldn't notice it on anything else.
so, yes, the gentoo user
My experiences with distcc (Score:5, Informative)
1. when *recompiling*, the advantage due to ccache far outweighs what distcc gains you on the first compile. If you're benchmarking distcc you need to be aware of this and disable ccache (see the sketch after this list).
2. most large packages either disable distcc (e.g. xfree by limiting make -jX) or compile small sets of files in bursts and spend the majority of time performing non-compilation and linking. Distcc helps with the compilation but because it's only a small part of the total build time, the overall improvement isn't as great as you might have hoped.
3. distccmon-gnome is very cool.
4. using distcc with Gentoo transparently involves modifying your path and this can make non-root compilations troublesome (permissions on distcc lock files). I haven't figured this one out yet other than to specify the full path to the compiler: make CC=/usr/bin/gcc rather than CC=gcc.
5. the returns from adding an extra distcc server to the pool drop considerably after the first few machines. Even on a 1 gigabit LAN the costs of distcc catch up with the benefits after a while. This is more of a concern when compiling lots of small files.
6. it can handle cross-compilation with a bit of configuration.
So although distcc can often reduce build time, it's not quite as effective as you might assume or hope at first.
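On point 1, a hedged way to time a distcc build without ccache interference (CCACHE_DISABLE is a standard ccache environment variable):

    export CCACHE_DISABLE=1
    time make -j8 CC=distcc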
Obligatory "No" (Score:5, Insightful)
A straw man. Precompiled binaries may have been compiled with the optimal settings for your machine, and binaries which you compile may not have the optimal settings. Identifying the optimal settings can actually be non-trivial. Source-based distributions are not necessarily the best fix for the 'one-size-fits-all' approach used by some distros.
Re:Obligatory "No" (Score:2)
True. No task for a human. Tedious work.
But have you heard of this [coyotegulch.com], a genetic optimizer for GCC's compile-time optimization parameters?
Oh, and to start dreaming about new features: maybe GCC could implement such a feature to find the best optimization parameters for each function being compiled?
CPU is rarely the bottleneck (Score:3, Interesting)
That said, the kinds of things I would like to have extra-optimized for speed are generally big, huge,
Excellent for software development (Score:5, Interesting)
We regularly develop/compile/debug a small-to-moderate-sized software package, typically taking about 1 minute per compile. Now, while 1 minute doesn't sound like a long time, it starts adding up when you find yourself recompiling 100+ times a day.
With the inclusion of distcc into the whole situation, we're able to reduce that 1-minute compile down to a little less than 20 seconds; highly appreciated (although now we have fewer excuses to go get a coffee).
Distcc is a great package which can be extremely useful.
PLD.
If you need perf, fine. But you lose testing (Score:5, Interesting)
But this can be a false economy...
Every time something that is distributed in binary form is rebuilt from source for local use, by definition it's to change some assumption that was inherent in the testing of the original binary (or else the binary distribution would suffice). And with that, some non-zero confidence that was built into the binary release by that testing is wiped out and must be recovered by local analysis and testing (i.e., time and effort) or reduced expectations. Otherwise, it's running on blind faith. This is particularly true with programs that are used frequently, i.e., ones you expect to depend on repeatedly. So in my mind, "the best of both worlds" is more meaningful if it refers to fast and reliable apps. I don't care how fast the compiler is if I can't trust the results anymore. That is a different economy equation, and it completely justifies the "convenience" of pre-compiled binaries in many applications.
I use it for OSX (Score:3, Interesting)
I do a lot of work with Qt on both Linux and Mac, and let's just say Qt compiles very slowly on my PowerBook (which is an older 800 MHz G4).
Also, I've had to build all of Qt on this machine because the Fink packages are old and don't even use the Mac version (they use the X11 version, which really sucks and makes apps on Macs look like crap).
So at work we have a couple of dual G5s I use, and also a few Linux machines for which I've built Darwin cross-compilers (yes, it's a pain in the ass).
distcc causes kernel panic... (Score:5, Informative)
It's a known bug and has been discussed on the lkml [google.com]. The bug is also discussed on the gentoo bugzilla [gentoo.org]. A patch [gentoo.org] is also available, though the patch program didn't work for me so I had to apply it manually.
The patch seems to be holding up, too. If you're using distcc on systems with vanilla 2.4.26 kernels, I'd suggest patching them.
It's great for OS-X compiles (Score:3, Interesting)
It's so PAINFULLY SLOW to build anything for the Mac, with this inefficient Objective-C compiler and large linking requirements for Carbon, that without these distributed tools and some G5 servers, it would be hard for us to develop.
Interestingly, our Windows version of this product, built in C#, compiles extremely fast with no distributed trickery needed.
Incredibuild (Score:5, Informative)
kingos
And by the way, GCC 3.4.1 (Score:4, Informative)
... was just released [gnu.org].
Only available on mirrors [gnu.org], currently.
Security? (Score:3, Interesting)
FWIW, distributed compilation programs like distcc are a good reason to check for buffer overruns and other memory trampling in the compiler. If you've ever managed to segfault gcc by feeding it a bad piece of code, there is a potential exploit via distcc if you can craft a C program that makes the compiler misbehave in the way you want.
Another reason to buy a Mac (Score:3, Interesting)
Yes, Apple has shipped distcc as standard for quite some time.
Re:Gentoo has that covered (Score:2)
The reason it wouldn't work by default is that your binaries are being targeted at different platforms.
Re:Gentoo has that covered (Score:2)
The reason it wouldn't work by default is that your binaries are being targeted at different platforms.
I'm pretty sure the .o files produced by gcc on cygwin are the same as the .o files produced by gcc on Linux. I expect that the link step on cygwin produces different binaries, but generally gcc isn't used to invoke the linker (on non-toy projects, anyway).
I actually have been able to use distcc with a cygwin box doing some of the building for my Gentoo system. It seems to work fine for so
Re:Gentoo has that covered (Score:2)
One of these days I'm going to get around to fixing the Gatos ebuild that is supposed to take care of all this for me. I guess while I'm at it I'll fix
Re:Gentoo has that covered (Score:3, Informative)
Gentoo has a HOWTO entitled:
"HOWTO: Use a Windows box as a d
Re:Gentoo has that covered (Score:2)
Really? I just plug a gcc-containing system into the network and Gentoo magically discovers and utilizes its compiler? Wow!
Or is this some new definition of "automatically" that I wasn't previously aware of?
Re:Pre-compiled (Score:4, Funny)