
GCC 3.3 Released

devphil writes "The latest version of everyone's favorite compiler, GCC 3.3, was released today. New features, bugfixes, and whatnot, are all available off the linked-to page. (Mirrors already have the tarballs.) Let the second-guessing begin!"
  • by Anonymous Coward on Thursday May 15, 2003 @08:18AM (#5962874)
    Caveats
    The preprocessor no longer accepts multi-line string literals. They were deprecated in 3.0, 3.1, and 3.2.
    The preprocessor no longer supports the -A- switch when appearing alone. -A- followed by an assertion is still supported.
    Support for all the systems obsoleted in GCC 3.1 has been removed from GCC 3.3. See below for a list of systems which are obsoleted in this release.
    Checking for null format arguments has been decoupled from the rest of the format checking mechanism. Programs which use the format attribute may regain this functionality by using the new nonnull function attribute. Note that for all functions for which GCC has a built-in format attribute, an appropriate built-in nonnull attribute is also applied.
    The DWARF (version 1) debugging format has been deprecated and will be removed in a future version of GCC. Version 2 of the DWARF debugging format will continue to be supported for the foreseeable future.
    The C and Objective-C compilers no longer accept the "Naming Types" extension (typedef foo = bar); it was already unavailable in C++. Code which uses it will need to be changed to use the "typeof" extension instead: typedef typeof(bar) foo. (We have removed this extension without a period of deprecation because it has caused the compiler to crash since version 3.0 and no one noticed until very recently. Thus we conclude it is not in widespread use.)
    The -traditional C compiler option has been removed. It was deprecated in 3.1 and 3.2. (Traditional preprocessing remains available.) The <varargs.h> header, used for writing variadic functions in traditional C, still exists but will produce an error message if used.
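    The "Naming Types" migration mentioned above can be sketched as follows; the variable and typedef names here are invented for illustration, not taken from the release notes:

    ```c
    #include <assert.h>

    /* Old, removed extension:   typedef foo = bar;
     * Replacement with typeof:  typedef typeof(bar) foo;
     * `origin` and `coord` are made-up names. */
    static double origin;
    typedef typeof(origin) coord;   /* coord is now an alias for double */

    int main(void)
    {
        coord x = 1.5;              /* same as: double x = 1.5; */
        assert(sizeof(coord) == sizeof(double));
        return x > 0 ? 0 : 1;
    }
    ```

    Note that typeof is itself a GNU extension, so this form still requires GCC's default GNU dialect rather than strict ISO mode.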
    General Optimizer Improvements
    A new scheme for accurately describing processor pipelines, the DFA scheduler, has been added.
    Pavel Nejedly, Charles University Prague, has contributed a new file format used by the edge coverage profiler (-fprofile-arcs).

    The new format is robust and diagnoses common mistakes where profiles from different versions (or compilations) of the program are combined, resulting in nonsensical profiles and slow code being produced with profile feedback. Additionally, this format allows extra data to be gathered. Currently, overall statistics are produced, helping optimizers to identify hot spots of a program globally, replacing the old intra-procedural scheme and resulting in better code. Note that the gcov tool from older GCC versions will not be able to parse the profiles generated by GCC 3.3 and vice versa.

    Jan Hubicka, SuSE Labs, has contributed a new superblock formation pass enabled using -ftracer. This pass simplifies the control flow of functions, allowing other optimizations to do a better job.

    He also contributed the function reordering pass (-freorder-functions) to optimize function placement using profile feedback.

    New Languages and Language specific improvements
    C/ObjC/C++
    The preprocessor now accepts directives within macro arguments. It processes them just as if they had not been within macro arguments.
    The separate ISO and traditional preprocessors have been completely removed. The front-end handles either type of preprocessed output if necessary.
    In C99 mode preprocessor arithmetic is done in the precision of the target's intmax_t, as required by that standard.
    The preprocessor can now copy comments inside macros to the output file when the macro is expanded. This feature, enabled using the -CC option, is intended for use by applications which place metadata or directives inside comments, such as lint.
    The method of constructing the list of directories to be searched for header files has been revised. If a directory named by a -I option is a standard system include directory, the option is ignored to ensure that the default search order for system directories and the special treatment of system header files are not defeated.
    A few more ISO C99 features now work correctly.
    A new function attribute, nonnull, has been added which allows pointer arguments to functions to be specified as requiring a non-null value.
  • by Anonymous Coward on Thursday May 15, 2003 @08:26AM (#5962926)

    There is one. It's called Visual SlickEdit; it costs $249.
  • Sigh, indeed... (Score:4, Informative)

    by squarooticus ( 5092 ) on Thursday May 15, 2003 @08:28AM (#5962941) Homepage
    You DO realize that most of the problems compiling the Linux kernel with succeeding releases of gcc are due primarily to the kernel team making incorrect assumptions about compiler output...

    Right?
  • Re:ABI? (Score:1, Informative)

    by Anonymous Coward on Thursday May 15, 2003 @08:29AM (#5962951)
    No.
  • Re:Sigh (Score:5, Informative)

    by Ed Avis ( 5917 ) <ed@membled.com> on Thursday May 15, 2003 @08:33AM (#5962974) Homepage
    In the past, when a kernel has not compiled with a new gcc version it has more often been a bug in the kernel than one in gcc. The same goes for most apps. Looking at the list archives, the main problem seems to be with __inline__, which was a gcc extension to start with, so the problem is presumably that the meaning of that keyword has been deliberately changed.
  • Re:Sigh (Score:5, Informative)

    by norwoodites ( 226775 ) <{pinskia} {at} {gmail.com}> on Thursday May 15, 2003 @08:35AM (#5962986) Journal
    Linux, the kernel, depends on old gcc extensions that are slowly being removed from gcc, extensions that were not documented. Also, C++ compile time is a hard thing to fix if you want a full C++ compiler in a short period of time. 3.3 is a very stable compiler; even 3.4 in CVS is a stable compiler. The gcc team are all volunteers, so why do you not help them and fix some problems, and/or report some problems to us (I am slowly helping out now).
  • by oliverthered ( 187439 ) <oliverthered@NOSpAM.hotmail.com> on Thursday May 15, 2003 @08:35AM (#5962989) Journal
    look here [gnu.org]

    Using Precompiled Headers
    Often large projects have many header files that are included in every source file. The time the compiler takes to process these header files over and over again can account for nearly all of the time required to build the project. To make builds faster, GCC allows users to `precompile' a header file; then, if builds can use the precompiled header file they will be much faster.
  • by Hortensia Patel ( 101296 ) on Thursday May 15, 2003 @08:38AM (#5963004)
    Yes, this release (like all 3.x releases) is a lot slower than 2.9x was. This is particularly true for C++, to the point where the compile-time cost of standard features like iostreams or STL is prohibitive on older, slower machines. I've largely gone back to stdio.h and hand-rolled containers for writing non-production code, just to keep the edit-compile-test cycle ticking along at a decent pace.

    The new support for precompiled headers will help to some extent but is by no means a panacea. There are a lot of restrictions and caveats. The good news is that the GCC team are very well aware of the compile-time issue and (according to extensive discussions on the mailing list a few weeks back) will be making it a high priority for the next (3.4) release.

    Incidentally, for those wanting a nice free-beer-and-speech IDE to use with this, the first meaningful release of the Eclipse CDT is at release-candidate stage and is looking good.
  • by Anonymous Coward on Thursday May 15, 2003 @08:42AM (#5963032)
    The preprocessor no longer accepts multi-line string literals.

    Why was this removed?


    Link to GCC mail archive on this topic [gnu.org]. It seems like overkill, for sure.

    The <varargs.h> header, used for writing variadic functions in traditional C, still exists but will produce an error message if used.

    This I don't understand at all. Does it mean we can't write void somefunc(int argc, ...) style funcs any more?


    No. The functionality is still there; it's just included via <stdarg.h> instead of <varargs.h>.
  • by spakka ( 606417 ) on Thursday May 15, 2003 @08:42AM (#5963036)

    The preprocessor no longer accepts multi-line string literals.

    Standard C doesn't allow newline characters in strings. You can still put in '\n' escape characters to represent newlines.

    Does it mean we can't write void somefunc(int argc, ...) style funcs any more?

    You can, but in the implementation, you need to use the Standard C header stdarg.h instead of the traditional C header varargs.h.
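    The stdarg.h style described above can be sketched as follows; sum_ints is a hypothetical example function, not something from the thread:

    ```c
    #include <stdarg.h>
    #include <assert.h>

    /* A variadic function written against the Standard C <stdarg.h>
     * interface rather than the removed traditional <varargs.h> one. */
    static int sum_ints(int count, ...)
    {
        va_list ap;
        int i, total = 0;

        va_start(ap, count);        /* `count` is the last fixed parameter */
        for (i = 0; i < count; i++)
            total += va_arg(ap, int);
        va_end(ap);
        return total;
    }

    int main(void)
    {
        assert(sum_ints(3, 1, 2, 4) == 7);
        return 0;
    }
    ```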

  • The new breed of gcc compilers are anywhere from 3% to 5% slower [gnu.org] with file processing using the C++ library. So, compiling the kernel with gcc 3.x is fine, but I suspect that something like KDE, which is mostly written in C++, is impacted seriously. At least, all software using the C++ library for IO (fstream) will be much slower. On the other hand, the support for C++ standards is much better, so what I do is compile using gcc 3.2.3 to validate my C++ and then build the real thing with a pre-3.x compiler.
  • Re:Sigh (Score:5, Informative)

    by Horny Smurf ( 590916 ) on Thursday May 15, 2003 @08:44AM (#5963051) Journal
    gcc 3.4 is slated to include a hand-written (as opposed to yacc-built) recursive descent parser (for C++ only). That should give a nice speed bump (and it fixes over 100 bugs, too).
  • by norwoodites ( 226775 ) <{pinskia} {at} {gmail.com}> on Thursday May 15, 2003 @08:48AM (#5963081) Journal
    look at http://gcc.gnu.org/ [gnu.org] and see under January 10, 2003:
    Geoffrey Keating of Apple Computer, Inc., with support from Red Hat, Inc., has contributed a precompiled header implementation that can dramatically speed up compilation of some projects.

    GCC will have it for 3.4.
  • by r6144 ( 544027 ) <r6k@@@sohu...com> on Thursday May 15, 2003 @08:50AM (#5963099) Homepage Journal
    According to this [gnu.org], if your program is multi-threaded, uses spinlocks in libstdc++, and runs on x86, then you'll have to configure gcc-3.3 for an i486+ target (instead of i386) in order to make it binary compatible with gcc-3.2.x configured for an i386 target. Otherwise, when the code is mixed, the bus isn't locked when accessing the spinlock, which IMHO may cause concurrency problems on SMP boxes (?)
  • The changes that made C++ compiling slower were needed for the correctness of the compiler. I know 3.4 will be faster than 3.3 is(/was), and there is room to speed it up even further.
  • by noda132 ( 531521 ) on Thursday May 15, 2003 @08:53AM (#5963121) Homepage

    Not many visible changes. Developers have better profiling, which means eventually if they care they can make software faster. Also, you're going to find a lot more compiler warnings, and perhaps the odd piece of software which doesn't compile at all. In the short run, nothing changes. In the long run, programs become better as they stick to better programming guidelines (since gcc doesn't support "bad" programming as well as the previous version).

    I've been using gcc 3.3 for months from CVS, and have had no problems with it (except for compiling with -Werror).

  • by r6144 ( 544027 ) <r6k@@@sohu...com> on Thursday May 15, 2003 @08:55AM (#5963134) Homepage Journal

    If you use stdarg.h, nothing should break.

    I think here "the header" means "varargs.h", which is the old way of writing variadic functions. It should not appear in any reasonably new (post-1995) code.

  • by norwoodites ( 226775 ) <{pinskia} {at} {gmail.com}> on Thursday May 15, 2003 @08:59AM (#5963162) Journal
    You can write that code, but not:

    char *A="A
    B";
    char *B="B";
    char *C="C";

    any more. See how many problems this could cause if you were missing the second quote on the third line?
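    For anyone porting code that used the extension, a hedged sketch of the two standard-conforming spellings:

    ```c
    #include <string.h>
    #include <assert.h>

    /* Two conforming ways to get a newline into a string literal,
     * replacing the removed multi-line extension. */
    static const char *a = "A\nB";          /* escape sequence */
    static const char *b = "A\n"            /* adjacent literals are */
                           "B";             /* concatenated at compile time */

    int main(void)
    {
        assert(strcmp(a, b) == 0);
        return 0;
    }
    ```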
  • Everyone loves GCC? (Score:4, Informative)

    by Call Me Black Cloud ( 616282 ) on Thursday May 15, 2003 @08:59AM (#5963166)
    Not for me, thanks. I prefer the dynamic duo of Borland's C++ Builder/Kylix. Cross platform gui development? How you say...ah yes...w00t!

    For Java, Sun One Studio (crappy name)/Netbeans (inaccurate name) floats my boat. There is a light C++ module for Netbeans but I haven't tried it...no need.

    Give Kylix a try [borland.com] - there is a free version you know:

    Borland® Kylix(TM) 3 Open Edition delivers an integrated ANSI/ISO C++ and Delphi(TM) language solution for building powerful open-source applications for Linux®, licensed under the GNU General Public License

    Download it here [borland.com].
  • by Anonymous Coward on Thursday May 15, 2003 @09:01AM (#5963182)
    You might also like to try 'ccache'.
  • Re:Bounds Checking (Score:5, Informative)

    by asuffield ( 111848 ) <asuffield@suffields.me.uk> on Thursday May 15, 2003 @09:06AM (#5963222)
    I hear they have added in some more advanced and aggressive bounds checking. Now when I screw up something I won't have to wait for a SIGSEGV to tell me that a pointer moved a little too far.

    Indeed, that SIGSEGV becomes a SIGABRT instead. This is dynamic bounds checking; it won't find anything until the bounds error occurs at runtime, so you won't find it any earlier. All it does is make sure that no bounds errors escape *without* crashing the process.

    Although it doesn't seem to work with glibc... this is quite annoying, although it will probably be fixed and re-released in a few days.

    I guess you didn't read the documentation. This is a "feature". It breaks the C ABI, forcing you to recompile all libraries used in the program, including glibc.

  • by dimitri_k ( 106607 ) on Thursday May 15, 2003 @09:18AM (#5963312)
    If anyone else was curious to see an example of the new nonnull function attribute, the following is reformatted from the end of the relevant patch [gnu.org], posted to gcc-patches by Marc Espie:

    nonnull (arg-index,...)
    nonnull attribute
    The nonnull attribute specifies that some function parameters should
    be non null pointers. For instance, the declaration:

    extern void *
    my_memcpy (void *dest, const void *src, size_t len)
    __attribute__ ((nonnull (1, 2)));

    causes the compiler to check that, in calls to my_memcpy, arguments dest
    and src are non null.

    Using nonnull without parameters is a shorthand that means that all
    non pointer [sic] arguments should be non null, to be used with a full
    function prototype only. For instance, the example could be
    abbreviated to:

    extern void *
    my_memcpy (void *dest, const void *src, size_t len)
    __attribute__ ((nonnull));

    Seems useful, though I suspect many dereferenced pointers are set to NULL only at runtime, and so are not spottable during build.

    Note: I didn't change the wording above at the [sic], but I believe that this should read "all pointer arguments" instead.
  • by Anonymous Coward on Thursday May 15, 2003 @09:33AM (#5963428)
    Yes. Precompiled headers make much sense for projects that do not include everything.

    Including everything in a precompiled header file will in fact make the compilation much slower than if you didn't use precompiled headers at all.

    <windows.h> is a very bad example of how to make an efficient precompiled header. Granted, it may have been what started the entire precompiled header business, but the time won from precompiled headers in general is greater than you'd think.

    The key to creating a good precompiled header is finding all those header files included in the majority of your source files, and precompiling those. This saves considerable parsing time during compilation, even if not every source file uses all the header data in your precompiled header, as long as each uses a significant amount.

    Even with your good programming book, you will still see yourself including the same headers in the majority of your source files. It's not unnecessarily included headers, just commonly used data structures and functions for your application.
  • by Anonymous Coward on Thursday May 15, 2003 @10:08AM (#5963733)
    I believe that the 7.x versions can now compile the kernel, and Intel have benchmarks to show it.

    Of course as always, Gcc is still your number 1 choice for anything other than x86 compilation.
  • inline (Score:5, Informative)

    by Per Abrahamsen ( 1397 ) on Thursday May 15, 2003 @10:08AM (#5963735) Homepage
    The inline flag in C and C++ is a hint to the compiler that inlining this function is a good idea, just like register is a hint to the compiler.

    GCC has always treated inline as such a hint, but the heuristics for how to use the hint have changed, so some functions that used to be inlined no longer are.

    The kernel has some functions that *must* be inlined, not for speed but for correctness. GCC provides a different way to specify this, an "inline this function or die" flag. Development kernels use this flag.
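    The "inline this function or die" flag mentioned above is, as far as I know, GCC's always_inline attribute; a minimal sketch with an invented function:

    ```c
    #include <assert.h>

    /* always_inline tells GCC to inline regardless of its heuristics,
     * or report an error if it cannot.  `twice` is a made-up example. */
    static inline int twice(int x) __attribute__((always_inline));
    static inline int twice(int x) { return 2 * x; }

    int main(void)
    {
        assert(twice(21) == 42);
        return 0;
    }
    ```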
  • by powerlinekid ( 442532 ) on Thursday May 15, 2003 @10:08AM (#5963739)
    All configurations of the following processor architectures have been declared obsolete:

    Matsushita MN10200, mn10200-*-*
    Motorola 88000, m88k-*-*
    IBM ROMP, romp-*-*
    Also, some individual systems have been obsoleted:

    Alpha
    Interix, alpha*-*-interix*
    Linux libc1, alpha*-*-linux*libc1*
    Linux ECOFF, alpha*-*-linux*ecoff*
    ARM
    Generic a.out, arm*-*-aout*
    Conix, arm*-*-conix*
    "Old ABI," arm*-*-oabi
    StrongARM/COFF, strongarm-*-coff*
    HPPA (PA-RISC)
    Generic OSF, hppa1.0-*-osf*
    Generic BSD, hppa1.0-*-bsd*
    HP/UX versions 7, 8, and 9, hppa1.[01]-*-hpux[789]*
    HiUX, hppa*-*-hiux*
    Mach Lites, hppa*-*-lites*
    Intel 386 family


    According to the changelog, i386 support (and source) will be removed in 3.4 unless someone tries to revive it.
  • by Per Abrahamsen ( 1397 ) on Thursday May 15, 2003 @10:12AM (#5963774) Homepage
    It does amazing things for correctness, and is much easier to understand. However, it is not faster in general. It is faster at some tasks and slower at others; about the same on average.

    It also exposes tons of errors in existing C++ programs, so expect lots of whining when GCC 3.4 is released.

    GCC 3.4 will have precompiled headers (thanks, Apple), which will speed compilation up a lot for projects that use them.
  • by Dan-DAFC ( 545776 ) on Thursday May 15, 2003 @10:34AM (#5963938) Homepage

    ...I can tell you with certainty that Visual SourceSafe 6.0 is a steaming pile of dog turd that needs to be exorcised, not bug fixed.

    Amen. I wouldn't have thought it possible to write a product that makes CVS look like a sensible choice for source control if I hadn't seen it with my own eyes.

    We used to use SourceSafe, now we use CVS. CVS is an horrific train-wreck of an application but compared to VSS it's a triumph of software engineering.

  • by red_dragon ( 1761 ) on Thursday May 15, 2003 @10:39AM (#5963993) Homepage
    Intel 386 family

    According to the changelog, i386 support (and source) will be removed in 3.4 unless someone tries to revive it.

    Say what?? Don't you mean that only support for Windows NT 3.x has been obsoleted within the x86 family, which is what the changelog actually says?

  • by Anonymous Coward on Thursday May 15, 2003 @10:53AM (#5964117)
    uhm ... it doesn't break binary compatibility.

    It's a minor release, primarily bug fixes and improvements. Would you rather wait months for your favorite show-stopper to get fixed in commercial compilers?

    It may show as broken some code which is broken, but was previously allowed. Expect even more of that with 3.4
  • by norwoodites ( 226775 ) <{pinskia} {at} {gmail.com}> on Thursday May 15, 2003 @11:10AM (#5964297) Journal
    Try this link instead: http://gcc.gnu.org/PR8610 [gnu.org], it is smaller and easier to remember.
  • by be-fan ( 61476 ) on Thursday May 15, 2003 @11:26AM (#5964483)
    Actually, GCC is really close to Intel C++. If you check out the benchmarks [coyotegulch.com] it's neck-and-neck on most code except for some Pentium 4 code and some numeric code. These differences are mainly due to Intel C++'s better inliner and automatic vectorizer. I do agree, though, that Intel C++ rocks. It's free for personal use on Linux, very GCC compatible, and almost as conformant as GCC to the C++ standard. It's also got extremely good error messages, which are very important for C++ programmers.
  • by jmv ( 93421 ) on Thursday May 15, 2003 @11:30AM (#5964523) Homepage
    Typically, all gcc releases break the kernel somewhere. This is because the kernel often relies (unintentionally) on some behaviour of gcc that is not guaranteed by the standard. When a new gcc release comes, they need to make sure they fix that. That's why there's always a "list of supported compilers for the kernel". There's no reason why the gcc folks should refrain from some optimizations because they would break bad code in the kernel.
  • Re:Sigh (Score:3, Informative)

    by Per Abrahamsen ( 1397 ) on Thursday May 15, 2003 @11:46AM (#5964699) Homepage
    Yacc and similar tools are optimized for languages with "nice" grammars, and have hacks to support languages with less nice grammars. The number of hacks needed to support C++ is so large that it evidently slows down the whole parser, not to mention makes it unreadable. At that point, writing an ad-hoc parser is better.
  • by AT ( 21754 ) on Thursday May 15, 2003 @12:20PM (#5965078)
    The GNU make has this bug even worse, since it only checks file size most of the time.

    False. GNU make checks the last file modification time. If you have any problems getting consistent incremental builds, it is probably because the Makefile has incorrect dependency information. For example, if a header file changes the size of a struct, every source file that includes it should be recompiled. An automated tool like "gcc -MM" is the best way to ensure the dependencies are correct.
  • by Fzz ( 153115 ) on Thursday May 15, 2003 @01:27PM (#5965722)
    We're using g++ with heavy use of templates in a project that currently has ~400,000 lines of code. gcc 3.2.1 takes about 50% longer than gcc 2.95.4. But, gcc 3.2.1 found loads of bugs that gcc 2.95 didn't notice, even with all the error checking enabled. I'd much rather have the extra checking and have to upgrade my compilation machines 6 months earlier, rather than have stupid errors go unreported by the compiler. So far today it looks like gcc 3.3 finds still more bugs in our code than 3.2.1 did.

    Thank you gcc team!!!!

  • by jmv ( 93421 ) on Thursday May 15, 2003 @01:53PM (#5965947) Homepage
    It's not necessarily someone who doesn't know the language. Sometimes "normal" mistakes happen and don't get corrected because nobody noticed (i.e. because the compiler generated code that didn't expose the bug). Alan Cox once said that if you don't have access to a big endian machine, there's no way your code will work perfectly on such machine (though you can come close, you'll always miss one). The same is true with compilers: unless you have a compiler that generated bad code for 100% of the cases where you're not following the language perfectly, you won't catch all bugs.

    Most of the time, this is not obvious stuff. A while ago, gcc 2.95 (or was it 2.96?) broke the kernel because of the strict aliasing rules: gcc assumes that an (int*) and a (float*) can't point to the same area (even now, the kernel needs to be compiled with -fno-strict-aliasing). One of the reasons why gcc 3.3 breaks the kernel now is that in some places the kernel assumes that an inline function will be inlined, and otherwise it breaks. Older versions of gcc always inlined those functions, but the new version takes inline merely as a hint (like "register"), which is compliant with the standard. The kernel needs to be fixed to say "inline this or die" or something like that instead.
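    The conforming alternative to the (int*)/(float*) punning described above is to copy the bytes instead of aliasing pointers; a sketch assuming int and float are both 32 bits (the function name is invented):

    ```c
    #include <string.h>
    #include <assert.h>

    /* Under strict aliasing, GCC may assume an (int *) and a (float *)
     * never refer to the same object, so *(float *)&i is unsafe without
     * -fno-strict-aliasing.  Copying the bytes is always well-defined. */
    static float bits_to_float(int i)
    {
        float f;
        memcpy(&f, &i, sizeof f);
        return f;
    }

    int main(void)
    {
        /* The all-zero bit pattern is 0.0f in IEEE 754. */
        assert(bits_to_float(0) == 0.0f);
        return 0;
    }
    ```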
  • by Anonymous Coward on Thursday May 15, 2003 @02:17PM (#5966172)
    The whole gdb situation has had us tearing our hair out for ages. IMHO, it is a serious impediment to being productive with the gcc toolset.

    Etnus (http://www.etnus.com) has a gcc debugger that is not derived from gdb. We're reasonably happy with it so far, although we haven't been banging too hard on it yet. It is, BTW, the only commercial debugger for gcc that I've been able to find.
  • Re:inline (Score:5, Informative)

    by WNight ( 23683 ) on Thursday May 15, 2003 @02:28PM (#5966278) Homepage
    That relies on the assumption that you can always page in the memory containing the subroutine. If you're writing paging code this might not be possible.

    It was a lot harder in real-mode programming, where you couldn't jump to distant code because you had to change segment registers and you had to make sure you backed them up first. Hard to guarantee with C, easy with ASM.

    Besides, there are many optimizations that a compiler has to guess about. It's very hard for it to know if you're relying on the side effects of an operation. If you're looping and not doing anything, are you touching volatile memory each time (where the reads could be controlling memory-mapped hardware) or doing something else similar. That's the most obvious example. There are a ton of pages about compiler optimization. It's really quite fascinating.
  • by Aardpig ( 622459 ) on Thursday May 15, 2003 @03:01PM (#5966577)

    At the moment, I wouldn't bother switching to the Intel F95 compiler. Although it is a very good compiler (in terms of efficient code generation), it lags behind other compilers in terms of features supported. By this, I mean the TR-15580 and TR-15581 extensions to the FORTRAN 95 language, which most vendors support, and which are linguistically important extensions that fix mistakes made in the original FORTRAN 90 language spec. Both extensions are endorsed by J3 (the ISO body which publishes FORTRAN standards) and WG5 (the working group which develops new FORTRAN standards -- at the moment, they are working on FORTRAN 2000).

    I am eagerly awaiting the completion of g95, the GNU gcc-based frontend for FORTRAN 95. At the moment, I use a combination of Intel, Lahey and NAG compilers for my FORTRAN 95 needs, but the projects I am programming are specifically geared towards what g95 will support. If you want to program with the future in mind, sure, switch to the Intel F95 compiler; but keep in mind that the FORTRAN language itself is advancing, and it might be better to (temporarily) spend some cash and get a compiler which is moving with the language.

  • Re:Ridiculous (Score:3, Informative)

    by TheRaven64 ( 641858 ) on Thursday May 15, 2003 @04:48PM (#5967615) Journal
    Okay, while libc and gcc are technically different projects, as I understand it, I agree that it would seem reasonable to drop a note to the libc folks saying "hey, gcc can't compile libc" and waiting for an update before releasing.

    libc and glibc are not quite the same, however. libc refers to any implementation of the standard c library, while glibc is the GNU version. I use gcc with the MSVCRT, the cygwin libc and the FreeBSD libc. To me glibc is just another piece of software that people who are not me use... When someone misuses English, do you correct them or change the entire language to fit their mistake?

    Actually, often the second. This is how natural languages evolve. On the other hand, doing this with a language like C would be silly (or alternatively you could just rename their abuse of the language C+ or something...)

  • by Per Abrahamsen ( 1397 ) on Saturday May 17, 2003 @08:02AM (#5979347) Homepage
    The compile server was still experimental last I heard.

    You need to reorganize your Make files to use pch's efficiently. You should

    1) Not change any of your source or header files.

    2) Add a new "all.h" header including all other headers, and precompile that header and only that header, whenever any other header changes.

    3) all.h should not be included directly from any file, instead compile with a special flag that "pre-includes" all.h.

    4) Because of header guards (which you must use), none of the normal header files will be included.

    5) Because no file includes all.h directly, it will not figure in the automatically generated dependencies, and you should not add it manually. Thus, any source file will be recompiled only when the header files it includes directly are changed.

    This solves the "only one include file" problem AC mentions, and means the source and include files are identical to the non-PCH version.

    The danger is that hidden dependencies might creep in, i.e. source files that do not include all the headers they should, yet compile due to the pre-include of all.h. So you will have to make an occasional build without PCH.
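    The scheme above might look like the following hypothetical all.h (filled with standard headers here purely for illustration); one would precompile it and then pre-include it in every compilation with GCC's -include flag:

    ```c
    /* all.h -- aggregate header for precompilation only; never #included
     * directly by source files.  Its own guard plus each header's guard
     * means the later direct #includes in the sources expand to nothing. */
    #ifndef ALL_H
    #define ALL_H

    #include <stdio.h>      /* in a real project, these would be the */
    #include <stdlib.h>     /* headers included by the majority of   */
    #include <string.h>     /* your source files                     */

    #endif /* ALL_H */
    ```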
