GCC 3.3 Released

devphil writes "The latest version of everyone's favorite compiler, GCC 3.3, was released today. New features, bugfixes, and whatnot, are all available off the linked-to page. (Mirrors already have the tarballs.) Let the second-guessing begin!"
  • woo! (Score:5, Funny)

    by KrON ( 2656 ) on Thursday May 15, 2003 @07:16AM (#5962858) Homepage
    mmm i'll install this one only cuz it rhymes ;)
  • by Anonymous Coward on Thursday May 15, 2003 @07:16AM (#5962863)
    Just for information: recompiling glibc 2.3.2 with gcc 3.3 fails. I don't see the point of releasing a compiler that can't build the current standard glibc, or a glibc that the existing compiler can't build.
    • by bconway ( 63464 ) on Thursday May 15, 2003 @07:46AM (#5963062) Homepage
      I don't see the point in making changes to a compiler that shouldn't be made solely to satisfy a single piece of software. If the problem is with glibc, it should be fixed, not worked around. What if XFree86 failed to compile, should GCC work around that? How about Mozilla or OpenOffice.org?
      • by Anonymous Coward
        Don't you think that a compiler should at least be able to deal with the latest release of a core system component such as glibc?

        The compiler should at least be able to compile a) the kernel and b) glibc.

        Given the release timeframes, a) is no problem, but a new b) usually takes half a year to a year to show up. They should at least release gcc and glibc at the same time, or at least provide patches. Besides, the glibc and gcc people are largely the same people, so they should know best.
        • Ridiculous (Score:4, Insightful)

          by Anonymous Coward on Thursday May 15, 2003 @10:14AM (#5964359)
          Okay, while libc and gcc are technically different projects, as I understand it, I agree that it would seem reasonable to drop a note to the libc folks saying "hey, gcc can't compile libc" and waiting for an update before releasing.

          On the other hand, the argument that the gcc folks should make sure that the *kernel* (presumably the Linux kernel) compiles is absolutely ridiculous. The kernel has long been broken and not language-compliant. I think recent compilers can compile it, but that's very recent, and hardly the fault of the gcc people. The Linux kernel has no association with gcc, and is not an amazingly clean project. Gcc is used in far more places than Linux is -- on just about every OS and architecture in the world. Blocking a gcc release because the Linux kernel doesn't compile would be insane. Gcc is *far* bigger than Linux. It is the standard available-everywhere compiler.

          When someone misuses English, do you correct them or change the entire language to fit their mistake?
          • Re:Ridiculous (Score:3, Informative)

            by TheRaven64 ( 641858 )
            Okay, while libc and gcc are technically different projects, as I understand it, I agree that it would seem reasonable to drop a note to the libc folks saying "hey, gcc can't compile libc" and waiting for an update before releasing.

            libc and glibc are not quite the same, however. libc refers to any implementation of the standard C library, while glibc is the GNU version. I use gcc with the MSVCRT, the Cygwin libc and the FreeBSD libc. To me glibc is just another piece of software that people who are not

        • by jmv ( 93421 ) on Thursday May 15, 2003 @10:30AM (#5964523) Homepage
          Typically, every gcc release breaks the kernel somewhere. This is because many parts of the kernel rely (unintentionally) on some behaviour of gcc that is not guaranteed by the standard. When a new gcc release comes out, the kernel developers need to fix that. That's why there's always a "list of supported compilers for the kernel". There's no reason why the gcc folks should refrain from using some optimizations because it would break bad code in the kernel.
  • by Anonymous Coward on Thursday May 15, 2003 @07:18AM (#5962874)
    Caveats
    The preprocessor no longer accepts multi-line string literals. They were deprecated in 3.0, 3.1, and 3.2.
    The preprocessor no longer supports the -A- switch when appearing alone. -A- followed by an assertion is still supported.
    Support for all the systems obsoleted in GCC 3.1 has been removed from GCC 3.3. See below for a list of systems which are obsoleted in this release.
    Checking for null format arguments has been decoupled from the rest of the format checking mechanism. Programs which use the format attribute may regain this functionality by using the new nonnull function attribute. Note that for all functions for which GCC has a built-in format attribute, an appropriate built-in nonnull attribute is also applied.
    The DWARF (version 1) debugging format has been deprecated and will be removed in a future version of GCC. Version 2 of the DWARF debugging format will continue to be supported for the foreseeable future.
    The C and Objective-C compilers no longer accept the "Naming Types" extension (typedef foo = bar); it was already unavailable in C++. Code which uses it will need to be changed to use the "typeof" extension instead: typedef typeof(bar) foo. (We have removed this extension without a period of deprecation because it has caused the compiler to crash since version 3.0 and no one noticed until very recently. Thus we conclude it is not in widespread use.) A sketch of this migration appears just after this list.
    The -traditional C compiler option has been removed. It was deprecated in 3.1 and 3.2. (Traditional preprocessing remains available.) The <varargs.h> header, used for writing variadic functions in traditional C, still exists but will produce an error message if used.
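
    For illustration, a minimal sketch of the "Naming Types" migration described above; the variable names here are made up:

    /* Old GNU C form, now rejected:   typedef foo = bar;
       Equivalent form using the typeof extension: */
    int bar;
    typedef typeof(bar) foo; /* foo is an alias for bar's type, i.e. int */
    foo x = 42;
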
    General Optimizer Improvements
    A new scheme for accurately describing processor pipelines, the DFA scheduler, has been added.
    Pavel Nejedly, Charles University Prague, has contributed a new file format used by the edge coverage profiler (-fprofile-arcs).

    The new format is robust and diagnoses common mistakes where profiles from different versions (or compilations) of the program are combined, resulting in nonsensical profiles and in slow code being produced with profile feedback. Additionally, this format allows extra data to be gathered. Currently, overall statistics are produced, helping optimizers to identify hot spots of a program globally, replacing the old intra-procedural scheme and resulting in better code. Note that the gcov tool from older GCC versions will not be able to parse the profiles generated by GCC 3.3, and vice versa.

    Jan Hubicka, SuSE Labs, has contributed a new superblock formation pass enabled using -ftracer. This pass simplifies the control flow of functions, allowing other optimizations to do a better job.

    He also contributed the function reordering pass (-freorder-functions) to optimize function placement using profile feedback.

    New Languages and Language specific improvements
    C/ObjC/C++
    The preprocessor now accepts directives within macro arguments. It processes them just as if they had not been within macro arguments. (A sketch follows this list.)
    The separate ISO and traditional preprocessors have been completely removed. The front-end handles either type of preprocessed output if necessary.
    In C99 mode preprocessor arithmetic is done in the precision of the target's intmax_t, as required by that standard.
    The preprocessor can now copy comments inside macros to the output file when the macro is expanded. This feature, enabled using the -CC option, is intended for use by applications which place metadata or directives inside comments, such as lint.
    The method of constructing the list of directories to be searched for header files has been revised. If a directory named by a -I option is a standard system include directory, the option is ignored to ensure that the default search order for system directories and the special treatment of system header files are not defeated.
    A few more ISO C99 features now work correctly.
    A new function attribute, nonnull, has been added which allows pointer arguments to functions to be specified as requiring a non-null value.
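
    As a small sketch of the "directives within macro arguments" item above (the VERBOSE macro is made up):

    /* A preprocessor directive inside a macro's argument list, which
       the preprocessor now handles as if it appeared outside the call. */
    #include <stdio.h>

    int main(void)
    {
        printf(
    #ifdef VERBOSE
            "value = %d\n",
    #else
            "%d\n",
    #endif
            42);
        return 0;
    }
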
    • That's great... but can anyone tell us what difference all that will make? I don't really care about compile times (too much)... but will mpeg2enc or ffmpeg run faster?

      BTW, there is a preliminary ebuild in Gentoo.
      • by noda132 ( 531521 ) on Thursday May 15, 2003 @07:53AM (#5963121) Homepage

        Not many visible changes. Developers have better profiling, which means eventually if they care they can make software faster. Also, you're going to find a lot more compiler warnings, and perhaps the odd piece of software which doesn't compile at all. In the short run, nothing changes. In the long run, programs become better as they stick to better programming guidelines (since gcc doesn't support "bad" programming as well as the previous version).

        I've been using gcc 3.3 for months from CVS, and have had no problems with it (except for compiling with -Werror).

        • In the short run, nothing changes. In the long run, programs become better as they stick to better programming guidelines

          Not very promising!! Basically you're saying this won't make much difference to the end user in terms of speed. I'm not arguing -- I'm agreeing.

          Personally, I would much rather have a slow compiler which gets the most out of my system. Apparently the gcc2.95-age compilers are faster than the gcc3 series: in my book that's a good thing. But has anyone done any testing? How long does
    • >Also, some individual systems have been obsoleted: ...
      >Intel 386 family
      >Windows NT 3.x, i?86-*-win32

      Does this mean that they are not going to support win32 any more? I still use mingw (a Windows port of gcc) as my primary compiler because it's free, and I don't have money to replace my PII 350 Windows 98 system.
    • It's a shame that no one managed to fix bug #10392 [gnu.org] before release. Until that one's fixed, those of us who do Dreamcast hacking are stuck using GCC 3.0.4.
  • Sigh (Score:5, Interesting)

    by unixmaster ( 573907 ) on Thursday May 15, 2003 @07:19AM (#5962885) Journal
    And "cant compile kernel with gcc 3.3" messages started to appear on lkml. Is it me or gcc team goes for quantity rather than quality that they even postponed many bugs ( like c++ compile time regression ) to gcc 3.4 to release 3.3...
    • Sigh, indeed... (Score:4, Informative)

      by squarooticus ( 5092 ) on Thursday May 15, 2003 @07:28AM (#5962941) Homepage
      You DO realize that most of the problems compiling the Linux kernel with successive releases of gcc are due primarily to the kernel team making incorrect assumptions about the compiler's output...

      Right?
    • Re:Sigh (Score:5, Informative)

      by Ed Avis ( 5917 ) <ed@membled.com> on Thursday May 15, 2003 @07:33AM (#5962974) Homepage
      In the past, when a kernel has not compiled with a new gcc version it has more often been a bug in the kernel than one in gcc. The same goes for most apps. Looking at the list archives, the main problem seems to be with __inline__, which was a gcc extension to start with, so the problem is presumably that the meaning of that keyword has been deliberately changed.
      • inline (Score:5, Informative)

        by Per Abrahamsen ( 1397 ) on Thursday May 15, 2003 @09:08AM (#5963735) Homepage
        The inline flag in C and C++ is a hint to the compiler that inlining this function is a good idea, just like register is a hint to the compiler.

        GCC has always treated inline as such a hint, but the heuristics for how to use the hint have changed, so some functions that used to be inlined no longer are.

        The kernel has some functions that *must* be inlined, not for speed but for correctness. GCC provides a different way to specify this, an "inline this function or die" attribute. Development kernels use this attribute.
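
        A minimal sketch of that behaviour, assuming GCC's always_inline function attribute (the kernel wraps it in macros of its own):

        /* Unlike the plain inline hint, always_inline makes GCC report
           an error if it cannot inline the call, rather than silently
           emitting an out-of-line copy. */
        static inline __attribute__((always_inline)) int twice(int x)
        {
            return x + x;
        }
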
        • Re:inline (Score:3, Insightful)

          by Shimmer ( 3036 )
          I don't understand this. Inlining is an optimization -- it has no semantic effect. How could failure to inline cause something to break?

          -- Brian
          • Re:inline (Score:5, Informative)

            by WNight ( 23683 ) on Thursday May 15, 2003 @01:28PM (#5966278) Homepage
            That relies on the assumption that you can always page in the memory containing the subroutine. If you're writing paging code this might not be possible.

            It was a lot harder in real-mode programming, where you couldn't jump to distant code because you had to change segment registers and you had to make sure you backed them up first. Hard to guarantee with C, easy with ASM.

            Besides, there are many optimizations that a compiler has to guess about. It's very hard for it to know whether you're relying on the side effects of an operation. If you're looping and not doing anything, are you touching volatile memory each time (where the reads could be controlling memory-mapped hardware), or doing something else with side effects? That's the most obvious example. There are a ton of pages about compiler optimization. It's really quite fascinating.
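
            A hedged sketch of the volatile case just described; the register address is hypothetical:

            /* Without volatile, the optimizer may load the status register
               once and spin forever on the cached value; volatile forces a
               fresh read from the (made-up) hardware address each time. */
            #define STATUS_REG (*(volatile unsigned char *)0x40000000)

            void wait_until_ready(void)
            {
                while ((STATUS_REG & 0x01) == 0)
                    ; /* busy-wait; each test re-reads the register */
            }
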
      • Re:Sigh (Score:4, Interesting)

        by uncleFester ( 29998 ) on Thursday May 15, 2003 @09:24AM (#5963865) Homepage Journal
        In the past, when a kernel has not compiled with a new gcc version it has more often been a bug in the kernel than one in gcc...

        Not really; it's a combination of kernel developers trying things to deal with 'intelligent' inlining, or implementing hacks when they discover an idiosyncrasy in GCC. As the gcc team 'resolves' (fixes? ;) such things, the hacks may then fail, resulting in compile errors. Unfortunately, this can make the code MORE fun, as you then have to add compiler version tests to check which code should be used.

        The goal, though, is that using the latest kernel with the latest compiler will generate the most correct code. Simply pointing a finger at the kernel developers is incorrect; both sides can be the cause of compile failures.

        disclaimer: not a kernel developer, just a more-than-casual observer.

        'fester
    • Re:Sigh (Score:5, Informative)

      by norwoodites ( 226775 ) <pinskia@BOHRgmail.com minus physicist> on Thursday May 15, 2003 @07:35AM (#5962986) Journal
      Linux, the kernel, depends on old gcc extensions that are slowly being removed from gcc, extensions that were never documented. Also, C++ compile time is a hard thing to fix if you want a full C++ compiler in a short period of time. 3.3 is a very stable compiler; even 3.4 in CVS is stable. The gcc team are all volunteers, so why not help them fix some problems, and/or report some problems to us (I am slowly helping out now).
    • Re:Sigh (Score:5, Informative)

      by Horny Smurf ( 590916 ) on Thursday May 15, 2003 @07:44AM (#5963051) Journal
      gcc 3.4 is slated to include a hand-written (as opposed to yacc-built) recursive descent parser (for C++ only). That should give a nice speed bump (and it fixes over 100 bugs, too).
      • by Per Abrahamsen ( 1397 ) on Thursday May 15, 2003 @09:12AM (#5963774) Homepage
        It does amazing things for correctness, and is much easier to understand. However, it is not faster in general: it is faster at some tasks and slower at others, the same on average.

        It also exposes tons of errors in existing C++ programs, so expect lots of whining when GCC 3.4 is released.

        GCC 3.4 will have precompiled headers (thanks, Apple), which will speed compilation up a lot for projects that use them.
  • Bounds Checking (Score:5, Interesting)

    by the-dude-man ( 629634 ) on Thursday May 15, 2003 @07:21AM (#5962889)
    I hear they have added some more advanced and aggressive bounds checking. Now when I screw up something I won't have to wait for a SEGV to tell me that a pointer moved a little too far.

    Although it doesn't seem to work with glibc... this is quite annoying, although it probably will be fixed and re-released in a few days.
    • Re:Bounds Checking (Score:5, Informative)

      by asuffield ( 111848 ) <asuffield@suffields.me.uk> on Thursday May 15, 2003 @08:06AM (#5963222)
      I hear they have added some more advanced and aggressive bounds checking. Now when I screw up something I won't have to wait for a SEGV to tell me that a pointer moved a little too far.

      Indeed, that SIGSEGV becomes a SIGABRT instead. This is dynamic bounds checking; it won't find anything until the bounds error occurs at runtime, so you won't find it any earlier. All it does is make sure that no bounds errors escape *without* crashing the process.

      Although it doesn't seem to work with glibc... this is quite annoying, although it probably will be fixed and re-released in a few days.

      I guess you didn't read the documentation. This is a "feature". It breaks the C ABI, forcing you to recompile all libraries used in the program, including glibc.
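
      For illustration, the kind of error this dynamic checking catches only when the bad access actually executes (a generic sketch; the thread doesn't name the patch's exact build flags):

      /* An overrun that an ordinary build may let corrupt the stack
         silently; a bounds-checked build aborts at the strcpy instead. */
      #include <string.h>

      int main(void)
      {
          char buf[8];
          strcpy(buf, "far too long for an eight-byte buffer"); /* bounds error */
          return buf[0];
      }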

    • Re:Bounds Checking (Score:4, Interesting)

      by swillden ( 191260 ) <shawn-ds@willden.org> on Thursday May 15, 2003 @10:09AM (#5964293) Journal

      I hear they have added some more advanced and aggressive bounds checking.

      What are the run-time performance implications of this bounds checking? It sounds very nice for debugging, and a great thing to turn on even in production code that may be vulnerable to buffer overflow attacks, but it can't be free. A bit of Googling didn't turn up anything; does anyone know how expensive this is?

  • Be careful (Score:3, Funny)

    by swagr ( 244747 ) on Thursday May 15, 2003 @07:23AM (#5962910) Homepage
    I'm waiting for SCO to show its copied code before I pick up any GNU software.
  • ABI? (Score:4, Interesting)

    by 42forty-two42 ( 532340 ) <bdonlan@g m a i l . c om> on Thursday May 15, 2003 @07:25AM (#5962922) Homepage Journal
    Does this release break binary compatibility?
    • by r6144 ( 544027 ) <r6k&sohu,com> on Thursday May 15, 2003 @07:50AM (#5963099) Homepage Journal
      According to this [gnu.org], if your program is multi-threaded, uses spinlocks in libstdc++, and runs on x86, then you'll have to configure gcc-3.3 for an i486+ target (instead of i386) in order to make it binary compatible with gcc-3.2.x configured for an i386 target. Otherwise, when the code is mixed, the bus isn't locked when accessing the spinlock, which IMHO may cause concurrency problems on SMP boxes (?)
      • All configurations of the following processor architectures have been declared obsolete:

        Matsushita MN10200, mn10200-*-*
        Motorola 88000, m88k-*-*
        IBM ROMP, romp-*-*
        Also, some individual systems have been obsoleted:

        Alpha
        Interix, alpha*-*-interix*
        Linux libc1, alpha*-*-linux*libc1*
        Linux ECOFF, alpha*-*-linux*ecoff*
        ARM
        Generic a.out, arm*-*-aout*
        Conix, arm*-*-conix*
        "Old ABI," arm*-*-oabi
        StrongARM/COFF, strongarm-*-coff*
        HPPA (PA-RISC)
        Generic OSF, hppa1.0-*-osf*
        Generic BSD, hppa1.0-*-bsd*
        HP/UX versions 7, 8, and 9,
        • Intel 386 family

          According to the changelog, i386 support (and source) will be removed in 3.4 unless someone tries to revive it.

          Say what?? Don't you mean that only support for Windows NT 3.x has been obsoleted within the i386 family, which is what the changelog actually says?

          • The thing is, if you have to configure gcc-3.3 for an i486 target in order to be binary compatible with gcc-3.2.x configured for i386, then gcc support for i386 might as well be dead, because all the OS distributions will be compiling it for i486 (or better). I doubt we'll see too many gcc or glibc packages for i386 after that.
            • Is there any reason NOT to declare i386 support dead? No one is going to compile newer software for a 386 (bloat, etc) and older compilers work fine for older software.
  • by torpor ( 458 ) <ibisumNO@SPAMgmail.com> on Thursday May 15, 2003 @07:27AM (#5962938) Homepage Journal
    I don't understand these two points, maybe someone can explain:

    The preprocessor no longer accepts multi-line string literals.

    Why was this removed?

    And:

    The <varargs.h> header, used for writing variadic functions in traditional C, still exists but will produce an error message if used.

    This I don't understand at all. Does it mean we can't write void somefunc(int argc, ...) style funcs any more?

    Someone, please explain...
    • by Anonymous Coward on Thursday May 15, 2003 @07:42AM (#5963032)
      The preprocessor no longer accepts multi-line string literals.

      Why was this removed?


      Link to GCC mail archive on this topic [gnu.org]. It seems like overkill, for sure.

      The <varargs.h> header, used for writing variadic functions in traditional C, still exists but will produce an error message if used.

      This I don't understand at all. Does it mean we can't write void somefunc(int argc, ...) style funcs any more?


      No. The functionality is still there, it's just included via <stdarg.h> instead of <varargs.h>.
    • by spakka ( 606417 ) on Thursday May 15, 2003 @07:42AM (#5963036)

      The preprocessor no longer accepts multi-line string literals.

      Standard C doesn't allow newline characters in strings. You can still put in '\n' escape characters to represent newlines.

      Does it mean we can't write void somefunc(int argc, ...) style funcs any more?

      You can, but in the implementation, you need to use the Standard C header stdarg.h instead of the traditional C header varargs.h.
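
      A small sketch of both migrations; the names are made up:

      /* A string that used to contain literal newlines is written as
         adjacent literals (concatenated by the compiler) with explicit
         \n escapes, and variadic functions use the standard <stdarg.h>
         interface instead of the traditional <varargs.h>. */
      #include <stdarg.h>

      static const char *banner =
          "first line\n"
          "second line\n";

      int sum(int count, ...)
      {
          va_list ap;
          int i, total = 0;

          va_start(ap, count);
          for (i = 0; i < count; i++)
              total += va_arg(ap, int);
          va_end(ap);
          return total;
      }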

    • The first one was removed because the extension was causing more harm than good, so you have to change all your inline asm not to use multi-line strings.

      The second is talking about the `varargs.h' header, the original way of doing variadic functions.
    • by r6144 ( 544027 )

      If you use stdarg.h, nothing should break.

      I think here "the header" means "varargs.h", which is the old way of writing variadic functions. It should not appear in any reasonably new (post-1995) code.

  • SuSE already uses it (Score:5, Interesting)

    by red_dragon ( 1761 ) on Thursday May 15, 2003 @07:30AM (#5962954) Homepage

    I just got SuSE 8.2 installed this week, which includes GCC 3.3, and noticed that the kernel is also compiled with GCC 3.3. From 'dmesg':

    Linux version 2.4.20-4GB (root@Pentium.suse.de) (gcc version 3.3 20030226 (prerelease) (SuSE Linux)) #1 Wed Apr 16 14:50:03 UTC 2003
    • Read "PRERELEASE" , not "RELEASE".
      Yes, there is a diffrence.
      • Yes, it does say 'prerelease', which I didn't miss (I did type that dmesg string in, y'know). The date isn't too far in the past, though, so differences between that and the release version shouldn't be that great.

  • by Hortensia Patel ( 101296 ) on Thursday May 15, 2003 @07:38AM (#5963004)
    Yes, this release (like all 3.x releases) is a lot slower than 2.9x was. This is particularly true for C++, to the point where the compile-time cost of standard features like iostreams or STL is prohibitive on older, slower machines. I've largely gone back to stdio.h and hand-rolled containers for writing non-production code, just to keep the edit-compile-test cycle ticking along at a decent pace.

    The new support for precompiled headers will help to some extent but is by no means a panacea. There are a lot of restrictions and caveats. The good news is that the GCC team are very well aware of the compile-time issue and (according to extensive discussions on the mailing list a few weeks back) will be making it a high priority for the next (3.4) release.

    Incidentally, for those wanting a nice free-beer-and-speech IDE to use with this, the first meaningful release of the Eclipse CDT is at release-candidate stage and is looking good.
    • I wonder, with the increase in CPU speeds since 2.9x was current, whether a $500 machine now can compile faster or slower than a $500 machine a few years back.

      Of course, it would still be nice for gcc not to get slower, or maybe even to get faster :-(.
    • It's all very well to say that 3.x is slower to compile with than 2.95.x, but the end result, the executable, is faster (by ~10% on SPEC benchmarks), and the 3.x releases are a lot closer to the ISO standards (both C and C++) than 2.95.x, so I don't see why we should all weep.

  • The new breed of gcc compilers is anywhere from 3% to 5% slower [gnu.org] at file processing using the C++ library. So, compiling the kernel with gcc 3.x is fine, but I suspect that something like KDE, which is mostly written in C++, is impacted seriously. At least, all software using the C++ library for IO (fstream) will be much slower. On the other hand, the support for C++ standards is much better, so what I do is compile using gcc 3.2.3 to validate my C++ and then build the real thing with a pre-3.x compiler.
    • The changes that made C++ compilation slower were needed for the correctness of the compiler. I know 3.4 will be faster than 3.3 is (/was), and there is room to speed it up even more.
    • I would not consider 3-5% a serious impact unless it came completely without other benefits. The bug report you linked to talks about a runtime performance impact of 2-5 times slower, not 3-5%. Nobody is going to accept an impact like that except to fix serious bugs in the generated code.
    • We're using g++ with heavy use of templates in a project that currently has ~400,000 lines of code. gcc 3.2.1 takes about 50% longer than gcc 2.95.4. But, gcc 3.2.1 found loads of bugs that gcc 2.95 didn't notice, even with all the error checking enabled. I'd much rather have the extra checking and have to upgrade my compilation machines 6 months earlier, rather than have stupid errors go unreported by the compiler. So far today it looks like gcc 3.3 finds still more bugs in our code than 3.2.1 did.

      Than

    • by devphil ( 51341 ) on Thursday May 15, 2003 @12:51PM (#5965914) Homepage


      "...to some extent." Why give a Subject: line textbox that won't let me use all of it? Grrr.

      Anyhow. One of the big speed hits for iostream code was the formatting routines. Some other reply has a subject like "if you're using fstream you're not interested in performance anyhow," which is so wrongheaded I won't even bother to read it. There's no reason why iostreams code shouldn't be faster than the equivalent stdio code: the choice of formatting operations is done at compile-time for iostreams, but stdio has to parse the little "%-whatever" formatting specs at runtime.

      However, many iostreams libraries are implemented as small layers on top of stdio for portability and compatibility, which means that particular implementation will always be slower.

      We were doing something similar until recently. Not a complete layer on top of stdio, but some of the formatting routines were being used for correctness' sake. We all knew it sucked, but none of the 6 maintainers had time to do anything about it, and the rest of the world (that includes y'all, /.) was content to bitch about it rather than submit patches. Finally, Jerry Quinn started a series of rewrites and replacements of that section of code, aimed at bringing performance back to 2.x levels. One of the newer maintainers, Paolo Carlini, has been working unceasingly at iostream and string performance since.

      So, all of that will be in 3.4. Chunks of it are also in 3.3, but not all. (I don't recall exactly how much.)

  • Everyone loves GCC? (Score:4, Informative)

    by Call Me Black Cloud ( 616282 ) on Thursday May 15, 2003 @07:59AM (#5963166)
    Not for me, thanks. I prefer the dynamic duo of Borland's C++ Builder/Kylix. Cross platform gui development? How you say...ah yes...w00t!

    For Java, Sun One Studio (crappy name)/Netbeans (inaccurate name) floats my boat. There is a light C++ module for Netbeans but I haven't tried it...no need.

    Give Kylix a try [borland.com] - there is a free version you know:

    Borland® Kylix(TM) 3 Open Edition delivers an integrated ANSI/ISO C++ and Delphi(TM) language solution for building powerful open-source applications for Linux,® licensed under the GNU General Public License

    Download it here [borland.com].
    • by turgid ( 580780 )
      Kylix is all well and good if you only need to compile code for the i386 architecture. If you need UltraSPARC, MIPS, PowerPC, M68k etc. you're up the creek. If you want a nice Free IDE you could try anjuta [sf.net]. It needs the GNOME libs though.
  • by dimitri_k ( 106607 ) on Thursday May 15, 2003 @08:18AM (#5963312)
    If anyone else was curious to see an example of the new nonnull function attribute, the following is reformatted from the end of the relevant patch [gnu.org], posted to gcc-patches by Marc Espie:

    nonnull (arg-index,...)
    nonnull attribute
    The nonnull attribute specifies that some function parameters should
    be non null pointers. For instance, the declaration:

    extern void *
    my_memcpy (void *dest, const void *src, size_t len)
    __attribute__ ((nonnull (1, 2)));

    causes the compiler to check that, in calls to my_memcpy, arguments dest
    and src are non null.

    Using nonnull without parameters is a shorthand that means that all
    non pointer [sic] arguments should be non null, to be used with a full
    function prototype only. For instance, the example could be
    abbreviated to:

    extern void *
    my_memcpy (void *dest, const void *src, size_t len)
    __attribute__ ((nonnull));

    Seems useful, though I suspect many dereferenced pointers only become NULL at runtime, and so are not spottable during build.

    Note: I didn't change the wording above at the [sic], but I believe that this should read "all pointer arguments" instead.
    • by GlassHeart ( 579618 ) on Thursday May 15, 2003 @12:52PM (#5965931) Journal
      Seems useful, though I suspect many dereferenced pointers only become NULL at runtime, and so are not spottable during build.

      It's possible to check at compile time. It's not so much that the compiler detects whether a parameter is null or not at compile time, but whether it can be. For example:

      extern void do_work(void *) __attribute__ ((nonnull));
      void *p = malloc(100);
      do_work(p);
      can trigger a warning or error, because malloc() does not return a "nonnull" pointer, and so passing p to do_work is dangerous. On the other hand, given the code:
      extern void do_work(void *) __attribute__ ((nonnull));
      void *p = malloc(100);
      if (p != NULL) {
          do_work(p);
      }
      then the compiler can work out that the call is safe. This is what LCLint, for example, does with its /*@null@*/ annotation. The logic will be a little like tracing whether a const variable is passed to a function expecting a non-const parameter. I don't know how far gcc plans to take this feature.
  • by Anonymous Coward on Thursday May 15, 2003 @08:46AM (#5963540)
    Intel's compiler [intel.com] smokes gcc in most benchmarks (not surprising, given that Intel knows how to squeeze every last bit of performance out of their own processors). Although it is not 100% compatible with all the gcc features, and therefore can't compile the Linux kernel, each release adds more and more compatibility. I hope the day will soon come when we can compile a whole Linux distribution with the Intel compiler.
    • by Anonymous Coward
      I believe that the 7.x versions can now compile the kernel, and Intel have benchmarks to show it.

      Of course as always, Gcc is still your number 1 choice for anything other than x86 compilation.
    • Actually, GCC is really close to Intel C++. If you check out the benchmarks [coyotegulch.com] it's neck-and-neck on most code except for some Pentium 4 code and some numeric code. These differences are mainly due to Intel C++'s better inliner and automatic vectorizer. I do agree, though, that Intel C++ rocks. It's free for personal use on Linux, very GCC compatible, and almost as conformant as GCC to the C++ standard. It's also got extremely good error messages, which is very important for C++ programmers.
    • Intel's compiler smokes gcc in most benchmarks ...

      How's its performance on SPARC III? Does it optimize well for the Athlons? How about the PowerPC CPUs? And the MIPS CPUs? Does it cross-compile for the IBM mainframes? Does it run on them?

      Although it is not 100% compatible with all the gcc features, and therefore can't compile the Linux kernel, ...

      Oh.

      How about the object code? Can its object code be linked to code compiled by gcc, or is using this an all-or-nothing proposition?

      I hope the day

  • OK, I'm not a developer. I don't write code--I compile it when the tools I need as an admin aren't available as trusted binaries.

    Why, for the love of god, is there a new version of the de facto standard C compiler every week or two? Why can't binary compatibility be maintained? WHAT sort of changes and development occur in the land of compiling a language that (as far as I know) isn't changing??!!

    This isn't a rant--these are serious questions. I don't understand why so many changes are being done to a com
  • From the changes (Score:2, Insightful)

    by pheared ( 446683 )

    The C and Objective-C compilers no longer accept the "Naming Types" extension (typedef foo = bar); it was already unavailable in C++. Code which uses it will need to be changed to use the "typeof" extension instead: typedef typeof(bar) foo. (We have removed this extension without a period of deprecation because it has caused the compiler to crash since version 3.0 and no one noticed until very recently. Thus we conclude it is not in widespread use.)


    Or rather, gcc version >= 3.0 is not in widespread u
  • by Anonymous Coward on Thursday May 15, 2003 @10:15AM (#5964369)
    It's great to hear about all these new improvements in gcc, but what I really want is a working debugger for C++ code compiled with gcc. The gdb debugger is buggy as all hell. It gives wrong values for variables, has no end of troubles with C++, and often enough when I want to print out the value of a variable it tells me that it can't access that memory, with no further explanation.
