GCC 3.3 Released

devphil writes "The latest version of everyone's favorite compiler, GCC 3.3, was released today. New features, bugfixes, and whatnot are all available off the linked-to page. (Mirrors already have the tarballs.) Let the second-guessing begin!"
  • by Anonymous Coward on Thursday May 15, 2003 @08:16AM (#5962863)
    Just for information: recompiling glibc 2.3.2 with GCC 3.3 fails. I don't see the point of releasing a compiler that can't build the standard glibc, or a glibc that can't be built with the existing compiler.
  • GCC 3.3 (Score:0, Interesting)

    by Anonymous Coward on Thursday May 15, 2003 @08:16AM (#5962864)
    Congratulations, so many bugs fixed, cool!
    I wonder how much slower than the last release it is...
  • Sigh (Score:5, Interesting)

    by unixmaster ( 573907 ) on Thursday May 15, 2003 @08:19AM (#5962885) Journal
    And "cant compile kernel with gcc 3.3" messages started to appear on lkml. Is it me or gcc team goes for quantity rather than quality that they even postponed many bugs ( like c++ compile time regression ) to gcc 3.4 to release 3.3...
  • Bounds Checking (Score:5, Interesting)

    by the-dude-man ( 629634 ) on Thursday May 15, 2003 @08:21AM (#5962889)
    I hear they have added some more advanced and aggressive bounds checking. Now when I screw up something I won't have to wait for a segfault to tell me that a pointer moved a little too far.

    Although it doesn't seem to work with glibc... this is quite annoying, although it will probably be fixed and re-released in a few days.
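
    To illustrate the class of bug meant here, a minimal sketch (no particular GCC 3.3 option is assumed, since none is named above; buf is an illustrative name):

    #include <string.h>

    int main(void)
    {
        char buf[8];

        /* Writes 9 bytes (8 characters plus the trailing NUL) into an
         * 8-byte buffer. A plain build typically runs without complaint
         * and silently corrupts the stack; a bounds checker flags the
         * write itself instead of waiting for a later crash. */
        strcpy(buf, "12345678");
        return 0;
    }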
  • ABI? (Score:4, Interesting)

    by 42forty-two42 ( 532340 ) <bdonlan.gmail@com> on Thursday May 15, 2003 @08:25AM (#5962922) Homepage Journal
    Does this release break binary compatibility?
  • by oliverthered ( 187439 ) <oliverthered@hotmail. c o m> on Thursday May 15, 2003 @08:27AM (#5962936) Journal
    The optimiser has been vastly improved and ....

    The following changes have been made to the IA-32/x86-64 port:
    SSE2 and 3dNOW! intrinsics are now supported.
    Support for thread local storage has been added to the IA-32 and x86-64 ports.
    The x86-64 port has been significantly improved.

    If you want compile-time performance, look at

    Precompiled Headers [gnu.org]
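
    As a concrete illustration of the thread-local storage item, a minimal sketch using GCC's documented __thread extension (assuming a toolchain and libc with TLS support; illustrative names; build with gcc -pthread):

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread gets its own copy of this variable. */
    static __thread int counter = 0;

    static void *worker(void *arg)
    {
        int i;

        for (i = 0; i < 3; i++)
            counter++;              /* touches only this thread's copy */
        printf("thread %ld: counter = %d\n", (long)arg, counter);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;                   /* both threads print counter = 3 */
    }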

  • by torpor ( 458 ) <ibisum AT gmail DOT com> on Thursday May 15, 2003 @08:27AM (#5962938) Homepage Journal
    I don't understand these two points; maybe someone can explain:

    The preprocessor no longer accepts multi-line string literals.

    Why was this removed?

    And:

    The <varargs.h> header, used for writing variadic functions in traditional C, still exists but will produce an error message if used.

    This I don't understand at all. Does it mean we can't write void somefunc(int argc, ...) style funcs any more?

    Someone, please explain...
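
    For reference, a sketch of the standard-conforming equivalents (illustrative names): adjacent string literals are concatenated by the compiler, and variadic functions written with <stdarg.h> are unaffected; only the pre-ANSI <varargs.h> header is rejected.

    #include <stdarg.h>
    #include <stdio.h>

    /* Adjacent literals are concatenated, replacing multi-line strings. */
    static const char usage[] =
        "usage: prog [options]\n"
        "  -h   show help\n";

    /* void somefunc(int argc, ...) style functions still work, as long
     * as they use <stdarg.h> rather than the old <varargs.h>. */
    static int sum(int count, ...)
    {
        va_list ap;
        int i, total = 0;

        va_start(ap, count);
        for (i = 0; i < count; i++)
            total += va_arg(ap, int);
        va_end(ap);
        return total;
    }

    int main(void)
    {
        printf("%s", usage);
        printf("sum = %d\n", sum(3, 1, 2, 3));
        return 0;
    }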
  • SuSE already uses it (Score:5, Interesting)

    by red_dragon ( 1761 ) on Thursday May 15, 2003 @08:30AM (#5962954) Homepage

    I just got SuSE 8.2 installed this week, which includes GCC 3.3, and noticed that the kernel is also compiled with GCC 3.3. From 'dmesg':

    Linux version 2.4.20-4GB (root@Pentium.suse.de) (gcc version 3.3 20030226 (prerelease) (SuSE Linux)) #1 Wed Apr 16 14:50:03 UTC 2003
  • by Anonymous Coward on Thursday May 15, 2003 @08:32AM (#5962961)
    I have been wondering about this for more than half a decade now:

    Why is it that GCC never seems to get precompiled header support added?

    The compilation speed of GCC is one of its biggest weaknesses when you compare it to e.g. MSVC. All Windows compilers seem to have had this since the DOS days, but GCC still has never gotten this feature. How come?
  • by double_u_b ( 649587 ) on Thursday May 15, 2003 @08:41AM (#5963030)
    Visual Studio (v6) is a really bad IDE. I prefer Borland C++ Builder by far. And Visual C++ is really abject in some respects: why does it compile things differently if you are in debug mode? I used VC++ to code a file compression utility. Operator precedence is not the same between debug and release compiler mode. Debug mode had the same behaviour as GCC, Watcom, or Borland; release mode behaved differently. Not nice, since it took me hours to find where the bug was, given that you can't easily debug a release executable... Moreover, speed optimisation nearly always produces bad code. And debugging 'const' functions in C++ makes the debugger go wild. Borland had neither of those problems. And the code looked cleaner when shown by BC++ Builder. Nevertheless, what I really like is the Eclipse IDE, and the IDE of BeOS.
  • What does it mean? (Score:3, Interesting)

    by commanderfoxtrot ( 115784 ) on Thursday May 15, 2003 @08:50AM (#5963095) Homepage
    That's great... but can anyone tell us what difference all that will make? I don't really care about compile times (too much)... but will mpeg2enc or ffmpeg run faster?

    BTW, there is a preliminary ebuild in Gentoo.
  • by Anonymous Coward on Thursday May 15, 2003 @08:58AM (#5963155)
    Don't you think that a compiler should at least be able to deal with the latest release of a core system component such as glibc?

    The compiler should at least be able to compile a) the kernel and b) glibc.

    Given the release timeframe, a) is no problem, but b) usually takes half a year to a year to show up in a new release. They should release gcc and glibc at the same time, or at least provide patches. Besides, the glibc and gcc people are largely the same people, so they should know best.
  • by Dionysus ( 12737 ) on Thursday May 15, 2003 @09:16AM (#5963302) Homepage
    Is precompiled header support even necessary?
    If you follow the advice of any good programming book, you shouldn't be including unnecessary header files anyway.

    One of the reasons you need precompiled header support on Windows is that to write any meaningful Windows program you need to include <windows.h>, which includes everything.
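
    A sketch of the header-hygiene advice being referred to (hypothetical file and type names): when a header only needs a pointer to a type, a forward declaration avoids pulling in another header at all.

    /* widget.h -- only a pointer to struct renderer is used below, so a
     * forward declaration can replace #include "renderer.h", and files
     * that include widget.h no longer drag that whole header in. */
    struct renderer;                 /* forward declaration, no include */

    struct widget {
        struct renderer *renderer;   /* a pointer needs no full type */
        int x, y;
    };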
  • by rmathew ( 3745 ) <rmathew&gmail,com> on Thursday May 15, 2003 @09:18AM (#5963311) Homepage
    *Why* do you say that? I mean, can you point to some page that details the problems?
  • by turgid ( 580780 ) on Thursday May 15, 2003 @09:46AM (#5963539) Journal
    Kylix is all well and good if you only need to compile code for the i386 architecture. If you need UltraSPARC, MIPS, PowerPC, M68k etc. you're up the creek. If you want a nice Free IDE you could try anjuta [sf.net]. It needs the GNOME libs though.
  • by Anonymous Coward on Thursday May 15, 2003 @09:46AM (#5963540)
    Intel's compiler [intel.com] smokes gcc in most benchmarks (not surprising, given that Intel knows how to squeeze every last bit of performance out of their own processors). Although it is not 100% compatible with all the gcc features, and therefore can't compile the Linux kernel, each release adds more and more compatibility. I hope the day will soon come when we can compile a whole Linux distribution with the Intel compiler.
  • by pommiekiwifruit ( 570416 ) on Thursday May 15, 2003 @09:53AM (#5963601)
    Operator precedence is not the same between debug and release compiler mode.

    Are you sure you are not getting precedence confused with order of evaluation between sequence points?

    C++ has fairly flexible rules in that regard: the much-discussed (on comp.lang.c++) undefined behaviour and implementation-dependent behaviour. For example, i=i++; invokes undefined behaviour that may vary between compiler settings. My instinct is that this is more likely the problem than a compiler error in most cases. You should post the problem code to see whether that is the case.
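
    A minimal sketch of the distinction (illustrative code): both cases below compile silently, and differing results between build modes would not be a compiler bug.

    #include <stdio.h>

    static int f(void) { printf("f "); return 1; }
    static int g(void) { printf("g "); return 2; }

    int main(void)
    {
        int r;

        /* i = i++;  -- undefined behaviour: i would be modified twice
         * with no intervening sequence point, so any result is legal. */

        /* Unspecified evaluation order: f() and g() may be called in
         * either order, so the trace can legitimately differ between
         * compilers, or between debug and release builds. */
        r = f() + g();
        printf("= %d\n", r);
        return 0;
    }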

  • Re:Yes but... (Score:1, Interesting)

    by Anonymous Coward on Thursday May 15, 2003 @09:59AM (#5963650)
    Agreed. But I see two paths to optimization (probably more; I'm not a compiler writer). First, there are optimizations that work well on all processors: things like inlining function calls, loop unrolling, etc. Second, there are optimizations that work on a per-processor basis: for example, using instructions in such a way that the processor's pipelines stay filled and its branch prediction doesn't choke. Clearly Intel has the advantage in making the second type of optimization. But do you think they also have the advantage in the first type as well? The only way to find out would be for Intel to work on gcc's Intel back end. Then benchmarks would show which compiler does the better job on optimizations of the first type.
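
    To make the two categories concrete, a small sketch (illustrative names; flag names as documented in the gcc 3.x manual; compile with e.g. gcc -c -O2 -funroll-loops):

    /* Type one, generic: gcc can inline scale() and unroll this loop on
     * any architecture (-O2 -funroll-loops). Type two, per-processor:
     * scheduling the resulting instructions for one CPU's pipelines,
     * which is what -march=pentium4, -march=athlon-xp, etc. control. */
    static double scale(double x) { return 1.5 * x; }

    void scale_all(double *v, int n)
    {
        int i;

        for (i = 0; i < n; i++)
            v[i] = scale(v[i]);
    }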
  • by smittyoneeach ( 243267 ) on Thursday May 15, 2003 @09:59AM (#5963651) Homepage Journal
    In response to

    Windows - developer friendly. Linux - developer hostile.

    I'd say that open source requires more skill on the part of the developer to get through the learning curve. A greater amount of knowledge about what is happening at all levels is mandatory to make that GNU/Linux system happen. Whether this is a bug or a feature probably depends on your current location on the learning curve. The more I interact with open source, the more I like the fact that there are relatively fewer secrets about what is occurring, a feature that seems lost by the time you reach the West Coast...
  • by Per Abrahamsen ( 1397 ) on Thursday May 15, 2003 @10:02AM (#5963685) Homepage
    Treating casts as lvalues is a GCC extension, and an extension that has been deprecated for C++ since 3.0 because it causes problems for valid C++ code.

    I believe the plan is to add a warning in 3.4 and remove it in 3.5.
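
    For anyone who hasn't run into the extension in question, a short sketch of the cast-as-lvalue idiom and its standard replacement (illustrative code):

    #include <stdio.h>

    int main(void)
    {
        int data[2] = { 1, 2 };
        char *p = (char *)&data[0];

        /* The deprecated extension treated a cast as an lvalue:
         *
         *     ((int *)p)++;        -- advance p by sizeof(int)
         *
         * Standard C requires spelling the assignment out instead: */
        p = (char *)((int *)p + 1);

        printf("%d\n", *(int *)p);   /* prints 2 */
        return 0;
    }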
  • Re:Sigh (Score:1, Interesting)

    by Anonymous Coward on Thursday May 15, 2003 @10:10AM (#5963751)
    Aren't hand-coded recursive descent parsers generally slower than machine-generated ones? Isn't that the point of YACC?
  • Re:Sigh (Score:4, Interesting)

    by uncleFester ( 29998 ) on Thursday May 15, 2003 @10:24AM (#5963865) Homepage Journal
    In the past, when a kernel has not compiled with a new gcc version it has been more often a bug in the kernel than one with gcc..

    not really; it's a combination of kernel developers trying things to deal with 'intelligent' inlining, or implementing hacks when they discover an idiosyncrasy in GCC. As the gcc team 'resolves' (fixes? ;) such things, the hacks may then fail, resulting in compile errors. Unfortunately, this can make the code MORE fun, as you then have to add compiler version tests to check which code should be used.

    The goal, though, is that using the latest kernel with the latest compiler will generate the most correct code. Simply pointing a finger at the kernel developers is incorrect; both sides can be the cause of compile failures.

    disclaimer: not a kernel developer, just a more-than-casual observer.

    'fester
  • by jmccay ( 70985 ) on Thursday May 15, 2003 @10:25AM (#5963872) Journal
    Actually, Visual Studio is a great IDE. It's one of the few things Microsoft did well. It's not easy to understand at first, but if you take the time to learn it, you'll appreciate it.
    My favorite feature was the scripting ability. You could write VB scripts (or start by recording them as a macro) to accomplish tasks. I wrote several VB scripts that wrote out comments in the code.
    KDevelop is the only thing I have seen that's close to Visual Studio. I have C++ Builder 3.0 Professional at home, but I still like the design and ease of use of Visual Studio. The C++ Builder interface is missing some things, like scripting.
  • Re:Bounds Checking (Score:4, Interesting)

    by swillden ( 191260 ) <shawn-ds@willden.org> on Thursday May 15, 2003 @11:09AM (#5964293) Journal

    I hear they have added some more advanced and aggressive bounds checking.

    What are the run-time performance implications of this bounds checking? It sounds very nice for debugging, and a great thing to turn on even in production code that may be vulnerable to buffer overflow attacks, but it can't be free. A bit of Googling didn't turn up anything; does anyone know how expensive this is?

  • by Anonymous Coward on Thursday May 15, 2003 @11:15AM (#5964369)
    It's great to hear about all these new improvements in gcc, but what I really want is a working debugger for C++ code compiled with gcc. The gdb debugger is buggy as all hell. It gives wrong values for variables, has no end of troubles with C++, and often enough when I want to print out the value of a variable it tells me that it can't access that memory, with no further explanation.
  • by commanderfoxtrot ( 115784 ) on Thursday May 15, 2003 @11:29AM (#5964515) Homepage
    In the short run, nothing changes. In the long run, programs become better as they stick to better programming guidelines

    Not very promising!! Basically you're saying this won't make much difference to the end user in terms of speed. I'm not arguing -- I'm agreeing.

    Personally, I would much rather have a slow compiler which gets the most out of my system. Apparently the gcc 2.95-era compilers are faster than the gcc 3 series; in my book that's a good thing. But has anyone done any testing? How long does it take to do something CPU-intensive when built with each compiler version? It wouldn't take much skill to write a script encoding an SVCD using mencoder/transcode compiled with different gcc versions -- any takers? (I'm in my master's exams...)

    And when will there be proper support for my Morgan Duron? At the moment I use athlon-xp in order to use my SSE instructions, but surely the cache size makes a difference to the code gcc should put out?
  • by Forkenhoppen ( 16574 ) on Thursday May 15, 2003 @12:09PM (#5964949)
    It's a shame that no one managed to fix bug #10392 [gnu.org] before release. Until that one's fixed, those of us who do Dreamcast hacking are stuck using GCC 3.0.4.
  • by Anonymous Coward on Thursday May 15, 2003 @12:13PM (#5964988)
    AFAIK there are no automatic cache optimizations that would be relevant in a relatively low-tech compiler such as GCC. There are some commercial bleeding-edge compilers that can do cache blocking in some circumstances, but languages such as C and C++ make such optimizations fiendishly complicated. Also, I've never seen cache sizes mentioned as an issue with regards to GCC optimizations, and I've been using GCC and following its development for close to a decade now.
  • by GlassHeart ( 579618 ) on Thursday May 15, 2003 @01:52PM (#5965931) Journal
    Seems useful, though I suspect many dereferenced pointers are set to NULL at runtime, and so are not spottable during the build.

    It's possible to check at compile time. It's not so much that the compiler detects whether a parameter is null at compile time, but whether it can be. For example:

    #include <stdlib.h>

    extern void do_work(void *) __attribute__ ((nonnull));

    void caller(void)
    {
        void *p = malloc(100);
        do_work(p);
    }

    can trigger a warning or error, because malloc() does not return a "nonnull" pointer, so passing p to do_work is dangerous. On the other hand, given the code:

    void caller(void)
    {
        void *p = malloc(100);
        if (p != NULL) {
            do_work(p);
        }
    }

    then the compiler can work out that the call is safe. This is similar to what LCLint, for example, does with its /*@null@*/ annotation. The logic would be a little like tracing whether a const variable is passed to a function expecting a non-const parameter. I don't know how far gcc plans to take this feature.
  • by jmv ( 93421 ) on Thursday May 15, 2003 @02:05PM (#5966069) Homepage
    About the multi-line strings, there's one place where I think they were really useful: inline assembly. I've got source files with big chunks (> 100 lines) of assembly, and it's a pain to have a ".... \n" on each line.
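
    What that looks like in practice, a sketch (x86 AT&T syntax; illustrative function): each assembly line now carries its own quoted string ending in "\n\t", where one multi-line literal used to hold the whole block.

    static int add_asm(int a, int b)
    {
        int result;

        __asm__ (
            "movl %1, %%eax\n\t"
            "addl %2, %%eax\n\t"
            "movl %%eax, %0"
            : "=r" (result)
            : "r" (a), "r" (b)
            : "%eax"
        );
        return result;
    }

    int main(void) { return add_asm(2, 3) == 5 ? 0 : 1; }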
  • Re:Hmph (Score:5, Interesting)

    by devphil ( 51341 ) on Thursday May 15, 2003 @02:07PM (#5966079) Homepage


    Now I understand what Bjarne Stroustrup meant when he described /. as "ignorant, and proud of it." Indeed, let the second-guessing begin...

    especially for C++, as its standard keeps "refining" constantly,

    The standard hasn't changed since 1998.

    as does GCC's interpretation of it. Not to mention the extensions.

    The extensions are, in many cases, older than the standard. Now they conflict with rules added by the standard. One or the other has to give. And, of course, no matter what happens, somebody out there will declare that GCC "obviously" made the wrong choice.

    If you think it's easy, why don't you give it a try? Hundreds of GCC developers await your contributions on the gcc-patches mailing list.

    If you don't like it, you should demand your money back.

    Right now I'm [making changes]. What next?

    Again, the standard was published in 1998. The three changes you describe were decided upon even before then, and they haven't changed since. You've had 5 years to walk down to the corner bookstore and buy a decent book, or search on the web for "changes to C++ since its standardization". None of those changes are due to GCC, and trying to shift the blame to GCC only points out your employer's laziness.

    You've had half a decade. Catch the hell up.

  • by Micah ( 278 ) on Thursday May 15, 2003 @04:17PM (#5967300) Homepage Journal
    Is there any reason NOT to declare i386 support dead? No one is going to compile newer software for a 386 (bloat, etc.), and older compilers work fine for older software.
  • by dhazeghi ( 147879 ) on Thursday May 15, 2003 @09:30PM (#5969520)
    Well, remember the kernel is to a large extent already optimized. So most generic optimizations won't help a whole lot. Still you can always try. Plus it's a great way to shake out hidden bugs in either gcc or the kernel...
  • by nusuth ( 520833 ) <oooo_0000us&yahoo,com> on Friday May 16, 2003 @06:01AM (#5971088) Homepage
    glibc is part of the GNU project, and gcc-compiled code is expected to link with it more often than not, so I concur about compatibility. I think the glibc people should have been warned, and glibc fixed, before the gcc release, though.

    As for the kernel, which "the" kernel is that?
