
GCC 3.2.1 Released

Szplug writes "GCC 3.2.1 has been released; many C++ bugs have been fixed, and, notably for x86 users, so has MMX code generation. From the notice: '... the number of bug fixes is quite large, so it is strongly recommended that users of earlier GCC 3.x releases upgrade to GCC 3.2.1.' Here are the overview and detailed change notices. Download here [gnu mirror site]."
  • Er...

    The "overview" and "detailed" changes links lead to the same URL, which seems either mistaken or redundant. It might be advisable to either correct whichever doesn't lead to its intended target or, if there is only one relevant changes list, cut it down to just one link.

    • > The "overview" and "detailed" changes links lead to the same URL, which seems either mistaken or redundant.

      They figure you'll get the big picture the first time you visit the link, and you'll pick up more details if you go back and read it again.

  • But is it stable? (Score:5, Insightful)

    by Anonymous Coward on Saturday November 23, 2002 @03:23PM (#4739385)
    ".. the number of bug fixes is quite large, so it is strongly recommended that users of earlier GCC 3.x releases upgrade to GCC 3.2.1." ...and if you're still using 2.95.3 for real work, continue to do so.
  • Kernel? (Score:3, Interesting)

    by fulldecent ( 598482 ) on Saturday November 23, 2002 @04:24PM (#4739647) Homepage
    Would it be a viable consideration to recompile our kernels in light of this better MMX code generation? Better yet, is it generally a good idea to recompile our kernels whenever a bugfix release of GCC comes out?
    • Re:Kernel? (Score:2, Insightful)

      by Wiz ( 6870 )
      I wouldn't recompile a kernel unless I really had to, no matter what gcc does - assuming, that is, that you aren't having any strange problems. New bugs can creep into gcc as well, so you'd need to be careful. It is only since gcc 3.2 that some distros (RH & MDK & others?) have even started using the 3.x series for kernel compiles. The kernel can be very sensitive to compiler issues.

      As for MMX? Hmm, I'm not sure I would. I'm not sure the kernel would benefit from compiling with it anyway. Also, given that MMX, SSE, etc. have all seen compiler issues that were only fixed in 3.2.1, it might be worth waiting a bit longer until we're sure the MMX code generation is safe.

      Having said that, day to day (I run 3.2) here is what my default CFLAGS are set to for my Athlon:

      CFLAGS="-march=athlon-xp -mfpmath=sse -mmmx -msse -m3dnow"

      I'd never use that lot to compile the kernel though, just whatever optimisations it turns on when you select your target processor.
      • I'm running 2.4.19 compiled with gcc 3.1 and I haven't had a problem. What I do have is a 42-day uptime.
        • Re:Kernel? (Score:2, Informative)

          by tzanger ( 1575 )

          I'm running 2.4.19 compiled with gcc 3.1 and I haven't had a problem. What I do have is a 42 day uptime

          Same kernel, same compiler (maybe a 3.1-pre though): 85-day uptime. And my notebook's been running 2.4.19 and 2.4.20-pre, definitely compiled with 3.0.4, with no troubles.

          I am pretty sure that all the bad things related to gcc3 were in C++, not C.

          • There were some stack issues regarding 3.0.x with the kernel. Maybe 3.0.4 has them fixed, or you are just getting lucky, but there was a good reason why no major distro shipped kernels built with 3.0.x or 3.1.x, and it was only with 3.2 that they started appearing. Since the major difference between 3.1 and 3.2 was a fixed C++ ABI, I believe 3.1 would be OK for kernels as well.
    • the mmx fix will prolly only affect you if you've compiled something with "-mmmx" in your cflags or cxxflags. generally, programs don't assume you have mmx support unless you explicitly tell them. My kernel has

      -Wall -Wstrict-prototypes -Os -fomit-frame-pointer

      so i'm not worried.
      • i've fooled around with the mmx a little bit. for mmx-optimizable code, you can get a 2x-5x performance increase, but the generated code doesn't look as fast as hand-written mmx asm. I'd rather write a c for or while loop though :)

        My understanding was that -mmmx only enables recognition of the builtin MMX functions. It doesn't automatically compile your code with MMX register usage, but it does let you #include <mmintrin.h>, which gives you a C front end to the MMX instructions (see the sketch below).
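
        For anyone who hasn't seen those intrinsics, here is a minimal sketch (not from the poster above) of what the mmintrin.h front end looks like. It assumes an x86 machine with MMX and compilation with "gcc -mmmx"; the file name mmx_add.c is made up for the example:

        /* mmx_add.c - packed 8-bit add of two small arrays via MMX intrinsics.
           Hypothetical build: gcc -mmmx -o mmx_add mmx_add.c */
        #include <mmintrin.h>
        #include <stdio.h>

        int main(void)
        {
            unsigned char a[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
            unsigned char b[8] = { 10, 20, 30, 40, 50, 60, 70, 80 };
            unsigned char sum[8];
            int i;

            __m64 va = *(__m64 *) a;              /* load 8 bytes into an MMX register */
            __m64 vb = *(__m64 *) b;
            *(__m64 *) sum = _mm_add_pi8(va, vb); /* PADDB: eight 8-bit adds at once */
            _mm_empty();                          /* EMMS: give the FPU back to float code */

            for (i = 0; i < 8; i++)
                printf("%d ", sum[i]);
            printf("\n");
            return 0;
        }

        Note that gcc of this era won't turn an ordinary C loop into MMX code on its own; you still write the intrinsics (or asm) yourself, which matches the parent's point about hand-written MMX.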

    • Re:Kernel? (Score:3, Informative)

      by jericho4.0 ( 565125 )
      I don't think you would realize any real gains by recompiling your kernel. Recompiling your video/graphics/sound stuff, maybe. Recompiling might fix a bug or two, but if your system is stable that's no excuse for a recompile.

      Of course, you don't need an excuse to make a new kernel. Go nuts. If you pull another 20fps out of UT2003, please tell us about it.

      • Re:Kernel? (Score:3, Informative)

        by rplacd ( 123904 )
        Kernels are a good test for gcc. Often gcc's optimization or instruction scheduling code has led to unusual system behavior. Sometimes your system'll panic, sometimes things will seem flakey, especially with device i/o (think inline assembly code).

        For what it's worth, I've been using FreeBSD 5.0-CURRENT with gcc 3.[12].x for months now. I've compiled my entire system with -march=athlon. To be fair, it's just my desktop -- not a server.
        • Re:Kernel? (Score:3, Informative)

          by dvdeug ( 5033 )
          Kernels are a good test for gcc

          I don't know about FreeBSD, but the history of Linux (the kernel) and GCC has too many incidents of a GCC upgrade breaking the kernel, and the GCC hackers and the kernel hackers having a nice flame war over who's at fault. I'd rather let one of them test it out, rather than become a roasted guinea pig.

  • I installed this on my Gentoo box two days ago, by typing "emerge -u gcc". Everyone else is hopelessly behind the curve :)
    • So are gentoo users the new version of debian users, with s/apt-get bla bla/emerge bla bla/ ?

    • A few brave LFSers out there have been installing glibc 2.3.1-based systems for over a month via gcc CVS (3.2 can't build 2.3.1).

      Everyone else is hopelessly behind the curve :)

      LFS is in the lead, Gentoo is in a distant second, and everyone else is, well, everyone else.


      • You've got it wrong, sorry :)

        My current gentoo box is running gcc 3.2.1 and glibc-2.3.1. Everything on my box was compiled with either gcc-3.2 or 3.2.1, including that glibc. I'm not sure exactly when glibc-2.3.1 went into Gentoo, but the current ebuild (-r2, probably the third version) is dated Nov 18th.
      • both glibc 2.3.1 and gcc 3.2.1 are in slackware-current, and have been for a while (a few days at least). Everyone talks about the slow releases of slackware, but damn if -current isn't bleeding edge & stable at the same time.
    • When you upgrade gcc on your Gentoo box, does it recompile everything with the new gcc?
  • From all I've read and the benchmarks I've looked at, ICC (Intel's compiler) is 99% compatible with GCC, and the code it generates is 30%-50% faster.

    This difference may be enough to push Linux way past Microsoft if Linux apps run that much faster than Microsoft apps.

    It seems crazy that the distros (Red Hat, SuSE, etc.) don't use ICC as a drop-in replacement for 386+ compiling.

    For other platforms use GCC, but why should 90% of users be punished for the sake of cross-platform features (sounds like java)?

    When will the Linux kernel be compatible with ICC, and why aren't more people using it?

    • When will the linux kernel be compatible with ICC

      It seems to be [iu.edu]

      why aren't more using it??

      It's proprietary software. A better question is "Why doesn't Intel dedicate engineers to optimizing gcc's code generation for ia32 and ia64?". This would be a much more useful contribution.

      ~Phillip

      • It's proprietary software. A better question is "Why doesn't Intel dedicate engineers to optimizing gcc's code generation for ia32 and ia64?". This would be a much more useful contribution.

        Intel wants to sell its own compiler, so why should they optimize gcc and make their own compiler useless?

      • Q: "Why doesn't Intel dedicate engineers to optimizing gcc's code generation for ia32 and ia64?"

        A: Intel is not a charity. Software Engineers do not optimize compilers for free. Giving away the work of well-paid engineers is not a very intelligent business decision.
        • Software Engineers do not optimize compilers for free. Giving away the work of well-paid engineers is not a very intelligent business decision.

          Then why does Apple work on optimizing gcc? Because giving away the work of well-paid engineers can be a very intelligent business decision. The deeper response is that ia32 dominates the field, and it's hard to optimize for Intel chips and not AMD, and that Intel makes enough money from its compiler that it offsets the need to have the best compilers for their chips.
          • You circled around the answer a lot.

            Apple does not sell a C compiler. Intel does. Intel's bread and butter is ia32 chips running Microsoft OSes -- contributing to a project that would improve a free replacement for Microsoft's OS would be rather dumb.

            Would it make sense for Toyota to provide engineering support to Fiat for free?

             • Something is wrong, though. The cost of computers has come down to a couple hundred dollars at the low end, and yet the operating system is still $100+. Intel must realize that Microsoft's monopoly, non-competition, and price fixing are going to start costing them chip sales at some point. Free OS software may actually benefit their sales significantly in the near future.
      • by cimetmc ( 602506 ) on Sunday November 24, 2002 @05:22AM (#4742310)
        I think one of the main reasons why Intel does not contribute to GCC is not that they want to make money selling their compiler, but rather that they are not satisfied with the results of their prior contributions to GCC. I might be wrong, but I think Intel has contributed to GCC at least twice in the past.

        When the Pentium processor was released, Intel was quite dissatisfied with the performance of GCC code on Pentium processors, so they took the GCC source code and made a number of improvements to it to generate much better Pentium code. They then gave the modified source code back as some "igcc" compiler or so. This modified GCC was then used as the basis of the PGCC compiler, but the changes were never really picked up by the GCC project until EGCS became GCC. Later on, I think Intel sponsored the GCC project to pay a developer to improve Pentium II support, but once again it took something like two years until the contributed effort showed up in a released GCC version.

        So I think that in the end Intel decided it was not efficient to contribute to the GCC project, as the contributions took too long to show an effect in actual GCC releases, and this would make newer Intel processors run inefficiently for too long on GCC-based systems. With their own compilers, Intel themselves are masters of the release process, and they can make sure they have a new compiler ready at the same time they officially start shipping a new processor generation.

        Finally, there is another big hurdle Intel may face. Processors like the Itanium and the P4 in particular take a lot of compiler modifications to generate efficient code, but because GCC has to support a wide range of processors, big architectural changes aren't easy to implement without risking breaking compatibility with the other processors GCC supports.

        So overall, I think Intel chose to make their own compilers because, for them, that is the most efficient approach that guarantees good compilers for new processors in an acceptable time frame.

        Marcel
        • Moderators, moderate parent post up. He's got it right.
          • Here is some more background information:

            References to Intel's early efforts on GCC can be found in the documentation of the PGCC project:
            http://www.goof.com/pcg/

            In June 1999 the new x86 backend sponsored by Intel was announced on the GCC mailing list:
            http://gcc.gnu.org/ml/gcc/1999-06/msg00548.html

            In June 2001, GCC 3.0, which first included the new backend, was released:
            http://gcc.gnu.org/ml/gcc-announce/2001/msg00002.html

            Marcel
        • Then you have to ask, if Intel has its own compiler, why do they not release it as free software? They must think that the money they make from compiler sales outweighs the increased sales of Intel processors from having a good free compiler for them. I can imagine this is true for IA-32 since code optimized for a P4 also runs well on an Athlon, so Intel wouldn't particularly be promoting their own chips (except for those choosing between say SPARC and i386 for their new supercomputer, and there aren't many of those).
          • I think it's mainly a support issue. The money Intel gets from the compiler is probably used to pay the people that have to support the compiler. Software companies would certainly want to get professional support, and the cost of the compiler is certainly not an issue for them. For those people who pay for the Intel compiler, Intel gives support. If Intel gave away the compiler to everyone, they could not give the same level of support to everyone, and professional users might not be keen to use the compiler without proper support behind it.

            Marcel
            • This is the old argument that RMS addressed in the GNU manifesto. If companies want support for the compiler, and are willing to pay for compiler plus support, surely they would be willing to buy support and get the compiler as free software.

              No, it must be the case that there isn't enough demand for support to make it economical to make the compiler free and sell support. Intel has to make the compiler itself payware in order to get the most money from it.
    • Well, it's the same as with every other Open Source project: if it doesn't exist and you want it, do it yourself.

      On the other hand, it's not just apps and a kernel; it is the whole operating and development environment that makes up GNU, and GCC is the cornerstone of GNU. It would take a lot of energy to overcome the inertia and get tens of thousands of programmers to add "#ifdef __ICC__" or whatever the flag is (see the sketch at the end of this comment) all over the place. Remember, too, that the distros don't really 'own' the code, in terms of who is in control of the development tree for each project. If Red Hat, for example, made tweaks to a program for ICC, then they would have to do it every time that program had a new release.

      So what you might do is go to each of your favorite projects, find the alterations needed to build with ICC, and send the patch to the responsible individuals. When accepting bug reports, developers give much more credence to them when the problem is accompanied by a solution.

      I agree with the others who think that the best course would be to get the latest Intel assembly optimizations into GCC. I'm on the GCC mailing list, and those individuals are very interested in doing whatever they can to improve performance on Intel.
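
      To make the "#ifdef" idea above concrete, here is a minimal sketch (not from the original poster) of the kind of per-compiler guard involved; the predefined macros usually checked are __INTEL_COMPILER for icc and __GNUC__ for gcc, rather than the "__ICC__" spelling guessed at above:

      /* Hypothetical example: choose a code path based on the compiler building
         the file. icc also defines __GNUC__ for compatibility, so test it first. */
      #include <stdio.h>

      #if defined(__INTEL_COMPILER)
      #  define COMPILER "Intel C/C++"
      #elif defined(__GNUC__)
      #  define COMPILER "GCC"
      #else
      #  define COMPILER "something else"
      #endif

      int main(void)
      {
          printf("compiled with %s\n", COMPILER);
          return 0;
      }

      In practice most projects would keep such guards in a central header or their configure machinery rather than scattered everywhere, which is why sending the patch upstream, as suggested above, is the sensible route.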

    • Hi,

      > and code generated is 30%-50% faster.

      Yeah, right. Only if you compile with gcc -O0

      Why is it that people keep bashing gcc's speed? It is a *little bit* slower than icc on x86, I'll give you that, and on some rare applications significantly slower, but on average gcc does a splendid job.

      Don't believe me? Try it for yourself, or read:

      http://www.coyotegulch.com/reviews/intel_comp/intel_gcc_bench2.html

      (I hope they update their review for gcc 3.2.x and icc 7.0)
  • One of the biggest fixes I've noticed is that warning about the system include path. When you specify something like -I/usr/include (redundant, which often happens when you configure with your prefix as /usr instead of /usr/local), you'll get warnings about the system include path search order being changed. Sometimes it's treated as an error, other times just a warning, but I've had 20-30 packages fail to compile because of this, and it's a bitch to get rid of when you have to sift through 10-20 makefiles. I upgraded to 3.2.1 and haven't looked back.

"A car is just a big purse on wheels." -- Johanna Reynolds

Working...