Benchmarking Intel C++ 6.0 to GNU g++ 3.0.4 (59 comments)

axehind writes: "Here is a good article detailing a benchmark [comparison] between the two compilers. The results are very interesting."
  • I have used the Intel compiler at work, and it's very nice. It would be interesting to see if somebody was able to produce a distro compiled with something other than gcc.
    • Intel compiler works great, except for handling exceptions in a multi-threaded program. The exception stacks get mangled if multiple threads throw/catch exceptions at the same time. gcc exception handling is completely thread-local so there is no locking and it works great. That's why I can't use icc for my own product. My info is about 3 months old. I wonder if they fixed this...?
  • The subject says it all, but abstraction is good if used in moderation. I think game programmers can attest to that.
    • Abstraction is something that has to be balanced against all forces which impact development, including development time, execution speed, and complexity of the code. Needless to say, these are often divergent factors, and there is often no single answer to them all. Therefore, it is important that the developer have a clear understanding of all constraints on the project and proceed accordingly.

      Remember the XP tenet of designing something only as complicated as it needs to be to solve the problem. This really isn't anything new; rather, it is just clearing up the misconception some people have when they take a good thing too far (see UML, XML, and OOP in general).
  • The author points out that while gcc is cool because you can (A) compile the kernel, and (B) view the source code, a typical C++ developer probably doesn't need to do those things on a daily basis, and when they do -- well, gcc is always available.
  • by Talonius ( 97106 ) on Thursday May 09, 2002 @04:50PM (#3492977)
    More support from mainstream companies like Intel means more recognizable brand names associated with Linux which means more "reputation."

    I still use gcc just because everything I have (theKompany's KSG, for instance) is already set up for gcc. :)

    Maybe I'll play with the Intel compiler though I do nothing that "intensive." Most of my stuff waits on the user.

  • From the article:
    The options used for compiling were: [...]
    icc -O3 -axK -ipo .
    Article (in German) [] describing the bug in version 5.5 of the compiler. In short: when using the "inter-procedural optimization" option, the compiler would sometimes generate faulty code; as an example, they cite POV-Ray rendering tinted images.
  • Kudos to GCC (Score:5, Interesting)

    by zulux ( 112259 ) on Thursday May 09, 2002 @05:16PM (#3493161) Homepage Journal
    Given that GCC is cross-platform to the extreme, I'm very impressed with GCC's ability to hold up well to Intel's finest. Add in GCC's different front-ends for other languages, and it gets even more impressive.

    Personally, for initial development of cross-platform stuff, I actually use Borland's C++ Builder compiler and linker. It produces slow code, but it's amazingly fast at compiling and linking. The debug-and-compile cycle goes so much faster that I get more work done than with Emacs and GCC. After the code runs well on Windows, I move on to testing with GCC on other platforms.

  • by leastsquares ( 39359 ) on Thursday May 09, 2002 @05:23PM (#3493206) Homepage
    Often pure speed isn't a crucial requirement. (With my code speed _is_ critical, but that's beside the point.) However, this isn't the only advantage of icc.

    Icc also:
    * Supports OpenMP.
    * Has a great debugger for multithreaded code.
    * Has handy profiling and optimisation tools.
    * Is highly standards compliant.

    Granted, gcc wins hands down on portability though.
  • Please (Score:4, Insightful)

    by photon317 ( 208409 ) on Thursday May 09, 2002 @05:43PM (#3493360)

    Please people, don't go making patches and whatnot for all the standard linux distro package source code to get everyone compiling on this Intel compiler. For normal system services, the performance gain isn't worth the loss of the highly multiplatform and highly GPL gcc we have today, and it would be a shame for gcc to fall by the wayside in common use (probably 80% of gcc compilations are x86 linux) because of small performance gains you won't really notice.

    The Intel compiler sounds pretty damn good, but it should really be left for those one-off needs (like compiling a custom scientific application for your beowulf cluster), not general use.
    • On the contrary, I would love it if Gentoo or Slackware were designed to use icc instead of gcc. Aside from the performance gain of the compiled code (which, as you said, may be negligible in most situations), the actual time it takes to compile is much, MUCH shorter with icc than with gcc. Especially with C++ code (KDE/Qt, anyone?).

      I do love having gcc available everywhere, but I also wish they would spend some time optimizing actual compile time (maybe only when code-output optimizations are turned off?). Having the compile time for very large projects be 30 *minutes* shorter with icc is a real productivity boost.
      • Are you sure about that? In almost all tests I've seen, the Intel compiler takes longer to compile something, often several times longer.
    • Yes, but it would be nice if all applications could compile with gcc _and_ icc (maybe have the ./configure script automatically detect if it's available...).

      All the code I personally develop will compile on both from now on, since icc is said to be binary compatible with the pending gcc-3.1 and its compile time is far better than gcc's!

      The simple addition of a #include to my sources dramatically decreases compile time... Using an 'old' AMD K6-2, I'm very happy to save 50% of the time!

      And from time to time, try compiling with gcc to see if it still works...
      • Re:Please (Score:2, Insightful)

        by Bert64 ( 520050 )
        Not just icc, but also SGI cc, Sun cc, and Compaq cc. x86 is the architecture gcc optimizes for best, and look how much faster icc is in a lot of areas.

        All the code I write is tested on Solaris, IRIX, Tru64, Linux, FreeBSD, and NetBSD, using the vendor cc whenever possible. Performance gains of 600% are possible in some circumstances, but in almost every case the vendor cc produces noticeably faster code.

        People are complaining about Microsoft embracing and extending, yet the GNU folks are doing EXACTLY the same with gcc, glibc, and to a lesser extent some other tools.
        gcc provides a number of new extensions which aren't present in any other compiler, and it's also far more tolerant of code errors than other compilers (remember MSIE?).
        This causes coders to use these nonstandard extensions and write code which REQUIRES gcc; consequently users are forced to install it and put up with inferior output. Microsoft couldn't have done it better themselves.
        glibc is equally bad: there have been several security holes, the performance is seriously lacking compared to the old libc5 on older Linux distros, and there's a high level of incompatibility requiring a lot of code to be rewritten. Then of course we have the embrace-and-extend features: new features present ONLY in glibc, which pull users in and try to keep them there.
        Why couldn't these extensions be implemented as a separate library? For the same reason MSIE doesn't exist for Linux: there would be no way to force users to use your other products.

        Now in many ways having a free cross-platform compiler is a good thing, just like Microsoft software, but it should be a CHOICE, not something you are forced to use by sloppily written code and proprietary/nonstandard extensions.
        Besides, is it not the Unix way to have one tool for one job, and do it WELL? How does a compiler for at least 4 languages, capable of running on 10+ architectures, fit with this?

        Ask any experienced Sun, DEC, or SGI admin, or go on IRC and join #sgi or #solaris; they will tell you exactly the same as me... gcc is crippling performance, ESPECIALLY on non-x86 architectures.
    • Re:Please (Score:4, Insightful)

      by Permission Denied ( 551645 ) on Thursday May 09, 2002 @09:05PM (#3494226) Journal
      I don't particularly care for one compiler over the other - almost all of my code runs fine under both (I've only recently started playing with icc). The only compiler-dependent thing I've ever used was gcc's inline assembly (which is extremely useful when you need it), but that was only for one project.

      I personally like using more than one compiler/library set just because it makes my code more portable, and it catches dumb mistakes. I usually develop on FreeBSD with gcc, but occasionally I'll compile on a Linux/glibc or Solaris box (using Sun's cc, not gcc), just to make sure I'm not including the wrong header or something.

      I have this old Sparcstation 4 running SunOS 4.1. I don't use it for anything vital, but I just like playing with it for nostalgia purposes. Lots of stuff won't compile on it, even using gcc. People sometimes just compile some software for Linux and assume it will work everywhere - I kind of feel sorry for those that have to use HP/UX or AIX, as I know those guys are going to have all kinds of problems.

      In short, I agree that the minimal performance gain isn't useful, but portability is important.

    • The difference isn't as small as you say. We saw an immediate 15% gain in our production C code, and that was on a Pentium 3; the Pentium 4 test machine gained quite a bit more from ICC. In certain cases, others have claimed two or three times the performance for their respective programs.

      I believe in 'may the best man win.' Intel's compiler is certainly worth buying, and if your production code needs the speed boost to be more competitive, then there is no choice: it must be done the best way.

      This isn't about licensing, who makes it, or whether it's open; it's about a meritocracy voting on the best way to get performance on a given platform.

      I would love for Gentoo to allow the use of ICC to compile the whole distribution. It may not be possible for certain things, but I'd like to see it done.

      • By all means use icc for your production code. What I'm anti-advocating is retrofitting standard Linux distributions to compile with icc, because I feel that would draw large hordes away from the gcc userbase, causing it to suffer. As stated elsewhere, the fact that gcc even comes close is remarkable, considering the age of the compiler and the fact that it supports a gadzillion different platforms and several different language front-ends to the same code-generating back-end. icc just supports one language family on one architecture, and happens to be written by the same company that designed the cpu for that architecture :)

        For that matter, thinking about this icc thing has brought me around to another point of view as well: since C is such a ubiquitous language, especially for systems software and kernels, a chip vendor providing a free compiler should almost be mandatory, much like a video card manufacturer providing a free driver. Kinda nice that Intel does seem to agree with this way of thinking. Of course I'd be 10 times happier if Intel would GPL icc and let the more robust gcc suck up their x86 optimizations. Other commercial companies have donated significant R&D effort and optimization code to gcc; why not Intel?

        • I think there is a complex dance going on between Intel, Microsoft, and being more supportive of OSS endeavors. Obviously, with AMD's shocking support of Redmond a few weeks back in the hearings between the 9 states and MSFT, in addition to MSFT's Opteron support, one would think that embracing the x86 OSS/GNU/*BSD community would be favorable at this point.

          In addition to the C and C++ front-end, I do believe Intel supports Fortran with another compiler. They should round out front-end support with Ada and Objective-C; other than that, I have no complaints.

          In addition to that, the license is rather ridiculous, particularly in reference to the runtime restrictions, which seem to indicate that it's very permissive and free for IA-64 but not for IA-32. This internal self-competition is ridiculous; Itanic is ridiculous.

          As far as sharing the intellect behind optimizing ICC: I'm sure Intel doesn't make much off of ICC for either Win or Lin, and probably several orders of magnitude less on the Lin compiler than the Win stuff, so opening up the Lin stuff wouldn't hurt them financially. The only problem is, ICC makes AMD faster too. If the secret sauce were revealed, maybe the GCC team would end up modifying it such that AMD chips do better because of it.

          Who knows. Opening up for OSS/*BSD/Lin makes more sense, and companies have to find that out the hard way. It's all a credibility issue.

          If I was Intel, I would try and make GCC as fast and nasty as possible to combat the VisualIdiot.NET stuff - every little bit counts. Intel will make bread anyway the cookie crumbles.
          • Let's ignore for a moment that Intel doesn't mind making money with the compiler (and the other SW development tools like VTune). Most of the speed-up of the compiler does not come from processor-specific optimizations but from general ones (as can be seen by the speed-up for Athlons). So if Intel opened the source, they'd lose their edge over AMD or even other ISAs.
            • I'd say then they owe it to x86 to open up, but they can't, for fiscal reasons. It's sad.

              If x86 was more aggressively challenged by other architectures, we may see the tides change. They've been in the driver's seat now for some time.

              I'm 'open' to new things, anytime. ;p

            • It's true about AMDs speeding up, but what I meant originally by "processor specific" still does apply. AMD is an IA32 clone of sorts and it's just that AMD and Intel x86 are close enough that there are general x86 techniques that help both. If icc were ever ported to MIPS or something, I would expect much of the speed gains they had in x86-land would have to be re-engineered from scratch.

              In contrast, while gcc has processor-specific optimizations for a number of targets, they have also concentrated on deeply generic optimizations in parts of the code generation that are common to all languages and platforms. Some of the code in gcc has a certain symmetry with very important and often-ill-defined central CS concepts because of this.
    • Re:Please (Score:4, Interesting)

      by 0x0d0a ( 568518 ) on Friday May 10, 2002 @01:18AM (#3494994) Journal
      Actually, the Linux kernel has lots of bugs with regards to being correct C code. If you port it to another compiler, you're going to fix a lot of bugs. Making Linux more portable is likely to clean up a lot of issues.

        • But it's also my understanding that the Linux kernel code likes to have more control over the resulting binary than ANSI C really dictates, and that they use lots of trickery specific to gcc (and even to certain revisions of gcc) to get things done. Seems to me that supporting all this trickery in another compiler would lead to even more #ifdefs and bloat and bugginess.
          • One of the bug reports for gcc against the Linux kernel was that strcpy(string,"a"+0x40000) was being optimized into string[0]="", which is a legal optimization in ISO C (a.k.a. ANSI C) with -O2.
            Even Linus got involved, but he says that Linux needs to compile with -O2, which is the wrong requirement for any program. So Linux is broken in terms of compliance with ISO C, and it also depends on some gcc extensions.

            Also, gcc 3.0 started the deprecation of string literals that wrap over more than one line, which was an undocumented extension that made the compiler non-compliant. The Linux source code used this a lot, so all of those uses need to be fixed before it can build with gcc 3.0 and above.
    • Re:Please (Score:2, Interesting)

      by Bert64 ( 520050 )
      The performance gains on x86 are far from small; they're quite noticeable, especially on the P4, and if the whole system were recompiled using this you would DEFINITELY notice.
      But just try benchmarking gcc against the Sun Workshop compiler on Sparc, or the Compaq C compiler on Alpha, or even MIPSpro on MIPS; the gap between gcc and these compilers is MUCH bigger than icc's.
      If code were written cleanly (i.e. not using ANY nonstandard extensions present in compilers, including gcc) then this wouldn't be an issue; people would just be able to go ahead and compile using the compiler of their choice. Many programs do in fact compile without gcc, and result in huge performance gains.

      gcc is to C compilers what msie is to browsers - commonly used, and easier to code for - while making your code incompatible with others, and yet inferior in MANY ways.
  • by Anonymous Coward
    Why would function inlining slow performance? Shouldn't the removal of function-calling overhead (pushing and popping of params, jumps, etc.) speed up execution in all cases? I can see how it would fatten up the object code, but I can't see how it would slow it down.
    • by V. Mole ( 9567 ) on Friday May 10, 2002 @11:06AM (#3496820) Homepage

      What can happen with inlining is that the code size expands to be larger than the available CPU cache. (Re-)loading code from main memory (or a slower cache, for CPUs with multi-level caches) is slower than a few register updates. You can see the same effect with loop unrolling, although the way CPU caches have grown means it's becoming less of a factor.

      Note in particular the article's observation that "inlining doesn't show any slowdown" is exactly that: an observation, applicable to that particular processor. A serious project would have to benchmark their particular application on their particular target machine to determine the best choice of compiler options.

  • Question? (Score:3, Interesting)

    by chfleming ( 556136 ) <chfleming AT home DOT com> on Thursday May 09, 2002 @11:13PM (#3494579)
    These two compilers both use glibc, right?

    Intel's compiler beat gcc badly on the Monte Carlo and MazeBench (w/o image saves) tests. Both of these apps use rand(), and there are multitudes of different algorithms for random numbers.

    Perhaps the Intel compiler is using its own algorithm for rand() that cuts corners?
  • Why use -fast-math? (Score:5, Interesting)

    by V. Mole ( 9567 ) on Friday May 10, 2002 @11:24AM (#3496949) Homepage

    While there are some uses for it, I doubt that any serious floating-point codes would use "fast-math" (shorthand for "not-quite-right-math"). IEEE math is not perfect, but it allows one to estimate and control error accumulation reliably. The correct response to discovering that ICC defaults to fast-math is not to enable it in GCC, but disable it in ICC.

    I've no idea whether it changes the relative result of the benchmarks, but at least they'd be representative of actual use. (Or run them both ways, actually, to see which compiler is "cheating" more :-).

    • There's a lot of code out there that uses -ffast-math. Perhaps the scientific simulations don't, but the kind of code that normal people actually use does.

      Think of mp3 decoding, divx playing, 3d games, etc...

      • ...but the kind of code that normal people actually use does.

        Hmmm, I guess I'm abnormal...I'd always suspected as much.

        I had thought of games, but aren't there fixed point algorithms for MP3 and DIVX that are faster than floating point? Or is it primarily that the fixed point algorithms are faster on machines w/o floating point hardware, but not in general?

      • by sasami ( 158671 )
        Think of mp3 decoding, divx playing, 3d games, etc...
        No, SSE or 3DNow is a much better choice than -ffastmath for that kind of thing. I'm pretty sure Winamp supports both instruction sets, and I'd be very surprised if the compute-heavy DivX didn't employ them, too.

        You might give up precision when using SIMD (on x86 anyway) but at least you can control that. Using -ffastmath is much more of a toss-up. Case in point, LAME has been observed to produce seriously different results when compiled with -ffastmath.

        Dum de dum.
    • That is why gcc is changing the option name to -funsafe-math-optimizations.
    • If the code isn't written with IEEE math in mind (and I suspect the vast majority of floating-point code is written by people who are not completely sure of the difference between real numbers and floating-point numbers), then why not use -ffast-math?
      • A lot of C code is written by people who are not completely sure of the difference between C and Visual Basic; that doesn't justify benchmarking C compilers with --skip-boring-loops enabled.

        If the code isn't written with IEEE math in mind (and I suspect the vast majority of floating-point code is written by people who are not completely sure of the difference between real numbers and floating-point numbers), then why not use -ffast-math?

        A person who understands completely how floating-point numbers work can handle whatever imprecision is tossed at them. People who don't know how floating-point numbers work, in order to get usable results, are going to need every bit of precision the computer can give them; they don't need fast-math burning all their precision for speed. If all you need is a meaningless number, there are quicker algorithms. If you need maximum speed and meaningful results, then you had better learn how to use your tools correctly.
        • It is not my impression that -ffast-math is a question of precision; it is a question of what the compiler guarantees. It may very well lead to higher precision, for example by skipping precision-losing intermediate steps. However, if the code was written with IEEE in mind, those steps may very well be necessary for a correct result.
  • Official Addendum (Score:5, Informative)

    by ChaoticCoyote ( 195677 ) on Friday May 10, 2002 @02:24PM (#3498264) Homepage

    At least the site stayed running with the spike in hits...

    I'm putting together some "large" benchmarks for the "rematch" when gcc 3.1 hits reality next week. The problem with most "real world" programs is that they're interactive or I/O bound, masking the code generator's abilities.

    I need to be clear about one thing: Anyone who tosses out gcc over this review is a fool. Intel C++ is a good product for very specific applications, but it does not replace gcc. All the benchmarks show is that Intel C++ can provide a performance boost for certain classes of computationally-bound programs. For some of my scientific work, Intel kicks ass in comparison to gcc... for some other projects, gcc comes out on top. What's important is choice and competition, which fuel evolution...

    I urge people to read the entire article before making any assumptions about my goals or the results.

    • "What's important is choice and competition, which fuel evolution..."

      HEAR, HEAR!

      So code should stop using the nonstandard extensions present in gcc, and should definitely stop checking for gcc in the configure script and then exiting if it finds something else. Code written FOR gcc does not give us choice; code written for portability between compilers does. Look at OpenSSL as an example: read the Configure tool, look at all the architectures and compilers it supports just fine... if OpenSSL can do it, why can't all code compile on such a wide range of compilers?
  • I haven't had a compelling need for my compiler's source code. I don't have the time to do compiler hacking when I'm trying to write code for my customers.

    This, to me, is like saying, I have no compelling reason to participate in democratic elections, so I don't vote!

    • I have no compelling reason to participate in democratic elections, so I don't vote!

      Yeah, but democracy is moving the way of the marketplace -- 1 vote per million dollars owned. :(

      Is there much point in voting if, by hook or by crook, the tools of the corporations will always end up in power? Is the choice between Tweedledum & Tweedledee really a choice at all?

  • Profiling feedback (Score:3, Informative)

    by Skuto ( 171945 ) on Friday May 10, 2002 @04:54PM (#3499275) Homepage
    They missed the most important part: profile feedback optimizations.

    This is one part where Intel C really gets way and beyond the GCC compiler. Compile first with -prof_genx, run program, recompile with -prof_use.

    The speedups are _big_. Intel C will totally kick GCC's butt with this option enabled.

    GCC can also do profiling feedback optimizations, but it is not nearly as good.

  • Under the "MazeBench" test, the comment is made: "Faster is better."

    We know faster is better. The question is, is a lower score faster or a higher one?
  • The Slashdot headline and the article headline read Intel C++ 6.0 vs GNU g++ 3.0.4, but the <TITLE> tag reads Intel C++ 5.0.1 vs GNU g++ 3.0.1... I would assume it's the newer one, but still.

  • Inlining problems (Score:3, Interesting)

    by p3d0 ( 42270 ) on Sunday May 12, 2002 @10:55AM (#3505793)
    I compared icc to gcc on a recent project, and came to two conclusions that surprised me. The first is that icc can understand gcc-oriented code very well, including the asm syntax. I was pleasantly surprised at how easy it was to switch from gcc to icc.

    The second conclusion is that gcc is better at massive inlining. The coding style I used on this project was to make heavy use of inline functions instead of macros. Often, to get decent code to be generated would require a few dozen functions to be inlined into each other, and the results to be attacked by -O3. Afterward, these things would produce small, fairly tight code sequences of only a dozen or so instructions.

    When I switched to icc, I noticed an immediate tenfold decrease in performance. The culprit: lack of inlining. icc has a number of strict requirements for functions that are to be inlined, and most of my functions broke at least one of these rules. (For instance, ironically enough, it can't inline functions that contain asm directives.) For some of them, I couldn't tell what rule was being broken; I could only see that the function wasn't inlined. Furthermore, icc essentially ignores the "inline" directive, so there was nothing I could do about it. By contrast, gcc obeys "inline" unless that is totally impossible.

    Granted, the optimizations that gcc does after inlining are less than ideal, but people are working on that, and I gather that the 3.x releases are supposed to be much better than the 2.x. Anyway, that was a price I was willing to pay in order to use inline functions instead of macros.
