Intel

Intel C/C++ Compiler 8.0 Released

Peorth writes "Intel has released version 8.0 of their Intel C/C++ compiler for both Windows and Linux. This release has long been rumored to offer 100% GCC source and binary compatibility. It seems great strides have been made toward that goal, as well as in performance, but it may have a long way to go yet. Has anyone had experiences with it yet, either good or bad?"
  • by Anonymous Coward

    Has anyone had experiences with it yet, either good or bad?

    No, but I heard about it on this "nerd news" website that I frequent.

  • If that's true, then it's very important for Intel to have a compiler that can compile the fastest Linux kernel, and to be able to link other software that runs on Linux.

    A very good way to show the world that Linux is more mainstream every day.

    I see no other reason to make the compiler binary-compatible with GCC. (Yes, BSD benefits too.)
  • by gl4ss ( 559668 ) on Sunday December 14, 2003 @08:14PM (#7721062) Homepage Journal
    yes this is for you gentoo folk, does it work as just a drop-in replacement? benefits?

    • I've not tried it, but there is a USE flag to use icc instead of gcc if you have it, so it's supposed to be drop-in-and-tweak-a-flag. As for benefits, who knows? I know my gentoo box feels snappier than the same box running Mandrake or RedHat, but I have a sneaking suspicion that it's only snappier 'cause it's running fewer daemons--I doubt that source-based distros really get you much more performance, but I have no numbers to back up either side of the argument--so if compiling from source with gcc does
    • Using the ebuilds posted on gentoo's bugzilla, I can NOT use icc 8 as a drop-in replacement. Attempting to compile 2.6.0-test11 results in screens of errors after executing 'make bzImage'. If this was supposed to be able to compile the Linux kernel or be a drop-in replacement, it fails. Horribly.
  • okay. (Score:1, Troll)

    by /dev/trash ( 182850 )
    But it's not free. Why would I want a compiler that isn't free?
    • Because... (Score:4, Informative)

      by DAldredge ( 2353 ) <SlashdotEmail@GMail.Com> on Sunday December 14, 2003 @08:37PM (#7721193) Journal
      Because in some/most cases it makes faster code, and saving 10% of execution time is worth the sub-900 USD price of the Intel C compiler.
      • And as an added bonus, if you give a copy to a friend, you are a criminal! cool. or something.
        • coriordan, please stop trying to convert the unwashed masses here at slashdot and lwn. Anybody who is at either site is well aware of GNU and their "philosophical" positions and has heard these regurgitated arguments before, and if we aren't members of the GNU/Cult it's because we've examined their positions and found Stallman and Co. to be frothing-at-the-mouth crazy. Your naive, ignorant, and overconfident opinions aren't the gospel you take them to be. Your sermons at LWN expounding the GPL and the GFDL
          • well aware of GNU and their "philosophical" positions
            I mentioned neither "GNU" nor any "philosophical positions".
            Staying out of jail, and knowing what a compiler is doing, are *practical* arguments against using Intel's compiler.

            expounding the GPL and the GFDL
            I denounce the GFDL.

            So you don't approve of its being protected by regular copyright law
            GPL uses regular copyright law.

            sit down and shut up
            Challenge, rebut, argue, but don't tell a person not to speak. (and I *am* sitting down.)
            • If someone has absolutely nothing worthwhile to say, it's worth telling him not to speak, at the very least as a service to him- helping get him to do better things with his time. The GFDL comment was just because I happened to see you defending Stallman and co regarding it on lwn once; I didn't read through everything you said in that discussion because I didn't have a vomit bag handy, and thus did not notice you denouncing the GFDL. I have no argument with the GPL and the way it utilizes copyright law (t
            • Staying out of jail, and knowing what a compiler is doing, are *practical* arguments against using Intels compiler.

              How is staying out of jail an argument against using Intel's compiler? It's only an argument against using ICC illegally. Same thing goes for distributing GCC under an incompatible license.

              As for "knowing what your compiler is doing", you never made this argument in your original post. Even so, I assume you're getting at one of three things:

              • You don't trust Intel enough to run t
              • > How is staying out of jail an argument against using Intel's compiler?

                If you have a copy of icc, and a friend or family member asks for a copy, what do you do?
                You could say "No, I promised Intel that I wouldn't give a copy to you or anyone else". Or you could break the promise to Intel and help your friend. I think the latter is the lesser of two evils, but it's easier to simply avoid this dilemma by not making that promise to Intel in the first place.

                > As for "knowing what your compiler is doing
      • I guess. For the average programmer, it's not worth it.
    • Go look at the Dell SPEC numbers in Apple's G5 advertisements. Now go look at the official Dell SPEC numbers. IIRC the SPEC numbers dropped over 20% by moving from Intel's compiler to GCC. YMMV. Not a slam against GCC. Ubiquity is its goal, not best performance on all platforms. GCC is also handicapped in that it can't use proprietary techniques Intel can easily afford to license.
  • by Sparr0 ( 451780 )
    so, if it's 100% compatible with GCC source, and produces 100% compatible binaries... doesn't that make it GCC?
  • kernel (Score:5, Interesting)

    by portscan ( 140282 ) on Sunday December 14, 2003 @09:08PM (#7721395)
    My first thought was, "does this mean it can finally compile the Linux kernel?" But the website says "with a few kernel modifications, [icc] can build the kernel." Since gcc can compile it without modifications, doesn't this mean they are not 100% compatible? Also, there is no link to these patches anywhere, just this article [intel.com] on icc 7. Do you have to figure out the problems and fix them yourself?

    Obviously there is other software to compile besides the Linux kernel, but since icc is so tuned to Intel hardware, and Linux interacts so directly with the hardware, people believe that icc would give great benefits to the kernel. At the very least, nothing can claim 100% gcc compatibility unless it can compile Linux unmodified.
    • Re:kernel (Score:1, Informative)

      by Anonymous Coward
      Read the article. Prior to this version there was speculation that 100% compatibility would be achieved. Clearly that is not the case, though great strides have been made.
    • Re:kernel (Score:5, Insightful)

      by jarek ( 2469 ) on Monday December 15, 2003 @06:33AM (#7723580)
      To tell the truth, not even all gcc versions are compatible with the specific version of gcc that is currently supposed to compile the kernel. Gcc compatibility is a moving target, and kernel developers do not switch to the latest gcc version as soon as it appears. Examples of this are kgcc vs gcc in some distributions. Unless icc becomes the official compiler for the linux kernel, I doubt it will ever compile the kernel in a predictable way (whatever predictable means in this case). /jarek
  • > Has anyone had experiences with it yet, either good or bad?

    I went to the website and was told that I wasn't allowed to have a copy unless I paid them money and promised to prevent others from copying my copy.

    They've also denied my request for a copy of the source code. Apparently, I'm not allowed to know what my copy is doing when it's compiling my code.

    There were many other restrictions. Overall, a pretty bad experience :-(
    • I'm not allowed to know what my copy is doing

      An AC above pointed out that Intel are part of the Trusted Computing group. This all reminds me of Ken Thompson's compiler trojan [acm.org] (where he hacked a C compiler to add a backdoor whenever it compiled "login").

      So, what might icc add to the security functions of glibc? to gnupg, sshd, lsh?

      In a way, the idea of using a proprietary compiler is very similar to that of proprietary voting software.
      • by 0x0d0a ( 568518 ) on Sunday December 14, 2003 @11:35PM (#7722206) Journal
        An AC above pointed out that Intel are part of the Trusted Computing group. This all reminds me of Ken Thompson's compiler trojan (where he hacked a C compiler to add a backdoor whenever it compiled "login").

        So, what might icc add to the security functions of glibc? to gnupg, sshd, lsh?


        You're reaching pretty far with this argument. Intel is a damned large company with a lot of groups working on things and a lot of different opinions and people. They don't have to have a secret, nasty, ulterior motive, even if one group is working on something you don't like.

        You want to be paranoid about Intel? Give up -- they control the CPU. They could trojan you much more easily via the processor -- no reason to dick around with the compiler.

        Plus, look at the Trusted Computing Group membership list [trustedcom...ggroup.org]. Do you distrust all products from all of these companies?

        Let's see:

        * ARM is on there. You better avoid any embedded devices. They might be trojaned. Or using any devices in your system (drives, add-in cards) that have ARMs onboard.

        * ATI and NVidia are on there. Video cards are clearly out -- there are numerous standards that will let video cards push code to the processor, plus cards tend to have pretty much unstopped access to memory.

        * Fujitsu is on there. You want a trojan, a hard disk controller is a damned sweet place to put it.

        * Philips is on there. I hope you don't rely on CDs for anything. Who knows what they put in their reference CD drive controller code?

        * RSA is in there. A damned large number of companies license their prewritten libraries (and binary copies of the thing, as well). I hope you've never run Netscape Navigator 4.x, because if you did, RSA could be controlling your system, modifying binaries, etc.

        * Phoenix is on there. Boy, I hope you don't trust your BIOS for anything. You *are* using LinuxBIOS on a *completely* open-spec'd motherboard, right?

        Point is, trying to distrust huge companies because one small component of the company does something you dislike is simply a futile task. Maybe one day you can use all open-source and viewable software, but it isn't going to be in the next decade -- keep in mind all that controller hardware with unbounded privileges to all the goodies on your computer.

        Don't get me wrong. I like open source. I write open source. However, being irrationally fanatical about it is both stupid and counterproductive, and doesn't do diddly for the open source movement.
        • Well to be honest, no I don't trust them, not at all. Being trustworthy has nothing to do with the main goals of being a company. Raising the stock price is what companies do, that's about it. Fortunately I'm too small of a fish, far, far away from the likes of them. I'm relatively cautious about my internet connection and what I install on my home PC. But I think all of this is beside the point. The parent was simply engaging in a mind game, the same as Ken Thompson was. These can be useful occasion
        • You want to be paranoid about Intel? Give up -- they control the CPU. They could trojan you much more easily via the proecessor -- no reason to dick around with the compiler.

          What if you run an AMD?
      • i seem to remember that even non-open-source programs can still be run through a decompiler (albeit not necessarily legally, thanks to the shrink-wrap license.) and sheesh, some C people can read assembly code just as easily as if it -were- C. it's practically the same thing.

        sure, it's not as easy or convenient. and modifying it to do what you like may be a pain. but ... a program is a program (and never a true black box.) you can always find out what it's doing, eventually.

        besides, have you never seen p
        • by RML ( 135014 )
          i seem to remember that even non-open-source programs can still be run through a decompiler (albeit not necessarily legally, thanks to the shrink-wrap license.)

          Sadly, there don't seem to be any good free (either speech or beer) general-purpose decompilers. There are several for Java, but Java is easier to decompile because programs carry extra information for verification purposes.

          and sheesh, some C people can read assembly code just as easily as if it -were- C. it's practically the same thing.

          Depends
        • So you volunteer to check a decompiled ICC-compiled kernel for a trapdoor that Intel may have hidden? ;-)
    • Actually, just to note, there is (as there has been since 7.0 or so) a non-commercial version of the Linux compiler. You're not "allowed" to compile commercial stuff with it, but $0 is still cheaper than paying for 8.0 if you don't go for that sort of thing or just want to see how well it'll work with your libraries. (not to sound like an Intel saleswoman, gah)
  • What's the big deal? (Score:3, Informative)

    by RML ( 135014 ) on Sunday December 14, 2003 @09:44PM (#7721597)
    I'm looking at the feature summary [intel.com] and I don't understand the big deal.
    • ICC supports two-pass compilation. So does GCC.
    • ICC supports data prefetching. So does GCC (see the sketch after this comment).
    • ICC can do code coverage testing. So can GCC.
    • ICC can do interprocedural optimizations. Released GCC versions can't, but work is in progress.
    • ICC can do automatic vectorization, GCC can't. Advantage ICC.
    • ICC supports something called "processor dispatch". I'm not even sure what that is.
    • ICC supports a number of optimizations that might be interesting if you happen to have an Itanium 2 sitting around.
    • ICC supports parallel programming, GCC doesn't (not very well anyways). Advantage ICC.
    • ICC's debugger supports "GDB syntax and a subset of GDB commands". Why not just use GDB?

    Overall, it's probably not worth using unless you really need a compiler that generates fast code.
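
    As an aside, since prefetching comes up in the list above: here is a rough sketch of what manual data prefetching looks like with GCC's __builtin_prefetch builtin (available since gcc 3.1). This is my own illustration, not anything from Intel's docs; the struct and function are made up:

    #include <stddef.h>

    struct node {
        struct node *next;
        int payload;
    };

    long walk(struct node *head)
    {
        long total = 0;
        struct node *p;

        for (p = head; p != NULL; p = p->next) {
            /* Request the next node's cache line while working on this
               one.  Second argument 0 means prefetch for read; the third
               is a temporal-locality hint from 0 to 3. */
            if (p->next != NULL)
                __builtin_prefetch(p->next, 0, 1);
            total += p->payload;
        }
        return total;
    }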
    • by stevesliva ( 648202 ) on Sunday December 14, 2003 @10:17PM (#7721769) Journal
      Overall, it's probably not worth using unless you really need a compiler that generates fast code.
      Yeah! Who needs fast code anyways?? Real C coders use only "#pragma I'll do it my goddamn self!" statements and asm{} blocks! Real men don't need any stinkin' optimizing compilers other than their own beautiful minds.
      • Theoretically, an optimizer would transform object code into something that always produces the same set of output for any given set of input, but it would do it faster than the un-optimized code. Since the direct correlation of objects in the object code and descriptions in the source code is broken, debuggers will often fail to work correctly on optimized code. Even if it worked perfectly all the time, there is no guarantee that optimized code will always be faster than unoptimized code. If the source co

        • The question I want to ask: why would you intentionally write source code whose object code can be significantly improved by optimizations?

          Let's consider a very basic example:

          {
              int i, j[1000];

              for (i = 0; i < 1000; i++)
                  j[i] = foo();
          }

          If you read old C textbooks, they'll actually tell you to write the above code as something like

          /* Fill an array with the results of 1000 calls to foo() */
          {
              int *i, j[1000], *endptr;

              i = &j[0];
              endptr = i + 1000;
              while (i < endptr)
                  *i++ = foo();
          }

          Why? Beca

            • I'm not sure that this is necessarily an optimization like I'm referring to. I'm not exactly sure what gcc does for instance, but I would expect sizeof(int) to be calculated and stored for easy reference when the program block or function providing scope is pushed onto the stack. I would consider recomputing sizeof(int) every iteration to be foolish. Is it even legal to change the type of a variable? What's the point of declaring a variable type before you use it if you need to do sizeof(type) every time y

              • The pointer arithmetic example just illuminates why there's a distinction between a high-level language and assembly language. A high-level language should allow you to not have to pick out hidden operations and resequence instructions to use accumulators, registers, memory or cycles efficiently. A basic and correct compiler could do a load and a store to/from memory every time you do an i++ rather than keeping the loop variable in a register. But recognizing that a loop condition variable shouldn't ha
                • I can't help but point out that the ridiculous amount of cache on processors these days can obviate concerns over the load-store-load thing, and processors (rather, memory controller chipsets) will always optimize that sort of thing. Granted: fetching from cache is not as fast as direct operation on a register, but how many loops are tight enough to execute in fewer cycles than a fetch from (L1 or L2) cache? Now we're getting into "costing" operations where it pays to do architecture comparisons because of t

      • For a lot of programs the extra speed really is not needed. How much faster do you need ls to list your files?
        Of course faster is good, but sometimes trading a little speed for smaller code is a valid trade-off.
        Speed is not everything; sometimes portability is of high value.
        The best is of course small, fast, and portable.

        I look forward to playing with icc. I really want to see if it can boost KDE and/or the kernel. Okay gcc, time to step up to the plate :)
    • "What's the big deal? ...not worth using unless you really need a compiler that generates fast code."

      Is this some sort of cruel joke? Do you like running slow code?
      • Do you like paying $400 for a couple msecs? Hell, add it up over your program's lifespan and you might get a week or two if you're lucky.
      • Is this some sort of cruel joke? Do you like running slow code?

        The number of programs where a speedup of 5% or 10% makes a big difference is very small. Most programs are IO-bound or memory-bound anyways. For a huge number-crunching application that is going to be running for a week or two, though, a 10% speedup makes a big difference.

          • Garbage, you've never worked on a commercial enterprise-scale product. A 10% improvement in performance can make a significant difference to performance targets for an average software product, saving a reasonable amount of development time, effort and cost.
            Garbage, you've never worked on a commercial enterprise-scale product.

            True enough, I've mainly done scientific programming. The project I work on is not performance-intensive, while the guy across the hall does heavy number crunching (and still uses gcc).

            A 10% improvement in performance can make a significant difference to performance targets for an average software product, saving a reasonable amount of development time, effort and cost.

            Sorry, I've been thinking of compiling open-source and in-house soft
    • processor dispatch (Score:5, Informative)

      by DAldredge ( 2353 ) <SlashdotEmail@GMail.Com> on Sunday December 14, 2003 @11:00PM (#7722002) Journal
      Processor dispatch allows the compiler to generate multiple optimized code paths and dynamically select which optimized version of the routine to use based on the processor that the program is running on. This allows a single executable to run with SSE/SSE2 support on the P4 and still run on processors that do not support SSE/SSE2.

      I do not know what happens when the app is run on AMD processors that support SSE/SSE2.
      • Processor dispatch allows the compiler to generate multiple optimized code paths and dynamically select which optimized version of the routine to use ...

        Interesting. I wonder how hard it would be to implement something like that for gcc.
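
        For the curious, the basic idea can be hand-rolled in plain C: probe the CPU once at startup, then route calls through a function pointer. This is just a sketch of the general technique (the asm is x86- and gcc-specific, and it's not how icc actually implements its dispatch):

        #include <stdio.h>

        /* One implementation per code path. */
        static float sum_scalar(const float *a, int n)
        {
            float s = 0.0f;
            int i;
            for (i = 0; i < n; i++)
                s += a[i];
            return s;
        }

        static float sum_sse(const float *a, int n)
        {
            /* An SSE-tuned loop would live here; this sketch just falls
               back to the scalar code to stay short and runnable. */
            return sum_scalar(a, n);
        }

        /* CPUID leaf 1 reports SSE support in bit 25 of EDX. */
        static int cpu_has_sse(void)
        {
            unsigned int eax, ebx, ecx, edx;
            __asm__ volatile("cpuid"
                             : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
                             : "a"(1));
            return (edx >> 25) & 1;
        }

        /* The dispatcher: one function pointer, chosen once at startup. */
        static float (*sum)(const float *, int);

        int main(void)
        {
            float data[4] = { 1.0f, 2.0f, 3.0f, 4.0f };

            sum = cpu_has_sse() ? sum_sse : sum_scalar;
            printf("sum = %g\n", sum(data, 4));
            return 0;
        }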
    • by Screaming Lunatic ( 526975 ) on Sunday December 14, 2003 @11:01PM (#7722006) Homepage
      Others have mentioned performance. But the more compilers that you put your source through, the more robust your code will become. Different compilers emit different warnings. Different compilers adhere to different parts of the standard. Putting your code through different compilers makes your code easier to port to other platforms.

      At work we use both MSVC 7.0 and ICC 7.0. We'll probably also use MSVC 7.1 for our next product cycle. And maybe Comeau or GCC in the future. At home I use GCC and ICC.

      • Where I work, the SGI Irix C compiler has been really useful at detecting programming mistakes; it produces warnings that none of the others do.

        We also use GCC on Linux, the Intel compiler (on Windows) and VC++ 6. I try to get all of them to produce no warnings, and this has detected a huge number of bugs that would not have been found if only one compiler had been used.
    • by Anonymous Coward
      Overall, it's probably not worth using unless you really need a compiler that generates fast code.

      Yeah, totally. I mean ... the whole idea of compilers generating fast code is over-hyped anyway. :)
      • > Yeah, totally. I mean ... the whole idea of compilers generating fast code is over-hyped anyway. :)

        Actually, for a few projects I've done, I'd settle for a fast compiler, since the program is so I/O bound, I could have written it in perl if memory consumption wouldn't go through the roof if I did. Python+psyco looks intriguing, but I'd rather move to lisp ... assuming I could find a decent free CL compiler for windows (not clisp, that's cygwin, and it doesn't even handle proper tail recursion)
    • idb 'kinda works' if you don't use the debug flags, but the main difference between gcc and icc, and gdb and idb, is that idb generally doesn't pick anything up unless you compile with debug flags explicitly, while gdb can at least give you intelligible "debugging" even without explicitly using -g. Though using icc itself doesn't usually make debugging -impossible- either, using -fomit-frame-pointer with gcc will quite horribly ruin your day if you're trying to debug those cross icc/gcc glitches.
  • by Anonymous Coward on Sunday December 14, 2003 @10:18PM (#7721771)
    I don't know if this applies to the newest version, but info about a non-commercial license is at:
    http://www.intel.com/software/products/compilers/clin/noncom.htm

    I compile using both gcc 3.2 and icc 7.0. I do this because different compilers emit different warnings and this has helped me to improve my software's quality.

    The Fortran and C/C++ compilers are both available, as long as you don't try to create a commercial product with them.
  • #include <humor.h>

    Hey, does anybody know how well it optimizes for the Athlon XP?
    • Re:Question (Score:5, Insightful)

      by AtrN ( 87501 ) on Sunday December 14, 2003 @11:17PM (#7722072) Homepage
      Actually, yes I do. A little while ago I was doing some micro-optimizations (i.e. not algorithmic), using gcc 3.3 and icc v7 on FreeBSD and testing the results on a number of processors available to me: Athlon XP, PIII and PIV Xeons.

      With my problem/code the Intel compiler's generated code ran faster on the Athlon XP than gcc 3.3's code using its XP switches and other "go fast" options. Using whole program optimization resulted in a program running considerably faster than the gcc 3.3 generated binary. icc is also stricter in some ways regarding syntax, and C++ gets to use the EDG parser (if it's still using it; can't see why not).

      The various posts here from people going "why bother" show a great deal of naivety. There are good reasons you want to use multiple compilers other than just the fact that icc can generate better code than gcc (in many circumstances; other tests may show the opposite result, YMMV). For starters, it's going to pick up a different set of errors. Now gcc is pretty good at producing useful warnings, a whole bunch better than Visual C++ for instance, but it isn't perfect; adding icc to your toolkit helps you find problems in your code. A more important reason, however, is to avoid the mono-culture introduced with everyone using gcc. Years ago we called it "all the world's a VAX", then it became "all the world's a Sun", now it's "all the world's Linux (with gcc)". A bit of variation (in implementation, not interface) is a good thing.

      • I was doing c/c++ development on the following platforms simultaneously:

        - Mac OS X
        - SGI Irix
        - Solaris Sparc
        - Linux x86
        - Linux PPC

        My experience was that you can get pretty much anything through the gcc x86 compiler and also run it successfully. When you move along to another platform you get Bus Error or Segmentation Faults all over the place, at runtime (the PPC is really picky).

        Using different platforms simultaneously really helped improve code quality.

        I usually code in linux/x86, but I do no
      • So icc runs on Athlon XP? Sweet.
    • AMD used version 7.0 to compile its entries for SpecInt performance [spec.org], and I'm guessing that they didn't just pick it because they thought it had a cute name.

      Compiler:
      Intel C++ 7.0 build 20021021Z
      Microsoft Visual Studio .NET 7.0.9466 (libraries)
      MicroQuill Smartheap Library 6.0
  • There's no source code. So I'm not going to use it. Well, that was the easiest decision I made all year!

    There was a time that I would put up with binary-only compilers. That was before gcc. Sure, both free and non-free compilers have bugs, but it's so much easier just to fix those compiler bugs yourself in the compiler's own source code, rather than have to craft a binary patch to fix the bug while you wait 6 months for the vendor to think about releasing the next version.
    • Actually, Intel seems decently smart about support. At least for their -paid- customers (I get an 'Access Forbidden' on their Premier support site, even though it says I should be able to get there with the free version too), there are pretty frequent minor patches that you don't notice if you just happen to check on the website once every few weeks.

      It may be easy to some to patch their version of GCC for a bug they happen to hear about, but it's not a realistic expectation for most users. GCC is pretty "s
      • > Actually, Intel seems decently smart about support.

        You actually liked their Premier support site? I could never find anything useful on it (I was a paid customer)! That and the tasteful shade of blue (#0000ff) kind of made my eyes hurt. I also couldn't get on with the password that changed every 30 days with some pretty strict selection rules enforced - all for something that just gave me patches for a product that was FlexLM licensed anyway. Ho hum. Until v6 came out, that website was my only gri
    • There was a time that I would put up with binary-only compilers. That was before gcc. Sure, both free and non-free compilers have bugs, but it's so much easier just to fix those compiler bugs yourself in the compiler's own source code, rather than have to craft a binary patch to fix the bug while you wait 6 months for the vendor to think about releasing the next version.

      So is this something you've actually done--fix errors in gcc--or something you're just spouting off about? What were the errors specifi
      • by kyz ( 225372 )
        Some gcc 2.x version (I forget which) would build 680x0 jump tables with unsigned shorts then jump as if they were signed. Thus it jumped to x-2 instead of x+fffe, if you see what I mean.

        I didn't have the internet at the time, so no, I didn't update the test suite or even tell anyone.

        I must admit that some closed source authors (e.g. Frank Wille when he was writing phxass) make a good effort to fix bugs immediately when they are told about them, but nothing's quite so satisfying as being able to say "there,
  • Real Timings (Score:4, Interesting)

    by menscher ( 597856 ) <[menscher+slashdot] [at] [uiuc.edu]> on Sunday December 14, 2003 @10:58PM (#7721995) Homepage Journal
    Since this is Slashdot, this will quickly turn into a stupid bashing of Intel in favor of gcc, since everyone likes Free stuff and hates corporations. And everyone will talk out of their asses about how the Intel compilers couldn't possibly be faster than gcc. So, I figured I'd throw out some real numbers:
    icc -static (Intel 5.2 compiler on RH7.2 box)
    199.490u 0.000s 3:19.87 99.8% 0+0k 0+0io 48pf+0w

    icc -static RH8.0-2.2.93 (Intel 7.0 compiler)
    236.860u 0.010s 3:56.96 99.9% 0+0k 0+0io 53pf+0w

    icc -static RH8.0-2.3.2 (Intel 7.0 compiler)
    253.980u 0.050s 4:15.03 99.6% 0+0k 0+0io 53pf+0w
    245.030u 0.020s 4:05.26 99.9% 0+0k 0+0io 53pf+0w

    icc -static RH8.0-2.2.93 (Intel 7.0 compiler)
    503.060u 0.770s 8:23.99 99.9% 0+0k 0+0io 58pf+0w

    icc -static RH8.0-2.3.2 (Intel 7.0 compiler)
    521.420u 0.020s 8:41.83 99.9% 0+0k 0+0io 58pf+0w

    icc -static -O0 (Intel 5.2 compiler on RH7.2 box)
    521.670u 0.000s 8:42.19 99.9% 0+0k 0+0io 53pf+0w

    gcc -O3 -static (on RH9 box)
    693.380u 0.010s 11:33.48 99.9% 0+0k 0+0io 46pf+0w

    gcc -O3 -static RH8.0-2.3.2
    728.680u 0.020s 12:09.57 99.8% 0+0k 0+0io 46pf+0w

    gcc -O3 -static (RH 7.2 box)
    731.560u 0.070s 12:12.17 99.9% 0+0k 0+0io 41pf+0w

    superior gcc -static (RH 9 box)
    856.170u 0.710s 14:18.18 99.8% 0+0k 0+0io 52pf+0w
    The notations indicate the compile options. All binaries were then run on the same machine (dual Xeon running RH9) to gather timing information.

    Now, you can take whatever you want outta that, but my view is that having your programs run three times faster just might be useful.

    Disclaimer: these results are for a specific program (dealing with computational astrophysics). Obviously your application may see other speedups.

    • Re:Real Timings (Score:3, Informative)

      by menscher ( 597856 )
      I should have mentioned: the slow timings with icc (entries 4,5,6 in the table above) were done with -O0 (optimization turned off).

      And ignore the word "superior" in the last entry. That's just an internal note that I forgot to remove... has nothing to do with the timing test.

      And for those who were wondering... the various tests comparing RH8 libraries (2.2.93 vs 2.3.2) were done because the 7.0 version of the Intel compiler did not support RedHat 9 (so we were forced to copy libraries over from a RedHa

      • I should have mentioned: the slow timings with icc (entries 4,5,6 in the table above) were done with -O0 (optimization turned off).

        I assume that with icc, -O0 doesn't really mean no optimization, it just means not to do any optimizations that take extra time. Some optimizations actually decrease compile time, or at least have no effect, because they decrease the amount of work later stages have to do.

        GCC currently interprets -O0 as meaning no optimization at all, which makes comparing the speed of gcc -O
    • It'd be awfully nice to have version numbers on this thing.
    • Hi, impressive numbers.

      Did you turn -ffast-math on with GCC? icc does that by default. On some applications it makes a significant difference.

      AFAIK gcc does not turn it on even with -O3 because it makes the application non-IEEE compliant as far as the FP math is concerned.
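
      To see why the reassociation that -ffast-math permits can change answers, here is a tiny illustration of my own (nothing from the parent post):

      #include <stdio.h>

      int main(void)
      {
          float a = 1e20f, b = -1e20f, c = 1.0f;

          /* Under strict IEEE semantics these differ: (a + b) + c is
             0 + 1 = 1, but b + c rounds back to -1e20f, so a + (b + c)
             is 0.  A compiler allowed to reassociate may silently turn
             one form into the other. */
          printf("%g\n", (a + b) + c);
          printf("%g\n", a + (b + c));
          return 0;
      }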
    • Re:Real Timings (Score:5, Informative)

      by RML ( 135014 ) on Monday December 15, 2003 @02:43AM (#7722952)
      Since this is Slashdot, this will quickly turn into a stupid bashing of Intel in favor of gcc, since everyone likes Free stuff and hates corporations.

      "Free stuff" and "corporations" are not mutually exclusive. Most of the work done on gcc is by people who are paid to work on it.

      And everyone will talk out of their asses about how the Intel compilers couldn't possibly be faster than gcc.

      There are still many interesting optimizations that gcc doesn't implement. A lot of work is being done on adding them to the tree-ssa [gnu.org] branch, which hopefully will be merged into mainline gcc for 3.5.

      So, I figured I'd throw out some real numbers:

      It sounds like you're doing floating-point intensive number crunching code, which quite honestly is where icc should give the greatest benefit over gcc. On integer workloads they should be much closer. Number crunching gets a big boost from vectorization, and icc does automatic vectorization. GCC doesn't (though work is underway), and it won't use vector instructions at all unless you supply -mmmx, -msse, -msse2, and/or an -march option that specifies an appropriate CPU type. You can still get the advantage of vectorization if you're willing to code it explicitly.
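
      To make "code it explicitly" concrete, here is a small sketch of mine (not from the parent) using the SSE intrinsics from xmmintrin.h, which gcc accepts with -msse:

      #include <xmmintrin.h>

      /* Add two float arrays four elements at a time.  Assumes n is a
         multiple of 4 and both arrays are 16-byte aligned. */
      void vec_add(float *dst, const float *a, const float *b, int n)
      {
          int i;

          for (i = 0; i < n; i += 4) {
              __m128 va = _mm_load_ps(&a[i]);
              __m128 vb = _mm_load_ps(&b[i]);
              _mm_store_ps(&dst[i], _mm_add_ps(va, vb));
          }
      }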
    • Re:Real Timings (Score:4, Interesting)

      by Elwood P Dowd ( 16933 ) <judgmentalist@gmail.com> on Monday December 15, 2003 @02:54AM (#7722980) Journal
      This might sound like a stupid question, but are you showing us executable running times or compilation times?
    • For numeric C code, vectorization (automatic or manual) will often double or quadruple performance if you use SSE, as another post has said. Other factors, such as inter-procedural optimization, gcc's lavish use of stacks, and imperfect SSE register allocation, help very little.

      In one of my programs, icc7 actually produced slower code than gcc (at -march=pentium4, maximum optimization) because the most time-consuming loop was not automatically vectorized for some reason.

      • I appreciate your comments. And it touches on one of the reasons I did the benchmarking in the first place: we noticed our new machines were running as slowly as old machines. As you can see from my original post, icc7 ran 15% slower than icc5. The new machines were 15% faster (CPU) but had a slower compiler. I'm hoping that this was fixed in the recent release of the intel compilers, but haven't had time to benchmark them yet.
  • It's great that this compiler is available. For the people who want to use open source and/or free software, there's gcc. For those who don't care about that and want good code, there's icc. And now that the gcc authors can pore over the output of icc and get some ideas on how to generate better code, we can hopefully expect future versions of gcc to close the gap a bit.
  • by sperling ( 524821 ) * on Monday December 15, 2003 @02:35AM (#7722925) Homepage
    We've tested Intel's c++ compiler for linux at work, and it's cut the full distributed rebuild time of our gameserver software from about 9 mins to 3 mins. That alone is more than enough reason to switch IMO.
    Performance-wise, it seems to have a slight edge over gcc, but this is subjective as I haven't really measured anything yet. Apart from the performance issues, I've found icc to be way more informative in its warnings and remarks than gcc. Unless you strictly believe in the GPL or are open-source only, I see no reason not to at least give it a try, it's a damned good piece of software.
    • Try using distcc, as that would reduce it even further when using multiple computers.

      Assuming that compile time is most important to you, if the code runs faster under icc then you should probably use that instead.
    • Well, ICC compiled with ICC ought to run pretty fast given its optimizations. Now, has anyone considered compiling gcc using ICC to see how much of a performance boost it gets?
  • by Steve Cox ( 207680 ) on Monday December 15, 2003 @04:54AM (#7723269)
    I used the Windows version of the Intel compiler at work for quite a while, and it does produce some exceptionally fast code (and sometimes takes an exceptionally long time doing it).

    The problem? Since version 6 came out any software we compiled with it exhibited crashes that did not occur when we used another compiler on the same code.

    In the end we had to stop using it. It's a shame really, because it was an excellent product (the only gripe being their Premier/Quad support website, which was crap).

    Steve.

    • You probably know this already, but the problem you describe could be because your code is not strictly ANSI-compliant (i.e. it makes use of undefined behaviour). So it may not necessarily be the compiler's fault.
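
      A classic illustration (mine, not the poster's code) of the kind of non-compliant code that appears to work under one compiler and crashes under another:

      #include <string.h>

      void off_by_one(void)
      {
          char buf[8];

          /* "12345678" is 8 characters plus a terminating NUL: 9 bytes
             written into an 8-byte buffer.  Depending on how a given
             compiler lays out the stack, this may appear to work,
             quietly corrupt a neighbouring variable, or crash --
             switching compilers just changes which symptom you see. */
          strcpy(buf, "12345678");
      }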
    • Have you verified that you're actually seeing compiler bugs? It's quite possible that using a different compiler exposes bugs in the code itself - bugs that just happened to be treated differently with your original compiler. It may also be an issue of standards compliance. Either way, it seems that it would be worthwhile for you to explore _why_ you get the crashes you do. It may be enlightening.
      • With 4 years of development/support/enhancements, the program is getting quite large now, so even though I have tried to keep my code standards-compliant, it is a distinct possibility that some crap has crept in somewhere :) [note - I am using Windows, but the app is not using any MFC generated garbage/C++ extensions].

        I did search for the reason behind the crashes for a while, but after a while it came down to the usual time/money/performance trade off since we did not get any such crashes using MSVC6/
    • Just one thing: just because your program crashes with icc doesn't mean the problem is necessarily in the compiler. It's often caused by errors or incorrect assumptions about the compiler. Just look at how every new gcc release breaks the kernel - not because of bugs in gcc but because of unclean things in the kernel.
  • by blackcoot ( 124938 ) on Monday December 15, 2003 @11:05AM (#7725033)
    i haven't played with 8.0 yet, but using 7.1 i managed to get substantial (>20% overall) speedups. of course, this was with ipo turned on for almost everything, which generated several megabytes worth of files per source file. i'm looking at playing with their fortran compiler, partially because it's a good excuse for me to play with fortran and partly because i'm fairly certain that it will be able to squeeze some extra speed out of key algorithms. that said, even if the executables' speeds weren't substantially different, icc has some other nice features, built in openmp stuff, etc. and, of course, it's always good to have a second compiler's opinions on things.
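
    (To unpack the "built in openmp stuff": OpenMP lets you parallelize a loop with a single pragma. A minimal sketch of my own, assuming icc's -openmp switch; gcc had no OpenMP support at the time:)

    #include <stdio.h>

    int main(void)
    {
        double sum = 0.0;
        int i;

        /* Each thread sums a chunk of the range; the reduction clause
           combines the per-thread partial sums at the end. */
        #pragma omp parallel for reduction(+:sum)
        for (i = 1; i <= 1000000; i++)
            sum += 1.0 / i;

        printf("sum = %f\n", sum);
        return 0;
    }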
