Intel Compiler Compared To gcc

Screaming Lunatic writes "Here are some benchmarks comparing Intel's compiler and gcc on Linux. Gcc holds its own in a lot of cases. But Intel, not surprisingly, excels on their own hardware. With Intel offering a free (as in beer) non-commercial license for their compiler, how many people are using Intel's compiler on a regular basis?"
  • c++ programs (Score:4, Interesting)

    by reaper20 ( 23396 ) on Monday December 16, 2002 @10:35PM (#4904448) Homepage
    So does this mean that Mozilla compiled with the Intel compiler would run comparably to its Windows counterpart?

    I would like to see a test with real desktop applications and desktops, i.e. gcc GNOME/KDE vs. icc GNOME/KDE. Would these projects see significant performance improvements from the Intel compiler?

    • Re:c++ programs (Score:4, Interesting)

      by swmccracken ( 106576 ) on Monday December 16, 2002 @11:58PM (#4904933) Homepage
      The Intel compilers are not Linux-specific. They come in both Windows and Linux flavours - and there's nothing stopping you from compiling Moz for Windows using it, AFAIK.

      And, no, I suspect not really. Intel compilers are designed for number-crunching work - e.g. finite element analysis, engineering simulations, that sort of thing. They perform optimizations designed to improve CPU-bound processes. I suspect that interactive / IO-bound processes wouldn't be as affected.

      Secondly, it depends where the bottleneck is - it could be the runtime linker, or the X Window System itself, or who knows.

      Those projects should see some level of improvement, but I wouldn't imagine it's twice as fast. (Things like a paint program might, though - the Intel compilers can take existing "normal" C code and generate code that uses SSE and MMX.)
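      A sketch of the kind of loop such a compiler can auto-vectorize (illustrative code only; the function and names are made up, not taken from any of the projects mentioned):

      #include <cstddef>

      // Independent, element-wise work over contiguous float arrays is exactly
      // the pattern an auto-vectorizer targets: each iteration can be mapped to
      // an SSE instruction operating on four floats at once, with no change to
      // the "normal" C source.
      void scale_pixels(float* dst, const float* src, float gain, std::size_t n)
      {
          for (std::size_t i = 0; i < n; ++i)
              dst[i] = src[i] * gain;
      }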
    • Re:c++ programs (Score:4, Informative)

      by zenyu ( 248067 ) on Tuesday December 17, 2002 @12:35AM (#4905157)
      Unfortunately, the C++ from the two compilers is not compatible, yet. I think they are working on it, perhaps with gcc 3.3 and Intel 9.0. Also, being a different compiler, it won't just compile a program that you've got working with gcc 2.x, gcc 3.2, and Visual C++ 6.0, especially if you have a long compat.h to do it.

      I like icc, especially since I'm using a lot of floating point and gcc isn't too good with that on the Pentium III & 4. But so far I haven't had the time to unit test every component of my C++ project, and you can't just drop in icc-compiled classes; it's all or nothin' (or lots of hacks and C code, but I'd rather put the work into a proper port at some point). gcc 3.2 is also better than those benchmarks show - I've gotten a doubling in speed on some code compared to gcc 2.x. It's often a matter of trying different flags on each unit and rerunning your benchmark; I think the -Ox's aren't finely tuned yet on the gcc 3.x series.

      There is a real problem with compilation speed on gcc 3.2. I thought it hung when I ran a "-g3" compile and it was stuck on one short file for 10 minutes - nope, just REALLY slow. I modified my makefiles to do a non-debug compile to check for errors before doing a "-g" build; then I only "-g3" the files I need, when I need them. I mention it mostly because it may explain why the -Ox flags aren't optimal yet.

    • Re:c++ programs (Score:5, Informative)

      by bunratty ( 545641 ) on Tuesday December 17, 2002 @10:21AM (#4907200)
      So does this mean that Mozilla compiled with the Intel compiler would run comparably to its Windows counterpart?
      The last I heard, the reason that Linux builds of Mozilla are slower than Windows builds is that Linux Mozilla builds use g++ 2.95 with -O optimization. When they can switch to g++ 3.2.1 with -O2 optimization, the speeds should be comparable.
      • Re:c++ programs (Score:2, Interesting)

        by halfnerd ( 553515 )
        I'm running Mozilla compiled with gcc 3.2.1 with no problems at all. I even got Java working, thanks to the excellent Portage system in Gentoo. And I heard that Flash 6 can be used with a gcc 3.2.1-compiled Mozilla.
  • Integer performance? (Score:3, Informative)

    by crow ( 16139 ) on Monday December 16, 2002 @10:39PM (#4904465) Homepage Journal
    In looking at the selection of benchmarks, it seems like they're all based on scientific numeric problems. In other words, they're all very floating-point intensive. I'll admit that I didn't read it all that carefully, but it looked a bit like reporting SPECfp numbers without looking at SPECint.

    Also, the benchmarks used are probably much more loop-oriented than most real-world code, but that's typical of benchmarks.

    What I would find interesting would be to compile glibc, apache, and something like perl or mysql with both sets of compilers and see what difference you can get with some web server benchmarks. Or compile X and some game and see how the frame rate compares between the two compilers. Or compile X and Mozilla, and find some really complicated pages to see what gets rendered the fastest (possibly using some trick to get it to cycle between several such pages 1000 times).
    • ...are on their way, including some very unusual ones. Stay tuned for new episodes in the continuing saga. ;)

      • 900MHz Duron, running ./step_gcc:

        Total absolute time: 3.21 sec
        Abstraction Penalty: 0.95

        So the more abstracted code is _faster_?!?!?
        (I reran half a dozen times, I never got any results >1.0 from any of the 12 tests)

        THL
      • OK, let's look at the most extreme one:
        (Duron 900, only gcc)

        Complex 20000 0.2 1.5 640.0 105.3 6.1

        The 6:1 ratio of C/C++ Complex on gcc is partly because operator+ and operator* take their two Complex parameters by value. I changed that to Complex const& and the 6:1 becomes 2:1

        Complex 20000 0.3 0.5 615.4 296.3 2.1

        Then if you write an operator+= rather than the
        "a = a + b" operator+ in the code you get

        Complex 20000 0.2 0.2 640.0 695.7 0.9

        There you go - a factor of 7 speed increase (the two changes are sketched just below).

        THL, available for hire as a freelance programmer.
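        A minimal sketch of the two changes described above, assuming a hand-rolled Complex type (illustrative code, not the benchmark's actual source):

        struct Complex { double re, im; };

        // Before: operands passed by value, so every "a = a + b" copies two
        // Complex objects and constructs a third for the result:
        //   Complex operator+(Complex a, Complex b);

        // Fix 1: take the operands by const reference, saving the copies.
        inline Complex operator+(Complex const& a, Complex const& b)
        {
            Complex r = { a.re + b.re, a.im + b.im };
            return r;
        }

        // Fix 2: provide operator+= and write "a += b" instead of "a = a + b",
        // so no temporary result object is needed in the inner loop at all.
        inline Complex& operator+=(Complex& a, Complex const& b)
        {
            a.re += b.re;
            a.im += b.im;
            return a;
        }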
  • by zaqattack911 ( 532040 ) on Monday December 16, 2002 @11:08PM (#4904671) Journal
    I'm a little ignorant when it comes to this... gcc and linux have always gone hand in hand in my mind.

    Could we see versions of linux distributed with intel compiler instead of gcc? Can the intel compiler compile the kernel?

    Clue me in!

    --noodles
    • Try reading the article before posting.

      Intel does not support all gcc language extensions; while it has been used to compile the Linux kernel and other free software projects, it is not a drop-in replacement for gcc.

      • Intel does not support all gcc language extensions; while it has been used to compile the Linux kernel and other free software projects, it is not a drop-in replacement for gcc.

        I'm somewhat disappointed with the kernel hackers (and other open-source developers) with respect to this issue. The issue is that the kernel is not ANSI C compliant, not that icc isn't compliant.

        It annoys me when MS does not support standards, such as OpenGL, or with MSVC6, or with .doc files, etc.

        I'm not trying to troll here, but standards are a Good Thing(TM). But who am I to complain - Linus' tree is Linus' tree, and he is allowed to do whatever he wants with it. Although, I'd like to see a hacker pick it up and port it to ANSI C.

        • by T-Ranger ( 10520 ) <.ac.sn.otcubehc. .ta. .wffej.> on Tuesday December 17, 2002 @02:51AM (#4905646) Homepage
          Unfortunately, the reality of the compiler world is that everything has its own unique extensions to ANSI C - and, for that matter, its own unique bugs related to its implementation of ANSI C as well.

          Any piece of software as large, complex, and critical as an OS kernel is going to, at the very least, be tested against a specific compiler. Linux was developed primarily with free tools, i.e. GCC. So Linus and his cohorts have taken the test-on-gcc mindset one step further and used GCC extensions.

          So what? What do they lose? Not functionality - they could implement things in ASM if need be - so, convenience. And convenient things are probably understandable things, and understandable things mean less buggy code.

          If people never used compiler extensions, then you would never have to run ./configure :)

            If people never used compiler extensions, then you would never have to run ./configure :)


            WRONG! GNU configure checks which compiler and version you're using, but it spends most of the time checking for #include files, libraries, and functions. Those are dependent on the OS and libraries installed, not the compiler.


            GCC proper doesn't use any extensions (since it may be compiled by a non-gcc compiler).

            So what? What do they lose?

            Portability. They are just as locked in as any other development team using a single proprietary compiler with its own custom extensions. As a result, they are stuck using a tool that, with all due respect, produces pretty mediocre output compared to the best in the field. That might not matter too much for an OS, since chances are it doesn't take much advantage of either the things the other compilers optimise better or the features GCC doesn't support properly. In general, though, it's a very serious point (said the guy who writes code that compiles on 15 different platforms every day).

          I'm somewhat disappointed with the kernel hackers (and other open-source developers) with respect to this issue. The issue is that the kernel is not ANSI C compliant, not that icc isn't compliant.

          If you did any serious kernel development at all, you'd realize how stupid your complaint is. It's not possible to optimize an operating system kernel using straight ANSI C. There are just too many specialized operations that a kernel needs to perform. And since gcc is available for a variety of platforms and architectures, it's no less of a standard than ANSI C is.

        • That is rather naive. I challenge you to write a good kernel in nothing but pure ANSI C.

          I agree in general about standards, but to be disappointed with the kernel hackers over this is a bit much.

    • A while ago, no. Intel explicitly documented that their C compiler could not compile the Linux kernel because the Linux kernel used a significant amount of gcc-specific functionality. (Specifics of inline keywords, and the inline assembly bits interfacing with the C-written bits, for example.)

      I do not know if this is still true, but I imagine it is.

      The kernel developers use gcc - I wouldn't entirely trust using a different compiler. Besides, there probably isn't a huge performance penalty.

      I've looked at using the Intel compilers (they have a FORTRAN one) and their main advantage is in number-crunching applications. I suspect the differences aren't so important in interactive / non-crunching applications.

      • ... because the Linux kernel used a significant amount of gcc-specific functionality.

        I have the impression that a significant point is the difference in assembler syntax. GCC uses the AT&T syntax, where the register you want to store into comes last, while the Intel compilers (and just about any other x86-native tool) use the Intel syntax, where the destination register is the first one in the list. There are other differences as well, regarding the way type information and indirection are handled; the operand-order difference is sketched below.

        My impression is that Intel does not want to implement an AT&T style assembler parser, and the GCC folks got bothered so much about Intel syntax by all the x86 newbies that they'd rather jump off a cliff.
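        For illustration, roughly what the operand-order difference looks like; this is a sketch assuming GCC's extended asm on x86, and the function is made up for the example:

        /* AT&T syntax, as GCC's inline assembler expects: the source operand
           comes first, the destination last, and immediates carry a '$'. */
        unsigned int add_five(unsigned int src)
        {
            unsigned int dst;
            __asm__("movl %1, %0\n\t"
                    "addl $5, %0"
                    : "=r"(dst)    /* output */
                    : "r"(src));   /* input  */
            return dst;
        }

        /* The same two instructions written in Intel syntax (the style icc and
           most other x86 tools use): destination first, no '$' on immediates.

               mov  eax, ebx
               add  eax, 5
        */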

  • by pb ( 1020 )
    I was just looking into this the other day--since Intel just supplies their own binaries, it wasn't thrilled about my Gentoo install. Sure, I had rpm, and I could use --nodeps to make it shut up during the install, but even then it didn't like my binaries. Maybe they're too new to be RedHat compatible? ...not to mention the silly license file I had to copy to get the thing to even attempt to install. Thanks, Intel, but no thanks. Until you open the source on this one, I see no need to test it out on my ATHLON. ;)
    • You still have to get the license file and install it where necessary, but try an `emerge icc`. It does work (by doing pretty much exactly what you did by hand...). What I really want is the ability to put USE='icc' in make.conf.
    • # emerge /usr/portage/dev-lang/icc/icc-7.0.065-r1.ebuild
      # cp /path/to/my/license-file.lic /opt/intel/compiler70/ia32/bin/

      That's about all you really need to do to install the latest icc7 ebuild. If you don't have rpm, Portage will download and install it so it can extract the stuff in the icc rpm file.
  • I am Pentium of Borg. Division is futile. You will be approximated.

  • by flockofseagulls ( 48580 ) on Monday December 16, 2002 @11:24PM (#4904768) Homepage

    But Intel, not surprisingly, excels on their own hardware.

    Do you mean to imply that Intel knows something about the Pentium architecture or instruction set that the authors of gcc don't? Does the code emitted from the Intel compiler use undocumented instructions? Intel's compiler is newer than gcc and wasn't developed with the "many eyes" that have looked at gcc over the years. It looks like Intel's engineers wrote a better compiler, simple as that.

    These benchmarks give gcc a black eye, but I doubt Intel was using undocumented secrets of their chip to defeat gcc. Sometimes the open source community has to admit that not every open source project represents the state-of-the-art.

    • by Screaming Lunatic ( 526975 ) on Tuesday December 17, 2002 @12:00AM (#4904947) Homepage
      I wasn't trying to imply anything that you are implying that I tried to imply. Intel writes an optimizing compiler. The compiler optimizes well for Intel hardware. I don't think that there are undocumented instructions or any other conspiracy theory. Intel would be stupid to do that. It would give AMD more opportunity to whip them with programs compiled with msvc or gcc.

      I do believe that Intel engineers probably have a better understanding of branch prediction and cache misses on Intel hardware.

      I don't think these benchmarks give gcc a black eye at all. gcc aims to be a cross-platform compiler first, optimizing compiler second. icc aims to be an optimizing compiler first, cross-platform compiler second.

      And chill with the conspiracy theories.

    • by Anonymous Coward
      Yes, Intel's compilers work better on Intel hardware. GCC works well on how many different platforms? How many different architectures does it work with? I have specific optimizations for my Athlon Tbird, which are different from regular Athlon optimizations. Yes, Intel's compiler works better on their hardware. But GCC works quite well on just about everything. And that, in my opinion, makes it better.
      • Aarrrgggghhhhhh! I'd made that point in an earlier incarnation of the article, and it got lost when I rewrote the conclusions. Thanks for bringing this to my attention; I'll restore the lost text.

    • Instruction selection isn't that simple - knowing what instructions there are isn't difficult, as that is documented.

      But knowing which instruction would be the fastest in each particular situation, and how to organise things to reduce the chance of a cache miss, and that sort of thing, is another matter. So, yes, Intel know more about their chips than anyone else does.

      (However, AMD know more about AMD chips than anyone else does...)

    • Amazingly enough, I excel at using my workstation. I don't use any software that is unavailable elsewhere, but it's amazing how, since I put it together, I tend to know what's available on it and where the quickest places to find the icons to launch things are (and which apps don't have icons, since I habitually launch them from the command line).

      Am I using undocumented "KDE+Bash+Linux knowledge"? Hardly. Do I have a 'home turf advantage'? Yup.

      Who said that Intel was doing anything nefarious? I'd say it's pretty *obvious* that the engineers who designed the chip would have an advantage at designing an optimizing compiler, even when things are completely documented. And so it is.

      --
      Evan

    • Let me clarify a couple of popular misconceptions. Facts:
      1) Intel compilers improve code performance (over GNU compilers) on both Intel (PIII and P4) and AMD (Athlon) processors due to supporting SSE and SSE2 instructions and other extensions, although this perf gain will be greater on Intel CPUs.
      2) gcc maintainers have been unwilling to put Intel- or AMD-specific optimizations in the code - there are no secret instructions, just unwillingness to use the published stuff (check out the hundreds of docs, forums and other material at developer.intel.com, which is also where to get your non-commercial compiler downloads).
    • One tool that Intel has that no one else does is an accurate software model of their processor. They make one to validate the design before making silicon. This allows them to look at EXACTLY the code that is executing inside the processor - cache hits/misses, branch predictions, everything down to the last clock cycle.

      This tool alone probably gives them a huge edge in developing compilers.

  • Not my experience. (Score:4, Informative)

    by WasterDave ( 20047 ) <davep@zedk[ ]com ['ep.' in gap]> on Tuesday December 17, 2002 @01:19AM (#4905392)
    I've been writing some integer-only video compression code, and these results don't really bear out what I've been seeing with GCC 3.1 and Intel C++ 6. I'm getting a consistent 15-20% more framerate under Intel, using an Athlon. An Athlon - god alone knows what we'd be looking at if I were daft enough to buy a P4. Admittedly there are some vectorizable loops in there (that are going to be replaced by primitives at some point), but even without those the performance improvement from C6 was consistent and noticeable.

    More relevant is how the performance of C7 is markedly worse on the P3 platform than C6. Very disappointing, makes me wonder what they've done.

    Dave
    • More relevant is how the performance of C7 is markedly worse on the P3 platform than C6.

      C7 defaults to -mcpu=pentium4; I bet he'd get different results with -mcpu=pentiumpro.

      These benchmarks aren't really for those who really need the fastest code; they will benchmark their own code. But it is valid for deciding what to compile all that other stuff with. Though with gcc holding its own on most of those benchmarks, the ubiquity gcc gets through its license outweighs the small performance benefit - well, in C surely. Hopefully someone will look at that wacky place in the C++ benchmarks where icc outperformed it by over 1000%; perhaps the fix for that could be pulled into -O3 or maybe -O6, like with pgcc... gcc 3.0 was mostly a standards release, 3.1 and 3.2 were mostly bug fixes; hopefully 3.3 will iron out the ABI interpretation differences between gcc and icc, and then 3.4+ can be performance oriented.

    • ...to specific applications and environments. I hope the tests bear that out -- in general, Intel and gcc are tied on Pentium III hardware, and Intel produces demonstrably faster code on Pentium 4 systems.

      My article is a guideline, not a pronouncement. Your mileage is guaranteed to vary.

    • More relevant is how the performance of C7 is markedly worse on the P3 platform than C6. Very disappointing, makes me wonder what they've done.

      It's optimising for P4 by default, which is missing the barrel shifter the P3 uses to generate immediate operands. On P4 it uses a 3rd cut-down ALU to handle them. Hence P3 code will run slowly on P4 CPUs (on top of the fact that P4 only gets ~80% performance clock-for-clock compared to P3) and vice-versa.

      Jon.

  • by blackcoot ( 124938 ) on Tuesday December 17, 2002 @02:03AM (#4905568)
    i've been using icc on a realtime computer vision project that i'm working on. intel's compiler ended up giving me an approximately 30% boost over all --- a difference which is not to be sneezed at. in terms of empirical performance data for my application, icc wins hands down.

    that said, icc does a lot of things that really irritate me. for one, its diagnostic messages with -Wall are, well, 90% crap. note to intel: i don't care about temporaries being used when initializing std::strings from constants --- the compiler should be smart enough to do constructor eliding and quit bothering me. the command line arguments are somewhat cryptic, as are the error messages you get when you don't get the command line just right. the interprocedural optimization is very *very* nice; however, be prepared for *huge* temporary files if you're doing ipo across files (4+mb for a 20k source file adds up very quickly).

    this all said, i don't think that i'm going to give up either compiler. gcc tends to be faster on the builds (especially with optimization turned on) and has diagnostics that are more meaningful to me. fortunately, my makefiles know how to deal with both ;-)
    • In my experience, it is always a good idea to test both compilers for your specific application. I have seen cases where icc performs far worse than gcc, apparently because the compiled code causes many more page faults.

      On the other hand, icc supports OpenMP, which means that on an SMP machine you might be able to parallelize a loop by inserting just a single line of code, like:
      #pragma omp parallel ...
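      A sketch of what that can look like in practice (illustrative code, not from the post; the function is made up):

      #include <cstddef>

      // With OpenMP enabled at compile time, the pragma below splits the loop
      // iterations across the available CPUs; without OpenMP the pragma is
      // simply ignored and the loop runs serially.
      void scale(double* data, std::size_t n, double factor)
      {
      #pragma omp parallel for
          for (long i = 0; i < static_cast<long>(n); ++i)
              data[i] *= factor;
      }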

    • by Anonymous Coward
      MOJO [cuj.com]. This is a link to a detailed report on how to handle unnecessary copying of temporaries and why the compiler can't do it for you.
      • by blackcoot ( 124938 ) on Wednesday December 18, 2002 @05:47PM (#4918948)
        My specific issue has to do with code that looks like this:

        class C {
        public:
            C(const string& s = "some string");
        };

        icc wants code that looks like this:

        class C {
        public:
            C(const string& s = string("some string"));
        };

        The only real difference I see between the two is the explicit creation of a temporary. Now, why GCC doesn't complain is another issue --- maybe its diagnostics for temporaries aren't turned on with -Wall (perhaps -pedantic fixes that); however, I have this feeling that GCC's constructor elision is the trick here. To be honest, I'm very curious to find out why this happens. As an interesting aside, Stroustrup tackles the issue of overloading operators in a "smart" way so as to avoid unnecessary copies.

        Personally, I think Java (and whomever it "borrowed" these particular semantics from) got it right. Unfortunately, Java isn't exactly a good language for talking to hardware ;-)
        • My specific issue has to do with code that looks like this:

          class C {
          public:
              C(const string& s = "some string");
          };

          icc wants code that looks like this:

          class C {
          public:
              C(const string& s = string("some string"));
          };

          The only real difference I see between the two is the explicit creation of a temporary. Now, why GCC doesn't complain is another issue --- maybe its diagnostics for temporaries aren't turned on with -Wall (perhaps -pedantic fixes that); however, I have this feeling that GCC's constructor elision is the trick here. To be honest, I'm very curious to find out why this happens.


          Constructor elision trick? The code

          const std::string& s = "some string";

          implicitly constructs a temporary std::string and binds it to the reference s. I don't know how the compiler could eliminate the construction of the temporary each time the function is called, unless it compiled it to something like the following:

          // C.hpp
          #include <string>

          class C {
              static const std::string S_DEFAULT;
          public:
              C(const std::string& s = S_DEFAULT);
          };

          // C.cpp
          #include "C.hpp"
          #include <iostream>

          const std::string C::S_DEFAULT("some string");

          C::C(const std::string& s) {
              std::cout << "C::C() called with: " << s << std::endl;
          }

          You may wish to rewrite your code in this manner because it virtually guarantees that the std::string for the default parameter is constructed once and only once. It also provides an added benefit: if the value of the default changes (from, say, "some string" to "some other string"), then only the C class's translation unit needs to be recompiled.
  • Mail sent to author (Score:5, Informative)

    by guerby ( 49204 ) on Tuesday December 17, 2002 @04:02AM (#4905818) Homepage
    I sent this on December 10th; there has been no answer or update to the page:

    Hi, I just saw your posting on the GCC list and was surprised by your analysis of Scimark 2.0: "Overall, Intel produces code that is 15% faster". If you look at the detailed results, you'll find out that the Monte Carlo test is an obvious problem for GCC, since it produces code 3 times as slow as Intel's, whereas on the other tests it is on par. This discrepancy explains nearly all of the 15% difference in the composite figure (which is a simple arithmetic average), so "overall" is a bit strong :).

    Interested, I took a look at the Monte Carlo test, and it turns out to be 20 lines of code that generate random numbers using integer arithmetic, i.e. not floating-point-intensive stuff at all, which is quite at odds with your statement "I've found this benchmark reflects the performance I can expect in my own numerical applications".

    My conclusion is that GCC is probably missing one obvious integer optimisation that the Intel compiler performs, but I don't think we can generalize from this particular point.

    You can quote me on a public forum if you want to. I write numerical intensive code for a living at a big bank.

    Sincerely,

    • You need to re-read the article, which has changed. The "15%" sentence was an artifact that should have been deleted (and now is) from an earlier article.

      The text you found objectionable is replaced by the following:

      On the Pentium III, gcc and Intel run very close together. The Pentium IV tests, however, show a trend that will continue throughout the rest of these tests: Intel produces faster code on almost all tests, and produces code that is 20% faster overall. Only on the Sparse Matrix Multiplication test did gcc generate the fastest code.

      Many "numerical applications" involve integer calculations; last time I looked, integers were numbers, too. ;)

      • If you remove the Monte Carlo test, the P4 composite result turns out to be 9.3% better for icc, quite a different figure than 20% (even if icc is of course still better on 3 out of 4 tests).

        Well you can obviously play with words. Why did SPEC dudes bother splitting between SPECint and SPECfp after all? :).

        • SPECint vs. SPECfp (Score:2, Interesting)

          by yerricde ( 125198 )

          Why did SPEC dudes bother splitting between SPECint and SPECfp after all?

          Because encryption and other heavy number theory doesn't use floating-point.

          Because analog modeling of physical systems such as circuits doesn't use integers except as loop counters and pointers.

          Because floating-point hardware draws a lot of power, forcing makers of handheld devices to omit the FPU.

    • Numbers don't lie, people do. Did you know that 99% of people under the age of 16 are unemployed?!
  • Interesting... (Score:3, Interesting)

    by Pathwalker ( 103 ) <hotgrits@yourpants.net> on Tuesday December 17, 2002 @08:01AM (#4906422) Homepage Journal
    I wonder why he didn't turn on -fforce-addr under GCC?

    Under the versions of GCC that I have used, I've always found that -fforce-addr -fforce-mem gives a slight speed boost when combined with -O3 -fomit-frame-pointer.

    Under GCC 3.2, it looks like -fforce-mem is turned on at optimization -O2 and above, but -fforce-addr does not appear to be turned on, and it seems like it may be of some help in pointer-heavy code.
    • I didn't use -fforce-addr because I didn't think of it! ;)

      Based on some work suggested by someone in e-mail, I'm going to see if it's possible to write a "test every option combination" script. Given the hundreds of potential options, we're looking at a REALLY BIG test... ;)

      In my view, gcc has far too many options and virtually no real documentation about how those options interact, or even what options go with what circumstances. Very messy, and very hard for people to figure out.


      I hope to alleviate that problem, given time and resources.

      • I don't think you'll be able to test EVERY combination of options, simply because there are so many.

        On the version of GCC I normally use, there are 25 -f options for controlling optimization. There are also a couple of other options that will affect code efficiency as a side effect.

        To test every combination of 25 options, you'd have to recompile and re-execute your tests 33,554,432 (2 to the 25th power) times, which will probably exceed your patience.

        With a little clever winnowing of options, you might be able to cut that down to a reasonable set of options. Presumably, some options will always be a win, in nearly every situation. If you take those as fixed, that'll cut down the set of permutations significantly.

        -Mark
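        As a back-of-the-envelope sketch of what "every combination" means: treat each subset of the -f flags as a bitmask and emit one command line per subset - 2^N of them for N flags - which is why winnowing the list first matters. (The flag list here is truncated and purely illustrative.)

        #include <cstddef>
        #include <iostream>
        #include <string>
        #include <vector>

        int main()
        {
            // A few optimization flags to permute; a real run would list all 25.
            std::vector<std::string> flags;
            flags.push_back("-fomit-frame-pointer");
            flags.push_back("-fforce-addr");
            flags.push_back("-funroll-loops");

            const unsigned long combos = 1UL << flags.size();  // 2^N subsets
            for (unsigned long mask = 0; mask < combos; ++mask) {
                std::string cmd = "gcc -O2";
                for (std::size_t i = 0; i < flags.size(); ++i)
                    if (mask & (1UL << i))
                        cmd += " " + flags[i];
                // A real script would also build, run, and time each variant.
                std::cout << cmd << " benchmark.c -o benchmark\n";
            }
            return 0;
        }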
      • In a previous article it was noted that someone from the gcc project recommended that you should use -funroll-all-loops. Did they say why? Initially I would have thought only -funroll-loops should be used, but after actually comparing the two on those benchmarks I did notice it improved performance -- which begs the question of why the gcc manual states:
        • -funroll-all-loops
          • Unroll all loops, even if their number of iterations is uncertain when the loop is entered.
          • This usually makes programs run more slowly. -funroll-all-loops implies the same options as -funroll-loops.
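        A hedged illustration of the distinction the manual is drawing (example code, not from the thread): -funroll-loops handles loops whose iteration count is known at compile time or on entry to the loop, while -funroll-all-loops also unrolls loops whose count is uncertain even at entry, paying in code size and bookkeeping.

        #include <cstddef>

        // Iteration count known at compile time: a candidate for -funroll-loops.
        double sum_fixed(const double* v)
        {
            double s = 0.0;
            for (int i = 0; i < 16; ++i)   // constant bound
                s += v[i];
            return s;
        }

        // Iteration count uncertain even when the loop is entered (it depends on
        // the data being scanned): only -funroll-all-loops unrolls this, and the
        // extra code size and bookkeeping can easily make it slower - hence the
        // manual's warning.
        std::size_t c_strlen(const char* s)
        {
            std::size_t n = 0;
            while (s[n] != '\0')
                ++n;
            return n;
        }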
  • Head Start (Score:4, Interesting)

    by ChaoticCoyote ( 195677 ) on Tuesday December 17, 2002 @08:32AM (#4906537) Homepage

    Historically, Intel has always been ahead of the competition in terms of code generation; I've used their Windows compiler for years as a replacement for Microsoft's less-than-stellar Visual C++.

    On the Pentium III, gcc and Intel C++ run neck-and-neck for the most part, reflecting the maturity of knowledge about that chip. The Pentium 4 is newer territory, and Intel has a decided edge in knowing how to get the most out of their hardware.

    I have great faith in the gcc development team, and as my article clearly states:

    If anything, these tests prove that free software can produce products that rival -- and sometimes exceed -- the qualities of their commercial counterparts.

    This article is an ongoing effort; I assure you, I'll be updating and expanding the material in response to the comments of readers and further experience.

  • ...about the arrival of the Intel compiler are:

    (1) Competition. This is OSS versus a compiler from the largest CPU maker, both designed to work on this CPU. I think quality will go up.

    (2) Standards. Now that we have at least 2 worthy compilers, developers from both sides will try harder to stick to standards to be able to bite into each other's markets. Intel's compiler will try to compile the Linux kernel and glibc2, while GCC should make attempts at the Borland and VC++ IDEs, possibly building on MinGW32.

    If only AMD came out now with an open-source compiler for the Athlon XP and Athlon 64...
  • What a bunch of lazy bastard whiners. (coming from a fellow whiner ;-) The gcc code is available to all, and of course the committers will accept any real help. If you can make up the 15% difference, I'm sure you would get their complete attention!
  • Am I the only one who noticed the 20-50% file size increase of ICC versus GCC compiled programs?
  • On other architectures, gcc lags WAY behind the native compiler from the hardware vendor - look at Sun Workshop, the Compaq Alpha compiler, or SGI MIPSpro.
    And the more code that is written that only compiles with gcc, the more performance is wasted on these architectures.
    • Indeed. Those vendors would do well to improve gcc on their platforms for their own sake.

      Installing a commercial compiler takes several months in large corporations, assuming you get permission for the expense at all, and often that is not an acceptable option. So gcc is what you get (which has the added advantage of not needing to deal with a PITA license server).

      If that means the performance will suck on Solaris, Tru64, HP-UX, or IRIX, that just means we are more likely to migrate applications to x86 Linux machines instead, not that we'll buy the compiler...
      • However, migrating to x86 is not an option in many cases, ESPECIALLY those which really require high performance....
        x86 is still really limited to a 4 GB address space, the scalability is poor, and not all applications are appropriate for clustering. Often a high-end 64-bit multiprocessor system is the only option, and in these cases a 10% speed increase could result in hours of time saved, or even more.
          Indeed. And if Sun and company feel fine with retreating up the scale until they eventually sell a few hundred systems per year, that's OK.

          If, on the other hand, they're interested in remaining competitive in the low-to-midrange server end, they'd do well to make sure that GCC has the best code generation possible for their platform, because GCC is quite often the de facto compiler installed, despite it not being the best for the platform (even if the platform-specific compilers were free, gcc would remain immensely popular and probably remain the most common compiler on those systems, merely because it works close to the same between platforms).
  • You know, Einstein once said his biggest mistake was naming his theory 'special relativity'.

    His theory is not that everything is 'relative', it is that if you specify all the variables, everything is precisely understood.

    Moreover, it does not apply to anything else. How is it that the postulates of the constancy of the speed of light and of simultaneity can be applied to the speed of ICC over GCC???

    Quote from the article:
    "Like Einstein, I have to say the answer is relative."

    DOH!
