Programming IT Technology

Benchmarks For gcc-3.1

Isle writes: "Another good story found via OSNews. Scott Robert Ladd has updated his GCC vs. Intel C++ compiler benchmark. You can now find gcc 3.1 benchmarked against gcc 3.0.4 and icc 6.0. The short version is that gcc 3.1 is a lot faster than gcc 3.0.4 on very abstract C++ code, but icc is still slightly faster overall."
  • What most concerns me is how long it takes to compile code. While developing, I don't want to sit and wait for the compiler (+linker). gcc is sadly getting slower and slower in this regard.

    Rik
    • by psavo ( 162634 )
      I'm really curious how you develop.
      I personally just write the code, compile, and run it under the debugger. I note the worst mistakes, write them down, correct values in memory, and continue the test run until it's gone as far as it can without modifying the code.
      After all this is done, I correct the code from my notes and begin all over again.
      With this workflow, compiling isn't really slow even if it takes 3-5 minutes.
      Yeah, the daily recompile load is about 700k of C++ code + 200k of headers (+100k of vendor code).

      Some time ago I had to recompile on a P133, and it took (with Watcom 11.0c) approximately 7 minutes, which _felt_ like a lot. At the same time I was able to keep modifying code in emacs, so that wasn't such a bad deal until we got a really fast machine.
      • I've been coding long enough that I don't make too many mistakes. The problem is that if I change something low down in the dependency tree, lots of stuff gets rebuilt, and this takes time.

        Also, thanks to the complexity of C++, there is no refactoring browser (that I know of) so if I change a method signature, I have to use the compiler to tell me what needs 'fixing'.

        Of course, vim is able to jump to warnings/errors generated by gcc, which helps quite a bit.

        I'm currently working on about 50k lines of C++ code, where a rebuild can take anywhere from 3.5s for one file (no problem) to 90s for a change that affects more code. I'm currently using gcc 2.95.3. gcc 3.1 is much slower.

        I'm using make -j 2 on a dual Athlon, and I forward declare where possible to save time (a sketch of that trick follows below).
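        For what it's worth, a minimal sketch of the forward-declaration trick, with made-up file and class names: holding only a pointer lets the header get away with a forward declaration instead of an #include, so files that include widget.h stop rebuilding whenever renderer.h changes.

        ```cpp
        // widget.h -- hypothetical header illustrating the forward-declaration trick.
        #ifndef WIDGET_H
        #define WIDGET_H

        class Renderer;  // forward declaration instead of #include "renderer.h"

        class Widget {
        public:
            explicit Widget(Renderer& r) : renderer_(&r) {}
            void draw();          // defined in widget.cpp, which does include renderer.h
        private:
            Renderer* renderer_;  // a pointer needs only the forward declaration
        };

        #endif
        ```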

        I would just prefer the focus of gcc development to shift from execution speed to compile speed. But the gcc people are free to do what they wish; it's not like I'm paying them ;)

        Rik
        • I've been coding long enough that I don't make too many mistakes.
          LOL. That deserves a 5 Funny if anything does.
        • Well, just use C. ;)

          Seriously though, code is usually run a lot more than it's built, so trading speed when building for speed when running makes sense. If you want build speed, disable optimization etc.
        • by elflord ( 9269 )
          I'm currently working on about 50k lines of C++ code, where a rebuild can take anywhere from 3.5s for one file (no problem) to 90s for a change that affects more code. I'm currently using gcc 2.95.3. gcc 3.1 is much slower.

          But gcc 3.1 optimises more heavily. gcc 3.1 with -O1 compiles almost as fast as older releases with -O2, and the optimisation levels are comparable (that is, -O1 on gcc 3.1 should compare with -O2 on an earlier release).

      • by p3d0 ( 42270 )
        I use Design by Contract, and program by writing the assertions first. By the time the program runs without violating any assertions, it needs very little interactive debugging.
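        A minimal sketch of what that looks like in plain C++ with assert(); the class and its names are invented for illustration. The contract is written first, and the implementation has to live up to it:

        ```cpp
        #include <cassert>
        #include <cstddef>

        // Contract for pop(): requires a non-empty stack (precondition) and
        // guarantees the size shrinks by exactly one (postcondition).
        class IntStack {
        public:
            IntStack() : size_(0) {}
            bool empty() const { return size_ == 0; }
            std::size_t size() const { return size_; }

            void push(int v) {
                assert(size_ < kCapacity);       // precondition: room left
                data_[size_++] = v;
            }
            int pop() {
                assert(!empty());                // precondition: something to pop
                std::size_t old_size = size_;
                int value = data_[--size_];
                assert(size_ == old_size - 1);   // postcondition: shrank by one
                return value;
            }
        private:
            static const std::size_t kCapacity = 64;
            int data_[kCapacity];
            std::size_t size_;
        };
        ```

        Compiling with -DNDEBUG strips the checks from release builds, so the contract costs nothing in production.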
        • Assertions are great at the lowest level of development, but when you're dealing with higher levels of complexity, it's not really possible to use them.

          For example, how do I write an assertion that downloading some data didn't make the UI stall long enough to be noticeable? Or that the formatted text I presented to the user has the right colours and line wrapping to make it pleasing to the eye? Or that the correct icons were loaded for the toolbar buttons?

          This is not about programming paradigms, it's about the reality of testing large apps.

          Rik
          • by p3d0 ( 42270 )
            True, there are certain things that cannot be captured by executable assertions. However, I disagree that it's a matter of "level". You can have high-level, abstract assertions just as well as low-level ones. The examples you gave are just assertions of a kind that can't be checked automatically at runtime, not a level.

            Having said that, I think the examples you gave are more amenable to runtime checking than you might think. Checking time constraints is trivial--all you need is a couple of calls to gettimeofday (see the sketch below)--though the challenge here would be to find a systematic way to put these checks in all the right places. A more fundamental challenge is to reason about the delay that might occur during various operations, in order to convince yourself that you will indeed satisfy the assertions. However, none of this is impossible, contrary to what you seem to imply. For instance, using timeouts can help you put an upper bound on the delay of I/O operations.
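            Concretely, something like this minimal sketch (do_network_fetch and the 50 ms budget are made-up placeholders):

            ```cpp
            #include <sys/time.h>
            #include <cassert>

            // Stand-in for the real operation under test.
            void do_network_fetch() { /* ... */ }

            // Milliseconds elapsed between two gettimeofday() readings.
            static long elapsed_ms(const timeval& a, const timeval& b) {
                return (b.tv_sec - a.tv_sec) * 1000 + (b.tv_usec - a.tv_usec) / 1000;
            }

            void fetch_with_deadline() {
                timeval start, end;
                gettimeofday(&start, 0);
                do_network_fetch();
                gettimeofday(&end, 0);
                assert(elapsed_ms(start, end) < 50);  // assert the latency budget held
            }
            ```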

            GUI assertions are different, though these are still possible. The key is that you should Design by Contract, just as the hardware folks design for testability. In other words, don't simply code the way you already do and throw in a few assertions; rather, change the way you code in order to make it amenable to assertions to the greatest possible extent.

            In the case of a GUI, the idea is to leave everything in data structures as late as possible, and then make the actual rendering stage trivial. Then, right up to the renderer, you can assert that the text has a certain colour. The rendering stage itself may not be amenable to assertions, but the idea is to keep that as small, as simple, and as widely-used as possible.
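            A minimal sketch of that idea, with all names invented for illustration: the assertions run against the data structures, and the rendering step they feed stays trivial.

            ```cpp
            #include <cassert>
            #include <cstddef>
            #include <string>
            #include <vector>

            // The styled text lives in a checkable model right up to rendering.
            struct StyledRun {
                std::string text;
                unsigned colour;      // packed 0xRRGGBB
                int wrap_column;      // column at which this run was wrapped
            };

            // High-level assertions run against the model, not against pixels.
            void check_model(const std::vector<StyledRun>& runs) {
                for (std::size_t i = 0; i < runs.size(); ++i) {
                    assert(!runs[i].text.empty());       // no empty runs slipped in
                    assert(runs[i].colour <= 0xFFFFFF);  // a legal RGB value
                    assert(runs[i].wrap_column > 0);     // wrapping was computed
                }
            }

            // The hard-to-test part is reduced to a thin, widely exercised routine.
            void render(const std::vector<StyledRun>& runs) {
                check_model(runs);
                // ... hand each run to the toolkit's text-drawing call ...
            }
            ```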

            Assertions don't replace testing; rather, Design by Contract multiplies the tests' effectiveness.
            • I think you don't understand what I'm talking about. I'm not talking about timing how long an operation takes, I'm talking about how long it _seems_ to take, _to the user_. If the app is busy for > 10ms, the app feels like it has stalled. This pisses the user off. You can't measure that easily, because the app may do a lot of things that take 9ms, and the UI will feel like it is stuttering.

              You also can't just 'assert that the text has a certain colour' in complex rendering code. I'm not just talking about displaying one word in a certain colour, I'm talking about a massive amount of text, which is formatted and marked up before display. Whether it is done 'correctly' or not is extremely difficult to write checking code for. It's far better to simply have a look and see.

              Anyway, these were two examples of 'untestable' problems that one faces when writing complex apps. There are many more, and yes, they may in fact be testable, but it would take 10 times longer to write the app if you wrote (hideously complex) tests for such things. Time is money, projects have deadlines, etc.

              Rik
              • Ok, let's assume that these things are untestable, as you say. My point is that Design by Contract means minimizing the portions of the program that are untestable. If, even after careful consideration, your particular application has large portions that are untestable, then granted, Design by Contract won't help much for those parts. I won't argue with that.

                What I argue with is your assertion (no pun intended) that such parts are of a "higher level" than those parts which are testable. I contend that they are merely of a different "kind". GUIs are full of this kind of thing. I grant that GUIs are hard to analyze with DbC, but I claim that many other parts of a program benefit tremendously from this kind of analysis.

                Your example of perceived latency is a perfect example of this, since real-time programmers do this kind of analysis for exactly this reason.
    • by zulux ( 112259 ) on Wednesday May 22, 2002 @10:50AM (#3565822) Homepage Journal
      Check out the Borland C++ compiler. It produces some crappy code, but it sure is fast doing it. I use it to start off my projects, then use gcc to finish them off and make them cross-platform. Using two compilers can also have debugging benefits: one will balk at something the other passes by, or a logical error will become apparent on one and not the other.

    • by Anonymous Coward
      You're not the only one worried about compilation speed. A long thread just recently popped up on the gcc development list about this very issue. See the recent gcc mailing list archives here [gnu.org], especially the threads starting with this message [gnu.org], this message [gnu.org], this message [gnu.org], and this message [gnu.org], among others. (And don't pay too much heed to Robert Dewar's vocal pessimism; he sent out a lot of messages on these threads doubting the need or feasibility of speed improvements, but his arguments were pretty much refuted by many other developers.)

      gcc's compilation speed can certainly be a problem for very large projects or even smaller projects on slower machines. Unfortunately, things have actually been getting worse for newer releases. Part of this is due to additional optimizations, but there are some genuine performance problems that the gcc developers would very much like to solve.

      Now that this has become a major priority, I expect things to start improving in the not too distant future.

  • by d-Orb ( 551682 ) on Wednesday May 22, 2002 @09:57AM (#3565400) Homepage
    In reality, what the article shows is that the GCC team have improved the compiler a great deal in a very short time. They deserve recognition for this. One of the reasons for using gcc is that it compiles my code everywhere without any headaches: I develop on GNU/Linux and run the code on Solaris (sometimes on SGI, but not too often), and the code runs as expected. What would be interesting to see is how well gcc compares to the optimised native compilers on other, non-x86 architectures. For my code, gcc is slower than Sun's CC, but I am using an oldish version of gcc.
  • by Kesha ( 5861 ) on Wednesday May 22, 2002 @01:57PM (#3567109) Homepage
    After looking through the benchmark results and noting how large the difference between gcc and icc is on the Monte Carlo algorithm, it seems that this may be caused by the underlying standard C library that gcc is using. Perhaps the GNU version of drand48 is more "random", drawing on some random-number facility of the kernel (or glibc), whereas icc may be unaware of these system/glibc functions and substitutes something of its own instead (which may be faster, but probably not as random as the gcc version).
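    For illustration, here is a minimal Monte Carlo pi estimator (not the benchmark's actual code): nearly all of its time goes into drand48(), so two compilers pulling in different drand48 implementations can diverge sharply on exactly this kind of test.

    ```cpp
    #include <cstdio>
    #include <stdlib.h>   // drand48/srand48 are POSIX, declared here

    int main() {
        srand48(42);
        const long trials = 10000000;
        long hits = 0;
        for (long i = 0; i < trials; ++i) {
            double x = drand48();
            double y = drand48();
            if (x * x + y * y <= 1.0)
                ++hits;   // point fell inside the unit quarter-circle
        }
        std::printf("pi ~= %f\n", 4.0 * hits / trials);
        return 0;
    }
    ```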

    Paul.
  • What about 2.9x? (Score:3, Insightful)

    by ameoba ( 173803 ) on Thursday May 23, 2002 @07:35AM (#3571537)
    Considering that GCC 2.9x is still shipping with most distros, and is the only one that compiles the kernel yet, why not show some comparisons with it, in addition to GCC 3.x and ICC? Why only benchmark fringe compilers, when a vast majority of Linux users will be rocking the older compiler?
    • Re:What about 2.9x? (Score:3, Informative)

      by elflord ( 9269 )
      Considering that GCC 2.9x is still shipping with most distros, and is the only one that compiles the kernel yet, why not show some comparisons with it, in addition to GCC 3.x and ICC?

      It's not true that 2.9x is the only compiler that compiles the kernel. gcc 3.0 and 3.1 can also do it; indeed, kernel compilation was part of the release criteria (maybe you're mixing it up with "2.96"?). Comparisons would be nice, but FYI, 2.95 is not going to be any faster than gcc 3.0 (and will probably be slower).

      Why only benchmark fringe compilers, when a vast majority of Linux users will be rocking the older compiler?

      gcc 3.1 is not a fringe compiler. You are going to see it in the next Red Hat release (as well as in the distributions that follow Red Hat's lead).

      I've compiled the kernel with gcc 3.x on Debian more than once. It runs.

"The following is not for the weak of heart or Fundamentalists." -- Dave Barry

Working...