GNU GCC Vs Sun's Compiler on a SPARC

JigSaw writes "When doing research for his evaluation of Solaris 9 on his Ultra 5, Tony Bourke kept running into the same comment online over and over again: Sun's C compiler produces much faster code than GCC does. However, he couldn't find one set of benchmarks to back this up and so he did his own."

  • What CPU? (Score:4, Informative)

    by keesh ( 202812 ) * on Wednesday January 28, 2004 @08:01PM (#8118930) Homepage
    I wish this guy would tell us what CPU he's using. There's a hell of a lot of difference between the low-cache and high-cache CPUs (yes, these will work in a u5 as well as a u10). Looks like he's using a low-cache one, where there's not as much difference (and where the 64bit penalty isn't as noticeable).
    • Re:What CPU? (Score:3, Informative)

      by Oinos ( 140188 )
      If you follow the "evaluation" link, it tells you. UltraSPARC IIi 333MHz 2MB Cache.
    • Re:What CPU? (Score:4, Informative)

      by nelsonal ( 549144 ) on Wednesday January 28, 2004 @08:08PM (#8118983) Journal
      It's the same guy who bought a Sun system on eBay and has been writing columns on his shiny new (to him, anyway) system. The specs from the purchase article:

      The system I'm using is a second generation of the Ultra 5, released in late 1998. Here are the specs:

      System: Sun Ultra 5
      Processor: UltraSPARC IIi 333 MHz, 2MB cache
      Memory: 256 MB
      HD: Seagate Medalist 7200 RPM IDE Pro 9140 8.4 GB
      IDE Controller: Built-in UDMA2, 33 MB/s max
      CD-ROM: 32x IDE
      Video Controller: Sun PGX24 (ATI Mach64), 4 MB VRAM
      Network: Built-in 10/100 NIC (hme, "Happy Meal" interface)

      My knowledge of Sun processors is a little lacking in the UltraSPARC II range, so I'll leave it to you to evaluate this one. My only exposure is a dead server CPU module I keep as a paperweight. It and a foxtrot on the corkboard are my own version of the geek purity test.
    • Re:What CPU? (Score:3, Informative)

      by carsont ( 648940 )
      The specs of the machine were in his first article [osnews.com].

      It's the 333 MHz processor with the 2 MB cache. (The same one that's in the U10 I'm using right now, by the way).
  • by dingbatdr ( 702519 ) on Wednesday January 28, 2004 @08:08PM (#8118988) Homepage
    To me this was the most interesting line of the article:

    Sun's compiler was the clear winner. Surprisingly, the older version of GNU's GCC beat 3.3.2 by a very slim margin.

    One of my favorite version numbers (2.95.3) is still getting good press. Cool.

    dtg
  • Actually, he did (Score:1, Informative)

    by Anonymous Coward
    Read the articles. 2MB cache, 333 MHz UltraSPARC IIi
  • by Anonymous Coward
    I used Sun's new compiler to compile new drivers for my old 14.4 faxmod ]} } } }&..}=r}'}"}[NO CARRIER]
  • by MerlynEmrys67 ( 583469 ) on Wednesday January 28, 2004 @08:47PM (#8119281)
    I mean, gcc's strength has never been fast code (although it is no slouch); it has been cross-platform support. You can use GCC on everything from the biggest 64-bit procs down to the smallest embedded CPUs.

    Of course a vendor's supplied compiler, which doesn't even have to think about potential optimizations for another platform, will outperform it. It is a testament to the gcc folks that it is even close.

    • by ctr2sprt ( 574731 ) on Wednesday January 28, 2004 @09:35PM (#8119553)
      I mean, gcc's strength has never been fast code (although it is no slouch); it has been cross-platform support.
      That's gcc's de facto status, but from the section of its info pages called "GCC and Portability:"
      The main goal of GCC was to make a good, fast compiler for machines in the class that the GNU system aims to run on: 32-bit machines that address 8-bit bytes and have several general registers. Elegance, theoretical power and simplicity are only secondary.
      It's interesting to note how gcc has turned out. I wonder what caused the change: whether it was market forces, or changing developer priorities, or just coincidence.
      • by duffbeer703 ( 177751 ) * on Wednesday January 28, 2004 @10:13PM (#8119806)
        Those are Richard Stallman's goals, which don't really mirror anybody else's.

        And since the excerpt was from the info pages for GCC, you and Stallman are likely the only humans ever to have read it.
        • by cant_get_a_good_nick ( 172131 ) on Wednesday January 28, 2004 @11:14PM (#8120213)
          Those may have been Stallman's original goals, but they aren't necessarily gcc's anymore. Remember that the maintainers of gcc now aren't the original Stallman-led FSF gcc folks, but the splinter egcs group that forked gcc out of extreme frustration with its progress under the FSF. Once it became evident that egcs was making progress leaps and bounds beyond FSF gcc, work on FSF gcc was (to Stallman's credit) dropped, and the egcs gcc became the official gcc.

          People think that "The Cathedral and the Bazaar" was a comparison between commercial and non-commercial programming models. It was actually modeled on FSF gcc (the Cathedral) and Linux kernel (the Bazaar) development. Eventually, at least in gcc development, the Bazaar won.
    • GCC is an excellent compiler. I was surprised that it didn't beat Sun's CC. In my experience, gcc frequently generates code that is at least as good as, or slightly better than, the vendor's compiler (at least for DEC, SPARC, and Intel platforms). There are special cases, say OpenMP, SSE2, etc., where the vendor's compiler does a better job (and gcc currently doesn't support OpenMP), but in most cases I can easily believe that gcc will be the best. In addition, knowing that a certain version of gcc compiled 8000 Debian pac
      • by j-pimp ( 177072 ) <zippy1981 AT gmail DOT com> on Thursday January 29, 2004 @07:42AM (#8122370) Homepage Journal
        Add to this that the compiler gets better with each release, and I think it will soon be to other compilers what Linux is to UNIX - making the rest obsolete.

        I was talking to some Novell engineers at LinuxWorld. They all love Watcom. The Watcom toolchain is being ported to Linux, and it already self-bootstraps. Novell owns SuSE, so I expect SuSE will be making use of Watcom for Linux in the future. All my projects are going to use Watcom from now on. I'm sick of the annoying voodoo necessary to make a cross-compiler between unix and mingw/cygwin/djgpp using gcc; Watcom lets me cross-compile out of the box. Granted, the IDE needs some work, but wmake is very powerful, though unique. All the basic unix userland you expect for a makefile (cp, rm, install) is a builtin command, and calls to the toolchain (compiler, linker, etc.) are loaded as DLLs, saving system calls and thus improving performance.

        GCC is very mature, popular, and supported, but it's not going to be the only kid on the block for long now that Watcom is open source.
        • All I can say is: competition is good. If Watcom is a decent compiler and is able to compile the kernel and a Linux distro, I am very glad. Most likely the gcc team will look critically at what is good and what is bad, and improve things where needed.
          • All I can say is: competition is good. If Watcom is a decent compiler and is able to compile the kernel and a Linux distro, I am very glad. Most likely the gcc team will look critically at what is good and what is bad, and improve things where needed.
            It's my understanding that the kernel is chock full of GCCisms; however, userland is a different story. The main obstacle to compiling the kernel with Watcom will be convincing Linus to accept patches that remove the gccisms or provide watcomisms as appropriate. I do believe that this wil
        • I was talking to some Novell engineers at Linux World. They all love watcom.

          Perhaps that's because Netware was written in Watcom C? Netware was very impressive in its day (late eighties, early nineties), having many features that we take for granted in Linux today. Watcom's C compiler used to be the best on the market for PeeCees, and could produce flat, 32-bit code for DOS extenders back when Microsoft C was still messing about in 16 bits. If you wanted to write an NLM for Netware, you used Watcom. It also targeted OS/2. Those were the days.

          • If you wanted to write an NLM for Netware, you used Watcom. It also targeted OS/2. Those were the days.
            Yeah, the Netware guys said Netware was built using Watcom. Also, the Watcom dev team all seem to use OS/2 Warp.
    • by AT ( 21754 ) on Wednesday January 28, 2004 @11:07PM (#8120167)
      Actually, in early iterations, gcc killed most vendors' compilers, including Sun's. This was mostly because most vendors' compilers were absolutely terrible when gcc was first released. Since then, compiler technology has made huge advances, and vendors have spent a lot of effort improving them. At the same time, with the increasingly complex scheduling requirements of today's RISC processors, making a compiler fast takes a lot more work. Designing a portable instruction scheduler that performs well on very different processors is nearly impossible (though gcc does it surprisingly well).
    • Actually, I first started using gcc because it was significantly faster than Sun's compiler for its M68K machines. Of course, that was almost 15 years ago, but I know that's one of the reasons it became popular. I don't think this was the case for the early SPARC-based machines, though. Our sysadmin at the time said that in order to get decent performance out of the chips, Sun had to put more effort into the compilers than it did for the previous machines. FWIW, on Data General's M88K-based machines, gcc

    • I've always thought this is a slightly strange target. Surely the point is that a compiler should be the one piece of software that is tailored to each CPU it runs on, so that the rest of us who write C/C++/whatever don't have to?
  • Bad Statistics? (Score:5, Interesting)

    by Josh Booth ( 588074 ) <joshbooth2000@nOSPAM.yahoo.com> on Wednesday January 28, 2004 @09:17PM (#8119434)
    See Tony Bourke's older article [osnews.com], which concluded that 64-bit binaries are slower than 32-bit binaries. The set of statistics he posted here has totally obliterated his previous conclusion. He had only used GCC 3.3.2 and assumed that compiles for 32 and 64 bits were optimized similarly. However, in most of the benchmarks he ran with Sun's compiler, 64-bit programs came out ahead of 32-bit ones. This means that GCC 3.3.2 is not as well optimized for 64 bits on his machine as it is for 32 bits, while the Sun compiler is. If he had just looked at his own data, he would have seen that.
    • However, in most of the benchmarks he ran with Sun's compiler, 64-bit programs came out ahead of 32-bit ones.

      Looks to me like Sun's 64-bit code won 7 out of 22 tests against Sun's 32-bit code.

      The rest are either losses or near ties.
  • by amarodeeps ( 541829 ) <dave@dubitab[ ]com ['le.' in gap]> on Wednesday January 28, 2004 @09:18PM (#8119441) Homepage

    ...and better performance not even all of the time, especially on a 32-bit platform, I choose GCC.

    However, I'd like to see a well-thought-out criticism of this piece. It seems like someone always has a good counterpoint to any given set of benchmarks.

    • by cant_get_a_good_nick ( 172131 ) on Wednesday January 28, 2004 @09:51PM (#8119675)
      The Sun compiler has some optimizations, not turned on in this test, that gcc doesn't even offer: scheduling based on profiles from previous runs (rarely used), optimization and inlining across source files and across all of a program's object files, ordering to reduce page faults, instruction prefetching, etc. It would be interesting to see whether these yield vast or just incremental improvements in the runtimes (a rough sketch of the profile-feedback flow is below). I think Sun is adding auto-parallelization to future compilers. Most people don't use these optimizations, since they imply some work and some testing to see whether or not they help a particular dataset. I know we don't; by the time we get something we could test, we're already on a new codebase.

      That said, the fact that a generic compiler like gcc is within spitting distance of Forte, or Sun ONE, or whatever they call it this week, is impressive.
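
      For the curious, here is a rough sketch of that two-pass profile-feedback flow. The flag spellings are from memory of the Sun and gcc documentation of this era (-xprofile/-xipo on Sun's cc, -fprofile-arcs/-fbranch-probabilities on gcc 3.x), so verify them against your own man pages:

      /*
       * Hypothetical profile-feedback build (flags assumed, check your docs):
       *
       *   cc  -fast -xprofile=collect:./prof -o app app.c    (Sun: instrument)
       *   ./app < training-input                             (gather profile)
       *   cc  -fast -xprofile=use:./prof -xipo -o app app.c  (feedback + cross-file inlining)
       *
       *   gcc -O2 -fprofile-arcs -o app app.c ; ./app < training-input
       *   gcc -O2 -fbranch-probabilities -o app app.c        (gcc 3.x feedback)
       */
      #include <stdio.h>

      /* A branchy function that feedback helps: once the compiler knows
       * which arm dominates, it can make the hot path the fall-through. */
      static int classify(int x)
      {
          if (x < 0)          /* rare in the training run, say */
              return -1;
          return x & 1;       /* hot path */
      }

      int main(void)
      {
          long sum = 0;
          int i;

          for (i = 0; i < 1000000; i++)
              sum += classify(i);
          printf("%ld\n", sum);
          return 0;
      }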
    • Sun sells the "Compiler Collection" for $995, which consists of the C/C++ and FORTRAN compilers without the GUI application builder.

      Sun used to (and may still do) claim that their compiler produced better code than GCC.

      Sun's FORTRAN compiler is much more complete than g77/g95 - no offense intended to either Craig Burley or Toon Moene.

  • "clear" winner??? (Score:5, Insightful)

    by ajagci ( 737734 ) on Wednesday January 28, 2004 @09:37PM (#8119557)
    So, the benchmarks show maybe a 10-15% difference in favor of Sun's compiler. Does that make Sun's compiler a "clear winner"? I think not.

    First of all, it's far from clear that those differences are real. You can get much bigger differences from just changes in caching behavior, even with the same compiler.

    Then, there is the question of whether Sun's compiler is actually correct. A lot of commercial compilers intentionally skirt or break the letter of the ANSI standards once you start enabling optimizations. GNU C/C++ is usually more careful.
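
    To make that concrete with an illustration of my own (not from the article): type-based alias analysis is one classic corner where optimizers and the letter of the standard collide. An optimizer may assume an int * and a float * never overlap, so code that puns between them can silently change meaning as the optimization level rises:

    #include <stdio.h>

    float f = 1.0f;

    /* ISO C says an int * may not alias a float *, so at high -O a
     * compiler may keep f cached in a register across the store through
     * ip. If ip actually points at f (undefined behavior), the result
     * can flip between optimization levels -- exactly the gray area
     * where compilers differ in how literally they read the standard. */
    int bump(int *ip)
    {
        f = 2.0f;
        *ip = 42;
        return f == 2.0f;
    }

    int main(void)
    {
        /* Deliberate pun: may print 0 or 1 depending on the compiler. */
        printf("%d\n", bump((int *)&f));
        return 0;
    }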

    Finally, you have to ask whether it matters. So, Sun's overpriced machines using their overpriced compilers run a bit faster than their overpriced machines using a free compiler. So what? If you want bang for the buck, or even just maximum bang, why in the world would you buy a Sun these days anyway?
    • by SewersOfRivendell ( 646620 ) on Wednesday January 28, 2004 @11:51PM (#8120431)
      Assuming he measured correctly, 15% is a lot. It's the minimum threshold for user-perceivable speed improvement, among other things. A lot of people would kill to have 15% faster compilers, kernels, databases, window managers, etc.
      • I've seen Mac UI people say that 30% is the minimum threshold.

        The point I was most interested in is that gcc and Sun's compiler are about the same speed on 32 bit code, and Sun pulls ahead on 64 bit code.
    • Finally, you have to ask whether it matters. So, Sun's overpriced machines using their overpriced compilers run a bit faster than their overpriced machines using a free compiler. So what? If you want bang for the buck, or even just maximum bang, why in the world would you buy a Sun these days anyway?

      Without doing any sort of conclusive tests, I've tested three of my machines with

      openssl speed rsa dsa

      for speed. Here's how it panned out:

      • AMD Athlon XP1800+ - 1000% as fast
      • AMD K62-400 - 200% as fast
      • Re:"clear" winner??? (Score:5, Informative)

        by DAldredge ( 2353 ) <SlashdotEmail@GMail.Com> on Thursday January 29, 2004 @01:40AM (#8121069) Journal
        The OpenSSL code has highly optimized assembly for those functions under x86. On other archs it is just C code that the compiler has to optimize.

        That may explain the speed difference that you are seeing.
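
        To make that concrete: the RSA hot spot is a multiply-accumulate sweep over bignum words. Below is my own simplified sketch of the portable C shape (OpenSSL's real routine is its bn_mul_add_words, with different details). On x86, OpenSSL swaps this loop for hand-written assembler that keeps the carry in a register; on SPARC, the compiler has to schedule the C itself:

        #include <stdio.h>
        #include <stdint.h>

        typedef uint32_t bn_word;

        /* r[0..n-1] += a[0..n-1] * w, returning the final carry.
         * The 64-bit intermediate is the part hand-written assembler
         * handles with one MUL plus add-with-carry per word. */
        static bn_word mul_add_words(bn_word *r, const bn_word *a,
                                     int n, bn_word w)
        {
            uint64_t carry = 0;
            int i;

            for (i = 0; i < n; i++) {
                uint64_t t = (uint64_t)a[i] * w + r[i] + carry;
                r[i]  = (bn_word)t;   /* low 32 bits stay in place */
                carry = t >> 32;      /* high 32 bits ripple onward */
            }
            return (bn_word)carry;
        }

        int main(void)
        {
            bn_word r[4] = { 0, 0, 0, 0 };
            bn_word a[4] = { 1, 2, 3, 4 };

            printf("carry=%u r[0]=%u\n",
                   mul_add_words(r, a, 4, 0xFFFFFFFFu), r[0]);
            return 0;
        }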
        • The OpenSSL code has highly optimized assembly for those functions under x86. On other archs it is just C code that the compiler has to optimize.

          That's why I tested a Mac with a 138MHz speed deficiency, which came up almost number-for-number evenly matched with its processing superior.

          Is there some documentation as to the machine-specific optimizations of OpenSSL on the various architectures it supports?

          • PowerPCs are "brainiac" chips. They do a lot more per cycle than do the more deeply pipelined UltraSPARCs, which in turn do a lot more per cycle than do the absurdly deep x86s.
      • In short, I'm not terribly impressed with the power of that UltraSPARC processor, especially considering it's so closely matched with my two lower-end boxes.

        I have a Sun Ultra 10, 333MHz UltraSPARC IIi with 2MB cache and 128MB RAM in one bank (I realise I can get better memory bandwidth while using two banks).

        I can confirm that the machine runs like a two-legged dog.

        If it were not for the fact that I want to improve my Solaris skills (the whole reason I bought the machine), it would be running OpenBSD as my
    • We would pay good money for a 5% improvement! (This is the promised, but so far undelivered, advantage of using Intel's CC over GCC on Linux. We already use it on Windows, but there it is 40% better than VC6.)

    • So, the benchmarks show maybe a 10-15% difference in favor of Sun's compiler.

      Sun's overpriced machines using their overpriced compilers run a bit faster than their overpriced machines using a free compiler.

      In my eyes, I would rather spend the $2995 on buying a new PC. Hell... if I wait 6 months before I buy it, I'll easily get a 10-15% speed increase on all the programs running on the machine (+ have money to spare).

    • Well, 10 to 15% is a very low rate taking into account the cost of each compiler. GCC may be slower, but what are its benefits relative to its price? In this matter, I base my opinion on the cost/benefit ratio. One additional thing to talk about: no matter which one creates the fastest code, I think, as a programmer, that code must be optimized first by design, then by the compiler. I've seen a lot of programs that leave their performance to the compiler on top of absolutely bad design. My comments in this ma
    • I recently compiled gzip with HP's cc and saw perhaps a 10% improvement over gcc 3.3.2 with the right options. This is just a single data point, but I was surprised I didn't get more of a speedup. Even more surprising was that the stock gzip binary was even with gcc's, so either HP's compiler has improved, they didn't use it, or they optimized conservatively.
  • by duffbeer703 ( 177751 ) * on Wednesday January 28, 2004 @10:17PM (#8119829)
    Next they'll be concluding what language is fastest by writing "Hello World!" in C (compiled in 64 & 32 bit), Logo, Perl and Prolog.

    I hope to be posting a full writeup on how much faster MS-DOS is compared to BSD using boot times as a benchmark.
  • by lindsayt ( 210755 ) on Wednesday January 28, 2004 @11:00PM (#8120126)
    It's important to remember that Sun's compilers are optimized for Sun's big machines, so you don't really see the biggest advantages of the Sun compilers on single or even dual CPU machines. The Sun compilers really shine on the massively SMP machines such as the 10K, 12K, and 15K.

    Of course I don't have any links to benchmarks that prove this, so take it or leave it. But Sun specifically does not care about compiler optimization for their "toy" machines such as the Ultra 5, Ultra 10, Blade 100, etc. Basically, if your Sparc CPU isn't a straight II or straight III, Sun's not as concerned with you.
    • by Anonymous Coward
      This is utter rubbish.

      On all modern machines, the first rule of good code performance is to keep memory access to the very minimum by utilising the cache. This happens to be exactly what helps big SMP machines too. There is no tradeoff there.

      A secondary but still important aspect is the theoretical execution speed you get when you disregard cache misses. The important things here are picking an efficient instruction stream, good instruction scheduling, and register allocation. This is as important on "toy
      • Um, you're right about the two ways to get good performance... but loop unrolling for an 8MB cache is a totally different animal than unrolling for a 1MB cache (you start talking about doing multiple-level unrolling, and hoisting invariants through n, as opposed to 1, loop boundaries). Similarly, an efficient instruction stream is different when you have 2 ALUs, 2 load/store units, and 160 rename registers than when you have 2 combined units and 16 registers. Scaling of these features on a processor leads to a lot more than linea
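
        A toy example of my own to illustrate the shape of the tradeoff: the same reduction written plainly, and unrolled four ways with independent accumulators. The unrolled form is what a wide core with many rename registers wants; on a narrow, small-cache part the extra code size can cost more than it buys:

        #include <stdio.h>
        #include <stddef.h>

        /* Plain form: what you write and let -O handle. */
        double dot(const double *a, const double *b, size_t n)
        {
            double s = 0.0;
            size_t i;

            for (i = 0; i < n; i++)
                s += a[i] * b[i];
            return s;
        }

        /* Four-way unrolled with independent accumulators: more
         * parallelism for the scheduler, more code in the icache. */
        double dot_unrolled4(const double *a, const double *b, size_t n)
        {
            double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
            size_t i;

            for (i = 0; i + 4 <= n; i += 4) {
                s0 += a[i]     * b[i];
                s1 += a[i + 1] * b[i + 1];
                s2 += a[i + 2] * b[i + 2];
                s3 += a[i + 3] * b[i + 3];
            }
            for (; i < n; i++)          /* remainder loop */
                s0 += a[i] * b[i];
            return (s0 + s1) + (s2 + s3);
        }

        int main(void)
        {
            double a[5] = { 1, 2, 3, 4, 5 }, b[5] = { 5, 4, 3, 2, 1 };

            printf("%g %g\n", dot(a, b, 5), dot_unrolled4(a, b, 5));
            return 0;
        }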
    • I wonder what makes you think so? I once tried to squeeze maximum performance out of a program that had to run on an 8-CPU SPARC box (each @750MHz) with 16GB of memory (is that a "toy" machine or not? read on). The program was nothing fancy: binary search in an array of 64-bit packed structs with a total size of ~2GB (so we could run 8 processes simultaneously in 16GB), plus some postprocessing.

      First observation: 64-bit mode was twice as fast as 32-bit mode (you sort of expect that).

      Second observation: gcc with "-m64 -O3" produced co
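
      For reference, the shape of that workload is roughly the following (field layout and names invented for illustration). Each record is a single 64-bit word, so in 32-bit mode every load and shift has to be synthesized from register pairs, which is one plausible source of the 2x:

      #include <stdio.h>
      #include <stdint.h>

      typedef uint64_t rec_t;                    /* key packed in high 32 bits */
      #define REC_KEY(r) ((uint32_t)((r) >> 32))

      /* Binary search over a sorted array of packed records.  In 64-bit
       * mode the key extraction is one shift on one register; in 32-bit
       * mode each 64-bit access costs a register pair plus extra moves. */
      static long find(const rec_t *recs, long n, uint32_t key)
      {
          long lo = 0, hi = n - 1;

          while (lo <= hi) {
              long mid = lo + (hi - lo) / 2;
              uint32_t k = REC_KEY(recs[mid]);

              if (k == key)
                  return mid;
              if (k < key)
                  lo = mid + 1;
              else
                  hi = mid - 1;
          }
          return -1;
      }

      int main(void)
      {
          rec_t recs[4] = { (rec_t)10 << 32, (rec_t)20 << 32,
                            (rec_t)30 << 32, (rec_t)40 << 32 };

          printf("%ld\n", find(recs, 4, 30));    /* prints 2 */
          return 0;
      }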
  • the Achilles' heel? (Score:4, Interesting)

    by MrLint ( 519792 ) on Thursday January 29, 2004 @12:11AM (#8120548) Journal
    Well, slice it whichever way you want, this jibes with what I hear around the virtual water cooler: gcc is all well and good on x86 (where it's been tweaked for ages) but not always the best elsewhere. MacOS X is where I hear this mostly. The more aggressively it gets used outside of x86 land, the more it will get tweaked.

    Move along, nothing to see here.
  • The author uses -fast for all compilation with the Sun compiler, but that really isn't what it's meant for. The manual page for the compiler states that -fast is only good for the small portion of code in a program that requires the most optimization. In most cases it results in optimizations that increase code size and, in my experience, change the results of some operations. For instance, attempting to compile OpenPGP extensions to Perl (Crypt::OpenPGP, Crypt::RSA, Crypt::DSA, etc.) resulted in some strang
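
    A tiny illustration of the results-changing part (my own example; Sun's -fast reportedly enables simplified floating-point semantics, much as gcc's -ffast-math does): floating-point addition is not associative, so a mode that lets the compiler reassociate can legitimately return a different answer:

    #include <stdio.h>

    int main(void)
    {
        volatile double big = 1e16, small = 1.0;  /* volatile defeats folding */

        /* Evaluated left to right as ISO C requires, 1e16 + 1.0 rounds
         * back to 1e16, so this prints 0.  A reassociating optimizer
         * may compute big - big + small and print 1 instead. */
        printf("%g\n", (big + small) - big);
        return 0;
    }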
  • It is a total waste of time and money to compare compilers that have the feature set you want. By the time you have evaluated, purchased, and started using the compiler of choice, all the compiler vendors and gcc have new optimizations in place, making your choice outdated.

    I worked for a compiler vendor once, and at one point Greenhill Oasis, a competitor, compared Dhrystone benchmark values between ourselves, gcc, and their own compiler in an advertisement. They turned out to be 3 to 5 times faster

    • Yes, it must take all of a few days to evaluate your program compiled under different compilers. And as we all know, new compilers come out every few hours. Right?
      • If you are going to spend a huge amount of money on compilers for a project lasting more than a few days, that is. And for smaller projects, I don't see that your budget would allow spending about $1-2K per simultaneous user, which is a common charge for commercial UNIX-based compilers.

        Your time is better spent finding better ways to solve your problems than fighting for an extra 20% performance out of compiler optimizations. When compilers try to make tricky optimizations, ugly and hard-to-spot compiler bug

  • Would it have improved the scores for gcc any if he had used the -march flag instead of the -mcpu flag? It seems to me that it might have made at least a little bit of difference.
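
    For what it's worth, as I remember the gcc 3.x documentation (worth double-checking for your exact version): on SPARC there is no -march at all; -mcpu is the SPARC spelling that selects both the instruction set and the scheduling model, with -mtune for scheduling alone, so the article's choice was about the best available:

    /* Flag cheat-sheet for gcc on SPARC, per the 3.x docs (assumed):
     *
     *   gcc -O2 -mcpu=ultrasparc      prog.c    ISA + scheduling for USII
     *   gcc -O2 -mtune=ultrasparc     prog.c    scheduling only, generic ISA
     *   gcc -O2 -m64 -mcpu=ultrasparc prog.c    same, with the 64-bit ABI
     *
     * -march is the x86-family spelling of the first of these.
     */
    int main(void) { return 0; }    /* placeholder translation unit */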
  • by 0x0d0a ( 568518 ) on Thursday January 29, 2004 @06:32AM (#8122109) Journal
    ...of who is using SPARC instead of x86 if they're worried about a 5% performance difference.
    • Not even only x86.

      Much as I'm not really a Mac fan, if you want a RISC and Unix based workstation, just get a G5 with MacOS X. There you go: it's Unix, it's RISC, and it runs circles around Sun's workstations at a fraction of the price.

      Or yeah, x86. Get a cheap Dell, install Linux on it, and there you go: it's a Unix workstation. Sun's over-priced, under-performing stuff won't even come near it in terms of performance.
      • Yes, G5s and Pentium 4s do run circles around Sun workstations. But Sun's money is in its servers. Most people writing software for Suns aren't even looking at the workstations now, only the servers. The only real reason to buy a Sun workstation these days is for developing and testing software that you run on a Sun server. If you want a 32+ processor system with fast I/O and a robust design, then you're not going to find a Mac or PC that can do that. And a multiprocessor is going to be more
        • Of course, if instead of paying millions for that huge 32+ way server, they:

          A. hired a competent programmer, for a change, and

          B. stopped letting a clueless marketroid or PHB insist that the software must have all possible buzzwords (e.g., instead of directly writing a value into the database, it now sends a SOAP message over MQ to an EJB, which finds and parses another XML message _in_ the SOAP message, and parses it to find the value and table name)... then they could do the exact same job on a "measly" 4 way Xe

  • Which compiler is fastest at the act of actual compilation?

    If GCC compiles mysql in 10 minutes and Sun's compiler takes 15...

    Which compiler makes smaller binaries?
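
    Both are easy enough to measure yourself; something along these lines (paths and flags illustrative) answers both questions for a given source file:

    /* Sketch: measuring the compilers rather than their output.
     *
     *   time gcc -O2 -o prog prog.c     wall-clock compile time, gcc
     *   time cc -fast -o prog prog.c    the same build with Sun's cc
     *   size prog                       text/data/bss of the result
     *   ls -l prog                      bytes on disk
     */
    int main(void) { return 0; }        /* stand-in for the real source */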

"Money is the root of all money." -- the moving finger

Working...