Intel C/C++ Compiler Beats GCC 580
jfonseca writes: "An article from Open Magazine claims the new Intel C/C++ compiler's geometric mean performance on multiple kernels compiled through it reached 47% improvement over GCC. They also predict the end of the ubiquitous GNU compiler due to low performance. Many other compiler/platform combinations were also compared. A bit pretentious, yet an interesting read."
GCC will live (Score:5, Insightful)
/Janne
Re:GCC will live (Score:2, Funny)
/me holds breath...blush...gasp!...ack!...hrmmmm
Re:GCC will live (Score:2, Insightful)
Re:GCC will live (Score:2, Insightful)
GCC can't generate 16-bit code (Score:5, Informative)
Fortunately, GCC compiles on just about everything with a CPU
"Unfortunately," last time I checked, GCC doesn't generate code for 8086 or 80286 processors, only i386 and up, so you can't build an OS that's backwards-compatible with legacy 16-bit apps [freedos.org] with GCC.
Re:GCC will live (Score:3, Insightful)
Payware compilers have some relevance for code that is payware itself. However, in the "cheap" or "give me the code or give me death" camps, the cheaper and more open options will still win out.
If performance were really so paramount, the Intel architecture itself would have gotten dumped a long time ago.
Ummmm... didn't they design the chip (Score:5, Insightful)
Intel is happy that their compiler can beat another compiler?
I'd hope so... They designed the damn chips, had a head start, have cash money to buy a few smart compiler dudes
It is interesting to see Intel pick on GCC. They are in the CHIP BUSINESS... A compiler (any compiler) helps them.
You'd think THEY would be the ones to release a compiler into open source so they could get the rest of the world looking at how to do even more optimizations for their chips.
GCC has been out there for well over a decade. Open to anyone to improve
Intel could show us all how to make a better compiler. Open up their source code... but someone might improve on their techniques and that would make them sad. So, instead they berate a compiler that has done them only a service.
Just my thoughts. Yours may vary.
Is that a good thing, though? (Score:4, Interesting)
Is that a good thing? Internet Explorer dominates the web browser market for much the same reasons, whether or not there are better alternatives available. Now we have a proliferation of web sites that only work with IE instead of standard HTML, and all the other well-documented problems.
It would be an advantage for the Linux world if it was easier to port code from other platforms. Most of that code isn't written with GCC, it's written with VC++, C++ Builder, CodeWarrior, etc. If you're going to do this, standards compliance and ease of portability are very important.
I don't know how good GCC is these days; it used to have quite a good reputation for standard compliance and quality of generated code, but that was a couple of years back. If it hasn't kept up -- I said "if", because I don't think this article demonstrates that either way -- and the Linux community religiously stick with it based on philosophical arguments rather than technical merit, surely they'll just be shooting themselves in the foot?
-march=i486 -mcpu=i686 (Score:5, Interesting)
The hobbyist licence is free.
But it prohibits selling copies of the compiler's output. Thus, if you make a CD of your software, you probably can't even sell it for $5 to cover duplication costs. Plus, such a restriction is incompatible with the "no restrictions on selling" language in the GNU GPL.
Although aren't most Linux distros only compiled for the i386 anyway, ignoring what i686 optimizations exist in GCC?
AFAIK, the Linux distributions are compiled with something like -march=i486 -mcpu=i686 which means roughly "Use only those instructions available in i486 and up, but schedule for i686." (Source: GCC 2.95.3 docs, info gcc invoking submodel i386)
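For the curious, that convention amounts to an invocation along these lines (file name illustrative; this is the gcc 2.95-era flag spelling, and later gcc releases renamed -mcpu to -mtune):

```
gcc -O2 -march=i486 -mcpu=i686 -c foo.c
#   -march=i486  emit only instructions every i486 can execute
#   -mcpu=i686   but schedule them for i686 pipelines
```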
Re:GCC will live (Score:5, Insightful)
One should well expect a CPU vendor's compiler to do better than something more general. The fact that we're not all rushing to Intel is not hypocrisy.
You merely have an overly simplistic view of the situation.
Not open source (Score:4, Insightful)
Q: Why do people use MS-Office?
A: Because it's there.
Q: Why will people use GCC?
A: Because it's there!
Same concept, really. Most Free Software will continue to be built with GCC until Intel releases this compiler under the GPL, performance notwithstanding.
Re:Not open source (Score:2, Interesting)
Being a user of a special compiler myself (SDS's cc68000), I think that this kind of compiler has a niche market, and the good old GCC is still the most widely used, and probably will be for ages.
Although, if this new Intel compiler does ELF, I can see MandrakeSoft or RedHat building their RPMs (glibc too - yay!) with it and claiming they are faster than the competition
I talked with them about this... (Score:5, Interesting)
I imagine that similar logic applies to their compiler: they give it away for free (binary version, so they can control it), but build in the hooks that make it work faster with their newer chips than with the competition's, while at the same time encouraging people to write more CPU-intensive programs because they have the power to do so.
Ultimately, they succeed at their real goal: to sell more chips. By the way, AFAIK, Intel still gives away its compiler in binary form, though only for Windows. Of course, the last time I checked was a year and a half ago...
How so? (Score:2)
1) They make money off people who can afford it
2) They make money off people who can't, and thus need to buy a faster PC.
Re:I talked with them about this... (Score:2, Interesting)
Linux version is free for non-commercial use.
Plus, it works similarly on Athlons
Athlons benefit from the better alignment, better scheduling, better cache utilisation, SSE & MMX use, and vectorizing of Intel's compiler compared to the MSVC compiler. When it is compared with gcc-produced code, there are also benefits of better register utilisation and scheduling (MSVC scheduling is OK, gcc's is awful.) The reason Intel's compiler can produce better code on a non-target processor (Athlon) is that the Intel P3 architecture and the Athlon architecture are more similar to each other than they are to the original Pentium Pro, which is the target processor for MSVC 6.x, or the Pentium (target for egcs, later merged into gcc.)
The new intel compiler used to be Kai C++ (Score:5, Informative)
Kai and GCC are very similar in concept, except that Kai was a bit of a "higher end" compiler. Think of GCC being the Toyota of compilers, and KCC being the Lexus of compilers. You may pay the extra bucks just for the optimization/cross-platform abilities that Kai has - but most normal folk don't need it.
Unfortunately, Kai got purchased by Intel, and (from what I see on their site [kai.com]) they seem to be dropping the other platforms to support only Intel. The Intel compiler really is the Kai compiler, but only for Intel. In fact, Kai (Kuck and Associates Inc.) is now part of Intel. Personally, I think this sucks, since Kai really is a superior product compared to any other C++ compiler out there, if you're willing to pay the extra $$$...
The end of gcc 'cause intel's compiler is faster? (Score:3, Insightful)
Plus, every time benchmarks showed that a closed-source product was faster/better than the free software counterpart, the open source community worked hard to improve its champion.
gcc is dead, long live gcc
Re:The end of gcc 'cause intel's compiler is faste (Score:5, Insightful)
Re:The end of gcc 'cause intel's compiler is faste (Score:3, Insightful)
We routinely beat the system vendor's offerings on benchmarks and (more importantly) real programs.
And we went broke a decade ago, oh well. Compilers became a commodity and we didn't figure out the consequences in time...
Re:The end of gcc 'cause intel's compiler is faste (Score:5, Insightful)
This sounds like a somewhat biased comparison - even though I think that Intel's compiler is indeed better at x86 optimization - most gcc developers would confirm this...
gcc is "good enough" (Score:5, Insightful)
I am sure that Intel's compiler is better than gcc at crunching numbers; in fact, it's probably significantly better. But my guess is that for most folks gcc is "good enough" at a much lower price.
The commercial compiler companies have been trying to rid the world of gcc for quite some time now, and yet, come the close of the day it's the commercial companies that are getting closed down or bought out, and it's gcc that continues to gain users.
Now, if Intel were to allow you to use their compiler for free (even without source code) for commercial purposes, then perhaps I would start to worry about the future of gcc. But that isn't going to happen, and gcc will continue to chug right along.
Re:gcc is "good enough" (Score:2, Informative)
I'd say since there has been a lot of time invested in GCC [and it's vastly more portable], if the Intel compiler was made open source the GCC dudes would just try to merge the Intel optimizations in.
Keep in mind that GCC doesn't just compile for x86 processors [it's only Intel that does]. So even if Intel releases their compiler for free [open source style], that still won't kill GCC.
Tom
Is it surprising? (Score:5, Interesting)
The interesting thing is that the Intel compiler's code ran at 'virtually identical' speeds on an Athlon.
Re:Is it surprising? (Score:5, Insightful)
Re:Is it surprising? (Score:5, Insightful)
(although, a lot of Intel's optimizing is probably due to their knowledge of the arcane, baroque, and just plain stupid x86 architecture, and thus would not be applicable to saner CPU archs like... virtually anything else currently available.)
Re:Is it surprising? (Score:2, Informative)
That is a lower-level backend issue and won't seriously improve the other backends at all.
The OP has a point though, by upgrading GCC instead of making their own compiler more people will have access to a compiler that makes code tuned for their processor better.
For instance, I have a PIII [I don't, but let's say I do]. I write code and build it in GCC. I go out and buy a PIV [roughly the same clock rate]. I notice that my code is not significantly faster. I get pissed off...
However, I buy the PIV and tell GCC to use PIV specific optimizations, my code turns out faster and I am happy.
Tom
Re:Is it surprising? (Score:3, Interesting)
P4/1.7 +26%, P3/866 +23%, Athlon/1.2 +16%, AthlonXP/1.2 +19% (due to SSE).
Re:Is it surprising? (Score:3, Insightful)
Check out the gcc HEAD branch, and try something along the lines of
-O2 -march=athlon -mcpu=athlon -msse -m3dnow -mmmx
Re:Is it surprising? (Score:3, Interesting)
But GCC's free... (Score:2, Interesting)
While software firms and organizations developing mission-critical programs may decide to switch to icc, the fact that gcc is free will help it to remain popular among hackers and other budget-constrained users. Moreover, most of the source code programs one downloads for Linux are designed to be compiled with gcc.
Umh, this isn't really surprising (Score:5, Interesting)
What does their 'kernel' mean (Score:2, Interesting)
some code blocks they've called 'kernels' for some obscure reasons.
That makes the test useless - if they'd compiled some Linux kernel with, say, both GCC and MSVC and their own compiler, that's where the real results would be derived from. Needless to say, they couldn't do that.
And their (surely optimized) "kernel"s run faster compiled by their own compiler. Bah! No surprise.
Conclusion: this is an unfair comparison, and the results of the test say nothing.
Re:What does their 'kernel' mean (Score:2)
Mozilla w/ Intel compiler (Score:3, Insightful)
evaluate Intel's C Compiler [mozilla.org]
wtf? (Score:5, Interesting)
Could this be more full of itself? Somehow I have trouble accepting sweeping generalizations about the fate of compiler technology from someone who obviously dropped out of a creative writing program at some third-rate school.
Re:wtf? (Score:2)
Performance is important (Score:5, Interesting)
I know gcc3 is better because it supports more platforms, but what about speed improvements? To have a fast inner loop in a Linux application I must code that loop in assembler. That is a problem for someone who's creating a computer game.
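To make the complaint concrete, here is a hedged sketch of the kind of inner loop in question - a plain C dot product where the generated code, not the source, decides the speed:

```c
/* The sort of hot inner loop a game might otherwise hand-code in
   assembler; how fast this runs is entirely up to the compiler's
   code generation (scheduling, register allocation, SIMD use). */
float dot(const float *a, const float *b, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        sum += a[i] * b[i];
    return sum;
}
```

Comparing `gcc -O2` output against a hand-written assembly version of the same loop is exactly the experiment being described.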
Re:Performance is important (Score:5, Informative)
Re:Performance is important (Score:3, Informative)
Really, that way you and everyone else could benefit.
Re:Performance is important (Score:2)
Take over? I think not... (Score:5, Insightful)
geometric mean performance on multiple kernels compiled through it reached 47% improvement over GCC.
The testing didn't involve compiling kernels at all.
The 47% performance improvements were on a numerically intense benchmark program.
The preferences of the article's authors are pretty clear:
"Nonetheless, the magnitude of the performance differential in numerically intense applications is such that only the most dramatic sort of improvement in the long-awaited version 3 of the GNU C/C++ compiler will stay the hammer that drives a stake through the fibrillating heart of the aging technology behind the GNU C compiler. May it rest in peace."
These are not the words of objective observers, and such comments strike me as being quite irresponsible.
Re:Take over? I think not... (Score:5, Insightful)
Next time the zealot in you decides to come raging out, take a deep breath and count to 10. Think about how this news might be good for the open-source community before you begin bashing wantonly.
Re:Take over? I think not... (Score:3, Insightful)
You're comparing apples and oranges when you try to argue that since Windows people use closed-source compilers, then Linux people will also. These are completely different groups of people, and I suspect that plenty of people in the Linux community will start using a closed-source compiler when they pry the gcc source from their cold, dead hands.
* No, they didn't compile kernels. They compiled (and tested) ON multiple kernels. Don't you feel silly now, contesting so loudly a point you misinterpreted?
Christopher wasn't the misinterpreter, Slashdot was. Did you read the text he quoted? "the new Intel C/C++ compiler's geometric mean performance on multiple kernels compiled through it reached 47% improvement over GCC." You cleverly omitted the bolded text.
Next time the zealot in you decides to come raging out, take a deep breath and count to 10. Think about how this news might be good for the open-source community before you begin bashing wantonly.
Chill. Chris wasn't being a zealot, he was simply offering counterarguments to the ridiculous claim that Intel's closed-source, x86-only, C/C++ only (I bet) compiler spells death for GCC.
Re:Take over? I think not... (Score:2)
However, GCC is universal. It runs on everything, targets anything and costs nothing. Nothing in terms of both Beer and Speech.
Kernels === benchmarks (not Linux) (Score:3, Informative)
They have a suite of key loops, each important to someone (fft, lloops, matrix mul, 3d geom, etc..), and determined that in general Proton (the internal codename) is much faster than GCC. Of course it is.
If only it were free. Unfortunately, Intel built that compiler on several other companies' IP and can't release the source.
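For readers unfamiliar with the term, a "kernel" here is just a small hot loop. A minimal sketch of one of the kernels named above (a naive matrix multiply, with flat row-major arrays) might look like:

```c
/* A benchmark "kernel" in this sense: the classic triple loop of a
   naive n-by-n matrix multiply. Compilers (and BLAS libraries)
   compete on exactly this kind of code. */
void matmul(int n, const double *a, const double *b, double *c) {
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double sum = 0.0;
            for (int k = 0; k < n; ++k)
                sum += a[i*n + k] * b[k*n + j];
            c[i*n + j] = sum;
        }
}
```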
Who owns those architectures these days? (Score:2)
You might find Alpha and StrongARM on the Intel list...
It probably is faster, but so what? (Score:3, Interesting)
GCC: when was the article written? (Score:2, Interesting)
In fact, GCC3 came out some time back, and it seems typically (to me) to perform around 20% more slowly than Intel's thing (compared to 40% for older gcc, as the article says). That's not so bad, imo.
why (Score:2)
I don't get it - why is that important?
But that's a minor issue; the important part is that I'm convinced - I am throwing my gcc out the window and paying half a grand for Intel C/C++... (somebody needs to come up with a damn "roll eyes" smiley)
Re:why (Score:2)
Personally, I think open source will only have arrived when there's a *choice* of equally good free compilers available, and I can use any of them to compile the Linux kernel and NetBSD userspace together for my Psion. That's where the portability aspect really comes in - it's "theoretically possible" now with GCC, so come on Intel, catch up!
Oh, and the smiley you seek is
;8)
Re:why (Score:2)
I don't get it - why is that important?
Warnings should never be swept under the carpet, they should always be dealt with. They have a habit of biting later on, particularly when switching between architectures.
One thing I always hate about linking with many third-party libraries is the way they often require dubious casting which generates warnings. I like nice clean code, and I like my compilers to wear jackboots when dishing out warnings.
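As a hedged illustration of the kind of dubious cast meant here (the function name is made up): on a 64-bit target, stuffing a pointer into an int silently drops bits, and the warning is the only thing standing between you and a bug that surfaces when you switch architectures:

```c
#include <stdint.h>

/* Wrong, and worth the warning: (int)p truncates on LP64 targets.
   Right: uintptr_t is guaranteed wide enough to hold a pointer. */
uintptr_t pointer_bits(const void *p) {
    return (uintptr_t)p;   /* clean: widths match, no warning */
}
```

Building with `gcc -Wall -Wextra -Werror` is the "jackboots" option: the truncating version fails the build instead of limping along.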
Cheers,
Ian
Does this mean? (Score:2, Insightful)
Re:Does this mean? (Score:4, Informative)
I'd agree with the gist of what you are saying, but some of your bullet items are just oversimplified, overstated bollocks.
Re:Does this mean? (Score:4, Insightful)
Awww, did you get your widdle bubble burst?
Your bitterness is unbecoming. Slamming the good work of people in the Linux, Mozilla, and GCC projects because of your sudden realization that all of your juvenile misapprehensions are not 100% correct is a mark of poor character.
Re:Does this mean? (Score:5, Insightful)
Sure, GCC may not be the best compiler on the face of god's green earth, but to me, if I was running a project with tight money constraints and had to choose between a $499 and a $0 compiler, and the only difference was that the expensive one was faster, then unless I absolutely had a better reason for paying the $499, I would have to choose gcc. For one, gcc is well known, in common use, and everything works pretty well on it. I am not saying it doesn't have its caveats, but to me, compile speed means diddly. Personally, I would rather take the slow one because it gives me more time to drink my coffee while waiting on a compile!
Yeah, Linux crashes... so does Windows, Solaris, FreeBSD, OpenBSD, z/OS, DOS/VSE, BeOS, Windows CE, PocketPC, PalmOS.......... get what I am getting at? And as far as covering these crashes up, I JUST don't see this happening, at least in the Linux realm. Oh, and anyone trying to contradict what I am saying about other OS's crashing needs to be smacked in the head with a massive Clue stick. There has almost never been, and probably never will be, an uncrashable OS. There has almost never been a completely bug-free piece of code. Sure, some others in the list are better and crash MUCH less than Windows (Linux fer sure, as well as about 90 percent of the list), but to say that people are trying to cover Linux crashes up is BS!
Mozilla not delivering? Where the heck have you been? Mozilla, as of late, is TONS better than it was. And with the earlier post that Mozilla will also support anti-aliased text, well, besides Konqueror, I see no one else that competes, and surely not that bug-ridden, crash-prone piece of filth called Netscape 6 (or 4.78 for that matter....). I know it's been updated since 6.0 came out, but heck, it was based on a sub-point-1 release of Mozilla, and even Mozilla was better than Netscape 6 when 6 came out! Plus there's that AOL/Time Warner-FILLED bookmark list that installs with it and
Loki is dead. Long live Linux. Listen, most people are not like us. They use their computers as tools and not gaming machines. Joe Sixpack will ask if it's being made for PS2, not for the PC. Now, that's not to say gaming on Linux isn't important. It is, but just because Loki has died doesn't mean there won't ever be games on Linux, commercial or free. There are lots of great free games for Linux: Armagetron, GLTron, Tux Racer, Rocks and Diamonds, Maelstrom, and the list goes on. Sure, they might not be 3D shooters, but then there's Quake for that. Also, the time is ripe for a new gaming shift. MMORPGs and cookie-cutter 3D shooters can only last so long before something, anything, comes to take their place. Right now is the time for a truly innovative game to come out and steal the show. Oh, and Wolfenstein and Doom 4 won't be it... they'll just be another 3D shooter.
Slashdot is censoring... well, I doubt it. They aren't censoring. I can post anything I want under each new topic. If you're talking about story submissions, well, when you are on the other end getting all of the submissions, and a vast number are either duplicates, trolls or worse, then you start to develop a finely tuned BS detector that can sometimes be faulty. You can usually filter out most BS, but sometimes some falls through and gets posted. Rob, Jeff, Chris and Neal are human, you know.
Re:Does this mean? (Score:4, Insightful)
Like "GCC isn't the world's best" (best at what, pray tell? speed? ubiquity? price? portability?)
Like "everyone believed that if you took the money out of programming somehow you magically got software that was faster/better/more innovative" (like those Open Source guys who said it was OK to make money?)
Like "give me stuff for free and feed me a philosophy that lets me pirate everything" (conflating piracy with free software, not recognizing the legitimate desires of people to legally have more control of what they get, and legally paying less?)
;)
Open source is not a panacea. It's a way of licensing technology whose strengths and weaknesses will be more and more recognized over time, but whose pre-eminent virtue of providing greater freedom will offer increasing benefits as software monopolies continue to increase their control and prices so that they can keep their share price going up.
Open Source also has one other long-term, difficult to refute benefit. The fact that Microsoft can't forever grow the software market and must illegally leverage its way into adjacent communications markets (MSN, VoIP), media markets (Slate, Corbis) and consumer services markets (Expedia) is still mostly being glossed over as premature. But it is not being ignored.
--LinuxParanoid, who doesn't think these Linux guys are paranoid enough...
Re:Does this mean? (Score:5, Informative)
As a sysadmin who often compiles packages, but doesn't write them, all I care is that
Mozilla (the shining jewel of Open Source)
BS. Konqueror is better, and KDE and Gnome the shining jewels, after Apache of course. (Sendmail? Bind? Proftpd? PHP? - not jewels perhaps, but great workhorses.)
Loki ... listened to the Linux zealots and got screwed
So sad, Linux may never be primary platform for gaming. I could care. And my Toyota will never enter the Indy 500.
Slashdot ... dream is gone and good riddance.
/. works for me - what gets modded up is generally what I end up agreeing is most worth reading.
If you don't like the moderation, set up your own board and invite in only folks you agree with.
you just don't get it (Score:5, Interesting)
Now, as for GNU C and benchmarks, GNU C has never produced the fastest code on any platform. Unless you lived under a rock and never did any high performance computing, you'd know that. And if you took the time to look at the GNU C documentation, you'd also know that this is no accident. But to most GNU C users, this fact never mattered. GNU C generates decent code and it has many other attributes that make it the "best" compiler for many applications.
You see, there is another misunderstanding that you and Bill Gates share: you think that there is a single "best" solution to everything. In real life, there isn't. What is "best" for you isn't necessarily "best" for me, and there may well be no way to reconcile our conflicting needs in the same piece of software.
I do agree that Slashdot moderation tends to exclude voices like yours, and I think that's wrong. Why? Because they should stay visible, so that one can point out how uninformed and confused you actually are.
Speed is everything?.... (Score:3, Interesting)
Also with optimizations follows compiler bugs (i.e. the compiler generates faulty code) that are very hard to find especially if you don't have the source to your compiler.
Finally, I think Intel just wants to capture customers, as they did with their compilers in the early '90s (i.e. PL/M and Intel C). It's just not in their interest to be portable. With all this in mind, such compilers could be good for a specific project, but I'd be careful about building anything on highly optimizing compilers in general rather than on a sound design.
gcc is more than a compiler (Score:2, Informative)
What's more important is that gcc provides features that are absent from all the other compilers: gcc works on virtually any architecture, offers a stable (in its functions) platform and a unique interface to low-level features (such as building calls dynamically), as well as very good extensions. It demonstrates how free software can offer a standard, and not be afraid of 'innovating.'
So, Intel/Dell/Sun's compilers may have their place, but they don't play in the same category as gcc. They're useful for dedicated performance apps, or things like games.
Well... (Score:2, Informative)
But isn't the main strength of gcc its cross-compiling abilities? I've never heard of any other compiler supporting so many platforms.
That's ridiculous (Score:2)
gcc 3 vs gcc 2 (Score:3, Interesting)
What are they targeting with this release? What new big (and important) features are in it? And, in view of the article, can we expect speed increases, or is it mostly about new features?
In any case, I am not stopping using gcc just because some closed, expensive thing is much faster (even if it is ten times faster), and I expect a lot of people here feel the same way. Ok, I might consider if it was ten times faster :)
Apart from the whole OS "cult" there's also another reason (and I am sure many will disagree with me here): there is such a thing as "fast enough", and for the vast majority of things I use my computer for, that has been more than achieved. Don't get me wrong, I love tweaking, optimizing, overclocking and generally pushing the hardware as far as it will go, by any means handily available (including keeping a voodoo doll of my PC in the freezer), but I've found that I do this more for the process than the end result. Buying and installing a new compiler (which you know nothing about, in terms of how it works) just doesn't seem to be all that much fun. (Besides, I am sure my Athlon would never speak to me again.)
Good benchmarking, poor analysis (Score:5, Interesting)
No, what's great news is that Intel's compilers are available now on Linux. So an ISV like Red Hat can compile the OS (or specific math libraries) on them for either real-user or benchmarking benefits.
"Driving a stake through the heart" of GCC is a gross exaggeration, given the ubiquity, freedom, and free beer nature of GCC. "Giving GCC a kick in the pants" might be more accurate. And a good thing, too.
--LP
Re:Good benchmarking, poor analysis (Score:5, Informative)
I totally agree. Unix has always been popular in scientific computing and engineering, but I know of several people switching to Windows NT because
a) Intel systems are extremely cheap (compared to architectures optimized for number crunching like the RS/6000)
b) compilers available for NT produced MUCH faster code, e.g. Digital Fortran. (Yes, I know
When it comes to numerical simulation, run times in the order of weeks are not unusual, so a performance penalty of 50 percent is simply unacceptable.
So this may turn out to be a big win for linux in the scientific computing area.
My own experiences (Score:5, Insightful)
A while ago I tested the Intel compiler on some graphic-stuff I've been coding (using Visual C++). I got between 20-30% performance increase. The compiler was horribly slow though, MSVC was probably 4 times as fast compiling the entire project.
I'm using GCC 3.0.x for Game Boy Advance development (ARM7TDMI CPU). It works fine for me, but the vendor compiler generates code that's between 30 and 40% faster (and smaller), or something like that; I don't have the exact numbers right now. But as many others have pointed out, GCC is free; other compilers are not.
GCC is excellent for multi-platform development and cross-compiling. Using the same compiler for Windows, Linux, *BSD, Irix, Solaris and Gameboy Advance is a huge advantage.
Speed (of the generated code) isn't always the issue. At work we always compile and run with full debug information and no optimization (except for tiny, speed-critical parts and very, very thoroughly tested libraries). The code is used in weapon systems (we ship the entire system, including the hardware). Core dumps are very nice if you want to find out why something crashed :)
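A sketch of the build-and-autopsy cycle being described (program and file names are illustrative):

```
gcc -g -O0 -o ctl ctl.c   # full debug info, no optimization
ulimit -c unlimited       # let a crash leave a core file
./ctl                     # ...crashes, dumps ./core
gdb ./ctl core            # post-mortem: backtrace, variables, the lot
```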
Sure but... (Score:2, Interesting)
Tom
Here we go again (Score:4, Insightful)
GCC will still be in very wide use, since it comes with Linux and it does a quite decent job. But anyone who really cares about performance will seek out the better alternatives, like Intel's compilers. This is not news, and it's particularly not bad news--we all want freedom of choice, and the more genuine alternatives everyone has, the better, right?
There are other excellent compilers (Score:5, Interesting)
Until there is only one chip left to support (Intel is fast working on it, with the support of turncoats Compaq, HP and others), GCC will be a viable option. GCC is a great "cross platform" compiler that works for much of the currently written open source code base. You can get that compiler to work for many different OSes and archs.
In the end, remember apache wasn't the fastest web server, but it was the "most correct" and it was free! It really doesn't matter how well your C compiler works if it won't compile your code or run on your system.
Interesting results (Score:3, Interesting)
I'm surprised that nobody has commented on what might be the most interesting result on these tests - that the same code produced by the same compiler runs 10% faster on Windows XP than on Linux (2.4.10, according to SuSE's description of 7.3). Sure, the "kernels"[1] used by the benchmark might not be as representative of real life as we'd like, but this should still be cause for concern. Kernel developers have flamed each other endlessly over smaller differences on less comprehensive benchmarks between the Arcangeli and van Riel VM systems. Do we have to go through a Mindcraft-like period of denial before anyone starts taking such a result seriously?
[1]The objections about the "kernels" used in the benchmarks not being the same as the "kernel" with which we're all familiar only demonstrate the ignorance of people who don't know that the scientific programming and benchmark communities have been using the term just as long as the OS community. Their usage may be different, but it's just as valid.
Re:Interesting results (Score:2)
Not trying to come up with excuses for Linux just yet, it's just the first thing to jump out at me.
All in all, they have very little information about the actual benchmarks in that article. But I expect we'll see more people doing these "head to head" comparisons now that both OSes can use the same compiler. Kyle? Tom? up to you guys :)
Re:Interesting results (Score:2)
IIRC, 2.4.10 was the first 2.4 kernel with the AA VM. Yes, it was a mess, but I don't think advances since then explain a 10% performance difference for this type of benchmark.
I also don't know for sure that OBL used 2.4.10. They said they used SuSE 7.3, and SuSE's page for 7.3 says it uses 2.4.10, but that doesn't mean the benchmark used that version. Overall, there do seem to be a few things about the benchmark that give legitimate cause for suspicion. For example, I couldn't find the actual benchmark programs, or even a description, and why the hell were they testing on an HP Omnibook (700MHz P3) instead of a more realistic desktop system, and why did they build the code for a WinXP system on a Win2K system? Very odd.
Re:OS is irrelevent. (Score:2)
...except that they didn't, in this case, and the article - which you obviously didn't read - even suggests some reasons why that might not always be the case.
For mathematical applications (Score:3, Funny)
However, I will find a use for this information: I will try Intel's compiler and compare it to GCC.
Very minor changes to the code of this kind of high-performance application can result in very big speed-ups, with any compiler. It would be interesting to see some real-world problem (a PDE model or something) based on, for example, BLAS (Basic Linear Algebra Subprograms, www.netlib.org) being computed with gcc and icc, to see the "real" difference.
A possible reason for the 47% improvement (Score:2, Informative)
It is quite possible that a similar improvement could be achieved by GCC in floating-point intensive code simply by supporting SSE2.
Re:A possible reason for the 47% improvement (Score:2)
GNU is no longer the driving force behind free sw (Score:4, Interesting)
It seems to me that GNU and the FSF have become a bunch of bureaucrats and politicians who forgot what free software is all about.
Today, the real dynamic and successful projects are mostly non-GNU: KDE, Apache, Linux, Wine, etc.
Today, GNOME is the only GNU-project that can be called a bit dynamic, and I think this is because of a lot of 3rd party involvement via the GNOME-foundation and the fact that RMS is not the final authority in the GNOME-project.
What breakthroughs have there been in RMS-led projects in the last - say - 5 years? I can't think of any.
Of course, gcc is still the best open-source compiler we have, and no alternative is in sight (unless Intel open-sources theirs, which is highly unlikely), but I see it as a weak spot in the free-software world. How long have we been waiting for a decent C++ compiler? Maybe I'm paranoid, but maybe RMS is not very enthusiastic about C++ support because GNOME would look even worse in comparison to KDE once a good C++ compiler is available?
I think we need a lot more non-GNU involvement in gcc (a gcc foundation?) to get some fresh blood into this project. And if RMS doesn't allow that, we need a fork.
But of course, that's just my opinion, so flame me.
Re:GNU is no longer the driving force behind free (Score:5, Informative)
So? The GNU project does not have a mission statement that includes "produce major breakthrough every couple of years". The FSF's top level page has a couple of links that are essential when trying to evaluate its success: why we exist [fsf.org] (as relevant as ever), what we provide [fsf.org] and where we are going [fsf.org].
But of course, that's just my opinion, so flame me.
I rarely flame people for their opinions. I occasionally flame people who clearly haven't bothered to try to understand what they're talking about and who don't let facts get in the way of their opinions. You seem to fit that category nicely. In particular, your comment "I think we need a lot more non-GNU involvment for gcc (gcc-foundation?) to get some fresh blood into this project. And if RMS doesn't allow that, we need a fork." shows you to have little understanding of gcc's development process. Gcc's development process was broken open in 1999 (by the FSF effectively admitting the failure of its cathedral-style development model of gcc 2.8.x and embracing the bazaar-style development model of the EGCS fork) and has an effective foundation (in the form of the GCC steering committee [gnu.org]), as anyone who has read the GCC FAQ [gnu.org] or is familiar with gcc's history knows.
Re:GNU is no longer the driving force behind free (Score:5, Interesting)
He actually makes an interesting point. I mean, they released a g++ 3.0 that would not correctly compile KDE, and it appears that the associated ABI bugs are not going to be addressed until 3.1. They've dropped the ball badly enough that gcc has been forked twice over the last few years. Up to and including 2.95.x, there were a lot of very obvious and annoying bugs in g++ (no namespace std, no stringstream class, no ios_base class, etc.). I don't buy any conspiracy theories, but it doesn't seem that the gcc project has historically given C++ a very high priority.
they got theirs (Score:2)
Supported Platforms (Score:5, Insightful)
Re:Supported Platforms (Score:3, Informative)
Lee
Read the fine print (Score:3, Insightful)
So the compiler produces code that is only optimized for Pentium III and Pentium 4 CPUs. That makes it less than a production-quality compiler, because it can only produce optimized code for specific processors within an architecture family.
I really don't see what the big deal is. If I wrote a program in assembly to take advantage of these extensions on a Pentium III does that mean I can get a story
Re:Read the fine print (Score:2)
Think about it. The technology is there, so why not use it? It's the same thing I tell my programming teacher when she gets pissed about my using strrev(char *string); instead of writing my own string reverse function in C++.
Hell, a lot of shitty compilers aren't even optimized for MMX yet, and that set of instructions DOES exist in every processor out there now.
I'd assume that if you're using the Intel compiler on an AMD chip, it's not going to try to optimize for P3 and P4 instruction sets. So why not use an AMD compiler for compiling to 3DNow! and the like?
Intel isn't obligated to produce a compiler that is faster for *every* architecture in existence. They're only obligated to make a straight compiler that works with it all... if it happens to like its own processors better than others, that's called the left hand working well with the right. It would be pretty hard for Intel to make their compiler work as well with 3DNow! as it does with SSE and SSE2.
real question is C++ conformance + compiler speed (Score:2, Interesting)
Is what they measure important to you? (Score:2)
The best performance measure is running your code on a variety of systems. Because most people can't do that, it may make sense to look for standard benchmarks that look like your code, and then make analogies based on the similarities of those loads to what you want to do. It's critical to pick the right benchmarks to have a good analogy; if you're interested in 3d performance, it doesn't make sense to make performance comparisons based on the number of rc5 keys per second.
Unfortunately, the Open Magazine article doesn't give any information on what exactly their tests are doing. So it's not possible for you to figure out which, if any, of their tests will be analogous to your code. :-(
As I've mentioned before [slashdot.org], I'm mostly interested in integer performance. From what I've read about the Intel C compiler, its strength is floating point. If I did a lot of FP work, I'd be sending Intel a credit card number about now, and I imagine many FP people will.
But for integer work, I think it's not so clear. Andreas Jaeger has a nice page benchmarking versions of GCC [www.suse.de]. On Athlon processors, SPEC CPU2000 [spec.org] CINT2000 [spec.org] base looks like it's around 10% faster when built with the Intel C compiler than with GCC 3.0.1. I think I can live with that.
It's a lot easier to modify gcc than icc too, and yes, I really do hack on gcc from time to time.
Better than GCC but not better than VC++ (Score:2)
However, *I* am surprised that Intel's compiler *is* in fact better than Microsoft's C++ compiler (which is a good compiler, if you've used it). After all, MS has a whole army of coders working on their compiler (and has for 15+ years). MS should know all too well by now how to write an optimizing compiler -- bug-free, with support for the latest standard, and on and on.
Now *THIS* should be the news of the day not Intel vs. GCC.
fibrillating hearts (Score:2)
"Nonetheless, the magnitude of the performance differential in numerically intense applications is such that only the most dramatic sort of improvement in the long-awaited version 3 of the GNU C/C++ compiler will stay the hammer that drives a stake through the fibrillating heart of the aging technology behind the GNU C compiler."
If I read that correctly, it means that they did all the tests with gcc 2. I think gcc 3 has not been optimized as much, so it would probably fare worse, but it would still be interesting to see how they compare.
Warning (Score:3, Interesting)
What good is a fast running kernel, when it has more bugs than something from Microsoft?
AMD should support GCC enhancements (Score:5, Interesting)
AMD should supply GPL'd contributions to GCC that optimize code for its Athlon processors. This would give them a relatively cheap way of putting out a competing compiler to Intel's proprietary version since it would leverage all the work that has already been done by the GCC group. It could also make them the preferred chip for open source OS's by ensuring that Athlons run GCC code faster than any other processor. This would be strategically very valuable at a time that they are about to push their new 64 bit instructions while Linux is simultaneously becoming viable/validated as an enterprise platform. Since GCC is not limited to Linux, these performance enhancements would also translate into gains for non-open source development projects as well.
All in all, it seems that this could be a great way for AMD to give developers a way to produce AMD-optimized code while at the same time encouraging the use of their new 64-bit instructions in the booming open-source OS server/workstation market.
Can someone answer... (Score:2)
OK, it's interesting as an experiment in Intel-specific compilers. As a baseline when testing out any new x86-compatible processors, it's probably a critical tool. But otherwise?
Seriously: Why even bother?
As for proclaiming GCC dead...please. Speed benchmarks and compilers are notorious PR pieces. I can't think of a better example of pure sensationalism. Disagree? Prove me wrong.
Patents (Score:2)
this isn't exactly news (Score:5, Insightful)
I do a lot of high performance computing with GNU C. It doesn't matter to me how fast the Pentium works with some oddball proprietary compiler--the performance I get with GNU C is the performance an Intel-based machine has for my purposes. If that's less than optimal, that just makes Intel's platform less attractive. If Intel wants to do something about that, they should invest in improving the GNU C backend.
This is great news! (Score:3, Insightful)
Look, we only use a handful of Linux machines, so we aren't likely to use this. However, if I was rolling out 1000 workstations in my enterprise, and we were tweaking/tuning the OS before rolling it out, recompiling with this would work.
Assuming Red Hat makes compiling under the Intel compiler a requirement for inclusion in their distribution, they're in a great situation.
Why not compile everything with an optimized compiler? You still have the freely redistributable GCC for compiling open source code, but for stuff that is being downloaded in binary format, wouldn't you want it to run faster?
Does it compile quicker? Who cares -- when you are doing software development, you want something that compiles quickly, but when you are rolling out a production environment, free runtime speed is what matters.
Look, your precious GCC is terrific: it's a flexible, cross-platform compiler. But it's always been weak on performance. The GCC team has always made it clear that the biggest problem ISN'T processor-specific tweaks, it's that many general compiler improvements are patented.
GCC is a baseline, things should compile with it. Things should also compile to the POSIX standard. That doesn't mean you don't add tweaks on the platforms that you support and set it up so that
Give me a break. I realize that many of you just use Linux to configure and tweak Linux to the point that you can post on Slashdot about how you can do anything with Linux. However, those of us that have included it as one of our tools to solve problems can use ANY tools that are made available to us.
If I can get a 47% performance improvement by recompiling some of my applications, terrific. Replacing the server may be cheap in terms of hardware (a few grand for a new server every 6-12 months isn't bad; it's one of the few reasons to use x86 servers), but it takes time. Building and burning in new hardware is easily 2-3 man-weeks before TESTING even starts (expensive -- look at your salaries and double them to estimate costs to the company); recompiling on your test machine costs just the testing time.
Alex
Optimizations and Patents (Score:4, Insightful)
Get Intel compiler for free (as in beer)... (Score:3, Informative)
65% increase in codesize for the Intel compiler? (Score:4, Interesting)
It contains a shared and a static library, and two binaries. They're full of symbols, so I strip them:
61124 bytes in libcxa.a
49356 bytes in libcxa.so.1
90380 bytes oblcpu_gcc
131736 bytes oblcpu_icc
$ ldd oblcpu_*
oblcpu_gcc:
libm.so.6 =>
libc.so.6 =>
oblcpu_icc:
libm.so.6 =>
libcxa.so.1 => not found
libc.so.6 =>
Ok, so the icc version needs the shared library to be loaded as well.
$ size libcxa.so.1 oblcpu_*
text data bss dec hex filename
22839 3008 124 25971 6573 libcxa.so.1
70563 15860 1923912 2010335 1eacdf oblcpu_gcc
93858 24236 1923768 2041862 1f2806 oblcpu_icc
Codesize for gcc: 70563 bytes
Codesize for icc: 93858 + 22839 = 116697
Hmm, that is a 65% increase in code size! Not to mention the increase in data size (can anybody say 'lookup table' or 'buffering'?)
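For anyone checking my arithmetic, it's just this (figures copied from the size output above):

```shell
# Code-size comparison: the icc binary needs libcxa.so.1 loaded
# as well, so that library's text segment counts toward its total.
gcc_text=70563
icc_text=$((93858 + 22839))                   # binary + libcxa.so.1
echo "icc total: $icc_text bytes"             # 116697
echo "increase: $(( (icc_text - gcc_text) * 100 / gcc_text ))%"   # 65%
```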
Hmm... I wonder if they tried gcc optimizations such as '-funroll-all-loops'. Too bad they didn't provide the source so we could verify the results.
I got a free evaluation CD from Intel with the February issue of "Linux Magazine", so I'll be doing my own comparisons thank you.
It's not 47 percent on my app (Score:4, Interesting)
I tested the Intel compiler against GCC using Robert Hyatt's excellent crafty chess engine and the speedup was only 7%. (Athlon 1.2Ghz)
On a PIII-500Mhz the speedup was only 2.5%
Of course, for other applications results will vary, but for me the Intel compiler isn't worth the money or the effort.
Hats off to the GCC team for building one of the greatest tools of all time. You can't beat GCC for sheer usefulness and ubiquity.
Imprecise floating point! (Score:5, Insightful)
The Intel compiler does not generate precise floating-point code by default!
From the compiler documentation:
Looks like we can't even have IEEE compliance, we can only favor it. More gory details can be found in the manual [intel.com] (warning, big PDF...), but the "optimizations" that shocked me most were:
These are all defaults. Trading precision for speed can be a lifesaver sometimes, but not in numerical analysis!
--
I like canned peaches.
It's the optimiser... (Score:3, Interesting)
A point I'm surprised no-one has made yet - GCC is a great compiler, and its optimiser kicks ass on sensible (read: orthogonal etc.) CPU architectures (Sparc, PA-RISC) and even semi-sensible ones (Motorola 68k).
HP compiles the HP-UX kernel for PA-RISC with gcc, and not their own compiler, because it produces the tightest code there is for their platform.
The 80386 is definitely non-sensible; an ungodly mess, nothing short of Byzantine - 16 different registers, no two of which accept the same set of instructions. 80-bit data formats. 8- and 16-bit legacy modes. It has the dubious distinction of being even uglier than the VAX. Intel would have scrapped the whole steaming turd many moons ago instead of reinventing the 1970s and microcode, were it not for the Wintel monopoly fuelling the fire for faster 80x86 compatibles.
This has chicken-and-egged its way into the open source world - the ultimate reason I'm running Linux on a P3 and not a Sparc, PA-RISC, 88100, MIPS, RS6k or whatever is because of Microsoft; yes, really - the Wintel (or DOStel) hegemony made x86 the best bang-for-buck architecture through economies of mass production, even though it ***sucks***, which is why Linus Torvalds had one as an impecunious student in 1991.
Now, I'm trapped in the Wintel sheep model on a smaller scale - I have a P3 for the same reason that most people have Windows; I'd have a Sparc- or Merced-based Linux box in a heartbeat, but all the Linux software I like to use comes ready-rolled for x86, and no, I don't enjoy typing "make" 15 times just to install instant messaging.
It's hardly surprising that Intel's code optimiser does better on their architecture (including 3rd-party implementations thereof). It's very goofy to try to optimise for x86. I think you'll find the Intel/GCC gap to be a lot smaller on Merced (IA-64), which is a more sensible setup.