Intel's New Compiler Boosts Transmeta's Crusoe
Bram Stolk writes: "Intel recently released its new C++ compiler for Linux.
I've been testing it on my TM5600 Crusoe. Ironically, it turns out that Transmeta's arch-nemesis, Intel, provides the tools to really unlock Crusoe's full potential on Linux." It doesn't support all of gcc's extensions, so Intel's compiler can't compile the Linux kernel yet, but choice is nice.
Take Note that... (Score:1)
Intel's C++ compiler still compiles code to x86. This is really great, considering that the approx. 28% speedup on Crusoe comes without targeting Crusoe's native ISA. I wonder how Crusoe would fare once there is a compiler that builds straight to its native instruction set.
For me, Crusoe + icc + GNU/Linux is a winning combination.
Well, to me, that's a hasty conclusion. The P4 gains 26%, the Athlon XP gains 19%, and the plain Athlon gains 16%.
Re:Take Note that... (Score:2, Interesting)
You're one of those people who just doesn't get the fact that the Crusoe gets speed gains by *not* using its native instruction set.
Clarifying The Comments (Score:1)
That's what I said. My point was that the 28% gain is basically on par with the P4's, and the Athlon gains weren't too shabby either. Meanwhile, we know that current Crusoe performance is pretty dismal compared to the P4 or Athlon, so a 2% difference in performance gain doesn't mean Crusoe performance has been leveraged to a new level.
If the code were compiled to Crusoe's native ISA, we could then see Crusoe's raw power and compare them neck and neck. The story would have been much different.
Note also that I am not a revisionist. I believe the Slashdot community is intelligent enough to figure out what I said.
You still don't get it (Score:2, Flamebait)
Crusoe does cool things because it optimizes, at runtime, the code it is morphing. If you were to run Crusoe code natively, you'd no longer get the optimization benefits, and all you'd be left with is an even slower low-power chip.
Theoretically, you could write a Crusoe-to-Crusoe code-morphing module, but that wouldn't buy you anything more than the x86-to-Crusoe morpher does.
Re:You still don't get it (Score:2, Informative)
Well, code morphing by itself isn't worth the performance cost. For example, compare an Intel Celeron against a Crusoe at the same clock speed. I doubt the Crusoe could beat the Celeron, even with super-optimized morphing that had been running for months.
The problem is that no matter how good the morphing is, it is still "emulation". Whether you use morphing alone or mix it with dynamic recompilation, you cannot beat the real thing running natively. There are lots of examples.
The real power of the Crusoe processor is that it is a VLIW processor, which can jam-pack several instructions into one word. *This* is the real power. Notice that the P4 adopts a similar trick (3 instructions into 1). An intelligent compiler can arrange the machine code so that the instruction bundles are filled very efficiently. Now, say Crusoe's instructions are 32 bits wide and its word is 128 bits: theoretically, you can jam-pack 4 instructions at once, yielding up to 4 times the work per clock. *This* is what I really want to see.
About the power problem: I'm really pessimistic about a processor change that can prolong battery life n times (with n > 2), as Transmeta claims. IIRC, *the* major power drains are the LCD and the hard drive. If those were attacked, I wouldn't be surprised to see battery life prolonged. But recall that before Crusoe came along, a P3 processor consumed only about 2W. What fraction of total laptop consumption was that? Now Crusoe reduces it to a mere 1W -- and *that* was claimed as *the* dramatic power saver. I smell a rat.
I don't want to attack your "belief" as Crusoe adherents, but please understand the underlying problem before you answer.
Re:You still don't get it (Score:2)
That doesn't mean code morphing is a bad idea. It means at least one of the following is true:
If you look at the history of RISC CPUs, the IBM ROMP was not very fast when it first came out. As I recall, it was the first commercial RISC design. That didn't mean RISC was a bad idea (RISC in fact stomped CISC's butt soundly during the first half of the '90s, and I'm not sure the most recent x86 has beaten the cold dead Alpha's corpse... and the POWER4 seems a whole lot faster...). The ROMP was slow because it wasn't the best design; I think the big problem was its MMU. They did manage to fix it up, but I don't think it got a whole lot faster than the 68020 for a long time!
I'm not saying code morphing is great either, just that a commercial failure is hardly a scientific experiment.
And the PIII did 2 instructions, the AMD K7 did three, and RISCs have been doing 4+ for a long time. As far as VLIW goes, 4-way is pretty low. The most interesting things that seem to be in the real Crusoe are the split load and split store (it can start a load and later decide to complete it -- taking a fault if need be -- or abort it; similarly, stores can be queued and later canceled).
I am too, but on my notebook (an older G3 PowerBook) the battery monitor shows about 4 hours with the LCD on high and minimal CPU use, and 4.5 when I put the LCD on max dim (almost unreadable in a room with 60-watt bulbs, OK with no light in the room). Leaving the monitor on dim and running a tight loop, the battery time drops to around three hours. Leave it going until the fan (temperature sensitive) kicks in, and the battery time drops a bit more.
So I would say the CPU can use more power than the backlight. Well, not quite: the swing from minimal CPU use to max is more taxing than the swing from minimal backlight to max. The swing from no backlight to minimum may be bigger than the swing from no CPU use to minimal.
That's not to say the Crusoe can magically turn a one-hour battery into six hours, but it might get one hour to two.
Re:You still don't get it (Score:2, Informative)
Java's HotSpot compiler beats out traditional C/C++ code on a number of benchmarks.
Re:Take Note that... (Score:2, Informative)
I'm confused... (Score:1)
Re:I'm confused... (Score:1)
The point here is that the raytracer code generated by icc is more tuned; it's optimized and can do its job faster.
Justin
Re:I'm confused... (Score:1)
I had just never been exposed to raytracer source before. Really quite interesting, the similarities between it and other computer source code. I don't know what I was expecting, but what I saw certainly wasn't it.
Again, thanks for the kind clarification. It was much more constructive than the AC that also replied.
Re:I'm confused... (Score:2)
Re:I'm confused... (Score:1)
Re:I'm confused... (Score:1)
Uh... (Score:1, Interesting)
Last time I checked, the kernel was in C, not C++.
Re:Uh... (Score:2, Informative)
Re:Uh... (Score:3, Informative)
Re:Uh... (Score:3, Informative)
Re:Uh... (Score:2)
Example:
sizeof('a')
is 4 on Intel/C and 1 on Intel/C++. More generally, it's sizeof(int) in C and sizeof(char) in C++.
Example:
char *a = malloc (10);
should issue a diagnostic in any conforming C++ compiler, requiring a cast of the malloc() return value to (char *) to suppress the warning (which results in a lot of dangerous casts in C++ code that tools like lclint will be confused by).
There are tons of such things, which make it nearly impossible to compile any reasonably large C project with a C++ compiler and get correct behavior. And that's assuming you don't have variables named "new" or your own "bool" type or anything obvious like that.
Sumner
My results with Linux and NetBSD (Score:5, Interesting)
~wally
How long until Intel changes the compiler? (Score:1, Flamebait)
Any bets on which of the next versions will spew an error about "incompatible architecture" when used on non-Intel hardware?
GCC extensions?? (Score:4, Insightful)
Wait, the Kernel uses GCC extensions? I thought the Kernel was written in real C, not that bastard GCC dialect. I've never looked at Kernel code, so I'm not sure. Is this really true?
If it's true, I think that's a huge mistake. The Kernel should not be at the mercy of one compiler.
Re:GCC extensions?? (Score:5, Informative)
There are even some places where GCC extensions make the code easier to maintain. For example, the way device driver entry points are defined is much cleaner (using the "member: value" structure-initialization syntax) and less error-prone than standard C.
Yes, it might have been helpful a few times to have been able to compile Linux on a non-GCC compiler, but not very often. And GCC runs almost everywhere, so limiting yourself to GCC doesn't limit the architectures you can port to. It really does seem that in this case the benefits outweigh the losses.
Re:GCC extensions?? (Score:3, Interesting)
But are these performance increases greater than what would be realized if the Kernel could be compiled using icc?
This doesn't address the maintainence issue, but it's something that I am looking forward to seeing in the near future. I figure someone has the time and drive to hack the Kernel source to the point that icc will compile it. Goodness knows, I don't.
Re:GCC extensions?? (Score:2, Interesting)
Wrong... (Score:5, Informative)
The Linux kernel is not only available on Intel chips. It is available on ARM, DEC Alpha, Sun SPARC, and M68000 machines (like the Atari and Amiga), plus MIPS and PowerPC, as well as IBM mainframes.
Which makes more sense: targeting a cross-platform compiler like gcc [gnu.org], or targeting individual compilers for each platform Linux runs on?
Re:Wrong... (Score:2)
I believe the gist of his argument is not that the kernel should be tied to some other compiler instead of gcc, but rather to the language spec, so that any standards compliant compiler should be able to compile it.
Re:Wrong... (Score:2)
Re:Wrong... (Score:2)
I didn't say that I agreed with the comment; I was merely pointing out to Carnage4Life that he had probably misinterpreted it... I know that you can't do that in straight C.
Re:Wrong... (Score:2)
Re:GCC extensions?? (Score:2)
Now, tying an Open Source project to a single proprietary compiler or tool is certainly a bad idea. I'm trusting proprietary tool makers less and less every day (based on how Borland is handling Kylix). But tying it to Open Source tools, especially popular ones, is not a problem.
Sound 'N Spell (Score:2, Funny)
Another one who learned the pronunciation of "Crusoe" from the Gilligan's Island theme song!
Re:GCC extensions?? (Score:1, Interesting)
Re:GCC extensions?? (Score:2)
Here's some kernel code [linux.no]. Now you've seen it.
If it's true, I think that's a huge mistake. The Kernel should not be at the mercy of one compiler.
Why not? The major goal of operating system design is to extract as much performance as possible with as little overhead as possible. Portable code is, by definition, rarely as efficient as code targeting a specific platform or compiler.
Re:GCC extensions?? (Score:3, Interesting)
"At the mercy of" one compiler is a rather strange description, don't you think? After all, both Linux and gcc are released under the GPL, which means that anyone who wants to use Linux will already be willing to accept what many people view as the most obnoxious feature of gcc (the license). And it's not as though the gcc developers can yank the rug out from under Linux by making it proprietary, refusing to distribute old versions, etc. If anything it would be crazy to make serious modifications to Linux to take advantage of a compiler like Intel's that could be taken away at any minute.
Re:GCC extensions?? (Score:2)
Different compilers do produce different code and have different extensions.
To enable compiling the kernel with different compilers, you would have to program around the different extensions and test the kernel with each compiler. This, multiplied by the different architectures supported, gives you an (n*m) variety of combinations and the same number of potential problems.
For the very same reason, the kernel is not just GCC-only; it really targets only one or two specific versions of GCC.
Maybe take a look at the LKML FAQ [tux.org], where Rafael R. Reilova gives an answer on why not to use different compilers.
Real Men (Score:1)
Seriously, anything that needs the optimizations this new compiler provides should probably be written in ASM anyway. Your 'hello world' and 'count and increment an array' programs are not going to run any faster. Don't bother.
Re:Real Men (Score:1)
Re:Real Men (Score:3, Funny)
Re:Real Men (Score:2)
Only LONELY geeks program in Hex or assembly!
Real Men code in C++, Java, Fortran, or Objective C, get the necessary job done, then go home to f*ck the prom queen!
Re:Real Men (Score:2, Funny)
Re:Real Men (Score:2, Funny)
next version will do the kernel (Score:5, Informative)
I'm not surprised the compiler helped Crusoe. GCC is a remarkable achievement in portability, but architecture-tailored compilers (MSVC, ICC) do better in both code size and speed, sometimes 30% better. But if you're going to PAY for your compiler, it had better not be beaten by a free alternative.
I hope we see distros using icc, and I also hope it spurs further development in GCC.
Re:next version will do the kernel (Score:2, Informative)
I've compared MSVC 6.0 on Win32 to Cygwin's port of gcc 2.95.3 (with the -mno-cygwin switch, so gcc generates native win32 executables).
The gcc generated stuff consistently runs faster. Ballpark 20% or more IIRC - I don't have the figures handy. These were on real world compute intensive programs that I use at work, not artificial benchmarks. And yes, I had full-tilt optimization on both compilers.
While I don't doubt that gcc optimization could be improved further, and should be, my biggest complaint is often that the compiler itself runs slowly (particularly for C++).
Re:next version will do the kernel (Score:2)
Nonetheless... gcc doesn't fare well on Alpha chips.
Pan
Re:next version will do the kernel (Score:2)
That's because there are more people working on the Intel optimizations. It's also worth noting that some people heavily involved with the development of gcc have said there are some optimization techniques that they would like to implement, but can't because they are currently patented and are waiting for the patents to run out.
I've always heard good things about Digital's Alpha compilers. When the Alpha division of H-Paq finally takes a dirt nap, it would be nice if they could GPL the Alpha optimizations for inclusion into gcc.
Re:next version will do the kernel (Score:2)
I haven't tried it (..that should be a new acronym..) but it should Just Work. The gcc codebase is conservative in this regard. It is written to be bootstrapped by different, older (pre-ANSI C) and sometimes quite buggy compilers. See gcc/README.Portability in the latest gcc source tarball for details.
Now, can you use gcc to *compile* icc?
screw the kernel, recompile the system libraries! (Score:5, Insightful)
probably binary compatible or close (Score:3, Informative)
Re:probably binary compatible or close (Score:2)
Re:probably binary compatible or close (Score:3)
Re:probably binary compatible or close (Score:2)
Enjoy
JOhn
Licensing loophole ($$$) (Score:2, Interesting)
Actually, having read the license, I found the following loophole:
. . . if you buy the compiler, you are allowed to distribute code that you compile with icc ;) Find someone who has paid for icc, and ask them nicely if they would compile something for you. No, it's not open-source, but you can distribute source code along with an optimized binary if you're so inclined.
All your compilations are belong to us (Score:2)
I realize you've got a smiley there, but I've got to say: Duh! Who would use/buy a compiler that didn't allow you to distribute your binaries? That would be like using a word processor where you didn't own the work you wrote.
Though it wouldn't surprise me if sooner or later the Microsoft C++ or Word license would claim that any work produced with the tools is property of MS.
Re:Licensing loophole ($$$) (Score:2)
Re:Licensing loophole ($$$) (Score:2)
This is very interesting. (Score:3, Interesting)
Given that Intel makes a lot of its money from selling silicon, why on earth would it develop compiler technology which legitimized the approach of one of its major competitors ?
I can only assume that Intel has some fairly advanced code morphing technology of its own, and has been using the transmeta devices as a testbed.
I can just see it now, a 4GHz pentium with code morphing extensions.
I expect this one will be fought out in the patent arena. IBM and Intel are heavyweight players and I don't see either of them giving any ground willingly.
Re:This is very interesting. (Score:1)
Re:This is very interesting. (Score:4, Insightful)
You make it sound like it only improves transmeta's chips and not others. I really doubt that's what's going on here.
Precisely (Score:2)
I think the most dramatic demonstration of this was a test done by Tom's Hardware [tomshardware.com] last year. He ran a test of MPEG-4 encoding with FlaskMPEG on a bunch of different processors. The Pentium 4 performed abysmally, coming in behind a 1GHz Pentium III. Intel then decided to download the FlaskMPEG source code and recompile it with their compiler. This moved the P4 to the top of the heap, but it also increased all the other scores. The 1.5GHz P4 got the biggest boost, from 3.83fps to 14.03fps; the 1GHz PIII got a lesser boost, from 4.39fps to 8.03fps. The Intel compiler helped out the 1.2GHz Athlon too, boosting it from 6.43fps to 11.14fps. So it gave even their competitor's hardware about a 73% speed boost.
Intel's compiler division isn't interested in trying to screw their competitors and make Intel's chips look best; they are interested in producing the most optimized x86 code possible. Of course the Intel compiler supports all the special Intel extensions (MMX, SSE, SSE2), and I don't believe it supports things like 3DNow!, but that doesn't mean they are going to cripple their output on purpose to make it run poorly on other chips.
Re:This is very interesting. (Score:2)
archenemy (Score:2, Insightful)
/me is wintel-free, yay Mac
Re:archenemy (Score:2)
If the Athlon was an Intel clone, it wouldn't kick the P4's ass.
Re:archenemy (Score:2)
KDE performance (Score:2, Insightful)
Re:KDE performance (Score:3, Interesting)
It's my understanding that the problem is with the GNU linker, but I don't know much about compilers. Does this Intel compiler have its own linker, or does it still use the GNU one?
If it does use its own, can we expect dramatic speed increases if we compile KDE with this Intel compiler?
Re:KDE performance (Score:2)
Re:KDE performance + some thoughts about compilers (Score:2, Informative)
Now, for the expected performance increases: if I am correct, the Intel compiler is the old KAI C++ compiler, which was highly regarded in number-crunching circles as the best-optimizing, most standards-compliant compiler around.
Still, the spectacular increases occur only in very specific cases that are amenable to optimization. Number crunching (big math computations) is the best example, and this probably applies to mp3 encoding, DivX playback and compression, image processing, and similar workloads too. But for the average, highly heterogeneous code that goes into your typical desktop apps, the increase is significantly smaller.
Lotzi
Kernel (Score:2, Redundant)
Re:Kernel (Score:4, Informative)
Ummm... Price? (Score:1, Interesting)
I'm surprised I haven't seen anyone else post this. Intel's compiler is EXPENSIVE! $499? Since most programmers are not exactly rich (Gates excluded), I think most Linux people are not going to embrace this new compiler.
$500? I paid less than that for my MS compiler!
Erioll
Re:Ummm... Price? (Score:2, Informative)
Re:Ummm... Price? (Score:2)
Re:Ummm... Price? (Score:2)
That compiler produces code incompatible with GPL (Score:2)
You can download it from Intel
Reminder: This compiler includes no support and cannot be used to produce products for resale or commercial use.
And thus produces binaries incompatible with the GNU General Public License, which allows no such restrictions on distributed binaries.
Re:That compiler produces code incompatible with G (Score:2)
Not surprising (Score:3, Interesting)
Here's another news flash (Score:4, Insightful)
AMD uses (or at least, used to use, I haven't checked lately) Intel's compilers for their SPEC runs.
Intel's compiler is the best available for CPUs that implement the x86 ISA. Transmeta implements that ISA, so why does this news surprise people?
Re:Here's another news flash (Score:2, Interesting)
VLIW proper subset of RISC (Score:2)
Transmeta is NOT RISC, it is VLIW with a x86 to VLIW optimizing translator.
VLIW means "very long instruction word," and EPIC means "explicitly parallel instruction computing," both of which in practice mean "architectures that combine several fixed-length instructions into one word." RISC means "reduced instruction set computing," which in practice means "architectures with fixed-length instructions." All important VLIW/EPIC instruction sets have fixed-length instructions (32-bit in a 256-bit word for the TMS320C6K, 32-bit in a 128-bit word for Crusoe, 41-bit in a 128-bit word for IA-64), but MIPS, PPC, and SPARC disprove the converse; therefore, VLIW/EPIC is a proper subset of RISC.
duh, you're using the wrong gcc flags (Score:2, Interesting)
Do you secretly have an 80386 machine here? Why not use optimized flags like "-mcpu=i686 -march=i686" and give a fair comparison?
Am I the only one who sees this? C'mon people, wake up, read the manual.
No kernel, so what? (Score:4, Interesting)
The real story here is that the maintainers of GCC ought to look carefully at their optimization code for x86 FPUs.
I'm betting Intel's developers have done their best to make use of the P4 cache. Since Transmeta CPUs do the work of recompiling programs on the fly, they have larger caches (128KB L1 + 512KB L2) than the Athlon (128KB L1 + 256KB L2) and the Pentium 4 (20? KB L1 and 256KB L2). ICC is probably also highly aggressive in emitting SSE and SSE2 instructions. Transmeta CPUs also use VLIW instructions in the core, which are by nature highly parallel (compared to native x86). Even if the Transmeta chips can't use SSE and SSE2, they may benefit from the parallel-oriented optimizations that ICC probably makes.
On a different note: in a program like POV-Ray, which executes basically the same tight loop of instructions mega-gazillions of times during a scene, the Transmeta chip's software has the opportunity to highly optimize the program. I would like to see the stats on the second and third runs of that render, to see how much Transmeta's "code morphing" improves the performance. It would be very interesting if the GCC- and ICC-built POV-Rays performed at almost the same speed after a few runs; that would be a great proof of the value of Transmeta's design. I for one have always wondered what the code-morphing software could do if it were able to interface with the operating system and save the recompiled, optimized system back to the hard disk as it went. (I suppose errors could be highly disastrous.)
That's just my $0.02, and I'm no expert, so I could definitely be wrong.
This is not a signature.
You are right, in that you are wrong. (Score:2)
You may also be right that gcc doesn't play with the x87 stack very well, but that is likely a minor difference in comparison.
Don't forget Fortran (Score:2)
If I can get it to compile on Linux, then I can do a whole host of things my employer previously thought impossible.
What about gcc 3.0 ? (Score:4, Interesting)
Interesting benchmark of Intel's compiler vs. gcc 2.95.4, but what about gcc 3.0? I'd love to see how that compares, given that I've heard such mixed opinions about whether its optimisation tends to be better, worse, or the same as the 2.95 series.
Re:What about gcc 3.0 ? (Score:4, Informative)
GCC 3.1 will focus on optimization, building on the new infrastructure implemented with 3.0. If you're brave enough, you can pull from CVS and try it out for yourself.
iverilog (Score:2)
Results not surprising (Score:5, Informative)
The gcc "open projects" [gnu.org] page gives people a good idea of what remains to be done on gcc. The minutes of the IA-64 GCC summit [linuxia64.org] are especially interesting and informative, because it gives a good idea of the current state of GCC and also what GCC needs to be a competitive compiler in the future.
Bottom line: Do not be surprised when commercial compilers beat gcc performance. It's catching up, but it's still got a long way to go.
GCC Home Page [gnu.org]
Re:Results not surprising (Score:2, Funny)
Was that a subliminal message?
Re:Results not surprising (Score:2)
And just think: ten years ago, the first thing one would do with a new Sun was install gcc on it, because it was much faster than the compiler you had to buy from Sun, especially if it was an M68K-based machine. I don't think gcc was that much slower than the SPARC compiler. It was slower than the MIPS compiler by quite a bit, though.
This is a FP-based test (Score:3)
Since the kernel doesn't use floating point instructions, it's not such a big loss that you can't compile it with icc yet. In addition, compiling the kernel (which is not written in ISO C, let alone ISO C++) might uncover a few bugs in the kernel code and the compiler, and it's not very likely that the kernel folks are able or even willing to help you if you use a strange system configuration with a proprietary compiler.
About the Intel Compilers... (Score:3, Interesting)
You can also look at some rudimentary benchmarks comparing gcc 3.0.1 and Intel C++ 5.0 [coyotegulch.com].
g++ vs. icc on my own FP-heavy program (Score:2)
I tried Intel's C++ compiler on my own floating point heavy plasma simulation program. I tried some very high optimization flags, and that produced a binary which crashed.
Using -O1 produced a binary roughly 1/2 as fast as a -O3 g++-compiled binary.
Perhaps this compiler is a win on C code, but on C++ it sure looks like a dog to me.
Re:Gee (Score:1, Redundant)
Re:Without the kernel, what good is it? (Score:2)
Re:Without the kernel, what good is it? (Score:2, Interesting)
Re:Without the kernel, what good is it? (Score:1)
The only thing different here is that they run a bit faster!
Exactly how much did you learn about Transmeta before you bought their stock?
Justin
Re:Without the kernel, what good is it? (Score:5, Insightful)
So what you're saying is that the only really useful use of a compiler is to compile the Linux kernel?
That's quite possibly the silliest thing I've heard someone say. Try:
Son: "Look ma, I got the fastest engine in the world for my car! Now I can drive faster than anyone else!"
Ma: "Um, sonny, it can't play MIDI files or make julienne fries, so it's totally useless."
Totally wrong. There are thousands of pieces of software out there. The Linux kernel is merely one.
--Dan
Re:Without the kernel, what good is it? (Score:2)
wow, the compiler fixes bloated design issues too?
Seriously though, any speedup with any program helps. Given that Mozilla's UI is in XUL, and that's where a lot of the sluggish behavior seems to be, has anyone come up with a JIT compiler for XUL?
Re:Without the kernel, what good is it? (Score:2)
There is a sub-project for Mozilla called Rhino that implements the JavaScript interpreter in Java. It apparently can, or did, translate JavaScript into Java bytecode that could be processed by the JVM's JIT. According to the history page, it doesn't sound like it worked all that great (it leaked memory, and the JavaScript-to-Java translation was slow).