Performance Benchmarks of Nine Languages
ikewillis writes "OSnews compares the relative performance of nine languages and variants on Windows: Java 1.3.1, Java 1.4.2, C compiled with gcc 3.3.1, Python 2.3.2, Python compiled with Psyco 1.1.1, Visual Basic, Visual C#, Visual C++, and Visual J#. His conclusion was that Visual C++ was the winner, but in most of the benchmarks Java 1.4 performed on par with native code, even surpassing gcc 3.3.1's performance. I conducted my own tests pitting Java 1.4 against gcc 3.3 and icc 8.0 using his benchmark code, and found Java to perform significantly worse than C on Linux/Athlon."
Trig functions... (Score:4, Interesting)
Why are the Microsoft languages so fast with the Trig functions?
Re:Trig functions... (Score:5, Funny)
They probably cheat and use undocumented native OS calls.
Re:Trig functions... (Score:3, Insightful)
So in a benchmark comparing compiler performance I can't see how that is "moot".
Re:Trig functions... (Score:4, Interesting)
- Algorithms for allocating memory in a manual management scheme can be quite complicated. Look at how large glibc's memory allocator is. Memory allocation algorithms for GCs tend to be much simpler, often as simple as a pointer increment (see the sketch after this list).
- Deallocation algorithms for manual memory management are often even more complicated than the allocation algorithms. They are nearly always slower. Plus, objects are deallocated one at a time. Deallocation algorithms for GC can be simpler, but most importantly, the GC can deallocate large numbers of objects at once. This is, of course, more efficient.
- Copying GCs can compact holes in memory, which makes for better cache utilization.
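To make the first point concrete, here is a minimal bump-pointer sketch (purely illustrative; the names are made up and no real VM exposes an interface like this), showing why allocation under a copying or compacting collector can be little more than a pointer increment:

// Minimal sketch of bump-pointer allocation, as used conceptually by
// copying/compacting collectors. Names and sizes are invented for
// illustration only.
final class BumpAllocator {
    private final byte[] heap;   // the current allocation region ("nursery")
    private int top = 0;         // next free offset

    BumpAllocator(int sizeInBytes) {
        this.heap = new byte[sizeInBytes];
    }

    /** Returns the offset of a new object, or -1 if the region is full
     *  (a real VM would trigger a collection here). */
    int allocate(int objectSizeInBytes) {
        int newTop = top + objectSizeInBytes;
        if (newTop > heap.length) {
            return -1;           // out of space: time to collect
        }
        int result = top;        // allocation is just "take the current pointer...
        top = newTop;            // ...and bump it": no free lists, no searching
        return result;
    }

    /** "Deallocation" of everything that did not survive is a single reset:
     *  live objects would first be copied out, then the region is reused. */
    void resetAfterCollection() {
        top = 0;
    }
}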
Depending on the problem at hand, a GC can be a little slower or about the same. For a functional programming style (which has a particular pattern of memory usage) GC is usually faster than manual management. For programs that tend to operate in phases, allocating large numbers of objects gradually and freeing them at once, GC will also be faster.
The real problem with GC is that it affects latency. Modern GCs only freeze the app for a fraction of a second, but that's a large amount of time for something like a game or movie. There are some workarounds for this, though. Latency-sensitive apps can disable GC and use manual memory management. Or, they can use a real-time garbage collector, which has guaranteed latency but does incur a large fixed overhead. Of course, these problems can be worked around, as evidenced by the fact that major PS2 games like Jak and Daxter were written in a GC'ed language (Common Lisp). Most proponents of GC will tell you that such workarounds are a good deal easier than hunting down memory leaks and dangling pointers!
Re:Trig functions... (Score:3, Insightful)
This is (once again) proof that Java is not slow; in fact, it's really fast. It's slow to start, and yes, it consumes more memory than native code, but the gained security, stability and ease of programming (reduced development times) are worth the increased memory use.
Also, the memory us
Re:Trig functions... (Score:5, Insightful)
Don't forget that it is also perceived as slow, since just about any desktop application anyone has seen written in Java has a sluggish GUI.
Yeah, I know Java's strengths aren't in the Desktop arena, they're in development and the back-end.
Re:Trig functions... (Score:4, Informative)
Isn't that what AWT tried to do originally? I'm just delving into Java for the first time over the last few months, but I thought I read this in "Core Java, Vol. 1".
They say (pg. 236, "Core Java, Vol. 1") that this resulted in a "write once, debug everywhere" problem, since you get different behavior, different limitations and different bugs from each platform's implementation of AWT.
Re:Trig functions... (Score:4, Informative)
AWT has native widgets: Combo box, menu, button, text area, input box, checkbox, etc... Not only primitives.
What you are describing is Swing, not AWT.
Swing relies on only the most basic AWT features, Component/Container and drawImage, and re-implements the whole widget set in Java on top of them.
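To illustrate the idea (a toy example, not how Swing's real classes are written): a Swing-style "lightweight" widget is just a Component subclass that paints itself in Java instead of delegating to a native peer.

import java.awt.Color;
import java.awt.Graphics;

// Toy example of a "lightweight" widget in the Swing style: it extends the
// basic AWT Component and draws itself entirely in Java, rather than
// wrapping a native OS button the way AWT's own java.awt.Button does.
public class ToyButton extends java.awt.Component {
    private final String label;

    public ToyButton(String label) {
        this.label = label;
    }

    public void paint(Graphics g) {
        g.setColor(Color.lightGray);
        g.fillRect(0, 0, getWidth(), getHeight());   // draw our own background
        g.setColor(Color.black);
        g.drawRect(0, 0, getWidth() - 1, getHeight() - 1);
        g.drawString(label, 10, getHeight() / 2);    // draw our own label
    }
}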
Re:Trig functions... (Score:5, Interesting)
Eclipse is nice, I love Eclipse. But I don't mistake it for a Swing replacement. AWT has a purpose, as do Swing and SWT; they are all different.
I believe AWT should be as fast as SWT because it's also natively implemented.
Re:Trig functions... (Score:5, Insightful)
Faster processors should enable us to achieve more, not achieve the same old stuff much less efficiently.
Re:Trig functions... (Score:4, Insightful)
Elsewhere it sucks: MacOS, GTK, Photon, Motif. Even poorly written Swing programs outperform it on those platforms.
But back to your FUD. Yes, bad programmers make ugly and poor-performing GUI code. Swing is no different in that regard. But have you looked at recent Swing programs on the 1.4.2 version of the JDK? Tried stuff like CleverCactus (a mail client)? Synced your MP3s on a new Rio? Used Yahoo's Site Builder to make a web site? There are excellent Swing programs [sun.com] out there. Many you probably don't realize are Java Swing apps!
But since SWT is only in early adopter land we haven't seen the real dogs of GUIs it can make yet, especially since you have to do such arcane and ancient tasks in SWT as managing your own event queue!
Re:Trig functions... (Score:4, Informative)
Or just doesn't bother implementing it at all. Try printing from eclipse on Linux.
Re:Trig functions... (Score:5, Insightful)
It's in many ways unfortunate that with JDK 1.2 (Swing) and onwards, Sun pretty much dumped fast native support for GUI rendering. It has its benefits (full control, easier portability), but the fact is that simple GUI apps felt faster with 1.1 than they have done ever since. This is, alas, especially noticeable on X, perhaps because the whole window is often rendered as one big component as opposed to normal X app components (in the latter case, X can optimize clipping better).
Years ago (in late 90s, 97 or 98), I wrote a full VT-52/100/102/220 terminal emulator with telnet handling (plus for fun plugged in a 3rd party then-open SSH implementation). After optimizing display buffer handling, it was pretty much on par with regular xterm, on P100 (Red hat whatever, 5.2?), as in felt about as fast, and had as extensive vt-emulation (checked with vttest). Back then I wrote the thing mostly to show it can be done, as all telnet clients written in Java back then were horribly naive, doing full screen redraw and other flicker-inducing stupidities... and contributed to the perception that Java is and will be slow. I thought it had more to do with programmers not optimizing things that need to be optimized.
It's been a while since then; I last tried it on JDK 1.4.2... and it still doesn't FEEL as fast, even though technically speaking all the Java code parts ARE much faster (1.1 didn't have any JIT compiler; HotSpot, as tests show, is rather impressive at optimizing). It's getting closer, but then again, my machine has almost an order of magnitude more computing power now, as probably does the gfx card.
To top off the problems, the Linux implementation has in general received much less attention than the Windows version (or Solaris, but Solaris is at least done by the same company). :-/
Re:Trig functions... (Score:5, Informative)
I'm only a beginner in C# and Java, but I know both have reflection, and the proposed Java 1.5 has enums. Kudos to C# for having them first
Also
Better for whom? Why? Doesn't it have the severe shortcoming of platform lockdown?
I can write a c#.net app in 1/4th the code of a java one. Go take a look at Microsoft's petshop program if you do not believe me.
I can write an assembly app in 1/4 the code of a Python one. Assuming, of course, that the Python app wasn't written for small code size... The simile is very accurate; Sun didn't write their petshop for small size.
The Java Petshop reimplementation here [prevayler.org] spanks both Sun's and Microsoft's petshop in terms of size, and pretty clearly demonstrates that both languages could do better.
BTW, I absolutely love C# -- from what I've done with it so far. My only complaint is that its support is at best halfhearted for other platforms, and I will not allow my work to be tied down to one platform. This is the only thing that kept me from learning K (well, K is portable, the only problem is that it's only available from one vendor, Kx systems). Anyhow, I think C#'s bytecode is far beyond anything Sun's ever going to do with Java.
Also, Windows 2k3 is as stable as Linux now. NT4 is old. The situation has improved dramatically. I have never even seen a blue screen on Windows 2k yet!
I agree with all of that, but it's not enough. I have seen blue screens and system crashes on 2000 and XP (XP far, FAR FAR more often than 2000). But then I've seen system crashes on Linux, so I'm not just complaining about MS
-Billy
Re:Trig functions... (Score:4, Informative)
For many math functions Java uses a software implementation rather than the built-in hardware instructions on the processor. This is to ensure that these functions behave exactly the same on different architectures, and it probably accounts for the difference in performance.
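For the curious, the cross-platform behavior lives in java.lang.StrictMath, which follows the fdlibm algorithms; java.lang.Math is allowed to be faster within a small error bound, though on some VMs it simply delegates to StrictMath. A quick-and-dirty comparison (a rough sketch; timings will vary wildly by VM and machine):

// Rough comparison of Math.sin vs StrictMath.sin. StrictMath follows the
// fdlibm algorithms and gives bit-identical results everywhere; Math may
// (or may not) be faster on a given VM, within the spec's 1 ulp bound.
public class TrigCompare {
    public static void main(String[] args) {
        final int n = 10000000;
        double sum = 0.0;

        long t0 = System.currentTimeMillis();
        for (int i = 0; i < n; i++) {
            sum += Math.sin(i * 0.0001);        // implementation may be platform-tuned
        }
        long t1 = System.currentTimeMillis();
        for (int i = 0; i < n; i++) {
            sum += StrictMath.sin(i * 0.0001);  // fdlibm semantics, same result everywhere
        }
        long t2 = System.currentTimeMillis();

        System.out.println("Math.sin:       " + (t1 - t0) + " ms");
        System.out.println("StrictMath.sin: " + (t2 - t1) + " ms");
        System.out.println("(checksum " + sum + ")");
    }
}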
Re:Trig functions... (Score:3, Informative)
Conversely, when you have a finite number, 1-thread-per-stream asynchronous IO is VERY desirable, both in coding and in some small ways even in efficiency (immediate responses, limited only by context switching time). In contrast, IO multiplexing may have certain tasks take long periods of time (stalling other channels). Then you have to manually keep track of such situations as you code and push them of
Re:Trig functions... (Score:3, Interesting)
I, for one, would _never_ trust Java in a mission critical embedded environment. In fact you still see assembly in those environments from time to time. Imagine using Java for a fly-by-wire system. Would you fly on a plane that was using Java for fly-by-wire? I, for one, would not.
Considering that the EULA forbids using Java to operate nuclear plants and air traffic control systems, you will never fly in a Java-powered Boeing. But that's in the Lic
Re:Trig functions... (Score:5, Interesting)
In the case of Java, you find that the Intel floating point trig instructions don't meet [naturalbridge.com] the accuracy the Java spec requires, so they had to implement them as a function.
It all depends on whether you want accuracy or speed.
Re:Trig functions... (Score:3, Interesting)
Last time I did similar benchmark on Windows, the MSVC runtime library set the FPU control word to limit precision to 64 bits. Other environments on x86 used 80 bits precision by default, increasing computation time for some operations.
Re:Trig functions... (Score:3, Interesting)
Otoh, the P4 SSE2 uses a vectorized software model that doesn't have these; I don't know whether the MS compiler generates x87 hardware code or SSE2 vectorized software.
Java specifies using a 32-bit model for these functions, and is probably doing them in software. But what software? And does it use the vectorized SSE2?
Accurate? (Score:5, Interesting)
32-bit integer math: using a 32-bit integer loop counter and 32-bit integer operands, alternate among the four arithmetic functions while working through a loop from one to one billion. That is, calculate the following (while discarding any remainders)....
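The article's actual listing didn't survive the quote above, but in Java the int test is roughly this shape (a reconstruction from the description, not the author's exact code):

// Rough reconstruction of the 32-bit integer benchmark described above:
// work through the values from one to one billion, alternating among
// add, subtract, multiply and divide on 32-bit int operands
// (integer division discards the remainder).
public class IntArithmetic {
    public static void main(String[] args) {
        final int intMax = 1000000000;   // one billion
        int intResult = 1;

        long start = System.currentTimeMillis();
        int i = 1;
        while (i <= intMax) {
            intResult -= i++;   // subtract
            intResult += i++;   // add
            intResult *= i++;   // multiply
            intResult /= i++;   // divide (remainder discarded)
        }
        long stop = System.currentTimeMillis();

        System.out.println("Int arithmetic elapsed time: " + (stop - start)
                + " ms (result " + intResult + ")");
    }
}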
It also relies on the strength of the compiler, not just the strength of the language.
Why did VB do so bad on IO. (Score:4, Interesting)
Java Performing worse than C (Score:5, Insightful)
Why is this a surprise? C has been the most commonly used language for so long because of its speed and efficiency. I think anyone who has done much work either developing or running large-scale Java programs knows that speed can definitely be an issue.
Re:Java Performing worse than C (Score:5, Insightful)
All that matters to anti-Java zealots is speed. The list of benefits coming from using Java is too long to take the speed-only view seriously.
Re:Speed? No. Stability. Yes (Score:4, Insightful)
But what about type safety? Java has no generic typed containers, like the STL. This means you tend to find errors at runtime instead of at compile time.
I need to know that my code is as safe as possible. I don't want a user to find a bug because my hand tests didn't get 100% code coverage every time.
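For concreteness, this is the kind of failure being described, and the shape of the fix generics are supposed to bring in 1.5 (the snippet is only an illustration; the 1.5 syntax in the comment was not yet final at the time):

import java.util.ArrayList;
import java.util.List;

public class TypeSafetyDemo {
    public static void main(String[] args) {
        // Pre-1.5: containers hold Object, so the mistake below compiles fine
        // and only blows up at runtime with a ClassCastException.
        List names = new ArrayList();
        names.add("Alice");
        names.add(new Integer(42));                // oops, nothing stops this

        try {
            String first = (String) names.get(1);  // fails here, at runtime
            System.out.println(first);
        } catch (ClassCastException e) {
            System.out.println("Only discovered when this line ran: " + e);
        }

        // 1.5 generics (then upcoming): the same mistake is a compile error.
        // List<String> typedNames = new ArrayList<String>();
        // typedNames.add(new Integer(42));        // would not compile
    }
}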
And how about predictable performance? I would much rather know that this function will take 200 ms all of the time, instead of 100 ms most of the time and 10 s occasionally due to garbage collection.
Coming in 1.5, but you can do this now (Score:5, Insightful)
Re:Java Performing worse than C (Score:3, Interesting)
I would consider myself part of that "anyone," and I disagree with you. Other than load times (which aren't as bad as they used to be), Java can perform as fast or faster than C code. The main thing is to use a good VM - IBM's J9 VM significantly outperforms Sun's.
Under Windows... (Score:3, Insightful)
Re:Under Windows... (Score:3, Interesting)
Python's huge win is not in speed, but in the ability to express the program in a very concise and easy to understand way.
The fact that Psyco can provide huge speed ups via a simple import is just icing.
Ongoing, open source "language shootout" (Score:3, Informative)
http://scutigena.sourceforge.net/ [sourceforge.net]
It's designed as a framework that ought to run cross-platform, so you can run it yourself. We haven't added it yet, but I think we really want to divide the tests into two categories. "Get it done" - and each language implements it the best way for that language, and "Basic features comparison" - where each language has to show off feat
Re:Under Windows... (Score:4, Insightful)
The review article is /.ed now, but from the test names on the summary table it looks like the tests are indeed mostly numeric. Unfortunately, only a small minority of people make their living writing number crunching code.
For the vast majority of business and web-based apps, the bulk of operations involves string manipulation. If an app is compute intensive and not I/O or GUI bound, then the bottleneck is usually creating, modifying and destroying strings. Benchmarks on string handling would be more useful to most developers.
However, doing string manipulation benchmarks isn't so simple. There are at least four approaches to strings, and some languages let you pick any of these:
-- dangerous and very fast: using static buffers and in-place modifications like old-school C
-- somewhat safer and may be fast: using semiautomatic memory management with mutable strings, like C++/STL or C with glib's g_string
-- safer still: using totally automatic memory management with mutable strings, like Ruby or (IIRC) Perl
-- safest: using totally automatic memory management with immutable strings, like Java or Python
Of course, for each problem the algorithms would need to be structured differently to get the maximum possible speed in each of the above four methodologies.
Basically, for string-intensive code, claiming that Java is just as fast as C will always be a false statement if you compare C code written in the first, dangerous style vs. Java, which is always written in the fourth and safest style. No matter what technical tricks the VM writers come up with, there is just no way that they'll be able to match C's ability to do essentially zero-overhead in-place buffer operations over and over in the same spot that stays loaded in the L1 cache. (Actually, you probably could write Java code that operates on raw character arrays, and it might approach the speed of C. But that would probably look even uglier than the C code.)
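To make that parenthetical concrete, here is a hedged sketch (made-up strings and counts, not a real benchmark) of the difference between allocating a fresh immutable String every pass and reusing one preallocated buffer, which is closer in spirit to the C style:

// Two ways to build the same output a million times. The first allocates a
// fresh immutable String (plus intermediate garbage) per iteration; the
// second reuses one preallocated StringBuffer. (StringBuilder, in 1.5, is
// the same idea without synchronization.)
public class StringStyles {
    public static void main(String[] args) {
        final int n = 1000000;

        long t0 = System.currentTimeMillis();
        String lastA = "";
        for (int i = 0; i < n; i++) {
            lastA = "row " + i + ": some payload";        // new objects every pass
        }
        long t1 = System.currentTimeMillis();

        StringBuffer buf = new StringBuffer(64);          // one reusable buffer
        String lastB = "";
        for (int i = 0; i < n; i++) {
            buf.setLength(0);                             // reset instead of reallocate
            buf.append("row ").append(i).append(": some payload");
            lastB = buf.toString();                       // still one allocation here
        }
        long t2 = System.currentTimeMillis();

        System.out.println("immutable Strings: " + (t1 - t0) + " ms");
        System.out.println("reused buffer:     " + (t2 - t1) + " ms");
        System.out.println(lastA.equals(lastB));
    }
}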
In the few cases that I've ported a string-intensive high-level-language algorithm to raw low-level C code with few or no mallocs (not a trivial task), I've gotten at least a 10X speedup on the CPU-bound tasks, and at least 10X less memory usage. (Note that I did those tests largely out of curiosity. For most applications, even a 10X speedup is rarely worth the increased development time, bug vulnerabilities or maintenance issues. My opinion is that if you have to write code like this, you should confine it to a C extension library for a high-level language like Python.)
I've found that STL can be faster or slower than Java, depending on how smart you are. It's very easy to inadvertently get C++ to thrash around with needless automatic data copying.
Languages like Perl and Python can be very competitive on string operations if you know how to use their libraries. By using the most powerful operations that work on the largest chunks of data at one time (Python's re.findall(), for example), you take advantage of the fact that the library call is mostly written in C. Bit-banging in a dynamic interpreted language is usually dog slow, as the Python numbers seem to show on the summary chart.
To sum it up, most people write apps whose performance can't be predicted by a few simple language benchmarks, because the way the app is written can affect the performance more than the language it's written in.
.NET Languages and IL (Score:4, Interesting)
Why benchmark the various ".NET languages" (those languages whose compilers target the CLR)? Every compiler targeting the CLR produces Intermediate Language, or more specifically MSIL. The only differences you'd find are in the optimizations performed by each compiler, which usually don't amount to much (for example, VB.NET allocates a local variable for the old "Function = ReturnValue" syntax whether you use it or not).
Look at the results for C# and J#. They are almost exactly the same, except for the IO, which I highly doubt. Compiler optimizations could squeeze a few more ns or ms out of each procedure, but nothing like that. After all, it's the IL from the mscorlib.dll assembly that's doing most of the work for both languages in exactly the same way (it's already compiled and won't differ in execution).
When are people going to get this? I know a lot of people who claim to be ".NET developers" but only know C# and don't realize that the class libraries can be used by any language targeting the CLR (and each has its shortcuts).
Would like to see... (Score:4, Interesting)
As any games/DSP programmer will tell you, there are a million ways to speed up trig, provided that you don't *really* care beyond 6 decimal places or so.
OK, maybe I'm just bitter because I was expecting gcc 3.1 to wipe the floor.
trig calls in gcc (Score:5, Informative)
You can enable inline trig functions in gcc as well, either with a command line flag, or an include file, or by using "asm" statements on a case-by-case basis. Check the documentation. With those enabled, gcc keeps up well with other compilers on trig functions.
Sitting on a Benchmark (Score:3, Interesting)
Oh wait! C# only runs on one operating system. Can you name any other development languages that only run on ONE OS, boys and girls? Neither can I.
Re:Sitting on a Benchmark (Score:4, Informative)
Ximian's Mono has a C# compiler for open OS's:
http://www.go-mono.com/c-sharp.html
Re:Sitting on a Benchmark (Score:3)
And libc isn't "integrated right into" operating systems? (Richard Stallman would like to have a GNU/word with you, then.) Anyway, who cares? This isn't the Special Olympics. If code runs faster, it runs faster. There are no fairness points.
Re:Sitting on a Benchmark (Score:3, Insightful)
Our beloved Penguins can swim quite well under Linux^H^H^H^H^Hwater, thankyou!
Re:Sitting on a Benchmark (Score:3)
Oh, really? How is going to be "integrated right into the operating system" going to help with integer and floating point microbenchmarks? I'd really like to know.
And, also, in what sense is the CLR "integrated right into the operating system" that the JVM isn't? Both are a bunch of DLLs running on top of the NT kernel. What's the difference in your mind?
Re:Sitting on a Benchmark (Score:3, Interesting)
Boy, that's gotta be embarrassing
--
Mando
this is just so bogus (Score:5, Insightful)
What about coder's performance? (Score:4, Interesting)
On the other hand, the time and cost required by the coder is a bigger issue (unless you outsource to India). I would assume that some languages are just easier to design for, easier to write in, and easier to debug. Which of these languages offers the fastest time to "bug-free" completion for applications of various sizes?
Re:What about coder's performance? (Score:3, Insightful)
I like Python; it is easy to write and to keep somewhat clean.
Re:What about coder's performance? (Score:5, Insightful)
Performance realities do not go away, no matter how much we may wish they would. Now, does that mean you're going to go write major portions of your web application in assembly to speed it up? No, probably not. But your database vendor may very well use some tricks like that to speed up the key parts of their database. You sink or swim by your database, so don't say it doesn't matter because it absolutely does.
Anyway, in my day-to-day operations, I can think of quite a few things that get compiled directly to executable code even though they don't have to be. Why would you do this if performance wasn't an issue and we could just throw more hardware at it?
1. Regular expressions in the
2. XSL transformations in the
3. The XmlSerializer class creates a special compiled executable specifically designed to serialize objects into XML (byte code!!).
And the list just goes on and all of this eventually ends up getting JITed as well. My pages are 100% XML based, go through many transformation steps to get to where they need to be, and on average render in about 70-100ms (depending upon the amount of database calls I need to make and the size of the data). This all happens without spiking our CPU utilization to extreme levels. There is *NO WAY* I could've done this on our hardware if nobody cared about performance.
As always, a good design is the most important factor. But a good design that performs well will always be superior to one that doesn't.
Bryan
Cost of Hardware vs. Cost of wetware (Score:5, Insightful)
You raise excellent points. For many enterprise and server applications, performance is an issue. But I never said one should care nothing about performance, only that in many applications the cost of the coder also impacts financial results.
For the price of one software engineer for a year (call it 50k to 100k burdened labor rate), I can buy between 20 and 100 new PCs (at $1000 to $3000 each). If the programmer is more expensive or the machines are less expensive, then the balance tips even further toward worrying about coder performance.
The trade-off between the hardware cost of the code and the wetware cost is not obvious in every case. A small firm that can double its server capacity for less than the price of a coder, or the creators of an infrequently used application, may not need high performance. On the other hand, a large software vendor whose core product is performance-sensitive might worry more about speed. My only point is that ignoring the cost of the coder is wrong.
These different languages create a choice of whether to throw more hardware at a problem or throw more coders at the problem.
Speed or accuracy? (Score:5, Interesting)
Where is Fortran? (Score:3, Insightful)
It's a pity that the present-day language of choice for high-performance computing, Fortran 90/95/HPF, was not covered in this study. There has been anecdotal evidence that C++ has approached Fortran, performance-wise, in recent years, but I've yet to see a proper comparison of the two languages.
Alternative comparison, compiler shootout (Score:5, Informative)
About the Python performance (Score:5, Insightful)
Read the OSNews thread (Score:5, Insightful)
Namely:
- They only test a highly specific case of small numeric loops that is pretty much the best-case scenario for a JIT compiler.
- They don't test anything higher level, like method calls, object allocation, etc.
Concluding "oh, Java is as fast as C++" from these benchmarks would be unwise. You could conclude that Java is as fast as C++ for short numeric loops, of course, but that would be a different bag of cats entirely.
Quoting the results section here... (Score:5, Informative)
Results
Here are the benchmark results presented in both table and graph form. The Python and Python/Psyco results are excluded from the graph since the large numbers throw off the graph's scale and render the other results illegible. All scores are given in seconds; lower is better.
               int    long   double   trig    I/O    TOTAL
Visual C++     9.6    18.8      6.4    3.5   10.5     48.8
Visual C#      9.7    23.9     17.7    4.1    9.9     65.3
gcc C          9.8    28.8      9.5   14.9   10.0     73.0
Visual Basic   9.8    23.7     17.7    4.1   30.7     85.9
Visual J#      9.6    23.9     17.5    4.2   35.1     90.4
Java 1.3.1    14.5    29.6     19.0   22.1   12.3     97.6
Java 1.4.2     9.3    20.2      6.5   57.1   10.1    103.1
Python/Psyco  29.7   615.4    100.4   13.1   10.5    769.1
Python       322.4   891.9    405.7   47.1   11.9   1679.0
IBM Java (Score:3, Interesting)
My application that I benchmarked is data, network and memory intensive, although not math intensive, so that's what I can speak for. We consistently use 2 GB of main memory and pump a total of 2.5 TB (yes, TB) of data through the application over its life cycle (doing a whole bunch of AI-style work inside the app itself), and we cut our total runtime from 6 days to 2.8 days by switching to the IBM VM.
Not testing languages (Score:4, Insightful)
wrong questions (Score:4, Insightful)
So, yes, you can construct programs, even some useful compute intensive programs, that perform as well or better on Java than they do in C. But that still doesn't make Java suitable for high-performance computing or building efficient software.
Benchmarks like the one published by OSnews don't test for these limitations. Microbenchmarks like those are still useful: if a language doesn't do well on them, that tells you that it is unsuitable for certain work; for example, based on those microbenchmarks alone, Python is unlikely to be a good language for Fortran-style numerical computing. But those kinds of microbenchmarks are so limited that they give you no guarantees that an implementation is going to be suitable for any real-world programming even if the implementation performs well on all the microbenchmarks.
I suggest you go through the following exercise: write a complex number class, then write an FFT using that complex number class, "void fft(Complex array[])", and then benchmark the resulting code. C, C++, and C# all will perform reasonably well. In Java, on the other hand, you will have to perform memory allocations for every complex number you generate during the computation.
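To see the point, compare a straightforward Complex class with the allocation-free style Java forces on you for speed (a sketch, not a full FFT):

// Why "Complex[]" is painful in Java: every arithmetic result is a new
// heap object, because Java has no value types or operator overloading.
final class Complex {
    final double re, im;
    Complex(double re, double im) { this.re = re; this.im = im; }
    Complex times(Complex o) {                       // allocates on every call
        return new Complex(re * o.re - im * o.im, re * o.im + im * o.re);
    }
}

class ComplexMul {
    // Allocation-free style: interleave re/im in one double[] and write the
    // arithmetic out by hand. Faster, but it no longer reads like math.
    static void timesInPlace(double[] a, int i, double[] b, int j, double[] out, int k) {
        double re = a[2 * i] * b[2 * j] - a[2 * i + 1] * b[2 * j + 1];
        double im = a[2 * i] * b[2 * j + 1] + a[2 * i + 1] * b[2 * j];
        out[2 * k] = re;
        out[2 * k + 1] = im;
    }
}

The second form is what ends up in fast Java numeric code, and it no longer reads like the math it implements.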
Less simple benchmarks (Score:5, Insightful)
The optimisers in Sun's Java VM work on run-time profiling: they identify the most frequently run sections of code and apply the more elaborate optimisation steps to those segments alone.
Benchmarks that consist of one small loop will do very well under this scheme, as the critical loop will get all of the optimisation effort, but I suspect that in programs where the CPU time is more distributed over many code sections, this scheme will perform less well.
C doesn't have the benefit of this run-time profiling to aid in optimising critical sections, but it can more afford to apply its optimisations across the entire codebase.
I'd be interested to see results of a benchmark of code where CPU time is more distributed..
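One easy way to watch this behaviour, assuming a Sun HotSpot VM, is the -XX:+PrintCompilation flag, which logs methods as they are compiled; in a one-loop microbenchmark you typically see only the single hot method show up:

// Run with:  java -XX:+PrintCompilation HotLoop
// On a HotSpot VM you should see hotLoop() appear in the compilation log
// once it has run enough times to be considered hot; the rarely executed
// code in main() never gets the expensive optimisation passes.
public class HotLoop {
    static long hotLoop(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i * 31L + (i ^ (i >> 3));   // enough work to be worth compiling
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(hotLoop(200000000)); // one very hot method...
        System.out.println("done");             // ...and some code that never gets hot
    }
}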
Python Longs are arbitrary precision! (Score:4, Informative)
Python's long is an arbitrary-precision integer type! That's why Python's scores on the Long test are so much higher (slower) than the other languages'.
I wonder what Java scores when the benchmark is reimplemented using BigDecimal instead of the 'long' machine type.
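A rough sketch of that experiment, using BigInteger (the closer analogue of Python's arbitrary-precision long; BigDecimal would be slower still), with a deliberately reduced iteration count and only add/subtract, so treat any numbers it produces as ballpark only:

import java.math.BigInteger;

// Sketch of the "long arithmetic" benchmark redone with arbitrary-precision
// integers, for a closer apples-to-apples comparison with Python's long.
public class BigIntBench {
    public static void main(String[] args) {
        final int n = 10000000;            // far fewer iterations than the article
        final BigInteger one = BigInteger.valueOf(1);
        BigInteger result = BigInteger.ONE;
        BigInteger i = one;

        long start = System.currentTimeMillis();
        for (int k = 0; k < n; k++) {
            result = result.add(i);        // every operation allocates a new BigInteger
            result = result.subtract(one);
            i = i.add(one);
        }
        long stop = System.currentTimeMillis();

        System.out.println("BigInteger arithmetic: " + (stop - start) + " ms");
    }
}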
Python uses a highly efficient Karatsuba multiplication algorithm for its longs (although that only starts to kick in with very big numbers).
Language performance arguments miss the point (Score:5, Insightful)
Then, with more and more languages, especially ones with VMs, you get further and further away from the hardware. The end result: you lose performance. It does more and more for you, but at the expense of real optimizations, the kind that only you can do.
Now the zealots will come out and say, "Language X is better than language Y, see!" To me this argument is boring. I tend to use the appropriate tool for the job. So:
Yes, my teams use many languages, but they also put their effort to where they get the biggest bang for the buck. And in any business approach, that's the key goal. You don't see carpenters use saws to hammer in nails or drive screws. Wise up!
Problem: Java not portable (Score:3, Interesting)
I actually use C++ for portability, not speed or generic programming (which are nice to have).
If you avoid platform, compiler, and processor specific features, C++ is even more portable than Java. Java on the other hand tends to drag all platforms down to the least common denominator, then requires the use of contorted logic and platform extensions just to attain acceptable performance.
People seem to have forgotten the original intention of C: portable code.
Consider the logic... (Score:4, Insightful)
Guido van Rossum noted in an interview [artima.com] the following statistic, and I think it bears considerably on appropriateness:
So then, unless you quantify the types of apps you build, the team you use, and the results that are expected, my experience has shown me that most of the time, for business apps, it's overkill. Now, if you're in a dev team at a software company, well then, I could consider the other side.
Windows a good choice for this test (Score:3, Insightful)
These kind of benchmarks are so 1970s (Score:4, Insightful)
Here's a bombshell: if you have a nice language, and that language doesn't have any hugely glaring drawbacks (such as simple benchmarks filling up hundreds of megabytes of memory), then don't worry about speed. From past experience, I've found it's usually easy to start with what someone considers to be a fast C or C++ program. Then I write a naive version in Python or another language I like. And guess what? My version will be 100x slower. Sometimes this is irrelevant. 100x slower than a couple of microseconds doesn't matter. Other times it does matter. But it usually isn't important to be anywhere near as fast as C, just to speed up the simpler, cleaner Python version by 2-20x. This can usually be done by fiddling around a bit, using a little finesse, trying different approaches. It's all very easy to do, and one of the great secrets is that high-level optimization is a lot of fun and more rewarding than assembly level optimization, because the rewards are so much greater.
This is mostly undiscovered territory, but I found one interesting link [dadgum.com].
Note that I'm not talking about diddly high-level tasks in a language like Python, but even things like image processing. It doesn't matter. Sticking to C and C++ for performance reasons, even though you know there are better languages out there, is a backward way of thinking.
Re:These kind of benchmarks are so 1970s (Score:3, Insightful)
Today, everything is in script because it's not worth the bother anymore. In 1998 I had to write my own affine transformation code in C to get a GUI to work at anywhere near real-time. Today I can run a planetarium simulator (read LOTS of calculations) at an acceptable speed in just script.
Re:These kind of benchmarks are so 1970s (Score:3, Interesting)
If cars followed Moore's law we'd all be driving at the speed of light about now. And guess what -- that's completely unnecessary.
No, I wouldn't want a 20 HP engine in my car. But I don't feel the need for a 1.6e9 HP engine, either.
Python numbers (Score:3, Interesting)
Python did pretty badly in the tests. The reason is that in Python it takes a long time to translate a variable name into a memory address (it happens at runtime instead of at compile time).
The benchmark code is basically a tight loop that repeatedly increments and operates on a variable. Adding 1 to i takes no time at all, but looking up i takes a little time; in C this is going to be a lot faster.
Python did really badly when "i" from the example above was a long, compared to when it was a long in C. That's because Python has big-number support, but in C a long is limited to just 4 bytes.
Python did OK in the trig section because the trig functions are implemented in C. It still suffers because it takes a long time to look up variables, though.
In real life, variable lookup time is sometimes a factor. However, for the programs that I've written, getting data from the network or database was the bottleneck.
G++? (Score:3, Insightful)
Tester does not understand Java (Score:3, Informative)
I don't know why the reasons are not clear to him. Perhaps it's because he still thinks the JVM is "running bytecode" and does not understand what JITs did or what HotSpot compilers do. Bytecode is only interpreted for the first few passes, after which it's optimized into native code, native being whatever the C compiler used to build the JVM could produce. This is fundamental. Which explains his results, and points to a poor HotSpot implementation where trig functions are concerned.
Why no ActivePerl? (Score:3, Interesting)
In the article it rather sounds like they just assumed Python performance would be an indicator of performance for interpreted languages generally, but is there anything to back this up?
Slashdotted (Score:5, Funny)
RP
Performance not important? Umm , not quite... (Score:4, Interesting)
"Even if C did still enjoy its traditional performance advantage, there are very few cases (I'm hard pressed to come up with a single example from my work) where performance should be the sole criterion when picking a programming language. I"
I can only assume from this that he has never done or known anyone who has done any realtime programming. If you're going to write something
like a car engine management system performance is the ONLY critiria, hence a lot of these sorts of systems are still hand coded in assembler , never
mind C.
Java benchmarks are flawed. (Score:3, Insightful)
2) Java's IO functions work on UTF-8 or another system-dependent character set, so in essence Java is doing twice the amount of work during the IO benchmark.
I'm sure other people will comment as well, but overall these numbers are not that surprising for code that was just copied and pasted from C. Why do people expect that ANY language will perform well using another language's code?
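If you want to take the charset conversion out of the picture when benchmarking, write raw bytes through a buffered stream instead of going through a Writer. A sketch (the file names and line counts here are made up):

import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.PrintWriter;

// Two ways to write the same lines. The PrintWriter path converts chars
// to bytes through a charset encoder on every write; the raw byte path
// skips that step entirely, which is closer to what the C version does.
public class IoStyles {
    public static void main(String[] args) throws IOException {
        String line = "abcdefghijklmnopqrstuvwxyz1234567890abcdefgh\n";
        int lines = 1000000;

        // Charset-converting path (what most "port the C code" benchmarks use).
        PrintWriter w = new PrintWriter(new BufferedOutputStream(
                new FileOutputStream("TestWriter.txt")));
        for (int i = 0; i < lines; i++) {
            w.print(line);
        }
        w.close();

        // Raw byte path: encode once, then push bytes.
        byte[] bytes = line.getBytes();    // one conversion, up front
        BufferedOutputStream out = new BufferedOutputStream(
                new FileOutputStream("TestBytes.txt"));
        for (int i = 0; i < lines; i++) {
            out.write(bytes);
        }
        out.close();
    }
}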
Not a fair test - Frame Pointers (Score:4, Insightful)
Tried this with gcj 3.2, here are the results (Score:4, Informative)
Comparison against gcc, gcj and Java 1.4.1 on the same host: I was somewhat surprised at the difference in the trig tests, as both appear to use libm. Not surprised that the IO was slower; the Java IO classes are nifty, but they do add quite a bit of overhead compared to fputs/fgets.
(Sorry about the formatting, it was the best I could do)
I just sped up the Python version by 7x and 1.5x (Score:5, Interesting)
Changing this to 'linesToWrite = [myString] * ioMax' dropped time on my system from 2830ms to 1780ms (I'd like to note that I/O on my system was already much faster than his *best* I/O score, thank you very much Linux)
In the trig test, I used numarray to decrease the runtime from 47660.0ms to *6430.0ms*. The original timing matches his pretty closely, which means that numarray would probably beat his gcc timings handily, too. Any time you're working with a billion numbers in Python, it's a safe bet that you should probably use numarray!
I didn't immediately see how to translate his other mathematical tests into numarray, but I noted that his textual explanation in the article doesn't match the (python) source code!
(My system is a 2.4GHz Pentium IV running RedHat 9)
Python Benchmark (Score:4, Informative)
$ python -O Benchmark.py
Int arithmetic elapsed time: 13700.0 ms with
Trig elapsed time: 8160.0 ms
$ java Benchmark
Int arithmetic elapsed time: 13775 ms
$ java -server Benchmark
Int arithmetic elapsed time: 9807 ms
(n.b. this is only a small subset of the tests- I didn't feel like waiting. Trig was not run for java because it took forever.)
To dismiss a few common myths...
1) Python IS compiled to bytecode on its first run. The bytecode is stored on the filesystem in $PROGNAME.pyc.
2) the -O flag enables runtime optimization, not just faster loading time. On average you get a 10-20% speed boost.
3) Python is a string and list manipulation language, not a math language. It does that kind of work significantly faster than your average C coder could manage, with a hell of a lot less effort.
Re:Wow (Score:4, Insightful)
-t
Re:Wow (Score:5, Interesting)
The short of it is that GCC 3.2.1 is highly competitive with ICC 7.0, except for two cases:
FP-intensive code on the Pentium 4
Code that allows Intel C++ to auto-generate SSE vector code for it
Re:They should benchmark development time (Score:5, Interesting)
The difference b/w Java and C++ would be dwarfed by the difference b/w Java and Python. Java may be 30-40% more productive than C++, but Python is 1000% more productive than Java. And yes, this applies to larger projects. J2EE may come into its own with projects that have hundreds of mediocre programmers, but if you have a mid-size team of highly skilled developers creating something new and unique (something like Zope or Chandler), Python will trounce the competition.
Re:They should benchmark development time (Score:5, Insightful)
The advantages over Java only increase 6 months down the road. Python code is much more readable and maintainable, hence easier to extend. The dynamically typed object model scales incredibly well.
I used to think the same about Perl vs Java, until I started looking at frameworks like Cocoon and they're all written in Java.
Comparing Perl to Java is foolish, Perl is more like Awk than a general purpose programming language, and not meant for large projects at all.
Re:They should benchmark development time (Score:3, Insightful)
I don't know much about Python and I'll give it a go when I get a chance, but it's really hard to take your comments seriously when you call Python a "Silver Bullet" in your sig
Re:They should benchmark development time (Score:4, Funny)
I heard there was a vote b/w Perl, Awk, Intercal and sed, and Perl won by a narrow margin.
Re:They should benchmark development time (Score:3, Insightful)
Do you base this assertion on actual experience, or just a hunch that "it surely must be so"? If both languages are used to solve the same problem, the Python program is much more concise. It's not physically possible to create the Java program as quickly, given the same typing speed. Not to mention the difference in