
Performance Benchmarks of Nine Languages 954

Posted by michael
from the tmtowtdi dept.
ikewillis writes "OSnews compares the relative performance of nine languages and variants on Windows: Java 1.3.1, Java 1.4.2, C compiled with gcc 3.3.1, Python 2.3.2, Python compiled with Psyco 1.1.1, Visual Basic, Visual C#, Visual C++, and Visual J#. His conclusion was that Visual C++ was the winner, but in most of the benchmarks Java 1.4 performed on par with native code, even surpassing gcc 3.3.1's performance. I conducted my own tests pitting Java 1.4 against gcc 3.3 and icc 8.0 using his benchmark code, and found Java to perform significantly worse than C on Linux/Athlon."
  • Trig functions... (Score:4, Interesting)

    by eples (239989) * on Friday January 09, 2004 @11:30AM (#7928169)
    I am not a compiler nerd (IANACN?), so maybe someone else can answer the following simple question:

    Why are the Microsoft languages so fast with the Trig functions?
    • by Kingpin (40003) on Friday January 09, 2004 @11:34AM (#7928216) Homepage

      They probably cheat and use undocumented native OS calls.

    • by bigjocker (113512) *
      What is interesting in these functions is that, as pointed out in the article, there seems to be something wrong with Sun's implementation for Java. Removing this test, JDK 1.4.2 executes almost on par with Visual C++ (the winner).

      This is (once again) proof that Java is not slow, in fact it's really fast. It's slow starting, and yes, consumes more memory than native code, but the gained security, stability and ease of programming (reduced development times) are worth the memory use increase.

      Also, the memory us
      • by Dr. Evil (3501) on Friday January 09, 2004 @11:39AM (#7928284)

        Don't forget that it is also perceived as slow, since just about any desktop application anyone has seen written in Java has a sluggish GUI.

        Yeah, I know Java's strengths aren't in the Desktop arena, they're in development and the back-end.

        • Re:Trig functions... (Score:5, Informative)

          by csnydermvpsoft (596111) on Friday January 09, 2004 @11:50AM (#7928454) Homepage
          If more people would use the SWT libraries (part of the Eclipse project) instead of the crappy AWT/Swing libraries, then this misconception would go away. SWT works by mapping everything to native OS widgets if possible, giving it the look, feel, and speed of a native app. I used Eclipse for quite a while before finding out that it is almost 100% pure Java (other than the JNI code necessary for the native calls).
          • Re:Trig functions... (Score:4, Informative)

            by happyfrogcow (708359) on Friday January 09, 2004 @12:22PM (#7928881)
            SWT works by mapping everything to native OS widgets if possible

            Isn't that what AWT tried to do originally? I'm just delving into Java for the first time these last few months, but I thought I read this in "Core Java, Vol. 1".

            They say (pg. 236, "Core Java, Vol. 1") that this resulted in a "write once, debug everywhere" problem, since you get different behavior, different limitations, and different bugs on each platform's implementation of AWT.
          • Re:Trig functions... (Score:5, Interesting)

            by dnoyeb (547705) on Friday January 09, 2004 @12:23PM (#7928904) Homepage Journal
            Just because Swing is slow does not make it crappy. It does what it was designed to do nicely. I use Swing applications all the time. Today we have 1GHz processors; it's not even an issue any longer, but it won't be allowed to die...

            Eclipse is nice, I love Eclipse. But I don't mistake it for a Swing replacement. AWT has a purpose, as do Swing and SWT; they are all different.

            I believe AWT should be as fast as SWT, because it's also natively implemented.
          • Re:Trig functions... (Score:3, Informative)

            by ChannelX (89676)
            I'm getting really tired of hearing this comment. First off, SWT isn't anywhere near Swing in the functionality department. Not even close. Is it faster? Yes, in *some* cases. In the cases where there isn't a native OS widget, it emulates the widget, and you're then in the same area as Swing, yet with a crappy API to boot. No thanks. Secondly, Swing is not slow if programmed properly. I suggest looking at the interfaces of IntelliJ IDEA, the tools from JGoodies (http://www.jgoodies.com), or JEdit t
          • Re:Trig functions... (Score:5, Informative)

            by Abcd1234 (188840) on Friday January 09, 2004 @01:14PM (#7929579) Homepage
            Sorry, dude, but SWT is nowhere *near* as complete as Swing in terms of functionality. I know, I've tried to use it. Basically, because SWT was designed more or less specifically with Eclipse in mind, it has massive gaps in its APIs (for example, the imaging model is *severely* lacking). Worse, it's difficult to deploy, and even more difficult to use, as the documentation is remarkably incomplete. So, as much as I hate to say it, SWT simply can't replace Swing right now, and I don't expect it to any time soon.
          • by shemnon (77367) on Friday January 09, 2004 @01:37PM (#7929925) Journal
            First off, SWT only performs well on Windows; stack on top of that the fact that the principal native abstractions are tailored to a Win32 environment, and it is easy to see why SWT performs quite nicely on Windows.

            Elsewhere it sucks: MacOS, GTK, Photon, Motif. Even poorly written Swing programs outperform it on those platforms.

            But back to your FUD. Yes, bad programmers make ugly and poorly performing GUI code. Swing is no different in that regard. But have you looked at recent Swing programs on the 1.4.2 version of the JDK? Tried stuff like CleverCactus (a mail client)? Synced your MP3s on a new Rio? Used Yahoo's Site Builder to make a web site? There are excellent Swing programs [sun.com] out there. Many you probably don't realize are Java Swing apps!

            But since SWT is only in early-adopter land, we haven't seen the real dogs of GUIs it can make yet, especially since you have to do such arcane and ancient tasks in SWT as managing your own event queue! :( Give the same bad programmer SWT and you won't get a bad GUI; instead you will get a non-functioning GUI.
        • by Doomdark (136619) on Friday January 09, 2004 @11:53AM (#7928494) Homepage Journal
          Don't forget that it is also perceived as slow since just about any application anyone has seen for a desktop environment written in Java has a sluggish GUI.

          It's in many ways unfortunate that with JDK 1.2 (Swing) and onwards, Sun pretty much dumped fast native support for GUI rendering. It has its benefits -- full control, easier portability -- but the fact is that simple GUI apps felt faster with 1.1 than they have ever since. This is, alas, especially noticeable on X Windows, perhaps because the whole window is often rendered as one big component, as opposed to normal X app components (in the latter case, X Windows can optimize clipping better).

          Years ago (in late 90s, 97 or 98), I wrote a full VT-52/100/102/220 terminal emulator with telnet handling (plus for fun plugged in a 3rd party then-open SSH implementation). After optimizing display buffer handling, it was pretty much on par with regular xterm, on P100 (Red hat whatever, 5.2?), as in felt about as fast, and had as extensive vt-emulation (checked with vttest). Back then I wrote the thing mostly to show it can be done, as all telnet clients written in Java back then were horribly naive, doing full screen redraw and other flicker-inducing stupidities... and contributed to the perception that Java is and will be slow. I thought it had more to do with programmers not optimizing things that need to be optimized.

          It's been a while since then; last I tried it on JDK 1.4.2... and it still doesn't FEEL as fast, even though technically speaking all the Java code parts ARE much faster (1.1 didn't have any JIT compiler; HotSpot, as tests show, is rather impressive at optimizing). It's getting closer, but then again, my machine has almost an order of magnitude more computing power now, as probably does the gfx card.

          To top off the problems, the Linux implementation has in general gotten much less attention than the Windows version (or Solaris, but Solaris is at least done by the same company). :-/

          • Since you have 1.4.2 installed, try running the Java2D demo in $JAVA_HOME/demo/jfc/Java2D. The graphics are quite fast on a local machine, largely because (as you said) it uses back buffering to blast the pixels onto the screen. Unfortunately, this is horrible for remote X displays, but I found a workaround. Launch your java app with the option -Dsun.java2d.pmoffscreen=false, and this will disable the backbuffer and tremendously speed up remote X operation, at the expense of some flicker. On my remote m
      • Re:Trig functions... (Score:4, Informative)

        by tealwarrior (534667) on Friday January 09, 2004 @11:41AM (#7928309)
        What is interesting in these functions is that, as pointed in the article, there seems to be something wrong with Sun's implementation for Java.

        For many math functions, Java uses a software implementation rather than the built-in hardware functions on the processor. This is to ensure that these functions perform exactly the same on different architectures, and it probably accounts for the difference in performance.
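        A quick way to see this software fallback is Java's StrictMath class, which is specified to match the fdlibm software library bit-for-bit; Math is only required to stay within 1 ulp of the true result and may use faster platform code. A minimal sketch (class name is mine):

```java
public class TrigCheck {
    public static void main(String[] args) {
        // Large argument: the bare x87 fsin instruction loses accuracy here,
        // because its internal argument reduction uses only a 66-bit
        // approximation of pi.
        double x = 1.0e9;

        // StrictMath.sin is defined to give bit-identical results on every
        // platform (fdlibm semantics).
        double strict = StrictMath.sin(x);

        // Math.sin may use a faster implementation, but the spec still
        // requires it to be within 1 ulp of the correctly rounded result.
        double fast = Math.sin(x);

        System.out.println("StrictMath.sin(1e9) = " + strict);
        System.out.println("Math.sin(1e9)       = " + fast);
    }
}
```

        Run it and the two values agree to the last few bits; the cost of that reproducibility guarantee is what shows up in the trig benchmark.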
      • Re:Trig functions... (Score:3, Informative)

        by be-fan (61476)
        It's not proof of anything. It's just proof that the JIT manages to handle JIT-ing properly. All of these were simple loops where the JIT could do its work and get out of the way for the rest of the program. They are not indicative of real-world performance, unless all you are doing is such loops.

    • Re:Trig functions... (Score:3, Interesting)

      by hackstraw (262471)
      M$ is pretty tight with Intel (hence the term Wintel). They might have licensed or somehow gotten some code optimisers from Intel. On Linux, the Intel compiler is often 100% faster than gcc on double-precision code (like trig functions).
    • Re:Trig functions... (Score:5, Interesting)

      by mengel (13619) <`ten.egrofecruos.sresu' `ta' `legnem'> on Friday January 09, 2004 @11:48AM (#7928433) Homepage Journal
      Probably the Microsoft languages use the Intel trig instructions.

      In the case of Java, you find that the Intel floating point trig instructions don't meet [naturalbridge.com] the Java machine spec. So they had to implement them as a function.

      It all depends on whether you want accuracy or speed.

    • Why are the Microsoft languages so fast with the Trig functions?

      Last time I did similar benchmark on Windows, the MSVC runtime library set the FPU control word to limit precision to 64 bits. Other environments on x86 used 80 bits precision by default, increasing computation time for some operations.
  • Accurate? (Score:5, Interesting)

    by Nadsat (652200) on Friday January 09, 2004 @11:31AM (#7928178) Homepage
    Not sure of the accuracy. Benchmark is on a loop:

    32-bit integer math: using a 32-bit integer loop counter and 32-bit integer operands, alternate among the four arithmetic functions while working through a loop from one to one billion. That is, calculate the following (while discarding any remainders)....

    It also relies on the strength of the compiler, not just the strength of the language.
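    For reference, here is my reconstruction of that integer loop (not the article's actual source; the article runs the counter up to one billion):

```java
public class IntBench {
    // Alternate among the four arithmetic operations while walking i
    // from one up to intMax, discarding remainders on division.
    static int run(int intMax) {
        int i = 1;
        int intResult = 1;
        while (i < intMax) {
            intResult -= i++;
            intResult += i++;
            intResult *= i++;
            intResult /= i++; // integer division discards the remainder
        }
        return intResult;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        int result = run(1_000_000); // scaled down from the article's one billion
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("result=" + result + " in " + elapsedMs + "ms");
    }
}
```

    A loop like this is about the best case a JIT can hope for: one hot method, no allocation, no polymorphism.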
  • by nberardi (199555) * on Friday January 09, 2004 @11:31AM (#7928180) Homepage
    Why did VB do so badly on IO compared to the other .NET benchmarks? They were pretty much equal up until the IO benchmarks. Any chance of getting the code that was used for this test published?
  • by ViolentGreen (704134) on Friday January 09, 2004 @11:34AM (#7928210)
    I conducted my own tests pitting Java 1.4 against gcc 3.3 and icc 8.0 using his benchmark code, and found Java to perform significantly worse than C on Linux/Athlon.

    Why is this a surprise? C has been the most commonly used language for so long because of its speed and efficiency. I think anyone who has done much work with either developing or running large-scale Java programs knows that speed can definitely be an issue.
    • by Kingpin (40003) on Friday January 09, 2004 @11:40AM (#7928299) Homepage

      All that matters to anti-Java zealots is speed. The list of benefits coming from using Java is too long to take the speed-only view seriously.
      • by finkployd (12902) on Friday January 09, 2004 @11:46AM (#7928387) Homepage
        Not always, though. I think the thing people neglect to consider is that there are times when performance and scale are important enough that the benefits of Java do NOT outweigh those of C, and vice versa.

        I feel sad for someone who only has enough room in their world for one computer language.

        Finkployd
      • by Isochrome (16108) on Friday January 09, 2004 @12:04PM (#7928649)
        OK, Speed does matter a lot.

        But what about type safety? Java has no generic typed containers, like the STL. This means you tend to find errors at runtime instead of at compile time.

        I need to know that my code is as safe as possible. I don't want a user to find a bug because my hand tests didn't get 100% code coverage every time.

        And what about predictable performance? I would much rather know that a function will take 200ms all of the time, instead of 100ms most of the time and 10s occasionally due to garbage collection.
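        The container point above, concretely: with Java's pre-generics (1.4-era) collections, the bad insert below compiles cleanly and only fails at runtime, whereas an STL std::vector<int> would reject it at compile time. A sketch (modern Java syntax, raw types as in 1.4):

```java
import java.util.ArrayList;
import java.util.List;

public class RawContainers {
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        List numbers = new ArrayList(); // 1.4-era: holds bare Objects
        numbers.add(Integer.valueOf(42));
        numbers.add("oops, not a number"); // compiles without complaint

        int sum = 0;
        try {
            for (Object o : numbers) {
                // The mistake only surfaces here, at runtime, as a
                // ClassCastException -- exactly the bug a user finds for you.
                sum += ((Integer) o).intValue();
            }
        } catch (ClassCastException e) {
            System.out.println("caught at runtime: " + e);
        }
    }
}
```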
    • I think anyone who has done much work with either developing or running large scale java programs knows that speed can definitely be an issue.

      I would consider myself part of that "anyone," and I disagree with you. Other than load times (which aren't as bad as they used to be), Java can perform as fast as or faster than C code. The main thing is to use a good VM -- IBM's J9 VM significantly outperforms Sun's.
  • Under Windows... (Score:3, Insightful)

    by Ianoo (711633) on Friday January 09, 2004 @11:34AM (#7928211) Journal
    I see once again that Eugenia (a supposedly pro-Linux, pro-BeOS person who doesn't use Windows) has done all her benchmarks *under* Windows. I have a feeling that Python would perform a lot better if it were running in a proper POSIX environment (linked against Linux's libraries instead of the Cygwin libs). The C code compiled with GCC would probably perform a fair bit better too.
    • Re:Under Windows... (Score:3, Informative)

      by mindriot (96208)
      The article was actually written by Christopher W. Cowell-Shah. Also, as others noted, except for the I/O benchmark (where gcc was actually faster) the benchmarks couldn't profit much from the OS around it since they were just computationally intensive, but didn't make much use of OS-specific library functions.
    • by MBCook (132727) <foobarsoft@foobarsoft.com> on Friday January 09, 2004 @11:59AM (#7928575) Homepage
      Because, as we all know, VC++ and the other Microsoft languages are so widely available for Linux/BeOS. I'm sorry, but your comment is pure troll. It would be interesting to have things like GCC under Linux on the same computer too, but you can't compare Microsoft's .NET to anything under Linux, because .NET doesn't run under Linux (I know about Mono, but that isn't MS's runtime).
    • Re:Under Windows... (Score:3, Interesting)

      by Umrick (151871)
      Actually, I'm quite comfortable with the performance numbers Python turned in. I use Python quite a bit, and for the things the benchmark was run on, it's the kind of area I'd find looking for bottlenecks, and in turn implement in C or C++.

      Python's huge win is not in speed, but in the ability to express the program in a very concise and easy to understand way.

      The fact that Psyco can provide huge speed ups via a simple import is just icing.
    • We weren't quite ready to release it, but we've been working on a language performance comparison test of our own. It is available at:

      http://scutigena.sourceforge.net/ [sourceforge.net]

      It's designed as a framework that ought to run cross-platform, so you can run it yourself. We haven't added it yet, but I think we really want to divide the tests into two categories. "Get it done" - and each language implements it the best way for that language, and "Basic features comparison" - where each language has to show off feat
  • by ClubStew (113954) on Friday January 09, 2004 @11:37AM (#7928247)

    Why benchmark the various ".NET languages" (those languages whose compilers target the CLR)? Every compiler targeting the CLR produces Intermediate Language, or more specifically MSIL. The only differences you'd find are in the optimizations performed by each compiler, which usually don't amount to much (e.g., VB.NET allocates a local variable for the old "Function = ReturnValue" syntax whether you use it or not).

    Look at the results for C# and J#. They are almost exactly the same, except for the IO results, which I highly doubt. Compiler optimizations could squeeze a few more ns or ms out of each procedure, but nothing like that. After all, it's the IL from the mscorlib.dll assembly that's doing most of the work for both languages in exactly the same way (it's already compiled and won't differ in execution).

    When are people going to get this? I know a lot of people who claim to be ".NET developers" but only know C#, and don't realize that the class libraries can be used by any language targeting the CLR (and each has its shortcuts).

  • Would like to see... (Score:4, Interesting)

    by CaptainAlbert (162776) on Friday January 09, 2004 @11:40AM (#7928288) Homepage
    ...some analysis of the code generated by Visual C++ and gcc side by side, particularly for those trig calls. If there's that great a discrepancy between the runtimes, that's a good clue that either one of the compilers is under-optimising (i.e. missing a trick), or the other is over-optimising (i.e. applying some transformation that only approximates what the answer should be). I didn't see any mention of the numerical results obtained being checked against what they ought to be (or even against each other).

    As any games/DSP programmer will tell you, there are a million ways to speed up trig, provided that you don't *really* care beyond 6 decimal places or so.

    OK, maybe I'm just bitter because I was expecting gcc 3.1 to wipe the floor. :)
    • trig calls in gcc (Score:5, Informative)

      by ajagci (737734) on Friday January 09, 2004 @12:11PM (#7928746)
      The Pentium trig instructions are not IEEE compliant (they don't return the correct values for large magnitude arguments). gcc errs on the side of caution and generates slow, software-based wrappers that correct for the limitations of the Pentium instructions by default. Other compilers (e.g., Intel and probably Microsoft) just generate the in-line instructions with no correction. When you look at the claimed superiority of other compilers over gcc, it is usually such tradeoffs that make gcc appear slower.

      You can enable inline trig functions in gcc as well, either with a command line flag, or an include file, or by using "asm" statements on a case-by-case basis. Check the documentation. With those enabled, gcc keeps up well with other compilers on trig functions.
  • by Foofoobar (318279) on Friday January 09, 2004 @11:40AM (#7928307)
    Well, unfortunately, comparing Java to C# on a Windows machine is like comparing a bird's and a dolphin's ability to swim in water; several components of C# are integrated right into the operating system, so naturally it's going to run faster on a Windows machine. Compare C#, C++, and Java on machines where the components aren't integrated, and then we will have a FAIR benchmark.

    Oh wait! C# only runs on one operating system. Can you name any other development languages that only run on ONE OS, boys and girls? Neither can I.
    • by Mathetes (132911) on Friday January 09, 2004 @11:47AM (#7928399)

      Oh wait! C# only runs on one operating system. Can you name any other development languages that only run on ONE OS, boys and girls? Neither can I.


      Ximian's Mono has a C# compiler for open OS's:

      http://www.go-mono.com/c-sharp.html
    • Several components of C# are integrated right into the operating system so naturally it's going to run faster on a windows machine.

      And libc isn't "integrated right into" operating systems? (Richard Stallman would like to have a GNU/word with you, then.) Anyway, who cares? This isn't the Special Olympics. If code runs faster, it runs faster. There are no fairness points.

      Oh wait! C# only runs on one operating system. Can you name any other development languages that only run on ONE OS, boys and girls? Neith

    • is like comparing a bird and a dolphins ability to swim in water

      Our beloved Penguins can swim quite well under Linux^H^H^H^H^Hwater, thankyou!

    • Several components of C# are integrated right into the operating system so naturally it's going to run faster on a windows machine.

      Oh, really? How is going to be "integrated right into the operating system" going to help with integer and floating point microbenchmarks? I'd really like to know.

      And, also, in what sense is the CLR "integrated right into the operating system" that the JVM isn't? Both are a bunch of DLLs running on top of the NT kernel. What's the difference in your mind?

      Oh wait! C# onl
    • Microsoft even has an implementation [microsoft.com] for FreeBSD and OS X.

      Boy, that's gotta be embarrassing :)

      --
      Mando
  • by rpeppe (198035) on Friday January 09, 2004 @11:42AM (#7928334)
    Benchmark code like this does not represent how these languages are used in practice. Idiomatic Java code tends to be full of dynamic classes and indirection galore. Just testing "arithmetic and trigonometric functions [...] and [...] simple file I/O" is not going to tell you anything about how fast these languages are in the real world.
  • by G4from128k (686170) on Friday January 09, 2004 @11:43AM (#7928353)
    Given the ever-accelerating clock speeds of processors, is the raw performance of languages that big an issue? Except for CPU-intensive programs (3-D games, high-end video/audio editing), current CPUs offer more than enough horsepower to handle any application. (Even 5-year-old CPUs handle almost every task with adequate speed.) Thus, code performance is not a big issue for most people.

    On the other hand, the time and cost required of the coder is a bigger issue (unless you outsource to India). I would assume that some languages are just easier to design for, easier to write in, and easier to debug. Which of these languages offers the fastest time to "bug-free" completion for applications of various sizes?
    • I agree, time to market is key.

      I like Python; it is easy to write and to keep somewhat clean.
    • by Dalroth (85450) * on Friday January 09, 2004 @12:26PM (#7928940) Homepage Journal
      Raw performance will ALWAYS be an issue. If you can handle 100,000 hits per day on the same hardware on which I can handle 1,000,000 (and these are not made-up numbers; we see this kind of discrepancy in web applications all the time), then I clearly will be able to do MORE business than you and do it cheaper. That gives me a competitive advantage from now till the end of time. If you throw more hardware at the problem, well, so can I, and I'll still be ahead of you.

      Performance realities do not go away, no matter how much we may wish they would. Now, does that mean you're going to go write major portions of your web application in assembly to speed it up? No, probably not. But your database vendor may very well use some tricks like that to speed up the key parts of their database. You sink or swim by your database, so don't say it doesn't matter because it absolutely does.

      Anyway, in my day-to-day operations, I can think of quite a few things that get compiled directly to executable code even though they don't have to be. Why would you do this if performance wasn't an issue and we could just throw more hardware at it?

      1. Regular expressions in the .NET environment are compiled down to executable code, then executed.

      2. XSL transformations in the .NET environment are compiled to a form of executable code (I don't think it's actual .NET byte code, but it may be) and then executed.

      3. The XmlSerializer class creates a special compiled executable specifically built to serialize objects into XML (byte code!!).

      And the list just goes on and all of this eventually ends up getting JITed as well. My pages are 100% XML based, go through many transformation steps to get to where they need to be, and on average render in about 70-100ms (depending upon the amount of database calls I need to make and the size of the data). This all happens without spiking our CPU utilization to extreme levels. There is *NO WAY* I could've done this on our hardware if nobody cared about performance.

      As always, a good design is the most important factor. But a good design that performs well will always be superior to one that doesn't.

      Bryan
      • by G4from128k (686170) on Friday January 09, 2004 @01:04PM (#7929408)
        Raw performance will ALWAYS be an issue. If you can handle 100,000 hits per day on the same hardware that I can handle 1,000,000 (and these are not made up numbers, we see this kind of discrepancy in web applications all the time), then I clearly will be able to do MORE business than you and do it cheaper.

        You raise excellent points. For many enterprise and server applications, performance is an issue. But I never said one should care nothing about performance, only that in many applications the cost of the coder also impacts financial results.

        For the price of one software engineer for a year (call it 50k to 100k burdened labor rate), I can buy between 20 and 100 new PCs (at $1000 to $3000 each). If the programmer is more expensive or the machines are less expensive, the calculus tilts even further toward worrying about coder performance.

        The trade-off between the hardware cost of the code and the wetware cost is not obvious in every case. A small firm that can double its server capacity for less than the price of a coder, or the creators of an infrequently used application, may not need high performance. On the other hand, a large software vendor that sells performance-critical apps might worry more about speed. My only point is that ignoring the cost of the coder is wrong.

        These different languages create a choice of whether to throw more hardware at a problem or throw more coders at the problem.
  • Speed or accuracy? (Score:5, Interesting)

    by derek_farn (689539) <derek@nospAm.knosof.co.uk> on Friday January 09, 2004 @11:43AM (#7928354) Homepage
    The Java performance is best explained by an article by Prof. Kahan, "How JAVA's Floating-Point Hurts Everyone Everywhere" (http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf); see also "Marketing vs. Mathematics" (http://www.cs.berkeley.edu/~wkahan/MktgMath.pdf). I suspect the relatively poor floating-point performance of gcc is also caused by the desire to achieve accurate results.
  • Where is Fortran? (Score:3, Insightful)

    by Aardpig (622459) on Friday January 09, 2004 @11:44AM (#7928361)

    It's a pity that the present-day language of choice for high-performance computing, Fortran 90/95/HPF, was not covered in this study. There has been anecdotal evidence that C++ has approached Fortran, performance-wise, in recent years, but I've yet to see a proper comparison of the two languages.

  • by SiW (10570) on Friday January 09, 2004 @11:44AM (#7928362) Homepage
    Don't forget about the Win32 Compiler Shootout [dada.perl.it]
  • by ultrabot (200914) on Friday January 09, 2004 @11:45AM (#7928376)
    Note that Python is pretty easy to extend in C/C++, so that speed critical parts can be rewritten in C if the performance becomes an issue. Writing the whole program in C or C++ is a premature optimization.
  • by be-fan (61476) on Friday January 09, 2004 @11:50AM (#7928453)
    There were a number of problems with this benchmark, which are addressed in the OSNews thread about the article.

    Namely:

    - They only test a highly specific case of small numeric loops that is pretty much the best-case scenario for a JIT compiler.

    - They don't test anything higher level, like method calls, object allocation, etc.

    Concluding "oh, Java is as fast as C++" from these benchmarks would be unwise. You could conclude that Java is as fast as C++ for short numeric loops, of course, but that would be a different bag of cats entirely.
  • by Jugalator (259273) on Friday January 09, 2004 @11:50AM (#7928456) Journal
    Site was showing signs of Slashdotting, so I'll quote one of the more important sections...

    Results

    Here are the benchmark results presented in both table and graph form. The Python and Python/Psyco results are excluded from the graph since the large numbers throw off the graph's scale and render the other results illegible. All scores are given in seconds; lower is better.

    Language         int    long   double   trig    I/O    TOTAL
    Visual C++       9.6    18.8      6.4    3.5   10.5     48.8
    Visual C#        9.7    23.9     17.7    4.1    9.9     65.3
    gcc C            9.8    28.8      9.5   14.9   10.0     73.0
    Visual Basic     9.8    23.7     17.7    4.1   30.7     85.9
    Visual J#        9.6    23.9     17.5    4.2   35.1     90.4
    Java 1.3.1      14.5    29.6     19.0   22.1   12.3     97.6
    Java 1.4.2       9.3    20.2      6.5   57.1   10.1    103.1
    Python/Psyco    29.7   615.4    100.4   13.1   10.5    769.1
    Python         322.4   891.9    405.7   47.1   11.9   1679.0
  • IBM Java (Score:3, Interesting)

    by PourYourselfSomeTea (611000) on Friday January 09, 2004 @11:52AM (#7928475)
    Using the IBM Java VM, I've consistently been able to cut my runtimes in half compared to the Sun VM. Anyone currently using the Sun VM for production work should test the IBM one and consider the switch.

    The application I benchmarked is data-, network-, and memory-intensive, though not math-intensive, so that's what I can speak for. We consistently use 2 GB of main memory and pump a total of 2.5 TB (yes, TB) of data through the application over its life cycle (doing a whole bunch of AI-style work inside the app itself), and we cut our total runtime from 6 days to 2.8 days by switching to the IBM VM.
  • by xtheunknown (174416) on Friday January 09, 2004 @11:52AM (#7928486)
    You are not testing the languages, you are testing the compilers. If you test a language with a crummy compiler (gcc sucks compared to commercial optimized C++ compilers) you will think the language is slow, when in fact, the compiler just sucks. The only valid comparisons that can be made are same language, different compilers.
  • wrong questions (Score:4, Insightful)

    by ajagci (737734) on Friday January 09, 2004 @11:53AM (#7928492)
    The Java JIT has been comparable to C in performance for many years on certain microbenchmarks. But Java remains a "slow language". Why?
    • The design of the Java language and the Java libraries means that enormous numbers of calls are made to the memory allocator in idiomatic Java.
    • The Java language has several serious limitations, such as the lack of true multidimensional arrays and the lack of value classes.

    So, yes, you can construct programs, even some useful compute intensive programs, that perform as well or better on Java than they do in C. But that still doesn't make Java suitable for high-performance computing or building efficient software.

    Benchmarks like the one published by OSnews don't test for these limitations. Microbenchmarks like those are still useful: if a language doesn't do well on them, that tells you that it is unsuitable for certain work; for example, based on those microbenchmarks alone, Python is unlikely to be a good language for Fortran-style numerical computing. But those kinds of microbenchmarks are so limited that they give you no guarantees that an implementation is going to be suitable for any real-world programming even if the implementation performs well on all the microbenchmarks.

    I suggest you go through the following exercise: write a complex number class, then write an FFT using that complex number class, "void fft(Complex array[])", and then benchmark the resulting code. C, C++, and C# all will perform reasonably well. In Java, on the other hand, you will have to perform memory allocations for every complex number you generate during the computation.
  • by DuSTman31 (578936) on Friday January 09, 2004 @11:53AM (#7928496)

    The optimisers in Sun's Java VM work on run-time profiling - they identify the most frequently run sections of code and apply the more elaborate optimisation steps to those segments alone.

    Benchmarks that consist of one small loop will do very well under this scheme, as the critical loop will get all of the optimisation effort, but I suspect that in programs where the CPU time is more distributed over many code sections, this scheme will perform less well.

    C doesn't have the benefit of this run-time profiling to aid in optimising critical sections, but it can more afford to apply its optimisations across the entire codebase.

    I'd be interested to see results of a benchmark of code where CPU time is more distributed..

  • by finkployd (12902) on Friday January 09, 2004 @11:55AM (#7928523) Homepage
    I like java for some things, and the performance has even improved a bit lately. However if I am doing ANYTHING that has to scale and perform well under heavy load that uses cryptographic functions (especially public key encipherment), there is no way I can even seriously consider Java.

    Someone (meaning anyone other than me) should do a benchmark of THAT, I'm sure it would be quite telling.

    Finkployd
  • by PommeFritz (70221) on Friday January 09, 2004 @11:56AM (#7928536) Homepage
    The Python 'long' type is not a machine type such as a 32- or 64- or perhaps even 128-bit integer/long.
    It is an arbitrary-precision integer type! That's why Python's scores on the Long test are so much higher (slower) than the other languages'.
    I wonder what Java would score if the benchmark were reimplemented using BigInteger instead of the 'long' machine type.
    Python uses a highly efficient Karatsuba multiplication algorithm for its longs (although that only starts to kick in with very big numbers).
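    The parent's point is easy to see in a short sketch (the values here are illustrative, not from the benchmark): Python integers grow without bound, while a fixed-width C long simply discards high bits.

    ```python
    # Python's int ("long" in 2.x) is arbitrary precision: repeated
    # doubling never overflows, the number simply keeps growing.
    n = 1
    for _ in range(100):
        n *= 2
    assert n == 2 ** 100          # 31 decimal digits, no overflow

    # Masking to 32 bits mimics what a fixed-width C long would keep:
    c_long = n & 0xFFFFFFFF
    print(n)       # 1267650600228229401496703205376
    print(c_long)  # 0 -- every set bit of 2**100 is above bit 31
    ```

    That bookkeeping for unbounded digits is exactly the overhead the benchmark's Long test ends up measuring.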
  • by tizzyD (577098) * <tizzyd@gmail. c o m> on Friday January 09, 2004 @12:07PM (#7928697) Homepage
    Consider what was done years ago with assembly. The performance was incredible, and the amount of superfluous garbage in the code was minimal. Hey, if you wrote the assembly, why would you spend time putting it in?

    Then, with more and more languages, especially ones with VMs, you get further and further away from the hardware. The end result: you lose performance. It does more and more for you, but at the expense of real optimizations, the kind that only you can do.

    Now the zealots will come out and say, "Language X is better than language Y, see!" To me this argument is boring. I tend to use the appropriate tool for the job. So:

    • Python [python.org] for scripts, prototypes, proofs of concept, or components where performance generally is not an issue.
    • For desktop apps, Visual Basic [microsoft.com] (yep, most IT apps are in VB). There is no justifiable reason for an IT department group to write a sales force reporting system in C++! If you want C++, go get a job at a software company. Stop wasting money and time making yourself feel like a hotshot. [I'd consider Kylix [borland.com] here if it were based on Basic. Why? Because honestly, Pascal is just about dead, and Basic is the king of the simple app. Let's just live with it and move on. I do want a cross-platform VB . . . ]
    • For web apps, well, I stick around PHP [php.net]/ASP.NET [asp.net]. Why? Portability! And moreover, the sticking point in a web-based app is not the UI layer; it's usually the underlying data extraction and formatting. Don't waste your time with lower level languages there. IMHO it's just not worth it. JSP and Java stuff, yuck! Too much time for too little bang.
    • Java [sun.com]/C# [microsoft.com] (also consider mono [go-mono.com]/LISP [sourceforge.net]) for most production apps. Why? Portability! I want no vendor holding me by the balls. I want platform independence on the back end, and these are the few ways to achieve it. I'd include Haskell [haskell.org]/OCAML [ocaml.org] here when appropriate. Perl [perl.com]? I'm loath to use Perl in production, considering most Perl code cannot be understood 2 weeks after it's written. I'd rather take the hit in performance and be able to pass the code to someone else later.
    • C++ [att.com]/C [bell-labs.com] for components--just components--where performance is at an absolute premium or there exists some critical library that only has this kind of interface. But this step has to be justified by the team, with considerable explanation why a different architecture could not suffice. Otherwise, the team could waste time checking for dangling pointers when instead it could be doing other things, like finishing up other projects.
    • Assembly? Only when there is no C compiler around. Embedded stuff. Nowadays, you just do not have the time to play.

    Yes, my teams use many languages, but they also put their effort to where they get the biggest bang for the buck. And in any business approach, that's the key goal. You don't see carpenters use saws to hammer in nails or drive screws. Wise up!


    • I actually use C++ for portability, not speed or generic programming (which are nice to have).

      If you avoid platform, compiler, and processor specific features, C++ is even more portable than Java. Java on the other hand tends to drag all platforms down to the least common denominator, then requires the use of contorted logic and platform extensions just to attain acceptable performance.

      People seem to have forgotten the original intention of C: portable code.

  • by nberardi (199555) * on Friday January 09, 2004 @12:21PM (#7928868) Homepage
    Windows was a good choice for this test, because many of the development languages that were used in this test aren't really mature enough on *nix. (i.e. .NET languages and arguably Java) A better test would be running both tests on both OSes, because GCC is really more optimized towards Linux, while VC++ is more optimized towards Windows. I would have rather seen VC++ vs. Borland C++, because that is a more real-world business example.
  • by Junks Jerzey (54586) on Friday January 09, 2004 @12:22PM (#7928894)
    It is amusing that the obsession with raw speed never goes away, even though computers have gotten thousands of times faster since the days of the original wisdom about how one shouldn't be obsessed with speed. Programmers put down Visual Basic as slow when it was an interpreted language running on a 66MHz 486. It was still put down as slow when it shared the same machine-code-generating back-end as Visual C++ running on a 3GHz Pentium 4. And still some people--usually people with little commercial experience--continue to insist that speed is everything.

    Here's a bombshell: if you have a nice language, and that language doesn't have any hugely glaring drawbacks (such as simple benchmarks filling up hundreds of megabytes of memory), then don't worry about speed. From past experience, I've found it's usually easy to start with what someone considers to be a fast C or C++ program. Then I write a naive version in Python or another language I like. And guess what? My version will be 100x slower. Sometimes this is irrelevant. 100x slower than a couple of microseconds doesn't matter. Other times it does matter. But it usually isn't important to be anywhere near as fast as C, just to speed up the simpler, cleaner Python version by 2-20x. This can usually be done by fiddling around a bit, using a little finesse, trying different approaches. It's all very easy to do, and one of the great secrets is that high-level optimization is a lot of fun and more rewarding than assembly level optimization, because the rewards are so much greater.

    This is mostly undiscovered territory, but I found one interesting link [dadgum.com].

    Note that I'm not talking about diddly high-level tasks in language like Python, but even things like image processing. It doesn't matter. Sticking to C and C++ for performance reasons, even though you know there are better languages out there, is a backward way of thinking.
    • I'm a TCL nazi. Back when I started (hint, the Pentium Bug was news) we would have to compile critical sections of our programs in C to get any kind of acceptable performance.

      Today, everything is in script because it's not worth the bother anymore. In 1998 I had to write my own affine transformation code in C to get a GUI to work at anywhere near real-time. Today I can run a planetarium simulator (read LOTS of calculations) at an acceptable speed in just script.

  • Python numbers (Score:3, Interesting)

    by Error27 (100234) <`moc.liamg' `ta' `72rorre'> on Friday January 09, 2004 @12:30PM (#7928983) Homepage Journal
    In some ways, this kind of math is a funny thing to benchmark. Addition and multiplication are just tiny assembly language instructions that are the same no matter what human-readable programming language the test was originally written in. Any differences in the run time are going to be caused by other parts of the language.

    Python did pretty badly in the tests. The reason is that in Python it takes a long time to translate a variable name into a memory address (It happens at runtime instead of compile time).

    The benchmark code has stuff that basically looks like this:

    while i < maxInt:
        i += 1


    Adding 1 to i takes no time at all, but looking up i takes a little time. In C this is going to be a lot faster.

    Python did really badly when "i" from the example above was a long, compared to when it was a long in C. That's because Python has big-number support, but in C a long is limited to just 4 bytes on typical 32-bit platforms.

    Python did OK in the trig section because the trig functions are implemented in C. It still suffers because it takes a long time to look up variables though.

    In real life, variable look up time is sometimes a factor. However, for programs that I've written getting data from the network, or database was the bottleneck.
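    The lookup cost described above can be shown with a hypothetical micro-benchmark (the function names and loop bound are invented for illustration). In CPython, local names are resolved by a fast indexed lookup, while global names go through a dictionary lookup on every iteration:

    ```python
    import timeit

    maxInt = 1_000_000

    def count_global():
        global g
        g = 0
        while g < maxInt:      # both g and maxInt are dict lookups here
            g += 1
        return g

    def count_local():
        i, limit = 0, maxInt   # bind to locals once, outside the loop
        while i < limit:       # locals use CPython's indexed fast path
            i += 1
        return i

    print(timeit.timeit(count_global, number=3))
    print(timeit.timeit(count_local, number=3))  # usually the smaller number
    ```

    Both loops compute the same count; only where the names live differs, which is why hoisting globals into locals is a classic Python-level optimization.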

  • G++? (Score:3, Insightful)

    by phorm (591458) on Friday January 09, 2004 @12:33PM (#7929038) Journal
    I know it ties into the GCC libs, but does G++ behave any better or worse than GCC? Comparing VC++ (a C++ compiler) against GCC (a C compiler) is a bit skewed. Also, how about a comparison of GCC on Windows vs. Linux (comparable machines), just to see if the OS has any bearing on things?
  • by dnoyeb (547705) on Friday January 09, 2004 @12:41PM (#7929139) Homepage Journal
    Third, Java 1.4.2 performed as well as or better than the fully compiled gcc C benchmark, after discounting the odd trigonometry performance. I found this to be the most surprising result of these tests, since it only seems logical that running bytecode within a JVM would introduce some sort of performance penalty relative to native machine code. But for reasons unclear to me, this seems not to be true for these tests.

    I don't know why the reasons are not clear to him. Perhaps it's because he still thinks the JVM is "running bytecode" and does not understand what JITs do or what HotSpot compilers do. Bytecode is only interpreted for the first few passes, after which it's optimized into native code. Native being whatever the compiler used to build the JVM could produce. This is fundamental. Which explains his results, and points to a poor HotSpot implementation where trig functions are concerned.
  • Why no ActivePerl? (Score:3, Interesting)

    by frostman (302143) on Friday January 09, 2004 @12:49PM (#7929218) Homepage Journal
    Why didn't they include ActivePerl? [activestate.com]

    In the article it rather sounds like they just assumed Python performance would be an indicator of performance for interpreted languages generally, but is there anything to back this up?
  • Slashdotted (Score:5, Funny)

    by ReadParse (38517) <john@funnycELIOTow.com minus poet> on Friday January 09, 2004 @12:57PM (#7929307) Homepage
    They should have written their site in one of the higher-performing languages.

    RP
  • by Viol8 (599362) on Friday January 09, 2004 @12:58PM (#7929314)
    I was a bit surprised by this quote in the article:

    "Even if C did still enjoy its traditional performance advantage, there are very few cases (I'm hard pressed to come up with a single example from my work) where performance should be the sole criterion when picking a programming language."

    I can only assume from this that he has never done, or known anyone who has done, any realtime programming. If you're going to write something like a car engine management system, performance is the ONLY criterion, hence a lot of these sorts of systems are still hand-coded in assembler, never mind C.
  • by _LORAX_ (4790) on Friday January 09, 2004 @01:03PM (#7929382) Homepage
    1) JIT optimizations don't always kick in until a function has been run several times. Since the benchmarks only run once, they are crippled on Java.

    2) Java's IO functions work on UTF-8 or another system-dependent character set. So in essence Java is doing twice the amount of work during the IO benchmark.

    I'm sure other people will comment as well, but overall these numbers are not that surprising for code that was just copied and pasted from C. Why do people expect that ANY language will perform well running another language's code?
  • by Compenguin (175952) on Friday January 09, 2004 @01:16PM (#7929597)
    His benchmark isn't fair: he's omitting the frame pointer with VC++ but not with gcc. How is that fair?
  • by BreadMan (178060) on Friday January 09, 2004 @01:39PM (#7929944)
    Compiled with gcj Benchmark.java --main=Benchmark -o benchmark, compiled other program with the same optimization level.

    Comparison against gcc, gcj and Java 1.4.1 on the same host:
    ..........gcc........gcj........java
    int.......28,700.....35,386.....22,953
    double....30,000.....73,504.....27,529
    long......61,520.....61,729.....68,914
    trig......11,020.....112,497....176,354
    io.........1,930.....16,533.....11,297
    I was somewhat surprised by the difference in the trig tests, as both appear to use libm. Not surprised that the IO was slower; the Java IO classes are nifty but do add quite a bit of overhead compared to fputs/fgets.

    (Sorry about the formatting, it was the best I could do)
  • by b0rken (206581) on Friday January 09, 2004 @01:40PM (#7929952) Homepage
    IMO this benchmark is nonsense, and the way the Python code is written is even worse. I looked at the "trig" and I/O benchmarks. In the i/o benchmark, the output is assembled in the stupidest way possible:
    linesToWrite = [myString]
    for i in range(ioMax - 1):
        linesToWrite.append(myString)

    Changing this to 'linesToWrite = [myString] * ioMax' dropped time on my system from 2830ms to 1780ms (I'd like to note that I/O on my system was already much faster than his *best* I/O score, thank you very much Linux)
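    The suggested one-liner is behaviour-preserving, as a quick sketch shows (the myString payload and ioMax value here are placeholders, not the benchmark's actual ones):

    ```python
    myString = "abcdefgh" * 10 + "\n"   # placeholder line content
    ioMax = 1000

    # Original benchmark style: one append() call per line.
    lines_loop = [myString]
    for i in range(ioMax - 1):
        lines_loop.append(myString)

    # Suggested replacement: one multiplication, no per-line method calls.
    lines_mult = [myString] * ioMax

    assert lines_loop == lines_mult     # identical contents
    print(len(lines_mult))              # 1000
    ```

    The multiplication form allocates the list once at its final size instead of growing it append by append.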

    In the trig test, I used numarray to decrease the runtime from 47660.0ms to *6430.0ms*. The original timing matches his pretty closely, which means that numarray would probably beat his gcc timings handily, too. Any time you're working with a billion numbers in Python, it's a safe bet that you should probably use numarray!

    I didn't immediately see how to translate his other mathematical tests into numarray, but I noted that his textual explanation in the article doesn't match the (python) source code!

    (My system is a 2.4GHz Pentium IV running RedHat 9)
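    numarray itself is long obsolete, but the vectorisation idea the parent describes survives unchanged in its successor, NumPy. A minimal sketch, with an illustrative array size rather than the benchmark's actual loop counts: one C-level array operation replaces a million interpreted-loop iterations.

    ```python
    import math
    import numpy as np

    # One vectorised call over the whole array instead of a Python loop.
    x = np.arange(1_000_000, dtype=np.float64) * 1e-6
    total = float(np.sin(x).sum())

    # Spot-check one element against the scalar math module.
    assert abs(math.sin(x[2]) - np.sin(x)[2]) < 1e-12
    print(total)
    ```

    The per-iteration interpreter overhead (name lookup, object boxing) is paid once per array rather than once per element, which is where the roughly 7x speedup came from.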

  • Python Benchmark (Score:4, Informative)

    by SlightOverdose (689181) on Friday January 09, 2004 @01:40PM (#7929966)
    The Windows version of Python is much slower. Testing with Python 2.3 + Psyco on a 2.4GHz P4 running Linux 2.4.20 yields impressive results:

    $ python -O Benchmark.py
    Int arithmetic elapsed time: 13700.0 ms
    Trig elapsed time: 8160.0 ms

    $ java Benchmark
    Int arithmetic elapsed time: 13775 ms

    $ java -server Benchmark
    Int arithmetic elapsed time: 9807 ms

    (n.b. this is only a small subset of the tests- I didn't feel like waiting. Trig was not run for java because it took forever.)

    To dismiss a few common myths...
    1) Python IS compiled to bytecode on its first run. The bytecode is stored on the filesystem in $PROGNAME.pyc.

    2) the -O flag enables runtime optimization, not just faster loading time. On average you get a 10-20% speed boost.

    3) Python is a string- and list-manipulation language, not a math language. It handles those tasks significantly faster than your average C coder could manage, with a hell of a lot less effort.
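    A small illustration of that claim (the word list is made up): the idiomatic join form does in a single pass what a naive loop does with repeated reallocation.

    ```python
    words = ["benchmarks", "measure", "compilers", "not", "languages"]

    # Naive concatenation: each += may copy the whole string built so far.
    slow = ""
    for w in words:
        slow += w + " "
    slow = slow.rstrip()

    # Idiomatic: one O(n) pass over the pieces.
    fast = " ".join(words)

    assert slow == fast
    print(fast)  # benchmarks measure compilers not languages
    ```

    This is the kind of high-level rewrite that closes much of the gap with C without leaving Python at all.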
