
Java Faster Than C++?

jg21 writes "The Java platform has a stigma of being a poor performer, but these new performance benchmark tests suggest otherwise. CS major Keith Lea took time out from his studies as a student at Rensselaer Polytechnic Institute in upstate New York's Tech Valley to take the benchmark code for C++ and Java from Doug Bagley's now outdated (Fall 2001) "Great Computer Language Shootout" and run the tests himself. His conclusions include 'no one should ever run the client JVM when given the choice,' and 'Java is significantly faster than optimized C++ in many cases.' Very enterprising performance benchmarking work. Lea is planning next to update the benchmarks with the VC++ compiler on Windows and with the JDK 1.5 beta, and might also test with the Intel C++ Compiler. This is all great - the more people who know about present-day Java performance, the better."
This discussion has been archived. No new comments can be posted.

Java Faster Than C++?

  • Um, it's online (Score:5, Informative)

    by JoshuaDFranklin ( 147726 ) * <.moc.oohay. .ta. ... nilknarfdauhsoj.> on Tuesday June 15, 2004 @04:27PM (#9434963) Homepage
    If you want, you can read the actual author's piece instead of a news story about it:

    The Java is Faster than C++ and C++ Sucks Unbiased Benchmark [kano.net]

  • by thebra ( 707939 ) * on Tuesday June 15, 2004 @04:28PM (#9434976) Homepage Journal
    Correct link [bagley.org]
  • Fact: C++ is dying....

    Oh hell, I don't have the heart. Nevermind.
  • by nebaz ( 453974 ) on Tuesday June 15, 2004 @04:29PM (#9435003)
    Here's some kindling...

    vi is better than emacs
    bsd is better than linux
    gnome is better than kde
    .
    .
    .
    anything else?

    oh yeah...
    my dad can beat up your dad.
    And you smell funny.
    • by vlad_petric ( 94134 ) on Tuesday June 15, 2004 @04:57PM (#9435450) Homepage
      The server one is optimized for throughput and concurrency, whereas the client one for latency.

      You might think that the two are the same, but the two settings actually make a visible impact if you're running on a multi-processor system. Most notably, the garbage collector and locking primitives are implemented differently.

    • by Jennifer E. Elaan ( 463827 ) on Tuesday June 15, 2004 @05:29PM (#9435832) Homepage
      "Some of the other differences include the compilation policy used, heap defaults, and inlining policy."

      Am I the only one who noticed the "inlining policy" thing? Considering "method call" was one of the most compelling arguments for his case (by orders of magnitude!), the fact that the methods being "called" are being called *INLINE* should mean something.

      If you're allowed to turn on the java inliner, surely you can spare the time to turn on the C++ one as well (he used -O2, not -O3, for compiling the C++ apps).

  • by exp(pi*sqrt(163)) ( 613870 ) on Tuesday June 15, 2004 @04:30PM (#9435031) Journal
    ...on x86? Please! Wake me up when someone who knows enough about C++ to pick a decent x86 compiler runs some benchmarks.
  • by Anonymous Coward on Tuesday June 15, 2004 @04:32PM (#9435070)

    Java and C++ are languages. Languages aren't "faster" or "slower", but compilers for them might be. I find it somewhat underhanded to put the languages in the header when it's really comparing compilers.

    Not to mention, inter-language compiler benchmark[et]ing is notoriously difficult to get 'right'. The programs tested are often stupid (they don't do anything meaningful), or constructed by a person with more skill/bias for one language than the other.

    • by 3770 ( 560838 ) on Tuesday June 15, 2004 @04:50PM (#9435349) Homepage
      A few examples

      1) Java has bounds checking for arrays, C++ doesn't. This is specified in the language. This affects performance.
      2) Java has garbage collection, C++ doesn't. This is specified in the language. This affects performance.

      Also, the specification of Java says that it should be compiled to byte code and executed in a JVM.

      So the "language" certainly affects performance.
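      As a rough illustration of point 1: in C++ a bounds check is opt-in per access rather than required by the language, so you only pay for it where you ask for it. A hypothetical snippet (not from the benchmark suite):

      #include <iostream>
      #include <stdexcept>
      #include <vector>

      int main() {
          std::vector<int> v(10, 0);

          // Unchecked access: the language requires no bounds check, so
          // v[20] would simply be undefined behaviour (and costs nothing).
          // int x = v[20];

          // Checked access: at() does the bounds check Java always does,
          // and throws std::out_of_range on failure.
          try {
              std::cout << v.at(20) << std::endl;
          } catch (const std::out_of_range& e) {
              std::cout << "out of range: " << e.what() << std::endl;
          }
          return 0;
      }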
  • Nice to hear... (Score:4, Insightful)

    by twocoasttb ( 601290 ) on Tuesday June 15, 2004 @04:33PM (#9435088)
    It's been ages since I've programmed in C++, but it's good to see these favorable comparisons. I think about the Struts/Hibernate/Oracle applications I write today and shudder when I imagine how difficult it would be to have to develop web applications in C++. C++ will be around forever and certainly has its place, but long live Java.
  • A few points... (Score:5, Insightful)

    by mindstrm ( 20013 ) on Tuesday June 15, 2004 @04:34PM (#9435107)
    There seem to be some unanswered questions here..

    - How equivalent were the benchmarks? Were they programmed in an optimal way for their respective compilers and libraries? I'm sure the Java ones were... what about the C++ ones? The author states he doesn't understand G++ very well.

    G++ is also known to not produce the best results.

    "I ran it with -O2"

    My guess is that many of the tests were not implemented properly in C++.

    The main clue would be this... I can understand Java having better than expected performance... but there is no way I can accept that Java is that much FASTER than properly done C++... it doesn't make any sense.

    • Re:A few points... (Score:5, Insightful)

      by Trillan ( 597339 ) on Tuesday June 15, 2004 @05:14PM (#9435644) Homepage Journal

      Maybe it does make sense. But what it proves is that C++ (at least as implemented by GCC, but it's probably a design flaw) is slower than expected, not that Java is blazingly fast.

    • Re:A few points... (Score:5, Interesting)

      by Cthefuture ( 665326 ) on Tuesday June 15, 2004 @05:33PM (#9435879)
      I've been playing with those benchmarks for ages. I use them any time a new language comes out or if I just want to do some independent testing.

      A couple points:

      - The "Great Shootout" benchmark times are sometimes way off because the run-time was too short to get an accurate reading. In those cases the tests should have been run with higher values to really stress the machine. That doesn't appear to be an issue in this test though (assuming his graph values are in seconds).

      - Many of the C++ tests are not optimized. That is, they use C++ features like the iostream stuff (cout, and friends) which is extremely slow. The C versions are available and very fast. C++ is pretty much just an extension of C. You don't need to use C++ features if they slow you down. Another one is the hash stuff. In the C++ hash benchmark there are some goofy mistakes made by using the brackets [] operator where it forces several unnecessary lookups. You can also substitute a better STL hashing function that is faster (like MLton's insanely fast hasher).

      - The test could be done by comparing C to Java. Anything in C++ can be made as fast as an equivalent C version but there are not many programmers that know how. Just assume anything in C++ will run as fast as a C version, and if it doesn't then you did something wrong. The hash tests would be easier in C++ though. If they were written properly they would kill the Java version.

      With that said, I'm going to try these tests myself because I do not believe the results to be accurate, but who knows...
      • by Cthefuture ( 665326 ) on Tuesday June 15, 2004 @06:06PM (#9436242)
        I just went through and tested the hash2 benchmark and found that I was correct. The C++ version slaughters the Java version (even in "server" mode). This is completely different than what this dude's page shows.

        Here is the "correct" code for hash2.cpp:

        #include <stdio.h>
        #include <stdlib.h>   /* atoi */
        #include <string.h>   /* strcmp, strdup */
        #include <iostream>
        #include <ext/hash_map>

        using namespace std;
        using namespace __gnu_cxx;

        struct eqstr {
            bool operator()(const char* s1, const char* s2) const {
                return strcmp(s1, s2) == 0;
            }
        };

        struct hashme {
            size_t operator()(const char* s) const {
                size_t i;
                for (i = 0; *s; s++)
                    i = 31 * i + *s;
                return i;
            }
        };

        int
        main(int argc, char *argv[]) {
            int n = ((argc == 2) ? atoi(argv[1]) : 1);
            char buf[16];
            typedef hash_map<const char*, int, hashme, eqstr> HM;
            HM hash1, hash2;

            for (int i = 0; i < 10000; i++) {
                sprintf(buf, "foo_%d", i);
                hash1[strdup(buf)] = i;
            }
            for (int i = 0; i < n; i++) {
                for (HM::iterator k = hash1.begin(); k != hash1.end(); ++k) {
                    hash2[(*k).first] += k->second;
                }
            }
            cout << hash1["foo_1"] << " " << hash1["foo_9999"] << " "
                 << hash2["foo_1"] << " " << hash2["foo_9999"] << endl;
        }

  • by GillBates0 ( 664202 ) on Tuesday June 15, 2004 @04:34PM (#9435113) Homepage Journal
    The results are very non-intuitive. An extra layer between the program and the CPU implies extra overhead - whatever the layer (a VM at the application layer, a VM at the OS layer, or even at the CPU layer (hyperthreading)).

    I looked at his results page quite extensively, but failed to find a good analysis/justification of the results. Just saying that the Server JVM is better than the Client JVM is *not* enough.

    I want to know where the C++ overhead comes from, which Java manages to avoid - does the JVM do better optimization because it is given a better intermediate code (bytecode)? Is it better at doing back/front end optimizations (unlikely, given gcc's maturity)?

    I tried to look for possible discrepancies in the results, but the analysis will definitely take more time - and I think it's the job of the experimenter to do a proper analysis of the results. Liked his choice of benchmarks though.

    • I agree with you. This does not match my experience at all. The Java programs I have used (especially anything with a GUI) have been bloated and much slower than programs in C++ that do 10 times as much. I would be curious to see if these benchmarks included things like opening windows, pulling down menus etc.

      Magnus.
    • by jfengel ( 409917 ) on Tuesday June 15, 2004 @05:07PM (#9435559) Homepage Journal
      His examples are all non-GUI things; they're pure CPU benchmarks. That's one major case where Java is certainly slower than C++.

      Most of his tests are big loops (primes, string concatenation, etc.). These are cases where (as a sibling poster mentioned) hot path analysis can do you a world of good. A heavily tuned C++ program can do it just as well, or better, but the point of using a high-level language is that you don't have to do those optimizations yourself; you write the code in whatever way seems natural and you let the compiler optimize.

      In a long-running Java program, you don't have that extra layer between the program and the CPU. The JIT does a real native compilation and passes control off to it. Once that's started, it runs just as fast as any other assembly code. Potentially faster, given that the JIT can look at the current run and optimize based on the way the code is going: the precise CPU it's running on, where things are in memory, how far it can afford to unroll a loop, what loop invariants it can lift, etc. It can even replace code as it runs.

      The question then is, does the one-time (albeit run-time) optimization do more good than it costs?

      That's especially easy on a hyperthreaded system. In a C++ program, these loops will run in a single thread on a single CPU, so if the JIT compiler runs on the other (virtual) CPU, you get its effort for free. Even the garbage collector can run on the other CPU, so you get the convenience of memory management with no total performance cost. (You do burn more CPU cycles, but you use up no extra wall-clock time.)

      GCC is very mature, but it doesn't have the option of changing the code at run time. Especially on modern CPUs with their incredibly deep pipelines, arranging your code to avoid pipeline stalls will depend a lot on runtime considerations.

      Also, Java has a few advantages over C++ in optimization. It's very easy to analyze Java programs to be certain that certain memory locations absolutely will not be modified. That's much harder in languages with native pointers. Those invariants allow you to compile out certain calculations that would have to be done at runtime in a C/C++ program. You can even start spreading loop cycles over multiple CPUs, but I'm pretty certain that the present JVMs aren't that smart.

      These results are toy benchmarks, and not really indicative of real performance, even on purely non-GUI code. But I wanted to outline the reasons why the results aren't just silly, and they do have a theoretical basis.
  • Expert results (Score:5, Insightful)

    by otterpop81 ( 784896 ) on Tuesday June 15, 2004 @04:36PM (#9435145)
    Some of the C++ tests would not compile. I've never been very good at decoding GCC's error messages, so if I couldn't fix a test with a trivial modification, I didn't include it in my benchmarks.

    That's Great! I can't figure out GCC's error messages, but I offer definitive proof that Java is faster than C++. Nice.

  • I care that Java is an inconvenient pain to develop in and use. I care that I have to start a mini-OS just to run a Java program. I care that the language is under the control of one vendor. I care that the 'initialization == resource allocation' model doesn't work in Java. I care that the type system is too anemic to support some of the more powerful generic programming constructs. I care that I don't get a choice about garbage collection. I care that I don't get to fiddle bits in particular memory locations, even if I want to.

    I think Java is highly overrated. I would prefer a better C++ (a C-like memory model, powerful generic programming, inheritance, and polymorphism) that lacked C++'s current nightmare of strangely interacting features and syntax.

    I use Python when I don't need C++'s speed or low-level memory model, and I'm happier for it. It's more flexible than Java, much quicker to develop in, and faster for running small programs. Java doesn't play well with others, and it was seemingly designed not to.

    Besides, I suspect that someone who knew and liked C++ really well could tweak his benchmarks to make C++ come out faster again anyway. That's something I've noticed about several benchmarks that compare languages in various ways.

    • by Bullet-Dodger ( 630107 ) on Tuesday June 15, 2004 @05:02PM (#9435506)
      I think Java is highly overrated. I would prefer a better C++ (a C-like memory model, powerful generic programming, inheritance, and polymorphism) that lacked C++'s current nightmare of strangely interacting features and syntax.

      Have you looked at Objective-C? I'm not an expert, but it sounds just like what you describe. As a downside though, I'm not sure how well supported it is on non-OS-X platforms.

    • by SuperKendall ( 25149 ) * on Tuesday June 15, 2004 @05:20PM (#9435726)
      You are once again spouting the tired old line that Sun is the master of Java. Not at all true, Java's fate is controlled by a whole host of companies - including IBM. Take a look at the reality of Java platform evolution at the Java Community Process [jcp.org] web site.

      It's a standards body just like any other, just more open.

      P.S. - Aside from that gripe being wrong, I agree with the other poster that you should look into Objective-C to address other issues. Look for "GNUstep" for cross-platform Objective-C GUI work. It's just nicer to use on a Mac as they have very good tools (though in fairness I have never looked at what GNUstep tools might be around, I just can't imagine them being quite as good as the tools Apple has sunk so much effort into!).
  • Flawed Test (Score:3, Insightful)

    by Emperor Shaddam IV ( 199709 ) on Tuesday June 15, 2004 @04:38PM (#9435182) Journal
    Comparing one C++ compiler (gcc) against the Java JVM on one operating system is not much of a test. I love Java, but this is almost something like Microsoft would do. Test one specific OS, compiler, and configuration, and then make a blind, far-reaching statement. A fair test would include several platforms and compilers.
  • by mypalmike ( 454265 ) on Tuesday June 15, 2004 @04:40PM (#9435200) Homepage
    From methcall.cpp:

    int
    main(int argc, char *argv[]) {
        int n = ((argc == 2) ? atoi(argv[1]) : 1);

        bool val = true;
    >>  Toggle *toggle = new Toggle(val);
        for (int i = 0; i < n; i++) {
            val = toggle->activate().value();
        }
        cout << ((val) ? "true" : "false") << endl;
        delete toggle;

        val = true;
        NthToggle *ntoggle = new NthToggle(val, 3);
        for (int i = 0; i < n; i++) {
            val = ntoggle->activate().value();
        }
        cout << ((val) ? "true" : "false") << endl;
    >>  delete ntoggle;

        return 0;
    }

    Why allocate and deallocate an object within the scope of a function? Well, in C++, there's no reason, so this is bad code. You can just declare it as a non-pointer and it lives in stack space. But guess what? You can't do that in Java: all objects are allocated on the heap.

    That, and using cout instead of printf, are probably why this is slower than the "equivalent" Java.
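    For illustration, roughly what that stack-allocated version might look like, using a simplified stand-in for the benchmark's Toggle class (hypothetical code, not the shootout source):

    #include <cstdlib>
    #include <iostream>

    using namespace std;

    // Simplified stand-in for methcall.cpp's Toggle class, just enough
    // to show the allocation difference.
    class Toggle {
    public:
        explicit Toggle(bool start) : state(start) {}
        virtual ~Toggle() {}
        virtual Toggle& activate() { state = !state; return *this; }
        bool value() const { return state; }
    protected:
        bool state;
    };

    int main(int argc, char *argv[]) {
        int n = ((argc == 2) ? atoi(argv[1]) : 1);

        bool val = true;
        Toggle toggle(val);              // automatic storage: no new/delete
        for (int i = 0; i < n; i++) {
            val = toggle.activate().value();
        }
        cout << ((val) ? "true" : "false") << endl;
        return 0;
    }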

    -_-_-
    • by Bloater ( 12932 ) on Tuesday June 15, 2004 @05:19PM (#9435711) Homepage Journal
      The cout is done twice, and the new and delete are each done only once. They are not the reason for the poor performance.

      The problem is that g++ probably does not optimise it all inline, whereas the particular java VM he has chosen to use probably does.

      Although defining the Toggle variables with auto storage class may give g++ the hint it needs to realise this.

      Additionally, the activate method is declared to be virtual; this shouldn't be a problem, except that it may further hide the optimisation opportunity from g++. Note that the description of the test does not stipulate that it is testing virtual methods.
  • by vlad_petric ( 94134 ) on Tuesday June 15, 2004 @04:41PM (#9435227) Homepage
    First of all, g++ actually sucks big time in terms of performance. The Intel C Compiler, with inter-procedural optimizations enabled, produces code that's almost always 20-30% faster than g++. I've actually once compiled C code with g++ and it was visibly slower than the same code compiled with gcc ... oh well.

    Now, regarding Java performance ... Java isn't slow per se, JVMs and some APIs (most notably Swing) are. Furthermore, JVMs usually have a slow startup, which gave Java a bad name (for desktop apps startup matters a lot, for servers it's hardly a big deal). Java can be interpreted, but it doesn't have to be so (all "modern" JVMs compile to binary code on the fly).

    Bytecode-based environments will, IMNSHO, eventually lead to faster execution than with pre-compilation. The reason is profiling and specialized code generation. With today's processors, profiling can lead sometimes to spectacular improvements - as much as 30% performance improvements on Itanium for instance. Although Itanium is arguably dead, other future architectures will likely rely on profiling as well. If you don't believe me, check the research in processor architecture and compiling.

    The big issue with profiling is that the developer has to do it, and on a dataset that's not necessarily similar to the user's input data. Bytecode environments can do this on-the-fly, and with very accurate data.

  • by Sebastopol ( 189276 ) on Tuesday June 15, 2004 @04:43PM (#9435242) Homepage

    Why did he use the strdup function when he already has the char array from the previous sprintf?? That step incurs a huge and unnecessary penalty w/an allocation, just pass the pointer!

    Also, in the second 'for' loop in hash2, he does extra work because he already looked up (*k).second.

    should've done hash2[k->first] = k->second; ...to avoid another lookup penalty.

    Tell me I'm not crazy.

  • What about gcj? (Score:5, Interesting)

    by joshv ( 13017 ) on Tuesday June 15, 2004 @04:45PM (#9435280)
    I'd be interested in comparing the speed of the native code generated by gcj to that of the JVM.

    -josh
  • by alphafoo ( 319930 ) <loren@boxbe.com> on Tuesday June 15, 2004 @04:45PM (#9435282) Homepage
    A year and a half ago I proposed building a standalone server-type application using Java, and my client scoffed at me because "everyone knows Java is slow". It was 1.4.2 on rh8.0 running on standard dual xeons. It ran pretty fast, and then I profiled it. Repeatedly. I replaced some of the stock library routines with my own faster ones or ones I found on sourceforge, found the most monitor-contentious areas and tuned them, played around with different GC strategies for the given hardware, and ended up with something that is amazingly fast. Scaled to 400+ HTTP requests per second and over a thousand busy threads, per node. Some of the speed bumps came for free, like when NPTL threads became available in the 2.4 kernel.

    I am starting on a new standalone server now doing something different, but I am going to stick with Java, and will be happy to see what 1.5 does for me.

    But I have seen Java run slow before, and I will tell you this: in every instance it is due to someone writing some needlessly complicated J2EE application with layer upon bloaty layer of indirection. All the wishing in the world won't make one of those behemoths run fast, but it's not fair to blame Java. Maybe blame Sun for EJB's and their best practices, or blame BEA for selling such a pig.

    Stuff I like in the Java world:

    • sun's 1.4.2 on hyperthreaded xeons
    • Jetty (fast!)
    • Piccolo XML parser (fast!)
    • Lea's concurrency library
    • Grosso's expirable cache [click] [onjava.com]
    • hibernate
    • JAM on Maven [click] [javagen.com]
    • eclipse
  • Slow C++ compiler (Score:4, Informative)

    by siesta at uni ( 311500 ) on Tuesday June 15, 2004 @04:46PM (#9435301)
    The article says he used GCC to compile the C++ versions, but GCC produces code that isn't nearly as good as the Intel compiler for example. (Here [ddj.com], but no good if you don't subscribe)
    A lot of the test results are close, and I think a different compiler would change the outcome.
  • by IvyMike ( 178408 ) on Tuesday June 15, 2004 @04:48PM (#9435322)
    Here is my experience with C++ vs. Java: At my company, we had a specialized image viewing program. The original program was written in C++ years ago, and performance sucked even on modern machines. It probably had a dozen man-years of time in it. We decided to re-write it in java.

    We knew java in theory should be worse than C++ at manipulating large blocks of raw data, so we spent some time architecting, prototyping, and profiling java. We quickly learned the limitations and strengths.

    The result? After 4 engineers worked for 6 months, we had a program that was rock solid, had more features, had a modern UI, and was WAY faster. Night and day; the old program felt like a hog, and the new program was zippy as anything. And the new code is fewer lines, and (in our opinion) way more maintainable. Since the original release, we've added several new features after a day or two of work; the same features never would have happened on the old version, because they would have been too risky.

    So the question is this: Could we have re-written or refactored the C++ program and gotten the same speed benefits? No doubt, such a thing is possible. But we are all convinced there is NO WAY we could have done it with as little effort. The C++ version would have taken longer to write and debug.
  • by morcheeba ( 260908 ) * on Tuesday June 15, 2004 @04:52PM (#9435372) Journal
    Why did he use only -O2?

    -O3 adds function inlining and register renaming [gnu.org].

    Also, some of the code doesn't look like much of a test of the language, but more of a test of the libraries. Both versions of hash rely on the library implementations, and it looks like hash.cpp [kano.net] does an extra strdup that the Java version doesn't. I don't know either of the hash libraries well enough, but I don't see why this significant slowdown would be necessary in the gcc version.
  • Troll (Score:5, Insightful)

    by stratjakt ( 596332 ) on Tuesday June 15, 2004 @04:52PM (#9435390) Journal
    This test proves that Sun's optimized Java compiler and VM are faster on Red Hat than gcc.

    GCC is designed for compatibility with a wide range of architectures, and is not optimized for a single one. He also (apparently) used stock glibc from Red Hat. And only one "test", the method call test, showed Java to be a real winner. And even then, it's server-side Java, which is meaningless when you talk about it as a day-to-day dev language (i.e., creating standalone client-side apps).

    Intel's (heavily optimized) C++ compiler should be a damn sight faster, and so should VC++.

    This "comparison" is so limited in scope and meaning, that this writeup should be considered a troll.

    Hell, read his lead-in:

    "I was sick of hearing people say Java was slow, when I know it's pretty fast, so I took the benchmark code for C++ and Java from the now outdated Great Computer Language Shootout and ran the tests myself."


    Ie; I set out to prove Java is teh awesome and c++ is teh suck!

    If anything it proves something I've known intuitively for a long time. gcc does not produce x86 code that's as fast as it could be. That's a trade-off for it being able to compile for every friggin cpu under the sun.

    I can't wait till RMS takes personal offense and goes on the attack.
  • by cardshark2001 ( 444650 ) on Tuesday June 15, 2004 @04:53PM (#9435402)
    It's just not possible. It could be comparable, in limited cases, but not faster. It just can't be. If you find that it is, there's something wrong with your experiment. Does this mean Java is bad? Not necessarily. It depends on your purpose.

    Okay, so how could I make a blanket statement like that? In this case, the author of the paper merely used a compiler switch in gcc (-O2). That doesn't mean his c++ was highly optimized. It just means he told the compiler to do its best. If you really wanted to highly optimize c++, you would study the compiler and how it works, and you would profile the actual assembly that the compiler generates to make sure that it didn't do anything unexpected. Given *any* algorithm, I can come up with a c++ implementation that is faster than a Java implementation. Period.

    The java compiler actually compiles to a virtual opcode format, which is then interpreted by the java virtual machine at runtime. Imagine if you needed to talk to someone on the phone, but instead of talking to them, you had to talk through an intermediary. Is there any possible way that it could be faster than talking to the person directly?

    Now, I'll be the first to point out that a badly implemented c++ algorithm could be much slower than a well implemented Java algorithm, but I'll take the pepsi challenge with well written code any time, and win.

    Relying on benchmarks and code somebody else wrote doesn't prove anything. Did he get down and dirty with the compiler and look at the generated assembly code? No, he did not.

    Move along, there's nothing to see here.

    • by rjh ( 40933 ) <rjh@sixdemonbag.org> on Tuesday June 15, 2004 @07:15PM (#9436853)
      Given *any* algorithm, I can come up with a c++ implementation that is faster than a Java implementation. Period.
      I'd like to see a C++ implementation of the Halting Problem that's faster than a Java implementation, please, thank you.

      Oh, wait, you can't do that because nobody can write Halting.

      I guess that means there are some algorithms for which you can't write a faster C++ version. Next time, try less rhetoric and more facts. "There exist lots of algorithms for which I can code a C++ implementation that's faster than a Java implementation" is good. The instant you make a unilateral statement like the one you just made, though, it shows that you don't know as much about computer science as you think you know.

      Fact: there exist cases where Java is faster due to its ability to optimize on the fly. And if you know C++ as well as you think you do, this shouldn't surprise you. C++ beats C so handily for many tasks because C++ is able to do much better compile-time optimization largely on account of the C++ compiler having access to much more type information than a C compiler. When Java beats C++, it's on account of Java having access to much more information about runtime paths than a C++ compiler. ("Much more" may be an understatement; C++ doesn't even try!)

      In other words, the JVM (sometimes) beats C++ for the exact same reason that C++ almost always spanks C; the faster implementation has access to more information and uses that information to make more efficient use of cycles.

      I don't think these situations will appear all that often, and I am deeply skeptical of this guy's "in the general case, Java is faster" conclusion. But my skepticism isn't leading me to make rash statements which cannot be backed up.
  • by Troy Baer ( 1395 ) on Tuesday June 15, 2004 @05:00PM (#9435480) Homepage
    The author doesn't really explain why he didn't compile with -O3 aside from a very slight amount of hand-waving about space-speed tradeoffs, which quite frankly I don't buy. If you're benchmarking, why wouldn't you optimize as heavily as possible? If he was really interested in benchmarking this stuff objectively, he could've at least shown that there wasn't much difference between -O2 and -O3. Not to mention the question of whether g++ generates good binary code on a given platform...

    This didn't exactly fill me with optimism either:

    I don't have an automated means of building and benchmarking these things (and the scripts that came with the original shootout didn't run for me). I really do want you to test it on your own machine, but it's going to take some work, I guess.
    This would seem to imply that the author does not know much about either shell scripting or Makefiles. I'm not sure I'm willing to trust benchmarks from somebody who can't figure out an automated way to build and run them.

    --Troy
  • Explanation (Score:5, Interesting)

    by gillbates ( 106458 ) on Tuesday June 15, 2004 @05:51PM (#9436075) Homepage Journal

    Reviewing the console log, we find that when java programs were tested with a large number of iterations, Java only performed better on one test.

    • We don't know which OS was used. While each C++ program must have been loaded entirely each time, the JVM may very well have remained cached in RAM between tests - hence a faster startup time, which explains:
    • Java is actually slower than C++, but because the JVM was already cached in RAM, it ran faster on those tests which involved a relatively small number of iterations. However, when the number of iterations was increased, Java was always slower than C++, with the exception of method call and object instantiation:
    • Object instantiation isn't really relevant, because C++ programs call the OS for every single memory request, whereas Java can pool it. This test measured the speed of the kernel's malloc more than the speed of C++.
    • In most of the C++ code, I/O is placed inside the loops, meaning that the program is really testing the throughput of libc and the OS, as opposed to the efficiency of the generated code.
    • An interesting note: the Java client won none of the benchmarks.

    I know that Java has many strengths, but speed isn't one of them. Looking at the results, we see the g++ runtimes are much more consistent than those of Java - on some tests, the Java Server is faster than the client by a factor of 20!? How could a programmer code without having any realistic expectation of the speed of his code? How embarrassed would you be to find that your "blazingly fast" app ran slower than molasses on the client machine, for reasons yet unknown?

    When it comes to speed, compiled languages will always run faster than interpreted ones, especially in real-world applications.

    But discussions of language speed are a moot point. What this really tested was the implementation, not the language. Speed is never a criterion upon which languages are judged - a "slow" language can always be brought up to speed with compiler optimizations (with a few exceptions). I suspect that if C++ were interpreted, and Java compiled, we'd see exactly the opposite results.

    In short, the value of a language consists not in how fast it runs, but in what it enables the programmer to do.

  • by Get Behind the Mule ( 61986 ) on Tuesday June 15, 2004 @05:59PM (#9436155)
    Whew, there seems to be a lot of denial running through this thread. "An interpreted language just can't possibly be as fast as or faster than a compiled language! So I just don't care what the empirical results say, no matter how badly or well done they are, it just can't possibly be!"

    I think some of you are overlooking the fact that a VM running byte code is capable of doing optimizations that a compiled language just can't possibly do. A compiled language can only be optimized at compile time. Those optimizations may be very sophisticated, but they can never be any better than an educated guess about what's going to happen at runtime.

    But a VM is capable of determining exactly what is happening at runtime; it doesn't have to guess. And thus it is able to optimize those sections of code that really are, in true fact, impacting performance most severely. It can do this by compiling those sections to machine code, thus exploiting precisely the advantage that a compiled language is alleged to have by its very nature. And other kinds of optimizations, the kind that a compiler traditionally does, can be performed on those sections as well.

    Of course there are scenarios where runtime optimization doesn't win much, for example in a program that is run once on a small amount of data and then stopped, so that the profiler doesn't get much useful info to work with. This is why Java is more likely to have benefits like this in long-running server processes.

    And of course a conscientious C++ programmer will run a profiler on his program on a lot of sample data, and go about optimizing the slowest parts. A conscientious Java programmer should do that too. But an interpreted language has the advantage that the VM can do a lot of that work for you, and always does it at runtime, which is when it really counts.
  • Function calls (Score:5, Interesting)

    by BenjyD ( 316700 ) on Tuesday June 15, 2004 @06:00PM (#9436176)
    Why does the example use a recursive Fibonacci sequence algorithm? It's so slow, and the runtime is dominated by the function call time.

    For example:

    [bdr@arthurdent tests]$ time ./fib_recurse 40
    165580141
    real 0m3.709s
    user 0m3.608s
    sys 0m0.005s

    time ./fib_for_loop 40
    165580141
    real 0m0.006s
    user 0m0.002s
    sys 0m0.002s

    I think a lot of these benchmarks are showing that the Hotspot optimiser is very good at avoiding function call overheads.
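    For reference, the two variants being compared look roughly like this (the recursive form follows the shootout's fib(0) = fib(1) = 1 convention; "fib_for_loop" above is the poster's own binary, so the iterative version here is only an assumed equivalent):

    #include <cstdio>
    #include <cstdlib>

    // Recursive version, as in the shootout benchmark: the running time
    // is dominated by the sheer number of function calls.
    static unsigned long fib_recurse(unsigned long n) {
        return (n < 2) ? 1 : fib_recurse(n - 2) + fib_recurse(n - 1);
    }

    // Iterative version: same result, one function call in total.
    static unsigned long fib_loop(unsigned long n) {
        unsigned long a = 1, b = 1;               // fib(0) = fib(1) = 1
        for (unsigned long i = 2; i <= n; i++) {
            unsigned long next = a + b;
            a = b;
            b = next;
        }
        return b;
    }

    int main(int argc, char *argv[]) {
        int n = (argc == 2) ? std::atoi(argv[1]) : 40;
        std::printf("%lu %lu\n", fib_recurse(n), fib_loop(n));
        return 0;
    }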
  • by xp ( 146294 ) on Tuesday June 15, 2004 @06:03PM (#9436201) Homepage Journal
    Here is an excerpt from the article for this story: Lea used G++ (GCC) 3.3.1 20030930 (with glibc 2.3.2-98) for the C++, with the -O2 flag (for both i386 and i686). He compiled the Java code normally with the Sun Java 1.4.2_01 compiler, and ran it with the Sun 1.4.2_01 JVM. He ran the tests on Red Hat Linux 9 / Fedora Test1 with the 2.4.20-20.9 kernel on a T30 laptop. The laptop "has a Pentium 4 mobile chip, 512MB of memory, a sort of slow disk," he notes.

    What this shows is that GCC's implementation of C++ is slower than an interpreted language like Java. This does not show that C++ is slower than Java.

    ----
    Notes on Stuff [blogspot.com]
  • by danharan ( 714822 ) on Tuesday June 15, 2004 @06:21PM (#9436379) Journal
    The article mentions Lea modified the String concatenation code, although Java still lost to C++ in that test. He unfortunately didn't do a great job:
    import java.io.*;
    import java.util.*;

    public class strcat {
        public static void main(String args[]) throws IOException {
            int n = Integer.parseInt(args[0]);
            StringBuffer str = new StringBuffer();

            for (int i = 0; i < n; i++) {
                str.append("hello\n");
            }

            System.out.println(str.length());
        }
    }
    Instantiating the StringBuffer with an approximate size would prevent it from having to reassign a char array every time it runs out of space. new StringBuffer(n*6) for n=10000000 as used in his test should have a pretty large impact.

    I could not run the test for 10M, but ran it for up to 1M. 541 milliseconds in one case, 280 in the other. Here's the code I used (I had to modify the timing because I'm running XP):
    import java.io.IOException;

    public class Strcat2 {
        public static void main(String args[]) throws IOException {
            long start, elapsed;
            start = System.currentTimeMillis();

            int n = Integer.parseInt(args[0]);
            StringBuffer str = new StringBuffer(n * 6);

            for (int i = 0; i < n; i++) {
                str.append("hello\n");
            }

            System.out.println(str.length());
            elapsed = System.currentTimeMillis() - start;
            System.out.println("Elapsed time: " + elapsed);
        }
    }
    The only difference in the class Strcat besides the class name is the instantiation of StringBuffer.

    NB: I'm not accusing the author of bias against Java, nor am I ignorant of the fact a bunch of /.'ers could kick my ass in C++ optimization. It would be interesting however to have a distributed benchmark, where in the true spirit of OSS we could fiddle with it until we could not wring any more performance gains.
  • by MP3Chuck ( 652277 ) on Tuesday June 15, 2004 @06:49PM (#9436645) Homepage Journal
    http://bash.org/?338364 : "Saying that Java is nice because it works on all OS's is like saying that anal sex is nice because it works on all genders"
  • by Serveert ( 102805 ) on Tuesday June 15, 2004 @07:11PM (#9436816)
    sprintf(buf, "%x", i);

    It must parse the "%x" and determine what it's trying to do. So it decides, at runtime, that you want to translate an integer value into a hexadecimal string. Of course if there's an error at runtime in the string "..." it must generate an error. How 'bout using strtol?

    Now let's look at the java version:

    Integer.toString(i, 16)

    Ok, here we have something that is strongly typed so of course it will be faster. At runtime the java compiler _knows_ what it's dealing with and it knows it must invoke the hexadecimal conversion code.

    Note that these statements are within loops.

    Just one example; that was the first file I looked at. I don't think they have quite optimized the C++ code, IMO. Plus the C++ library is notoriously slow; I would recommend Rogue Wave or your homegrown C++ classes.

    I think the lesson here is it's very easy to write slow C++ code while it's very easy to write fast Java code.

    Gimme any C++ code here and I'll profile it/speed it up and get it as fast if not faster than java.
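    For illustration, a hand-rolled conversion along the lines being suggested might look like this (a hypothetical helper, not from the benchmark code); it skips the runtime format-string parsing entirely:

    #include <cstdio>

    // Hypothetical helper: writes v in hexadecimal into buf (which needs
    // at least 2 * sizeof(v) + 1 bytes), with no format string to parse.
    static char *to_hex(unsigned int v, char *buf) {
        static const char digits[] = "0123456789abcdef";
        char tmp[2 * sizeof(unsigned int)];
        int n = 0;
        do {
            tmp[n++] = digits[v & 0xF];     // collect digits, least
            v >>= 4;                        // significant first
        } while (v != 0);
        for (int i = 0; i < n; ++i)
            buf[i] = tmp[n - 1 - i];        // reverse into the output
        buf[n] = '\0';
        return buf;
    }

    int main() {
        char buf[2 * sizeof(unsigned int) + 1];
        std::printf("%s\n", to_hex(48879, buf));   // prints "beef"
        return 0;
    }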
  • by eduardodude ( 122967 ) on Tuesday June 15, 2004 @07:25PM (#9436935) Homepage
    Check out this recent IBM Developerworks article [ibm.com] which looks at how modern JVMs handle allocation and garbage collection.

    Some very surprising tidbits there. For instance:

    "Performance advice often has a short shelf life; while it was once true that allocation was expensive, it is now no longer the case. In fact, it is downright cheap, and with a few very compute-intensive exceptions, performance considerations are generally no longer a good reason to avoid allocation. Sun estimates allocation costs at approximately ten machine instructions. That's pretty much free -- certainly no reason to complicate the structure of your program or incur additional maintenance risks for the sake of eliminating a few object creations."

    Read the article for an excellent nuts-and-bolts analysis of many current performance considerations. From the posts in this thread, it's pretty clear a lot of people haven't looked into what's actually done in a server JVM these days.
  • by mc6809e ( 214243 ) on Wednesday June 16, 2004 @02:18AM (#9439470)
    I notice that much of the overhead is simply in making function calls.

    Ackermann.cpp, for example, spends very little time actually calculating anything. Much of its work is the overhead associated with making function calls.

    Included in this overhead is management of the frame pointer. By using -fomit-frame-pointer, you avoid pushing the old ebp onto the stack and storing the current esp into ebp.

    Ackermann runs about twice as fast with this simple optimization.
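    For reference, the kind of code being discussed is roughly this (a minimal sketch in the spirit of the shootout's ackermann test, not the exact benchmark source); comparing g++ -O2 against g++ -O2 -fomit-frame-pointer on it shows the frame-pointer overhead described above:

    #include <cstdio>
    #include <cstdlib>

    // Ackermann's function: almost no arithmetic per call, so the cost is
    // dominated by call/return overhead, including the frame-pointer
    // bookkeeping that -fomit-frame-pointer removes.
    static int ack(int m, int n) {
        if (m == 0) return n + 1;
        if (n == 0) return ack(m - 1, 1);
        return ack(m - 1, ack(m, n - 1));
    }

    int main(int argc, char *argv[]) {
        int n = (argc == 2) ? std::atoi(argv[1]) : 8;
        std::printf("Ack(3,%d): %d\n", n, ack(3, n));
        return 0;
    }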
