Java Programming

Java VM & .NET Performance Comparisons 104

johnhennessy writes "Just came across some good virtual machine performance benchmarks (on Mono's mailing list). It covers executing Java bytecode via a host of different Java runtimes and also the Mono runtime. Not only does it give numbers for IBM's runtime (1.4.2 and 1.3.1), Sun's runtime (1.5.0 and 1.4.2), GCJ, Mono, Jikes and much more! These numbers are also given for both Intel and Opteron (where relevant). Before the flames begin, don't forget to read the author's description of how the benchmark was carried out. Hopefully this should inspire educated discussion on one's favourite JVM/CLR."
This discussion has been archived. No new comments can be posted.

  • No JRockit (Score:2, Insightful)

    by Anonymous Coward
    These are nice evaluations of "free" JVMs.
    But I don't see commercial and highly optimized products like JRockit on there (BTW, I think the JRockit JVM is free; support costs).
  • It's no surprise Sun's JDK seems to perform the best of all the versions. They know Java the best since they created it, and thus are able to optimize it better than the others can. This would probably change if Sun had a more open licensing scheme for Java. Just my two cents.
    • Everyone else knows it as well as Sun. The language and VM spec are public knowledge.

      Sun has a better JVM not because they have some mystical insight because they invented it, but because they made it a huge part of their company survival strategy. Improving the JVM gets a very high priority in the company. IBM on the other hand might not care if they are second, as long as their performance is competitive.
    • by the morgawr ( 670303 ) on Wednesday October 20, 2004 @12:37PM (#10577339) Homepage Journal
      Unless I'm reading the chart wrong, it looks like IBM's VM is faster than Sun's for many applications. (It just doesn't seem to have an Opteron version)
    • by jeif1k ( 809151 ) on Wednesday October 20, 2004 @03:29PM (#10579357)
      It's no surprise Sun's JDK seems to perform the best of all the versions. They know Java the best since they created it,

      The techniques necessary for compiling Java well have largely been around since long before Sun even released Java. Sun has no special, secret knowledge there, they have just been hacking on their implementation longer than anybody else. If anything, Sun's progress on Java has been pokey.

      Sun Java is probably pretty close to what is theoretically possible, so they will largely be standing still, while other implementations will keep improving until they also hit the limit. In a couple of years, you can expect that all actively developed JVM and CLR implementations will have roughly the same performance on comparable code.
    • Moderators, please.
      Flamebait this guy.

      I mean... Obviously he didn't read the charts, and, I mean, jesus christ, read what he wrote, which parses to... Sun's performance would suck if they opened Java more...

      but no explanation as to... uhm... why?

      *boggle*
  • OK, so I'll admit I've not *read* the article, but fairly thoroughly scanned over the first 2-3 tables and graphs.
    But, what are those numbers?
    I'd presume they are meant to be seconds, but the difference is so vast it can't be. Can it?
    • So... instead of RTFM you ask here? Oh sorry... I forgot, this is Slashdot. They aren't seconds, they are some kind of score on the respective benchmarks; the higher the score, the better.
  • by Muad'Dave ( 255648 ) on Wednesday October 20, 2004 @12:51PM (#10577508) Homepage

    What surprises me is that GCJ is never faster than a 'real' JVM. You'd think that natively compiled and optimized code would be faster than an interpreter. I guess there's optimization work to be done in libgcj.

    • Isn't it the GIJ interpreter against the others?
    • What's wrong with making a JNI wrapper on the plain old C library and using all that instead of those funky java classes. Every time I want to read a file in Java I have to read the docs on java.io.* to figure out what the hell I want.

      I already know what I want. I want to open a file and read it. I don't want to have to worry about if the file is buffered or not, or if it has a method to read a line at a time. Just gimme an old-fashioned fopen () function.

      So, am I crazy, or has anyone else thought this too?
      • by CaptainPinko ( 753849 ) on Wednesday October 20, 2004 @01:07PM (#10577760)
        Yes, you are crazy. Don't use Java then. Really, when I look at C functions I think they are an awful mess of vomit; however, I accept that this is the aesthetic/style that C expects and I deal with it. If you worked with Java enough you wouldn't have to look it up. What I like about Java is that since there are so many layers I can take out the FileInputStream reader and throw in a NetworkStream and leave the rest of the code the same, since it uses the buffered reader. For example, in C I hate how there is fprintf, sprintf, printf. As far as I am concerned those should all be rolled into one function. The point is either accept the way that Java is or don't use it: don't make Java like C. I'm sure you can see how much you'd hate C if I tried to make it more like Java.
        • Yes you are crazy.

          Thanks for reaffirming that. I thought I was going crazy. er, something like that.

          I was only half serious about it anyway. I really don't touch Java that often and I work with C++ and Python every day, which explains why I have to look things up all the time.

          But I have a comment/question. The example you gave about the layered API adapting to a NetworkStream easily strikes me as something that should really be in the OS, or maybe a JVM thing. Instead of putting it all the way up into
          • by Hard_Code ( 49548 ) on Wednesday October 20, 2004 @01:52PM (#10578311)
            Because for that to work, every OS would have to support it, and since actually none DO (except maybe Hurd, which nobody uses), ipso facto, it must be done in the libraries. You could of course argue whether it is done in libraries that are visible to programs, or built into the VM itself, but that's sort of academic.

            Furthermore, I don't think it is necessarily true that everything-is-a-file is the best abstraction for all IO forever and ever. If you want such an abstraction simply use java.net.URL and write protocol handlers for any protocols not provided. So far, http[s]:// and file:// (and probably some others) work out of the box. Add your own.
          • Well, I don't think that abstracting to a file is actually that good a thing. The problem with treating it like a file is that there are fairly specific things that separate files from network streams from devices. For example, you never need to worry about a file being disconnected, but with devices and networks you do. Having separate classes allows for more specific methods to be created, like isReachable(Address), which can't apply to a file. Trying to merge them all into Object would lead to sloppy semantics
            • Thanks for the explanation of why my idea is not a good one. But from my perspective it's not a total loss. I now know of something that will most likely appall any good Java programmer, and that's a useful thing to keep up my sleeve, to instantly make any code review meeting suddenly an awful lot more interesting.
        • Agreed 100%. There are things that annoy me about Java, but the IO classes are a godsend. I rather miss their flexibility and power when I'm working in C, C++, or even Python et al. Not saying you can't find libraries that have many or most of the capabilities, but in Java, it's all built in and you know it will work on any Java platform.
      • *ahem*

        BufferedReader reader = new BufferedReader(new FileReader(fname));

        (BufferedReader, for most things, is faster than a plain Reader, and in addition has the handy readLine() interface)

        Sometimes you just have to learn the libraries for your language. The Reader system, BTW, is nifty in that

        • reading from writing is clearly separated, which helps with security checks. I can know that most of my code does not change the underlying files based on the object type that was passed in
        • your code doesn
        • by Arrgh ( 9406 ) on Wednesday October 20, 2004 @03:57PM (#10579710) Homepage Journal
          A character is not a byte. Don't use FileReader unless you're absolutely sure that either:

          1. The default character encoding resulting from your particular combination of JVM and platform will be correct and non-lossy every time the program is run (e.g. not on Windows, which defaults to ISO8859-1), or
          2. You're certain that the file contains only 7-bit ASCII.
          Readers and Writers deal with characters, whereas InputStreams and OutputStreams deal with bytes. FileReader and FileWriter are miscreants that should be deprecated, because they hide a very important implementation detail, namely that they always use the platform's default character encoding, which is often lossy.

          If you want to read characters from a file (or socket) you need to come up with some way to agree on the character encoding and specify it precisely. Not even HTTP does a good job of this--you don't know the character encoding of a request or response until the Content-Type header has been transferred, and often not even then.

          What's the character encoding for URLs and domain names [cr.yp.to]? Convention seems to be settling on UTF-8 but AFAIK it's just that.

          The equivalent technique that's less risky (but of course much more verbose) is:

          BufferedReader r = new BufferedReader(new InputStreamReader(new FileInputStream("foo"), "UTF-8"));
          String line;
          while ( (line = r.readLine()) != null ) { // etc... }

          Where "UTF-8" is a sane default non-lossy character encoding. If you don't know the encoding that was used to write the file you're about to read, you're sort of screwed. You can try some heuristics to try to detect its encoding, or if you're "lucky" you might find a Unicode Byte Order Mark [unicode.org].

          Note that none of this headache is particular to Java, it's just that the designers of Java knew early on that a character is not a byte and formalized that distinction (poorly at first) in the language and libraries.
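The encoding pitfalls above can be made concrete with java.net.URLEncoder, which forces you to name the byte encoding explicitly; the same string percent-escapes differently under UTF-8 and ISO-8859-1:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class UrlEncodingDemo {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // 'é' is the two bytes 0xC3 0xA9 in UTF-8, but the single byte 0xE9
        // in ISO-8859-1, so the escapes differ for the same character.
        System.out.println(URLEncoder.encode("café", "UTF-8"));      // caf%C3%A9
        System.out.println(URLEncoder.encode("café", "ISO-8859-1")); // caf%E9
    }
}
```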

          • The byte-character dichotomy is a very important point, and one that's often overlooked. But I don't really agree that:

            FileReader and FileWriter are miscreants that should be deprecated

            Sometimes you do want to use the platform's default encoding. In fact, I'd suggest, most of the time. In particular, it's the right default setting; if your program knows what encoding it wants to use, then it can go right ahead and use it -- but a lot of the time, it won't know, and the platform default is the Right Thing

      • If you want C, you know where to find it: /usr/bin/gcc is a good guess.

        If you want Java, then use Java.

        One of Java's most significant advantages is that it doesn't expose pointers or dynamic memory allocation to the programmer. This is a Very Big Deal, because there are only two kinds of programmers out there: the sort who know they can't be trusted to never leave a dangling pointer or a memory leak, and the sort who are living in denial.

        Why in the world would you want to throw away one of Java's major
        • Why in the world would you want to throw away one of Java's major advantages just to save yourself the minor inconvenience of having to learn something new?

          The guy was complaining about Java's not-so-great I/O API. That doesn't mean he doesn't like automatic memory management. Java's particular I/O API is not a prerequisite for automatic memory management.

          I also don't think he's complaining about "the inconvenience of learning something new." A well-designed API doesn't require much storage space

          • API Docs (Score:3, Interesting)

            The Java API has a lot of these and that's probably why he has to keep going back to the API docs.

            I must say that is mitigated by the fact that Java has hands down the best API documentation of any platform. Really, what more could you ask for? Combined with the SwingSet and their Swing component guide [sun.com], who could ask for anything more in terms of documentation? Mind you, I'll agree that getting used to all the layers can be a burden when first learning, but it becomes elegant once you see the big picture.

            • I must say that is mitigated by the fact that Java has hands down the best API documentation of any platform. Really, what more could you ask for?

              I agree that good documentation helps and that Java libraries are well-documented. However, that's still not a reason to downplay complaints against bad APIs.

              You also shouldn't assume that people who don't like some Java API's are just OO-deficient C people who don't see the "elegance" of layering. You have to layer in the right places.

              • The original
                1. because every input stream is a subclass of InputStream? You code for InputStream and you can support them all. If you needed the added support of a subclass use that one. I like Java's hierarchy. It reminds me of the ordering of natural sciences starting with "Living Things" (ie java.lang.Object) to the 5 kingdoms etc.
                2. I agree. The interface should have been sub-classed and then if you were going to implement the "optional" methods you'd implement the subclass. This angers me too... however I feel you'll
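The "code for InputStream and you can support them all" point can be sketched; countBytes here is an illustrative helper, not anything from the java.io API:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamDemo {
    // Works unchanged for a FileInputStream, a ByteArrayInputStream,
    // a socket's input stream, or any other InputStream subclass.
    static int countBytes(InputStream in) throws IOException {
        int count = 0;
        while (in.read() != -1) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("hello".getBytes("US-ASCII"));
        System.out.println(countBytes(in)); // 5
    }
}
```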
                • because every input stream is a subclass of InputStream? You code for InputStream and you can support them all. If you needed the added support of a subclass use that one. I like Java's hierarchy. It reminds me of the ordering of natural sciences starting with "Living Things" (ie java.lang.Object) to the 5 kingdoms etc.

                  I don't think "because everything else is done that way" is a good enough reason. If you had an ObjectOutputStream, there's no reason to expose the raw write() methods. If you want t

                  • My point was more than just "everyone else does it". I mean that the Java hierarchy is simple to understand and is a common structure we use to break down groups. I cite the 5 Kingdoms as an example. I think actually that that is a good metaphor for thinking about Java inheritance. It's its own kind of biosystem or something. If there is something wrong with a class (I've never used ObjectOutputStream) then that doesn't mean that there is something wrong with the philosophy. That's what I love about Java. It's no
                    • If there is something wrong with a class (I've never used ObjectOutputStream) then that doesn't mean that there is something wrong with the philosophy.

                      If the philosophy caused the crappy class, then, by definition, there is something wrong with the philosophy. If the philosophy didn't cause the crappy class, then the API is inconsistent. In Java, it's a little of both.

                      The original Java API philosophy was "let's make everything a subclass." An example of this would be deriving from java.lang.Thread

                    • Your example of extending Thread versus implementing Runnable is a bad one. Extending means you extend the concept and bring something new to it. Unless you are adding something conceptually, don't extend it. For example, a SelfTerminatingThread or something would be a proper sub-class. A thread that prints numbers (i.e. the basic "hello world" for programs to prove that thread scheduling is not deterministic) is fairly specific and does not extend the concept of a Thread. Interface means that something
                    • Your example of extending Thread versus implementing Runnable is a bad one.

                      I was using it as an example of where it is possible to use inheritance (and people do) but it is a bad idea. I used that example because you said you weren't familiar with ObjectOutputStream. But thanks for another installment of "Intro to OO for Literature Majors". Again, your examples are at such a high level that they won't help anyone but absolute beginners. You have to look at complex real-world API requirements if y
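The Thread-versus-Runnable distinction being debated above, as a minimal sketch (class names are illustrative); implementing Runnable keeps the task separate from the threading machinery:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RunnableDemo {
    // Preferred: the task implements Runnable; Thread is just the vehicle.
    static class Counter implements Runnable {
        final AtomicInteger count = new AtomicInteger();

        public void run() {
            for (int i = 0; i < 3; i++) {
                count.incrementAndGet();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Counter task = new Counter();
        Thread t = new Thread(task, "worker"); // task and thread stay separate
        t.start();
        t.join();
        System.out.println(task.count.get()); // 3
        // Subclassing Thread, by contrast, only makes sense when you are
        // extending the *concept* of a thread, not just giving it work to do.
    }
}
```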

                    • I happen to think that there is something wrong with Christianity/Islam. (I'm not naive enough to try and argue that topic on Slashdot, though).

                      oh i'd agree with that but not in a way which you could develop from those two examples. hell i have a problem with people claiming the existence of good and evil...

        • all of the great programmers I know--are fluent in more languages than they have pairs of clean socks
          So... they know 2 languages ???
      • If you are not crazy then the day you need to worry about more than open a file you will appreciate Java libraries better.
        Else ...
    • by AT ( 21754 ) on Wednesday October 20, 2004 @01:06PM (#10577737)
      Look again. It is faster in the Linpack 500x500 test. That shows there is potential, at least, but obviously there's work to be done.

      Besides, there isn't much reason why a JIT should be slower than a natively compiled binary, besides startup time. The code still gets compiled, just at runtime. In fact, since profiling feedback is close at hand, it has an advantage (though newer versions of gcc/gcj can use profiling data to optimize as well).
      • Stack behaviour is static (i.e., stack effects are constant: m popped, n pushed, all the time - unlike Forth) in the JVM. That means it is possible, but sort of hard, to statically translate the stack code to register code. Push/Pop don't HAVE to be used. IIRC, it could be relatively complex in Forth (http://www.complang.tuwien.ac.at/projects/forth.html [tuwien.ac.at]; it might be discussed in this paper http://www.complang.tuwien.ac.at/papers/ertl93.ps.gz [tuwien.ac.at], but I'm far from sure, and don't feel like making sure at the time)
      • >In fact, since profiling feedback is close at hand, it has an advantage

        Yes, but on the other hand, the time spent instrumenting the currently running executable and optimising it slows down the execution.

        Whereas with a profiling compiler, the profiling run is used to accelerate the executions which occur *after* this one.
        • But the JIT profile data is representative of the current run. The profiling run used with the traditional compiler may or may not be representative.

          Both approaches have advantages and drawbacks.
    • by Dr. Bent ( 533421 ) <ben AT int DOT com> on Wednesday October 20, 2004 @01:16PM (#10577873) Homepage
      You'd think that natively compiled and optimized code would be faster than an interpreter.

      Code run in modern Java VM's is not interpreted...it's compiled on the fly with whatever performance optimizations are appropriate for the particular machine at that particular point in the execution path.

    • by murphee ( 821020 ) on Wednesday October 20, 2004 @01:23PM (#10577959) Homepage
      Sigh... for the last time: Java code executed with Sun or IBM JVMs is not interpreted, but compiled to native and heavily optimized code. Before anybody complains: yes, the Sun VM uses mixed-mode execution, which means that code is interpreted first until it has been used a certain number of times, then it's compiled. It might seem like a contradiction, but that's mostly a performance improvement (turning bytecode into native code with optimizations costs time... often more time than it would take to simply interpret the code; not to mention the memory overhead of creating native code for every one-off method). IBM's VMs always JIT compile their code, but they have different levels of JIT compilers; the first time code is used, a baseline compiler simply transforms bytecode into native code. If that code is heavily used, it is optimized and the old code is replaced with the faster version. Again: no bytecode interpretation, dynamic compilation! Next week: Why Java != JavaScript

      • Java code executed with Sun or IBM JVMs is not interpreted, but compiled to native and heavily optimized code.

        No argument there, but wouldn't you think that compiling and optimizing the code every time you run a Java program would make it slower than compiling and optimizing it once ahead of time? All of that compiling and optimizing is included in the performance specs for the JVM's, but they almost uniformly trounce GCJ.

        • by murphee ( 821020 ) on Wednesday October 20, 2004 @02:05PM (#10578476) Homepage
          The big advantage of VMs and dynamic compilers is that they can take advantage of the CPU that they are running on. I.e. if you use gcj to compile software, you have to compile and optimize it for a specific CPU (instruction order is important and makes a difference, as the out-of-order engines on different CPUs (think Intel vs. AMD) act differently). It also means more complicated deployment; for instance, if your code uses SSE2, your program only runs on the Pentium 4 and other CPUs that have SSE2. If you don't want that, you have to factor out the functions that make use of SSE2 and provide replacements that don't use it (which also means having to check the CPU at startup and choosing the libs appropriately).
          This doesn't matter for a compiler working at runtime, as it knows about the capabilities of the CPU and just uses them if possible (I know that the JVM uses SSE when available, though I haven't found out what exactly it is used for).
          A JIT/DynamicCompiler also knows everything about your CPU, for instance Cache sizes,... and can exploit that knowledge to further optimize the code or data layout.

          Also, you have to keep in mind that Java (like .NET) is a dynamic environment, i.e. you can load classes at runtime. This proves to be a bit of a problem with static compilation, or I should say: with optimization. HotSpot (Sun's dynamic compiler/VM) and other JVMs do some optimizations that improve OOP code, like devirtualization (which removes the vtable lookup overhead for virtual method calls) and even virtual method inlining.
          Now, these optimizations have one problem: they may be accurate when the code is generated... but what happens when a new class is loaded? For instance, the compiler devirtualizes a method in class A, because Class Hierarchy Analysis tells it that this method is never overridden. So... 10 minutes later, a class is loaded that is derived from class A and overrides the aforementioned method... so the code is suddenly incorrect. In this case, the dynamic compiler takes back the old optimization, and everything is OK again.
          This kind of scenario makes it necessary for many optimizations to have a system (compiler) working at runtime.

          Mind you: Runtime code loading or generation is a point where gcj gets really slow, because then it has to run that code with gij (the interpreter). There are people that got Eclipse compiled with gcj, but they ran into that problem too (the Plugins in Eclipse are loaded at runtime; they worked around that by compiling the plugins as shared libraries).
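The deoptimization scenario described above can be sketched. The JIT's behaviour isn't directly observable from plain Java, so this only shows the shape of code involved; class names are illustrative:

```java
public class ChaDemo {
    static class A {
        String greet() { return "A"; }
    }

    // While only A is loaded, Class Hierarchy Analysis lets the JIT
    // devirtualize (even inline) a.greet() at this hot call site.
    static String callManyTimes(A a) {
        String last = null;
        for (int i = 0; i < 100000; i++) {
            last = a.greet();
        }
        return last;
    }

    // Once a subclass overriding greet() is loaded, the earlier
    // devirtualized code is invalid and the VM must deoptimize it.
    static class B extends A {
        String greet() { return "B"; }
    }

    public static void main(String[] args) {
        System.out.println(callManyTimes(new A())); // A
        System.out.println(callManyTimes(new B())); // B - same call site, new target
    }
}
```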

          • Thanks for the informative and thoughtful reply. In my particular case, I'm evaluating running eCos+GCJ vs Wonka or Kaffe on an embedded processor. I had assumed that the GCJ-compiled code would be faster, but now I'm not so sure. I doubt Kaffe or Wonka do extreme optimization, but it sure got my attention when GCJ didn't compete favorably with 'real' JVMs. The main snag with GCJ is static compilation - eCos does not have any dynamic loading available, so things like System.loadLibrary() cause real problems
            • Depends on what you want to do with it; if you can manage to have the whole code statically compiled (with no dynamic class loading), that would not be a problem.
              Here's the link to the LinuxJournal text talking about getting gcj to compile Eclipse: http://www.linuxjournal.com/article.php?sid=7413.h tml [linuxjournal.com] Maybe that can help you with your decision.
            • I thought this might be interesting...

              System.loadLibrary() is used for loading native code. If you wish to load a class file at runtime, you may have to implement your own ClassLoader.
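A minimal sketch of what implementing your own ClassLoader looks like (the directory path is a placeholder; findClass is the standard override point, and classes the parent loader can resolve never reach it):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class DirClassLoader extends ClassLoader {
    private final File dir;

    public DirClassLoader(File dir, ClassLoader parent) {
        super(parent);
        this.dir = dir;
    }

    protected Class<?> findClass(String name) throws ClassNotFoundException {
        // Only reached after the parent loader fails (standard delegation model).
        File file = new File(dir, name.replace('.', '/') + ".class");
        try {
            FileInputStream in = new FileInputStream(file);
            byte[] bytes = new byte[(int) file.length()];
            int off = 0;
            while (off < bytes.length) {
                int n = in.read(bytes, off, bytes.length - off);
                if (n < 0) break;
                off += n;
            }
            in.close();
            return defineClass(name, bytes, 0, off);
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }

    public static void main(String[] args) throws Exception {
        // "/tmp/classes" is a placeholder; system classes still resolve via the parent.
        DirClassLoader loader = new DirClassLoader(new File("/tmp/classes"),
                DirClassLoader.class.getClassLoader());
        System.out.println(loader.loadClass("java.lang.String") == String.class); // true
    }
}
```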

              • Thanks for the info. I'm trying to jam a statically-linked gcj executable into an embedded device. When trying to compile a JNI-based open source jar into an object file, it's not always possible (license-wise) to go in and hack the code to suit my needs. Most JNI implementations call System.loadLibrary() to load the native half of their code, making it unusable in a statically-compiled executable. Someone suggested that GCJ support an option to specify what libraries are statically linked - this would help
        • by maraist ( 68387 ) * <michael.maraistN ... m.com minus poet> on Wednesday October 20, 2004 @03:02PM (#10579039) Homepage
          The other replier did a good job, but missed your question, which was about having to recompile on every execution.

          On Sun's JVM, -server means that a full JIT is applied to all code before it is run (though some enhancements might not be possible until a few runs are performed; run-time profiling/enhancements). -client means that code is only JIT'd if it's run more than a few times (meaning it's a hot spot which is worth trading compile time for run time). While -client does a good amount of initial JITing, you can see that there's not a terrible amount of performance difference between -client and -server. At least not compared to gcj.

          This should convince you that the overhead of compiling is nothing compared to the amount of work being executed. In my experience, running Tomcat with -server vs. -client is night-and-day as far as load time is concerned (2 to 3 times slower to get into a state ready to accept web requests), but I don't notice tremendous differences in runtime performance (compilation is hidden in between web requests). What does this say about the differences between Tomcat and the benchmark? Again, that there must be several orders of magnitude more time spent executing than loading.

          If you wanted to write "ls" in Java, then yes, you'd prefer something LIKE gcj (I don't know what its load time is, so I can't say with certainty). Something that does very little, and thus wants a minimalist footprint, is not going to like Java. Even Perl was too much overhead a few years ago for most small tasks, which is why awk and sed are still in use today, even though Perl totally blows them out of the water for even the most trivial tasks. Today machines are fast enough that the human-perceivable difference in latency goes away. Perhaps when we reach 10GHz VLIW 1024-register CPUs and have 1TB of RAM, the latency of "java -server" will go away too. Note that multi-CPU/hyper-threading isn't going to help single-application load time.

          Sun's JVM 1.5 has already started to do what you're essentially asking, though; they've pre-JIT'd most of the rt.jar file. Much of this file is used for even the most trivial possible hello-world application, so it made sense to pre-store that material. I suspect in future versions we'll see dynamically cached JIT code, which would tremendously improve start-up time. Of course, with major Java applications (web servers, database wrappers) the only time you tend to restart Java is when you're upgrading the jar files, so who knows if anyone really cares.
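The warm-up behaviour described in this subthread can be observed, crudely, from Java itself; actual timings depend entirely on the VM and flags used, so none are claimed here:

```java
public class WarmupDemo {
    // A small, deterministic workload for the JIT to chew on.
    static long work() {
        long sum = 0;
        for (int i = 0; i < 1000000; i++) {
            sum += i % 7;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Early calls typically run interpreted (or baseline-compiled);
        // later calls run the optimized version once the method is "hot".
        for (int round = 0; round < 5; round++) {
            long t0 = System.nanoTime();
            work();
            long t1 = System.nanoTime();
            System.out.println("round " + round + ": " + (t1 - t0) / 1000 + " us");
        }
        // Try comparing: java -client WarmupDemo  vs.  java -server WarmupDemo
    }
}
```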

        • I don't know much about the Java world, but I suspect that they have similar caching features to the .NET CLR. In the .NET Universe, commonly called sections of code are optimized to machine code, and cached to the disk for later invocations of the same program / dynamic library.

          That said, as the environment in which the program is running changes (such as the supply of available memory decreasing), the CLR may decide it's worth a recompile to optimize for the changed conditions.
    • by gokeln ( 601584 ) on Wednesday October 20, 2004 @02:05PM (#10578479)
      Performance is not the biggest reason to use the GCJ. It is that you don't have to have a pre-installed Java Runtime Environment. Sun's JRE license does not allow you to redistribute, so for every install, you have to be sure there's a JRE or download it from the internet. This is most annoying, especially for disconnected networks.

      With GCJ-built code, you can put everything needed on a single installation medium.
    • You'd think that natively compiled and optimized code would be faster than an interpreter. I guess there's optimization work to be done in gcjlib.

      The JIT inside a Java or CLR (C#) runtime has more information available to it for optimization. That means that, ultimately, it should be able to do better than gcj on long-running, compute-intensive jobs.
    • by jilles ( 20976 ) on Wednesday October 20, 2004 @05:04PM (#10580426) Homepage
      If you knew what dynamic compilation is, you'd fully expect the opposite. I'm only surprised by the difference in numbers. GCJ is doing much worse than I expected. From the looks of it, the only times it comes close to competing is when executing simplistic numerical benchmarks (which are presumably easy to optimize). What you are seeing here is that runtime optimization actually works when doing real-life complicated stuff.

      The use of the word interpreter is really suggestive. A compiler is nothing more than a static interpreter. The only two differences with a JIT compiler are that it permanently stores the results of the interpretation and that it has much less information to predict the performance of the code it is generating. A static compiler will do well on simplistic programs, whereas a run-time optimizing JIT compiler will be able to always match the performance of a static compiler (simply by running the same optimizations) or beat it (by applying optimizations to address observed performance bottlenecks in the running program). Of course this costs some computation time, but you can take that into account as well when deciding whether to optimize.

      In simplistic scenarios such as simple benchmarks, there won't be much to gain from run-time optimization. So performance is about equal with a slight advantage for the static compiler because it is not wasting resources on figuring out how to optimize the program. Despite this GCJ's performance is disappointing even on these kinds of benchmarks.
      • Non-final non-private methods generally cannot be inlined by GCJ because they may be overridden by subclasses, but with a JIT compiler such methods can be inlined if the class they are in has no subclasses loaded (which covers the majority of cases).

        Also gcj-compiled programs may have worse garbage collection. AFAIK a simple conservative GC is used.

        I don't think commonly-used Java JIT compilers do much more than this.

  • JRockit? (Score:3, Interesting)

    by Lechter ( 205925 ) on Wednesday October 20, 2004 @01:41PM (#10578172)

    I'd also be very curious to see an independent benchmark of BEA [bea.com]'s JRockit JVM [bea.com], which is supposed to blow Sun and IBM away on x86.

    • From the JRockit license:

      3. Benchmarking. As a condition to the license granted in Section 1,
      above, you may not disclose the results of any performance benchmarks
      for JRockit to any third party unless you comply with all of the
      following requirements:

      (a) The benchmarks to be published are for the most recent
      "general availability" version of JRockit on a certified and supported
      operating system available from BEA at the time of publication, solely
      using product features expressly documented in the JRockit
      d

  • by Anonymous Coward
    I've run probably over 100 benchmarks comparing the IBM and Sun JDKs for JESS using the manners and waltz benchmarks. In all of my tests using -server and -client for JDK 1.4.2, IBM was on average 20-30% faster than Sun. Makes me wonder if there was something specific in the JESS test which gave Sun the advantage.
  • excellent for C# (Score:5, Interesting)

    by jeif1k ( 809151 ) on Wednesday October 20, 2004 @03:19PM (#10579234)
    What these benchmarks suggest to me is that C#/.NET is fully competitive in terms of performance with the best Java implementations, and C#/Mono is good enough for most work. In fact, given the additional language features of C#, it may well be easier to write fast code for many compute-intensive problems using Mono than Java today.

    The level of performance of Mono is even more impressive given how young the project is; at the same point in time in its evolution, Sun had barely managed to produce a JIT compiler.
    • On the other hand, the mono developers did have full access to all the research papers Sun has published on JIT compilation, garbage collection, dynamic compilation, etc. It was clear from the beginning that JIT compilation was the only way to make Mono perform. You could say that Sun did the hard work for them.

      I haven't seen any decisive language feature that gives C# any edge over Java. Most of it is syntactic sugar which is nice to have but not really that important if you have code compl
      • On the other hand, the mono developers did have full access to all the research papers from sun that have been published on JIT compilation, garbage collection, dynamic compilation etc.

        When Java came out, garbage collection already was a mature field (Java's GC is still not state-of-the-art). Dynamic compilation and dynamic optimization had also been around for quite a number of years. So, both the Java and Mono implementors had the benefit of hindsight. In fact, Sun had many of the world experts on th
        • Besides the lack of struct types (which I find painful to use, but at least they are there when I need them), Java also makes every non-private, non-final method virtual, which AFAIK means less opportunity for inlining.

          Also, the lack of unsigned types in Java can sometimes be painful.

          • Java also makes every non-private-non-final method virtual, which AFAIK means less opportunity for inlining.

            That, in particular, Sun has a good excuse for. A good JIT should be able to make up for that. Also, declaring a class or method "final" can make the JIT's job simpler.

            But there are several other areas where the JIT can't compensate for language differences. For example, template classes used with primitive argument types are much more efficient in C#.

            Also, the lack of unsigned types in Java c
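            To make the generics point above concrete, here is a small sketch (Java 1.5 syntax) of the boxing that erased generics force: a `List<Integer>` stores heap-allocated `Integer` objects, whereas C# generics specialize `List<int>` to hold raw 32-bit values.

            ```java
            import java.util.ArrayList;
            import java.util.List;

            // Sketch of the boxing overhead discussed above: Java generics are
            // erased to Object, so primitives must be wrapped in objects.
            public class BoxingDemo {
                public static void main(String[] args) {
                    List<Integer> xs = new ArrayList<Integer>();
                    for (int i = 0; i < 3; i++) {
                        xs.add(i);      // autoboxing: each int becomes an Integer object
                    }
                    int sum = 0;
                    for (int x : xs) {  // each element is unboxed on the way back out
                        sum += x;
                    }
                    System.out.println(sum); // prints 3 (0 + 1 + 2)
                }
            }
            ```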
            • Re:Also... (Score:1, Interesting)

              by Anonymous Coward
              For example, template classes used with primitive argument types are much more efficient in C#.

              If you want to use a macro preprocessor with Java, there is nothing stopping you from doing so. The fact that the base language doesn't include a macro system is immaterial, as there are many external alternatives.
        • >When Java came out, garbage collection already was
          >a mature field (Java's GC is still not
          >state-of-the-art).

          WTF? Sun JVMs have had state-of-the-art GCs for years. Actually, there's not just one GC but a lot of them in the JVM. The default is a generational GC. You can choose options like incrementality (to avoid GC pauses, or at least to minimize them). You can choose a parallel GC (which exploits several CPUs for marking and collection).
          There is a GC option for large heaps (Scavenger a
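          The collector choices described above are selected with command-line flags; a sketch, assuming Sun's documented 1.4.2/5.0 HotSpot options (exact flags vary by JVM version):

          ```shell
          # Default: generational collector, no extra flags needed.
          java MyApp

          # Incremental collection, to minimize GC pauses.
          java -Xincgc MyApp

          # Parallel (throughput) collector, using several CPUs for collection.
          java -XX:+UseParallelGC MyApp

          # Concurrent mark-sweep collector, for low pauses on large heaps.
          java -XX:+UseConcMarkSweepGC MyApp
          ```

          (This is a config fragment for illustration; `MyApp` is a placeholder class name.)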
          • Sun JVMs have had state of the art GCs for years.

            There is Sun's feature list, and then there is actual implementation and performance, and that hasn't been all that impressive. Until fairly recently, Sun Java still had 12 byte object headers (I think they are still 8 bytes now, but I frankly just don't care anymore).

            The only thing that remains is stuff like Escape Analysis

            Yes, important stuff like that, stuff that other systems have had for many years. And other systems also have planned ahead in th
            • @GCs:
              Great statement... so, what actually hasn't been impressive? .NET still has only the generational GC, without any options for large heaps or support for SMP. Mono actually uses the Boehm GC, which is conservative and thus doesn't have the benefits of generational GCs (cheap allocation, no cost for deallocating dead objects), and might actually leak memory (a common property of conservative GCs, as they basically guess which integers are pointers and which are not; don't know if the Mono guys did anyt
              • by jeif1k ( 809151 )
                HotSpot, the Sun JVM, has had 8 byte object headers for years (at least since version 1.3).

                So it still has 8 byte object headers, after all these years, instead of 4 bytes or less. (I qualified my statement only because I thought Sun might have fixed this in 1.5, but apparently not.)

                .NET still only [...] Mono actually uses the Boehm GC,

                Yes, the Sun Java implementation is more mature than .NET and Mono, we know that. What's your point?

                Sun has made the right decision to keep the reference-type-only s
                • > Yes, the Sun Java implementation is more mature
                  >than .NET and Mono, we know that. What's your
                  > point?

                  My point? You complained about Sun (claiming that Sun claims to have GC options, but that they have implementation problems).

                  >The semantics of value classes are supposed to be
                  >different. Giving everything reference semantics
                  >would be the wrong thing to do even if the
                  >compiler managed to perform every possible
                  >optimization.

                  And what would the advantage of that be? Having only one w
                  • Re:excellent for C# (Score:3, Interesting)

                    by jeif1k ( 809151 )
                    My point? You complained about Sun (claiming that Sun claims to have GC options, but that they have implementation problems).

                    I didn't "complain" about Sun. I simply pointed out that they started out with a pretty poor implementation and that it has taken them a long time to bring it to maturity. I also pointed out that C# is competitive with Sun's much older Java implementations now in terms of performance.

                    And what would the advantage of that be? Having only one way of accessing data is simpler. Havin
                      OK... let's say this: I like Java's design, you like C#'s design; it's futile to discuss taste issues, so I won't.

                      You're wrong about Java's openness. The JVM spec is open:
                      http://java.sun.com/docs/books/vmspec/2nd-edition/html/VMSpecTOC.doc.html
                      The Java language spec is open as well:
                      http://java.sun.com/docs/books/jls/second_edition/html/jTOC.doc.html
                      Yes, I agree, they don't have a big "ECMA" or "ISO" badge, but well... since when were those worth a penny. C++ has been ISO certified for years or decades, but
                    • The JVM spec is open:

                      "Open" means that you can implement the spec as you like, and the JVM spec is not open (and neither are the language or the libraries): read the license [sun.com] linked to at the bottom of the spec that you yourself point to. Also read Stallman's take on this [newsforge.com] if you don't believe me.

                      Yes, I agree, they don't have a big "ECMA" or "ISO" badge, but well... since when were those worth a penny.

                      They guarantee that Microsoft can't play the kinds of legal games that Sun has been playing with the Ja
                    • > "Open" means

                      That should be rephrased as "I, jeif1k, think that 'open' means...". The term "open" means something else to everyone; there's no specific definition. The JVM and Java language specs are open for people to read and to implement. People have done so for years (I mentioned the JVMs, and there are countless extensions of the Java language, like AspectJ, GJ, ...). BTW, C# as an ECMA standard is nice, but a language is useless without its libs, and those aren't in the ECMA standard.

                      >They guar
                    • > Mono 1.0.2 (--optimize=aot) (**)

                      You missed the explanation note here. I'll add it for you:

                      **) I'm an Anonymous Coward who can't read a manpage to check the option to use to enable full optimizations.

                      BTW: it's -O=all.
  • For any of these benchmarks, does somebody have comparable statistics for a C/C++ implementation?

    Obviously some of the benchmarks would be apples-and-oranges, but there should be a few of them that would allow a direct comparison...
  • Java GUI performance (Score:4, Interesting)

    by O ( 90420 ) on Wednesday October 20, 2004 @06:15PM (#10581085)
    For one of my classes, I've been using Eclipse to do Java development. The CS lab at school has only Windows machines (XP Pro; P4-3.2G w/ HT, I think) and Eclipse runs very nicely on these computers.

    At home, I run Linux on a P4-2.6G w/ HT. Frankly, Eclipse runs like ass on Linux. I've tested it with pretty much every available JRE/J2SDK for Linux, and it sucks with all of them.

    This is using Eclipse with SWT compiled for GTK+. I have used WebSphere with the Motif libraries, and it was quite fast, but Motif is just horrid to use next to GTK+. But, SWT with GTK+ is just so damned slow. I've run Eclipse on my PowerBook G4, and it sucks even more on Mac OS.

    I find it funny that Windows is the best-supported platform for Java GUI programs, considering that MS hates Java. So, is GUI performance with Java apps ever going to be acceptable on Linux? It's really pathetic at this point.

    I know Sun added OpenGL 2D acceleration to their 1.5.0 JRE, but that gives me lots of artifacts with the included demo programs, and SWT doesn't use the acceleration at all.
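    For reference, the OpenGL pipeline mentioned above is opt-in in the 1.5.0 JRE; assuming Sun's documented Java 2D system property, it is enabled like this:

    ```shell
    # Enable the OpenGL-based Java 2D pipeline (opt-in, 1.5.0+).
    java -Dsun.java2d.opengl=true MyApp

    # Same, but prints a confirmation line at startup so you can verify
    # the pipeline was actually enabled for each screen.
    java -Dsun.java2d.opengl=True MyApp
    ```

    (Config fragment for illustration; `MyApp` is a placeholder class name, and the property only affects Java 2D/Swing rendering, not SWT.)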

    • I've found that having the proper video drivers under linux makes a huge difference in X performance. Under Windows, you're probably using graphics drivers with DirectX support. You should be using drivers under linux that support DRI.
      • Using X.org 6.8 and NVidia's drivers with the DRI stuff enabled on a Geforce2 GTS. Admittedly, this card is kind of old now, but should be more than sufficient for accelerated 2D graphics. It beats the socks off the integrated i865 graphics I was using, and that was with all of the latest DRI/DRM (I can't keep those straight anymore) drivers, too.
    • Works fine for me on Linux (with GTK). Yeah, it can be slow at times, but when I'm on Windows 2k (on the same computer) its just as slow (if not slower).
    • Actually, a few months ago I tried a GUI Java app (actually, a Java IDE) which uses PGUI (my former company's Swing equivalent, AWT-based). I used the then-latest Sun JVM for Linux. This IDE seemed a lot "faster" than stuff using "native" GUIs on Linux (Mandrake 10 Community Edition, if I remember correctly). Strange... Maybe it has something to do with how Qt/GTK and Sun's AWT are implemented with regards to the X protocol... Ideas, anyone?
  • Hopefully this should inspire educated discussion on one's favourite JVM/CLR.
    You can always hope...
  • Wow no AMD vs Intel wars :). Hardly even see any Java vs .Net...

    On a serious side, I'm curious about java benchmarks on Itanium. I've seen a tender where the specs were java on Itanium for production and java on x86 for development (d'oh).

    Anyone have benchmarks comparing Java performance on Itanium vs x86? I've seen some on the SPEC website, any others?
  • ... they didn't use that, eh?

    The docs say that it (can't remember the correct name :/) enables some extra optimizations that would take too long for a just in time compiler.

    Quite a good idea, actually; most of the time you do have the source available, and you can probably run it off MSIL bytecode too.
