Java Programming

Java Performance Urban Legends 632

An anonymous reader writes "Urban legends are kind of like mind viruses; even though we know they are probably not true, we often can't resist the urge to retell them (and thus infect other gullible "hosts") because they make for such good storytelling. Most urban legends have some basis in fact, which only makes them harder to stamp out. Unfortunately, many pointers and tips about Java performance tuning are a lot like urban legends -- someone, somewhere, passes on a "tip" that has (or had) some basis in fact, but through its continued retelling, has lost what truth it once contained. This article examines some of these urban performance legends and sets the record straight."
This discussion has been archived. No new comments can be posted.

Java Performance Urban Legends

Comments Filter:
  • sounds like the optimizing needs a little, well, optimizing
  • I wonder to what extent this exists in other languages? For example, there is an oft-cited tip that says using persistent database connections with LAMP applications increases performance. I've found in actual practice that this depends on a lot of factors such as server load, amount of available memory, etc.

    I remember in my Turbo Pascal programming days (heh) that a lot of people said that using Units would degrade performance. So I tried it both ways and it never really made a difference, for my applications anyways.

    I'd say before taking someone's word for it on a performance enhancing technique, test it out. Because not everything you read is true, and not everything you read will apply to every environment or every application.
    • Good advice. People sometimes seem to want to solve the problem before knowing what the problem statement is. While their actions may not degrade performance significantly, they oftentimes do not help.

      I've learned over time that everything is relative. There is no cut and dried right and wrong in a lot of cases, but degrees of both. The real answer depends on your need, and not all needs are the same.

    • by Tony-A ( 29931 ) on Saturday May 17, 2003 @11:27PM (#5983763)
      I wonder to what extent this exists in other languages?

      Probably lots. Everywhere.
      As a crude approximation, 90% of the time is due to 10% of the code. Improving the "efficiency" of the 90% of the code that is responsible for only 10% of the time tends to be counter-productive. Of course there are no easy magic rules for how to improve the 10% of the code that is responsible for 90% of the time, or even for identifying exactly what that 10% really is.
      What does work is to have a sense of how long things should take and find and cure whatever is taking much longer than it should.
      • Actually that easy magic _does_ exist: it's the profiler. I don't know if Java has a profiler, but if it has one, you should find out how to use it, because it is incredibly useful for identifying that small portion that needs more attention (*).

        To support this with some real numbers, a while ago I was profiling a C++ application I was writing. The application has ~200,000 lines of code, and was writing out ~3,000 values per second. This was not good enough, so I profiled, and carefully improved the "top scorer…
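
        To answer the question above: Java does have profilers. The JDKs of this era shipped a basic sampling agent, HPROF (typically run as java -Xrunhprof:cpu=samples), and even crude manual timing will often find the hot 10%. A minimal sketch, where doSuspectedHotPath() is a hypothetical stand-in for the code under suspicion:

          public class CrudeTimer {
              public static void main(String[] args) {
                  long start = System.currentTimeMillis();
                  doSuspectedHotPath(); // hypothetical stand-in for the region being measured
                  long elapsed = System.currentTimeMillis() - start;
                  System.out.println("hot path took " + elapsed + " ms");
              }

              // Dummy workload so the example runs on its own.
              static void doSuspectedHotPath() {
                  double sum = 0;
                  for (int i = 0; i < 10000000; i++) sum += Math.sqrt(i);
              }
          }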

      • As a crude approximation, 90% of the time is due to 10% of the code.

        I think with respect to web programming, this is itself a myth. This rule of thumb seems to have reached the popular consciousness of developers in the 80s, when desktop apps ruled. This was a time when each additional user added a CPU. And it's true; in such a world, you don't worry about that other 90%. But when you have a fixed number of CPUs shared by vastly more clients, you need to worry about more than just the 10% most offending code…

    • a lot (Score:5, Insightful)

      by Trepidity ( 597 ) <delirium-slashdot@@@hackish...org> on Saturday May 17, 2003 @11:29PM (#5983770)
      A lot of them are things that actually used to be good advice, but for some reason or another (changes in hardware, compilers, etc.) aren't anymore. For example, it used to be a good idea in C to iterate through arrays by incrementing a pointer and dereferencing it instead of incrementing an index and then using array subscripting -- that way you had one increment per iteration, instead of one increment plus one offset calculation (basically you saved the addition that takes place during the array subscripting). However, on many modern C and C++ compilers in many situations, array subscripting will actually be faster than the pointer-incrementing method, because it's easier for the compiler to perform certain optimizations on. [Reference: Michael L. Scott, Programming Language Pragmatics (Morgan Kaufmann, 2000).]

      There's quite a bit of other stuff like this out there as well.
      • Re:a lot (Score:3, Insightful)

        by Ed Avis ( 5917 )
        The moral of the story is, whenever stating any programming rule, always give the reason for it as well. Then people reading the rule later can also check whether the reason is still valid. This applies to all programming language commandments, not just those targeted at performance. It's amazing the number of shops that have fossilized rules from ten or more years ago, the reasons for which are long gone but which still have to be followed anyway although nobody really knows why.
      • That's an excellent example of what I consider the best rule for proper coding: make your goal as clear as possible. The code using array subscripting is easier for less-experienced maintenance programmers to work on and doesn't need to be replaced when you find a clever compiler does a better job.
    • I remember in my Turbo Pascal programming days (heh) that a lot of people said that using Units would degrade performance. So I tried it both ways and it never really made a difference, for my applications anyways.

      That's a good example of an optimization tip that is "true" yet useless. Turbo Pascal used a 64K code segment for each unit, so function calls within the main program were "near" calls and function calls to another unit were "far" calls. Each far call made the code one byte bigger than each near…

    • A few from C++ (Score:5, Insightful)

      by Anonymous Brave Guy ( 457657 ) on Sunday May 18, 2003 @08:30PM (#5988594)
      I wonder to what extent this exists in other languages?

      The C++ world is full of myths about what does and doesn't enhance performance. Amongst my favourites...

      1. Exceptions cripple performance.
      2. Using virtual member functions cripples performance.
      3. Using templates cripples performance.

      In each of these cases, there is some overhead involved if you actually use the language feature, but generally not otherwise with any recent compiler. However, those overheads are usually less than hand-crafting the equivalent functionality (e.g., long jumps, function look-up tables a la C) would incur. Furthermore, if you actually understand the implications of these features, you can keep the overhead way down. The next time I see someone criticise templates for code bloat, and then demonstrate in the next post that they've never come across templated wrappers for generic base classes, I'm going to have to lecture them. }:-)

      On the flip side...

      1. Passing by const& always improves performance.
      2. Pointer arithmetic is faster than array indexing.
      3. Making things const helps the optimiser to improve performance.

      Most of these get much more credit than they deserve. The first is true often, but not always: it sometimes shafts the optimiser in many compilers. The second is not true with any recent compiler. The third is true sometimes, but not nearly as often as you might expect: optimisers miss many of the apparent (to humans) possibilities anyway, and spot some of the others with or without a const there.

      As always, the rule of thumb is to write correct, maintainable code first, and then to use compiler-specific, profiler-induced hackery where (and only where) required. Whether you're writing a database or a graphic engine, this is pretty much always good advice.

  • by ktakki ( 64573 ) on Saturday May 17, 2003 @10:09PM (#5983426) Homepage Journal
    There was this one guy who worked for Sun Micro and was disappointed at how slowly Java ran on his Sparcstation, so he attached one of those JATO rocket engines...

    k.
  • by Anonymous Coward on Saturday May 17, 2003 @10:12PM (#5983441)
    Ok, so none of the things we thought were slow are really slow.

    Then why the hell is it so slow?
    • Re:Java is Slow (Score:5, Informative)

      by CyberGarp ( 242942 ) <`gro.ttebraG' `ta' `nwahS'> on Saturday May 17, 2003 @11:32PM (#5983783) Homepage

      I've worked on two embedded projects using Java on low power (energy consumption/CPU performance both) platforms. Both projects had amazingly similar things happen. I stated up front, "Java is interpreted; it will be slower than the C code of the previous project on the platform, potentially significantly."

      The reply, "We don't care about performance."

      Four months later... "Why the hell is your code so slow?"

      Interpreted is as interpreted does.

    • The article is wrong (Score:5, Informative)

      by jdennett ( 157516 ) on Sunday May 18, 2003 @12:10AM (#5983928)
      In a real project, using JDK 1.3 on various platforms, we had performance issues. So, we measured speed in various ways, and found three main problems.

      1: Synchronization.

      This is slow. Really slow. And it just gets worse when you're running on dual or quad processor machines. StringBuffer is a major offender; in a lame attempt to save one object allocation, it uses a simple reference counting device which requires synchronization for operations as trivial as appending a character. Writing a simple UnsynchronizedStringBuffer gave a measurable performance boost.

      2: Object creation

      This is the real problem. GC is slow. GC on SMP machines is still really slow in JDK 1.3 -- maybe JDK 1.4 is better, my experience is a little out of date. By rewriting large chunks of code to create fewer objects (often by using arrays of primitives) we made it much faster -- close to twice as fast, if memory serves.

      3: Immutable objects

      Yes, these add to GC, and so are bad for performance. But not such a great evil, so long as you don't overuse them.

      Funny that the article "debunks" these myths without figures, when our thorough measurements showed that the problems are real, and in our case would have killed our chances of meeting performance targets had we not found them and dealt with them.

      Some bigger issues for server-side design: be careful how you use remote calls (such as RMI) and how you use persistence (such as JDO). But the small things, which the article seems to misrepresent, matter too.
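
      Regarding point 1, a minimal sketch of the UnsynchronizedStringBuffer idea might look like the following (an illustration with made-up names, not the poster's actual class; JDK 5 later added java.lang.StringBuilder to fill the same hole). With no synchronized methods, it is only safe when confined to a single thread:

        public final class UnsyncCharBuffer {
            private char[] buf;
            private int len;

            public UnsyncCharBuffer(int capacity) {
                buf = new char[Math.max(1, capacity)];
            }

            // Append without taking any lock; the caller guarantees single-threaded use.
            public UnsyncCharBuffer append(char c) {
                if (len == buf.length) {
                    char[] bigger = new char[buf.length * 2]; // double on overflow
                    System.arraycopy(buf, 0, bigger, 0, len);
                    buf = bigger;
                }
                buf[len++] = c;
                return this;
            }

            public String toString() {
                return new String(buf, 0, len);
            }
        }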
  • The best tip (Score:5, Insightful)

    by Anonymous Coward on Saturday May 17, 2003 @10:14PM (#5983456)
    The best tip in the article, which really applies to any language (even to choice of languages), is IMHO
    "Save optimizations for situations where performance improvements are actually needed, and employ optimizations that will make a measurable difference."
    • "Save optimizations for situations where performance improvements are actually needed, and employ optimizations that will make a measurable difference."

      1) become a journalist
      2) use common sense and lots of bullshit
      3)????
      4) profit!

      The missing step appears to be get an MBA and go into management
  • It doesn't help... (Score:5, Interesting)

    by devphil ( 51341 ) on Saturday May 17, 2003 @10:15PM (#5983457) Homepage


    ...when one of the first issues of Java magazine published an article explaining the Java object runtime model, but made it little more than a FUD-filled advertisement. (What killed it for me: claiming that C++ vtbls are always, in every implementation, done as a huge array of function pointers inside each object. It wasn't a typo, either; they had glossy color diagrams that were equally deliberately false.)

    I think Java's a decent language, but it invented nothing new. Every language feature had been done before, and without the need for marketing department bullshit.

    • by Zach Garner ( 74342 ) on Saturday May 17, 2003 @10:19PM (#5983474)
      Every language feature had been done before...

      One day, someone is going to come along with a really f'ing amazing language. Everyone will be talking about how great it is. And if you are lucky, it might have half the features that Lisp has had for ages.
  • Times change (Score:4, Interesting)

    by onelin ( 116589 ) on Saturday May 17, 2003 @10:17PM (#5983464)
    I think a lot of the "urban legends" originated before CPUs were at a speed where it didn't matter. Past 500MHz, as one professor put it, the CPU starts being fast enough that handling Java doesn't slow it down significantly compared to before then... and now we're dealing with multi-GHz monsters.

    What the article said is true - JVMs have improved a lot. They are getting better and better, even today. My friend likes to fool around with all these little 3d demos in Java and even the latest JDK (1.4.2 beta) suddenly offers big performance boosts over the previous JDK. The fact that I refuse to ever use a beta Java SDK is another story, though...so I won't see those performance gains for a little while.
    • Re:Times change (Score:5, Interesting)

      by etymxris ( 121288 ) on Saturday May 17, 2003 @10:30PM (#5983532)
      JVMs are always improving, I'll definitely agree. But efficiency will always matter. I was writing my own regular expression parser (had to homebrew it for various reasons) that got statistical information from multiple megabytes of text data. Whatever performance improvement that could be made was noticed...a lot. Do you want to wait 60 seconds or 30 seconds for the results? It's a big difference.

      No matter what, there will always be applications that strain our machines to the cusp of their abilities. And there will always be things we want to do that our machines cannot handle. It's only by performance tuning that these tasks can go from impossible to possible.

      If John Carmack was forced to program in Java, for example, Doom would only now be possible. And Doom III wouldn't be possible for many more years. Performance matters. Not always, but often.
      • Re:Times change (Score:5, Interesting)

        by delmoi ( 26744 ) on Sunday May 18, 2003 @02:46AM (#5984419) Homepage
        If John Carmack was forced to program in Java, for example, Doom would only now be possible. And Doom III wouldn't be possible for many more years. Performance matters. Not always, but often.

        Actually, Carmack considered Java for Quake 3, but decided against it because he was worried about the quality of JVMs (something he couldn't control), not because of their speed. He's said on many occasions that optimization in game code isn't even important anymore, since the vast majority of the work done by the CPU is code inside the video card driver. He's said that for Quake 3, even doubling the speed of the game code would only give a 10% improvement in framerate.
        • Re:Times change (Score:3, Informative)

          by Yarn ( 75 )
          He wasn't going to use it for the renderer, but for the AI/physics etc which was QuakeC bytecode in Quake 1, DLL/.so in Quake 2 (& Q3a IIRC)
    • Re:Times change (Score:5, Interesting)

      by Orne ( 144925 ) on Saturday May 17, 2003 @11:23PM (#5983746) Homepage
      Totally agree. I remember when Java came out, I was still in college using my 486 DX2 66MHz, with all of 64 MB of RAM. That thing had its hands full with normal Windows 95, much less running Windows while browsing with Netscape to stumble across a web page with a Java applet.

      In those days, I hated Java and Macromedia Flash, because even then, they were only used to do the exact things scripted mouseovers could reproduce. Those two technologies accounted for most of the slowest web page loads...

      Now I have a P4 1.5 GHz with gobs of RAM running XP, and I have a hard time running enough tasks to slow it down. With a cable modem, I don't care about huge binary applets. I guess Java just needed some hardware upgrades for it to become useable...
  • Java's memory usage (Score:2, Interesting)

    by zymano ( 581466 )
    Does anyone know why Java isn't more memory efficient? Lack of pointers?

    Isn't the memory usage one of the negatives of Java?

    While I don't care too much for Java's verbose syntax, anything that competes with Microsoft is A-OK in my book.

    If any of you think Sun is making a ton of money on Java, check this link out. [com.com]

    I am beginning to feel sorry for Sun. They are also in some economic hard times, laying off a lot of people.

    • by onelin ( 116589 )
      If for nothing else than Java competing with .NET, I'm happy it's there. It's not my favorite language, but I'll certainly defend that it isn't as bad as most people think it is.
    • by fishbowl ( 7759 ) on Saturday May 17, 2003 @11:13PM (#5983699)
      > Does anyone know why Java isn't more memory efficient?

      Well, java applications tend to use a whole lot of memory compared to C++, because of the way objects are allocated, and because there is no control over the details of allocation available to the programmer. Java Objects tend to be pretty heavy, partly because every Object carries a virtual table (to implement polymorphism, reflection, etc.) and also because every Object has the overhead required for the thread synchronization model.

      As to whether or not it is "efficient", it depends on your point of view. For some applications, the java memory model is very efficient. Java goes to great lengths to ensure coherence among unsynchronized reads and writes in multiprocessor systems, and it must accomplish this via completely abstract means. Platform independence comes with a price, and a big part of that price is that high level optimizations become difficult or impossible to implement. JIT's and native libraries may help, but, there is still a hugely complex problem in exposing any sort of high level control of those things to the programmer. That is, in my opinion, the main reason that it is not fair to compare Java performance to C++ performance. It is not an apples-to-apples comparison. I think it would be reasonable to compare a transaction system written say, in J2EE under an app server, to say, a C++ transaction system running under an ORB. I think you will find that Java compares favorably in that context, and is simpler to write and maintain. I believe that makes it an excellent choice for business software.

      If your model did not require threads and locks, for instance, you could do away with much of the complexity.

      There is a whole heck of a lot going on under the hood to enable concurrent processing. You write your code as if it will execute sequentially, but there are many situations where operations will be interleaved, optimized to execute out-of-order (or even optimized away entirely).

      The java implementation is much more concerned with Visibility, Atomicity, and Ordering, than it is with raw performance. If you really want to learn how it works, the specification is not a difficult read. Most of the details about memory are here:

      http://java.sun.com/docs/books/jls/second_edition/html/memory.doc.html#30206

      • Java Objects tend to be pretty heavy, partly because every Object carries a virtual table (to implement polymorphism, reflection, etc.) (...)

        Maybe this is how urban myths come to life?

        An object in java does not have a 'virtual table'; a class has a table with method pointers, an instance of that class does not. One could say that "the method table is static". It would be very foolish to have x exact copies of a method table for x instances of the same class.
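
        A crude way to check the per-instance cost yourself (a rough sketch; results vary by VM, and System.gc() is only a hint):

          public class ObjectOverhead {
              public static void main(String[] args) {
                  Runtime rt = Runtime.getRuntime();
                  int n = 1000000;
                  Object[] keep = new Object[n]; // keeps the instances live
                  System.gc();
                  long before = rt.totalMemory() - rt.freeMemory();
                  for (int i = 0; i < n; i++) keep[i] = new Object();
                  System.gc();
                  long after = rt.totalMemory() - rt.freeMemory();
                  System.out.println("approx bytes per instance: " + (after - before) / (double) n);
              }
          }

        On a typical VM this reports a small per-instance header plus alignment, not a per-instance copy of any method table.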

  • by philovivero ( 321158 ) on Saturday May 17, 2003 @10:21PM (#5983485) Homepage Journal
    Interesting. As a guy who's been a die-hard PostgreSQL user for a number of years, and who recently accepted a job doing hardcore MySQL administration, I was dreading it, because everyone knows MySQL has bad transaction management, horrible administration nightmares, and is only good for developers.

    And I'm sure MySQL DBAs all know PostgreSQL is slow, bloated, and is only good for huge database rollouts.

    Except, well. You get the gist. I'm replying to this article because I now know first-hand that both camps are getting a lot of it wrong.

    I've written up what began as a definitive, in-depth proof that MySQL wasn't ready for the corporate environment (because I'm a PostgreSQL guy, see?) but ended up reluctantly having to conclude MySQL is slightly more ready for the corporate environment than PostgreSQL!

    The writeup [faemalia.org] is on a wiki, so feel free to register and add your own experience. Please be ready to back up your opinions with facts.
  • Absolutely no content (Score:2, Insightful)

    by TheSunborn ( 68004 )
    Interesting. A text that says that some things in Java are not as slow as people believe, and yet it fails to prove anything. For example, it says synchronized methods are not slow, and yet it includes no benchmark/test to back up that claim.

    And about the strings example:
    If you want to prove that the immutable String class is not slow, the right way to do it is to make a program that does a lot of string operations and then compare the speed with one of the non-immutable string classes for Java that…
    • (A reply to myself)
      I actually tried to run the benchmark:
      Synchronized took 213 ms
      Unsynchronized took 22 ms

      So you can conclude that calling a synchronized function is 10 times as slow as calling one that is NOT synchronized. But this does NOT matter, because the actual time it takes to call a synchronized method is so small. It calls the synchronized method 10000000 times in ~200 ms, meaning that you can call a synchronized method 50,000 times each ms (if you already hold the lock).

      So in any real program the work do…
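
      For reference, the benchmark described above was presumably something along these lines (a reconstruction, not the poster's code; a JIT can distort such microbenchmarks, so treat the numbers as illustrative):

        public class SyncBench {
            static int counter;

            static synchronized void syncInc() { counter++; }
            static void plainInc() { counter++; }

            public static void main(String[] args) {
                int n = 10000000;
                long t0 = System.currentTimeMillis();
                for (int i = 0; i < n; i++) syncInc();
                long t1 = System.currentTimeMillis();
                for (int i = 0; i < n; i++) plainInc();
                long t2 = System.currentTimeMillis();
                System.out.println("Synchronized took " + (t1 - t0) + " ms");
                System.out.println("Unsynchronized took " + (t2 - t1) + " ms");
            }
        }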
  • Antidote (Score:4, Insightful)

    by Henry V .009 ( 518000 ) on Saturday May 17, 2003 @10:23PM (#5983499) Journal
    One thing to remember is that Java is a 'marketed' language. Hence, be aware of inevitable corporate propaganda. That's not to say that Java is bad, but it is heavily pushed.

    Here's a bit of an antidote: Why Java will always be slower than C++ [jelovic.com]
    • Re:Antidote (Score:3, Interesting)

      by Daleks ( 226923 )
      1. All Objects are Allocated on the Heap

      This issue is debatable. The example the author gives is a bad one.

      What small objects? For me these are iterators. I use a lot of them in my designs. Someone else may use complex numbers. A 3D programmer may use a vector or a point class. People dealing with time series data will use a time class. Anybody using these will definitely hate trading a zero-time stack allocation for a constant-time heap allocation. Put that in a loop and that becomes O(n) vs. zero…
    • Re:Antidote (Score:5, Insightful)

      by WolfWithoutAClause ( 162946 ) on Saturday May 17, 2003 @11:47PM (#5983843) Homepage
      My advice is don't follow the link, it's trash. Check this out:

      People dealing with time series data will use a time class. Anybody using these will definitely hate trading a zero-time stack allocation for a constant-time heap allocation. Put that in a loop and that becomes O(n) vs. zero. Add another loop and you get O(n^2) vs. again, zero.

      What? A constant time operation in an 'n' loop is O(n), but then again, the loop is O(n) to start with. Add another loop and you get O(n^2) versus O(n^2). The constant of proportionality has changed, but that's all. If you were using C++, you'd probably call a constructor and possibly destructor anyway; so the difference is not nearly as much as you'd expect; and java heap allocation is only about 3 instructions anyway on a decent VM. This article is total junk.

      • Re:Antidote (Score:5, Insightful)

        by j3110 ( 193209 ) <samterrell&gmail,com> on Sunday May 18, 2003 @01:32AM (#5984258) Homepage
        It's more junk than you even pointed out :)

        My favorite was the cast issue. He fails to recognize that the code he is talking about is run once; then it is a static cast, just like most people use in C++. Something that must be dynamically cast at runtime, on the other hand, will be much faster in Java, since it doesn't have to figure out the casting for an object every time you cast. It basically does it once, and then it can cast any object of the same type to the same cast type as before.

        It's complete idiocy by a person who hasn't spent any significant time using Java.

        If you want to criticize Java, you must:

        A: target memory usage, cite a specific API and why in your OPINION you don't like it, or target startup time.

        B: have not used C++ techniques of optimization on Java

        C: have tried the latest JVM.

        D: have checked the bug parade, and found that the issue you are talking about is not currently being fixed or has been in the bug parade for a very long time.

        If you don't follow all those, then you are really just taking pot shots at a system that works quite well for a LOT of people. I've never met anyone that didn't like Java after they played with it for a while (except back before 1.3).

        There is an SSH server written in Java now that supports all the features that OpenSSH does... I think I'm going to give it a try... no more CRC buffer overflows for me.
    • Re:Antidote (Score:3, Insightful)

      by Anonymous Coward
      Java will be faster than C++, and very soon (if it isn't already) on the condition that Sun continues to exist for a while.

      Java does virtual method inlining. That, alone, makes it potentially faster than C++. A slightly more intelligent garbage collector designed to decrease page-faults later, and presto, C++ is the slow language.

      [Note that Garbage Collection can't scale very well on multiple CPU machines, so Java still won't be the be-all-and-end-all language unless it divides its objects into separate "…
    • by AT ( 21754 ) on Sunday May 18, 2003 @12:36AM (#5984043)
      Java is not always slower. Java's interpreted nature is generally seen as a weakness, but it has advantages too. For example, the JIT has profiling data immediately at hand when doing optimization, whereas compiled languages won't. Even in cases where compiled languages do use profile feedback, it may not be representative of the current program usage.

      Try writing a simple recursive Fibonacci number calculator in both C++ and Java. The Java one is faster, when using a JIT enabled JVM. Of course, that is a contrived example, but it shows that just-in-time compiling can be faster.
      • He's right!

        I tried the simple and stupid

        int fib(int i) {
            if (i <= 2) return 1;
            else return fib(i - 1) + fib(i - 2);
        }

        without optimization on javac and gcc (optimization slowed the latter down, so I figured it wouldn't be fair). Calculating up to 45 on my P3 800MHz took, according to 'time', 1m5.554s for the gcc version. Java used 0m51.807s (and that's including the JVM loading).

        Pretty neat.
        java -Xint (no JIT) is still running though.
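
        A self-contained version of the test described above might look like this (a sketch; fib(45) = 1134903170 still fits in an int, and timings will of course vary by machine and VM):

          public class Fib {
              static int fib(int i) {
                  if (i <= 2) return 1;
                  return fib(i - 1) + fib(i - 2);
              }

              public static void main(String[] args) {
                  long start = System.currentTimeMillis();
                  int result = fib(45);
                  long ms = System.currentTimeMillis() - start;
                  System.out.println("fib(45) = " + result + " in " + ms + " ms");
              }
          }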

    • Re:Antidote (Score:3, Insightful)

      by digicosm2 ( 672998 )

      errr...

      Saying "Java will always be slower than C++" is like saying that there will always be less graduate students than undergraduates. Java and C++ live in the same Von Neumann-ian world, but C++ is allowed to muck with pointers, and Java isn't. Moreover Java has garbage collection.

      However, I dare say that the time that a programmer saves by not mucking around with pointers is far more valuable than the time saved by typical C++ optimization. This may not hold true in performance-critical domains (…

      • Re:Antidote (Score:3, Insightful)

        by __past__ ( 542467 )

        Saying "Java will always be slower than C++" is like saying that there will always be less graduate students than undergraduates. Java and C++ live in the same Von Neumann-ian world, but C++ is allowed to muck with pointers, and Java isn't. Moreover Java has garbage collection.

        Welcome to the world of optimizing compilers and runtimes! Both things can actually help application performance, because they allow the language implementation to do smart things.

        For example, some implementations of malloc() have…

  • Java is slow (Score:3, Interesting)

    by etymxris ( 121288 ) on Saturday May 17, 2003 @10:23PM (#5983502)
    I've programmed in Java before, and love it as a language. Unfortunately, even in the newer iterations of the JVM, you simply cannot trust the virtual machine to get rid of objects efficiently. If you are doing some heavy processing, you simply will exhaust all memory on your machine, even if you explicitly set references to null.

    If there is an "inner loop" of your application that needs performance above all else, and you need to program it in Java for whatever reason, there are two things you should get rid of:
    1. Memory allocations
    2. Function calls
    Of course, if you can do this in C/C++, it will also improve performance, but it is not as critical to be so careful in these lower level languages.

    I've just found that you can't trust the garbage collector, no matter how good people say it is. People have been saying it's great since the beginning of Java, and now they say, "It wasn't good before, but it is now." And they'll be saying the same thing in 3 more years. No matter what, the opportunistic garbage collection of C/C++ simply leads to better performance than any language that tries to do the garbage collection for you.
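
    As an illustration of point 1 above, the usual tactic is to hoist allocation out of the hot loop and reuse a scratch buffer (a sketch with a made-up workload):

      public class ScratchReuse {
          public static void main(String[] args) {
              double[] scratch = new double[1024]; // allocated once, outside the loop
              double total = 0;
              for (int iter = 0; iter < 10000; iter++) {
                  // Refill and process the same buffer: no per-iteration garbage,
                  // so the collector has nothing to chase in the inner loop.
                  for (int i = 0; i < scratch.length; i++) scratch[i] = i * 0.5;
                  for (int i = 0; i < scratch.length; i++) total += scratch[i];
              }
              System.out.println(total);
          }
      }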
    • Re:Java is slow (Score:5, Informative)

      by WolfWithoutAClause ( 162946 ) on Sunday May 18, 2003 @12:38AM (#5984054) Homepage
      Let's put it this way:

      I've used Java on embedded applications, on systems that create lots, and lots of objects. And I don't recall ever running out of memory, if there wasn't a bug in the Java program.

      But I'm not saying you're lying or wrong, only that a well-tuned, well-supported JVM doesn't do this.

      the opportunistic garbage collection of C/C++ simply leads to better performance than any language that tries to do the garbage collection for you.

      What opportunistic garbage collection of C/C++? You mean delete and free? Get real! Personally, I wouldn't trust the average programmer to even collect garbage correctly more than half the time, and that doesn't cut it. I've had way, way fewer problems with Java GC than I've ever had with C/C++ in a realtime system. People have spent weeks finding memory leaks; one time, a leak I found was a ghastly C++ compiler bug where the compiler screwed up the automatic destructors on unnamed objects.

      • Re:Java is slow (Score:3, Insightful)

        by etymxris ( 121288 )
        Well I wasn't going to reply, since I think I've said everything in my other posts. But since it got moderated up...

        I've used Java on embedded applications, on systems that create lots, and lots of objects. And I don't recall ever running out of memory, if there wasn't a bug in the Java program.

        There is a difference between using up all your memory and running out of memory. Java maxes out the memory that it's allowed to take in the circumstances I was talking about.

        Suppose you have a tight loop that c…

        • Re:Java is slow (Score:3, Interesting)

          by dubl-u ( 51156 ) *
          Telling the computer when I'm done with an object is a simpler solution (to me, anyway) than having to tweak runtime parameters.

          Simpler? Sure, if you have just a few objects. But with a lot of objects, it's much, much, much more complicated. As anybody who has spent hours running down a malloc/free error in somebody else's code can tell you.

          If you can't use new and delete or free and malloc correctly, then there's probably a lot of other things you can't do well either.

          Welcome to the human condition.
  • It must be all these Java "programmers" that university CS departments worldwide keep churning out, who couldn't write a well-performing program if their life depended on it?

    *looks at Limewire*

    *looks at administration applets written by Sun which don't work over X11*
  • java vm sucks (Score:2, Interesting)

    by Anonymous Coward
    Fact: Due to similarities of the MSIL/Java bytecode formats, Java bytecode can be converted into MSIL bytecode (though not vice versa, due to the higher expressiveness of MSIL).

    Fact: Valenz and Leopold did a survey (summarized in the most recent issue of Dr. Dobb's) whereby Java bytecode programs were converted to MSIL, and the .NET, Rotor, MONO, Sun, Blackdown, etc. VMs were compared. In each case, Microsoft's .NET VM had a 5-20% speed increase over the best Java VM, and even the MONO and Rotor VMs had a 2-15% speed…

  • by Kjella ( 173770 ) on Saturday May 17, 2003 @10:34PM (#5983555) Homepage
    Running Freenet and only Freenet

    javaw.exe - Mem usage 70,244K
    java.exe - Mem usage 9,808K

    According to task manager. Granted, now I've got 512 MB to take from, but it's still eating up much more memory than anything else.

    Kjella
  • by g4dget ( 579145 ) on Saturday May 17, 2003 @10:36PM (#5983559)
    While the article has a number of glaring errors, the author's general point is right: in terms of raw compute performance, Java has become quite fast.

    But that's not really a good thing. Sun pushed on the JIT on the theory that that would address performance problems. It didn't. The Perl and Python runtimes are much slower than Java's, but Perl and Python applications generally start up much faster and are considerably more responsive.

    Java is as sluggish as ever, and more bloated than it has ever been. What is really responsible for Java's poor performance for real-world applications is its class loading, memory footprint, and just plain awful library design and implementation.

  • Finding where your software spends most of its time can be hard. Having a tool measure resource/time consumption of the regions of source code is critical in finding bottlenecks and improving performance.
  • by cperciva ( 102828 ) on Saturday May 17, 2003 @10:47PM (#5983607) Homepage
    "Java Performance Urban Legends" should read "Java Performance is an Urban Legend".

  • Bullshit (Score:3, Insightful)

    by vlad_petric ( 94134 ) on Saturday May 17, 2003 @10:48PM (#5983610) Homepage
    Why? Because it depends so much on the performance optimizations the JVM employs. Let's take them one by one:

    • Final methods and classes: when you call a final method from the same class, you save a lookup in the virtual method table (there is no doubt about which method is going to be called, as it couldn't have been overridden in a descendant), and furthermore you can inline that method. On a "stupid" JVM (read: from Sun) you won't see any difference; on an optimizing one you will.

    • Synchronization can become a bottleneck on SMP systems, because it implies cache synchronization (exiting a synchronized block is a memory barrier); you clearly aren't going to see it on a single processor. But not using synchronization is just as bad: you should use synchronization with all variables that are shared, because you do want memory barriers for correctness.

    • Immutable objects: this one clearly depends on the garbage collector that you use.

    Conclusion: the performance of these tricks depends on two things: your JVM and Amdahl's law (how often are these improvements going to manifest themselves).
  • by Effugas ( 2378 ) on Saturday May 17, 2003 @10:57PM (#5983646) Homepage
    If you're going to rebut some urban legends, perhaps it might help to go online, check out Snopes, and learn how to argue against things...

    Yeah. Let's look at this article:

    Myth 1: Synchronization Is Really Slow
    a) Ummm, well it was slow...but not anymore!
    b) As the programmer, you have no idea what code is eventually going to be run. (Who knows! It might be FAST!)
    c) Who cares about the actual numbers; it's only a constant overhead! (This is actually a good point -- the sync'd method isn't doing anything, so the syncing can't amortize its penalty across any amount of work. But...still..."regardless of the actual numbers"?!?)
    d) But...but...you need to sync! Your programs will crash if you don't!

    Myth 2: Declaring methods "final" speeds them up
    a) Nuh-uh!
    b) I don't see a benchmark anywhere! (Nor, apparently, can I write one.)
    c) But...but...what if someone wants to extend your class?

    Myth 3: Immutable Objects are bad for performance
    a) It could be faster. It could be slower. Maybe you won't be able to detect a difference.
    b) The myth comes from a fundamental truth about how Java does many things really really slowly.
    c) StringBuffer creates objects too...ummm...in the corner case where the default buffer size isn't big enough to handle the added data, and the buffer wasn't preconfigured to store sufficient amount of data...heh! Look over there! *runs away*
    d) As the programmer, you have no idea what the garbage collector is eventually going to do. (Who knows! It might be FAST!)
    e) But...but...you're sacrificing object oriented principles!

    I was really waiting for Myth 4..."Who needs performance! It's a Multi-Gigahertz PAHHHTAY!!!" But apparently that got cut.

    I've got a data point for 'em. You know Nokia's n-Gage Gameboy Killer? Hehhehe. Not _only_ did they completely forget to throw in any hardware acceleration for sprites -- heh, an ARM is fast enough -- but it looks like you're supposed to write games in Java.

    So, yeah. Puzzle Bobble @ 5fps. But I'm sure it's very elegant code :-)

    --Dan

    P.S. On the flip side, I'm playing with an app whose 2D GUI was designed with GL4Java... it's a work of art; faster than Win32, at least for tree manipulation. Java can be a beautiful language once it's relieved of the duty of actually computing anything...
  • by fm6 ( 162816 ) on Saturday May 17, 2003 @10:59PM (#5983652) Homepage Journal
    There are many reasons why "urban legends" (why the quotes? explanation shortly) spread the way they do. Unfortunately, this article demonstrates one of the least forgivable reasons: people often get sloppy about their facts. It's inevitable that facts will get distorted as they go from ear to mouth to ear to mouth. But there's no need to hurry the process!

    What's sloppy about the article? Well first of all, Goetz asserts " even though we know they are probably not true, we often can't resist the urge to retell [urban legends]". Where in Hades did he get such a silly idea? Some half-remembered sociology book? Everybody who's ever told me an urban legend really believed in that exploding microwave poodle or the dead construction workers concealed in Hoover Dam. I myself remember feeling rather peeved when I heard the sewer alligator legend debunked.

    Second, performance myths are not "urban legends". ULs are third-hand stories that are difficult to debunk because the actual facts are hard to get at. "Facts" that people can check but don't are just myths or folklore.

    Anyway, here's my favorite performance myth: "more is faster". The most common variation of this is "application performance is a function of CPU speed". Ironically enough, I encountered this one when I was working at JavaSoft. Part of my job was to run JavaDoc against the source code to generate the API docs. The guy who did it before me ran it on a workstation, where a run took about 10 hours. Neither of us had the engineering background to really understand why this took so long. He just took for granted that it was a matter of CPU cycles. I knew a little more than him -- not enough to understand what was actually going on, but enough to be skeptical of his explanation.

    Eventually, I put together the relevant facts: (1) JavaDoc loads class files to gain API data, using the object introspection feature; (2) the Java VM we were using visited every loaded class frequently, because of a primitive garbage collection engine; (3) forcing a lot of Java classes into memory is a good way to defeat your computer's virtual memory feature...

    Eureka! I tried doing the JavaDoc run on a machine with enough RAM to prevent swapping. Run went from 10 hours to 10 minutes.

    Another variation of this myth is "interpreted code is slower than native code". That bit of folklore has hurt Java no end. If your application is CPU-bound, you might get a performance boost by using native code. But, with the obvious exception of FPS games, how many apps are CPU bound?

    Here's another variation: "I/O performance is directly proportional to buffer size". At another hardware vendor I worked for, one of our customers filed a bug because his I/O-bound application actually got slower when he used buffer sizes greater than 2 meg. It was not, of course, a coincidence that 2 meg was also the CPU cache size!

    • by awol ( 98751 )
      Speed is all about your "functional" payload per unit time. Someone posted earlier that CPU time is cheaper than programmer time, which is pretty naive, because even if it is true, the better a system is (i.e., the longer its active life), the higher the marginal cost of bad CPU utilisation, so it is not advisable to make that compromise if one can avoid it.

      For example, we recently "tuned" our transaction engine to the extent that it was truly CPU bound in a multiple tiered architecture with hundreds of rem…
  • Yeah but... (Score:5, Insightful)

    by wideBlueSkies ( 618979 ) on Saturday May 17, 2003 @11:05PM (#5983670) Journal
    OK, after reading everyone's thoughts about Java's performance challenges, I have to say that I agree, it is a slow language.

    However, what I really dig is the library. Sockets, Strings, half a billion data structures, numeric and currency formatting, Internationalization and Localization support....
    And there's always the JNI if I need to optimize in C.

    Not to mention servlets, JSP's and JDBC.

    You gotta admit that this is all great stuff. So for certain applications (especially on the server) the benefits of the library far outweigh the performance problems of the sandbox.

    Yeah, C++ has a great library too. But the Java library is so damned easy to use.

    Seductive the dark side is.....
    • Re:Yeah but... (Score:5, Insightful)

      by brsmith4 ( 567390 ) <.brsmith4. .at. .gmail.com.> on Sunday May 18, 2003 @12:20AM (#5983969)
      I totally agree. I have been a die-hard C programmer for years, writing tons of apps that make use of the BSD sockets on Linux, BSD, Solaris and Windows. When I wrote my first front-end client in Java, I was blown away by the ease of use of the Socket class. With all of the exception handling, it may seem like a lot to code, but try doing it in C. After getting all of your headers included and writing all of your conditionals just to set up one measly socket, you are about ready to just quit. Even after that, you have to bounds check the hell out of your arrays and deal with all of the incoming strings and data (C's string libraries are not as pleasant to use as Java's). It might not be the fastest language performance-wise, but you can crank out a very reasonable app in very little time. Another thing: Sun's documentation on Java is outstanding.
  • What about numerics (Score:3, Interesting)

    by Phronesis ( 175966 ) on Saturday May 17, 2003 @11:12PM (#5983695)
    The most common thing I believe about Java is that its performance is well below FORTRAN or even C for numerically intensive work, such as linear algebra on gigabyte complex matrices.

    I notice that while the article mentioned deals with a couple of nit-picky optimizations, it doesn't tell us anything useful about how to make Java rock on the numerics, which is the place performance matters most to me. For instance, how would you write FFTW in Java?

  • by Waffle Iron ( 339739 ) on Saturday May 17, 2003 @11:20PM (#5983743)
    The "synchronization is slow" myth is a very dangerous one, because it motivates programmers to compromise the thread-safety of their programs to avoid a perceived performance hazard.

    However, the article perpetuates another myth: "Synchronization should be easy. The more things you synchronize, the better off you are."

    My hard experience says otherwise. First off, making multithreaded programs work correctly is very hard. Therefore, multiple threads should be avoided if at all possible. You can avoid a lot of these problems in many cases if you use a function like "select()" in a single-threaded program (which, IIRC, Java unfortunately doesn't support). Even though it looks harder to program, it ends up being easier to debug.

    However, sometimes you just can't avoid threads. IMHO, adding "synchronize" as a language keyword and encouraging easy creation of threads was a mistake. That doesn't begin to solve your problems. For example, it does nothing to help you avoid deadlocks. In fact, sprinkling synchronized blocks around your program is a recipe for deadlocks and unexpected timing-dependent buggy behavior.

    If you must use multiple threads, there should be one main thread that runs almost all of the program's logic, and a set of highly constrained, carefully controlled worker threads. These threads should not interact with any other (mutable) data structures in the program. Ideally, there should be at most two synchronization points in the program: a work queue and a results queue. The elements of these queues should package up all of the state needed for a worker thread to solve a piece of a problem or deliver its results.

    With an approach like this that has minimal synchronization, there's no need to add a keyword to the language or put synchronization into many library container classes. And of course, performance is hardly an issue at all when you only synchronize twice per worker thread run.
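
    A sketch of one of the two synchronization points described above, in the pre-java.util.concurrent style of the time (wait/notify; JDK 5 later added BlockingQueue for exactly this). One instance serves as the work queue and a second as the results queue:

      import java.util.LinkedList;

      public class SimpleBlockingQueue {
          private final LinkedList items = new LinkedList();

          // Producer side: the main thread enqueues work items here.
          public synchronized void put(Object item) {
              items.addLast(item);
              notifyAll(); // wake any worker blocked in take()
          }

          // Consumer side: worker threads block until work arrives.
          public synchronized Object take() throws InterruptedException {
              while (items.isEmpty()) {
                  wait(); // releases the lock until put() signals
              }
              return items.removeFirst();
          }
      }

    Workers then loop take() / compute / put() against the two queues and never touch any other shared state.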

  • by ishmalius ( 153450 ) on Saturday May 17, 2003 @11:27PM (#5983765)
    Each tool for its own purpose! If the best language for the task is Java, then use Java. If the best language for your project is C++, then use C++. If it happens to be Billy Bob's Bug-Free Language which suits the job best, well, then, use that one.

    Yes, if I need speed, I use C, the same as anyone else. If I am writing a Web application, I use Java. That's an area where Java excels. And maybe I'll get lucky enough to be able to code a project in Assembly or Lisp, who knows? Programming does not follow the "jack of all trades, expert at none" theory. General concepts map well across the spectrum.

    I find it discouraging that there are so many programmers who only want to learn just enough about their job to merely be good enough. Don't they feel any pride, or any desire to excel at something?

    Coders who can only handle one language should be paid minimum wage; that is all they are worth. That is because it is neither the language nor the implementation that is important. It is the knowledge of how to program which will ensure your career and pay your bills.

    • The problem is, for Web applications, other languages (such as Perl, LISP, and perhaps Python) come readily to mind as comparable at least to Java. C++ and its like are verbose and confining in a domain (like Web apps) where agility is paramount. In C++'s case this is somewhat justified since like its ancestor C, C++ is only a step or two above assembly language, and thus can be used to produce VERY tight code. Java, running on a VM, has no such excuse.

      Java offers two main advantages: a beefy class library…
    • by pHDNgell ( 410691 ) on Sunday May 18, 2003 @01:10AM (#5984187)
      Yes, if I need speed, I use C, the same as anyone else.

      Not me. There are some *fast* functional programming language compilers out there. I've ported some of my log processors from Python to OCaml, Scheme (for the Bigloo compiler), and C. The OCaml and Bigloo-compiled versions were almost exactly the same speed, and only slightly slower than the C version. The C version took me a *LOT* longer to write, due to the difficulty of expressing what I was trying to do in C and making sure it was doing it safely.

      In general, I agree with you, but I make great efforts to avoid C, as I am one of those who believe that C is inappropriate for almost every task for which it's used today (even some of the ones for which I'm using it).
  • by Futurepower(R) ( 558542 ) on Saturday May 17, 2003 @11:42PM (#5983819) Homepage

    It was somewhat shocking to me, but back in the VAX days I learned that software made by hardware manufacturers is as slow as they can get the customer to accept. That makes customers buy more hardware.

    Following the theme of naming products after food items, Sun's next software product is "Molasses".

    If customers accept Molasses, the next January they will release an upgrade called "Molasses in January". The following product will break the naming tradition: It will be a run-anywhere language called "The check is in the mail". After that, there is "When pigs fly", and "When hell freezes".

    The big question in the computing world is how not to become a dog on some manufacturer's leash. Woof, woof, where do you want me to go today, Bill, Steve, or Scott?
  • by Markus Registrada ( 642224 ) on Sunday May 18, 2003 @01:22AM (#5984225)
    Java has always been naturally slow. As with all traditionally slow languages, heroic measures have been taken to attack the problems, and most of the bottlenecks have been looked into. Bytecodes get compiled to native code; garbage collectors get increasingly clever. Still the programs are way too slow. Why?

    One of the reasons is that interactions with caches are hard to model, making it hard to know what to do to minimize problems. Caching is, inherently, a deal with the devil: you get speed but lose understanding. Sometimes you lose the speed too. Even when you understand, there's not much you can do. Sometimes complicated stuff is inherently expensive.

    When I say caching, I mean not just CPU caches of RAM, but also RAM caches of (potentially swapped-out) process space. If you allow a naive garbage-collector to operate freely, it will happily consume the entire address space available, typically the sum of available RAM and swap space, before garbage-collecting, so the process will run not from RAM but from swap. When it garbage-collects, too, it has to walk a lot of that memory, and swap it all in.

    Just running "ulimit -d" in the shell where the java (or other GC-language) program runs can help a lot. It will GC a lot more often, but if nothing is swapped out, the GC happens a lot faster, and the program's regular execution doesn't have to touch swapped-out pages. You have to know a lot about the program and the data it uses to guess the right ulimit value, and if you guess wrong the program fails, but a thousand-fold speed improvement earns a lot of forgiveness.

    Did you really believe garbage collection would mean you don't have to know about memory management? It makes memory management harder, because the problem remains but there's less you can do about it. (For trivial programs it doesn't matter. If you only write trivial programs, though, you might as well find some other job.)

    There's a similar effect with the CPU cache and RAM. Ideally you want the program code and the data it operates on all to live in cache, because touching RAM takes 100 times as long as touching cache. With bytecodes, you have a lot more "cache pressure" -- you have the bytecodes themselves, the just-in-time compiler, and the native code it generates. At the same time, since your memory manager generally can't re-use memory that you just freed, it allocates other memory that, when touched, pushes out something else that was useful (such as program code).

    The result is that no matter how clever the JVM is, there's not much it can do to get the performance of real programs close to optimal, or even within a pleasing fraction of equivalent C++ code. This despite all the toy benchmarks that seem to prove otherwise, and which carefully avoid all these real-world problems.

    Of all the promised features of Java (like Lisp before it and C# after it), we're left with the sole remaining feature, that its virtual machine specifies precisely (or abstracts away) enough details of the runtime environment that the code is more portable than a faster native implementation, and the code might get written faster for the author having avoided thinking about details that affect performance.

    The sole saving grace is that most programs don't have much need to run very fast anyway, or if they do it's hard to prove that they ought to run faster. Most people take what they get without complaining, or without complaining to anybody who cares, or without doing anything to make whoever is responsible uncomfortable enough to have to do anything differently. A whole generation trained to accept programs that crash daily or hourly is thrilled to find a program whose biggest problem is that they suspect it might be sluggish.

    • I hope that the above post is part of an elaborate joke. Otherwise, looking at this and the 455 other messages comprising the debate so far, I don't think /. is about to improve its position in the 'where to come for Java enlightenment' stakes.

      1) Re: swapping. Java memory management will always be superior to that of the OS; OS constraints should never be greater than those applied by the VM. The memory limit of a Java process is defined with the -Xmx parameter. For production use, this should nev…
  • by morissm ( 22885 ) <morissmNO@SPAMlexum.umontreal.ca> on Sunday May 18, 2003 @02:53AM (#5984434) Homepage
    Hmmm... while I understand what the author is trying to say, I believe his article is misleading. The problems he mentions are not urban legends, and could conceivably be at the root of a performance bottleneck.

    What I think the author is trying to say is that "Premature optimization is the root of all evil in programming". Most of the stuff enumerated in the article usually has a minor impact on performance and no programmer should worry about them during coding.

    However, when all the coding is over, the system will have to meet some performance criteria. If it crawls like a quadriplegic snail, a programmer will have to get his hands dirty and tweak his code to remove the bottlenecks.

    It is very possible that one of those bottlenecks will be rooted in these so-called "urban legends". Gross over-allocation of immutable objects and synchronized methods may impact performance.

    It happened to me a while ago. I was working on a system that was designed to use lots of threads and message passing. We had completed the development and were ready to move on to testing. The system worked pretty well on the developers' workstations (1 CPU) but when we deployed it on our much more powerful servers, the throughput went down. At first, we thought that it was a thread contention problem but after some testing, we realized that the cost of obtaining a lock on multiprocessor systems is orders of magnitude higher than on uniprocessor systems.

    This is because on uniprocessor machines, thread synchronization simply amounts to doing an atomic test-and-set. However, on multiprocessor machines, complex mechanisms have to be used so that the lock becomes effective for both processors. It involves a lot more overhead, because the required inter-processor operations cost a lot of cycles.
  • It is a trade-off (Score:3, Interesting)

    by Trinition ( 114758 ) on Sunday May 18, 2003 @07:17AM (#5984886) Homepage
    The question of Java performance can be quite subjective. For example, I run jEdit, a 100% Java text editor. It's fast. Unfortunately, it's not as fast as native Win32 editors like UltraEdit or TextPad. However, my mind and body can only work so fast. Both jEdit and the other text editors are faster than I need them to be for my day-to-day operations. So, by that measure, Java is fast.

    Of course, some people interpret the statement as a comparison to C or C++. Now, Java has a lot of behaviors that are slower than C/C++.

    For example, consider array access. Java implicitly checks the bounds of an array, whereas in C/C++ that is left as an exercise for the programmer. Unfortunately, most programmers are lazy and don't do that exercise. Hence with C/C++ you have buffer overruns where nasty clients can execute arbitrary code. In Java, you'd get an ArrayIndexOutOfBoundsException, which prevents the malicious data from being pushed into memory. Thus, it was a trade-off between security and speed.
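
    A tiny sketch of the difference (the class name is invented): the deliberate off-by-one below is caught as an exception in Java, where the equivalent unchecked write in C would quietly scribble past the end of the buffer:

    public class BoundsDemo {
        public static void main(String[] args) {
            byte[] buf = new byte[8];
            try {
                // Deliberate off-by-one: i == buf.length is out of range.
                for (int i = 0; i <= buf.length; i++) buf[i] = 1;
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("caught: " + e);
            }
        }
    }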

    Garbage collection is another one of these. Ever seen a C/C++ program with memory leaks (why, I even remember the X11 libraries leaking)? With garbage collection, freeing memory is slower, since Java has to determine with an algorithm what isn't used anymore, whereas in C/C++ it's coded into the logic. Java also seems slower because that GC overhead is generally experienced as "pauses", whereas in C/C++ the object deletion is spread through the execution of the program. But this was a trade-off: making developers' lives easier and the programs more stable, versus the speed and the risk of developer-coded memory deallocation.

    Java also has immutable Strings. With a mutable String class, I know I could eliminate a lot of object creation. But the String class was made immutable so everything could be final, and thus optimized better. This was a trade-off between the speed of Strings themselves and the cost of creating a new String every time you need to concatenate.
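
    A rough sketch of that cost (the class name and loop count are invented): repeated concatenation churns out a throwaway String on every pass, while the mutable StringBuffer the API provides for this case appends in place:

    public class ConcatDemo {
        public static void main(String[] args) {
            // Each += allocates a brand-new immutable String.
            String s = "";
            for (int i = 0; i < 1000; i++) s += i;

            // One mutable buffer, grown in place as it fills.
            StringBuffer sb = new StringBuffer();
            for (int i = 0; i < 1000; i++) sb.append(i);

            System.out.println(s.equals(sb.toString())); // prints "true"
        }
    }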

    There are many more cases, but I think you get the point. Java does things in ways that are slower. But many of these are trade-offs -- trade-offs to make programs more secure, development faster, and the syntax/API simpler. Then they go and address the speed in other ways by improving the VM (HotSpot, incremental/concurrent GC, etc.).

    In my opinion, I would've accepted a 100% Java version of Microsoft Outlook, even if it was slower, if I didn't have to worry about the next buffer overrun exploit hijacking my computer.
  • by knubo ( 615210 ) on Sunday May 18, 2003 @12:06PM (#5985733) Homepage
    Everyone writing Java applications where the heap size goes ballistic (huge J2EE systems) should really read the "Tuning Garbage Collection" article written by Sun:

    http://java.sun.com/docs/hotspot/gc/ [sun.com]

    Normally you will have an idea of what the memory footprint of your J2EE application will be. Do you have long-lived objects, or short-lived objects? Do you create a lot of new objects in your code? Are there big static tree structures? Tuning with this in mind can be what makes your application run, or not run at all, when the load comes.

    In particular, it is a good idea to configure the garbage collector so that long-lived objects are seldom considered for collection, while new temporary objects are cleaned away quickly and their memory is freed fast. Your goal should be that the system never really needs to do a full GC.

    One example of this is long startup time caused by several full GCs along the way. The solution is to configure the system with sufficient memory from the start; it shouldn't have to hit a full GC just to grow the heap to its working size.

    If you have a J2EE application with EJBs and the like, make sure that the old generation is big enough to hold all the pooled objects, and that the survivor spaces in the young generation are big enough that "use and throw" objects don't get promoted into the old generation.
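
    As a hedged illustration (every size below is invented and would have to be measured against your real footprint), this line of tuning ends up looking something like:

    java -Xms512m -Xmx512m -Xmn128m -XX:SurvivorRatio=8 -verbose:gc com.example.MyAppServer

    Here -Xms equal to -Xmx avoids heap-growth full GCs at startup, -Xmn fixes the young generation size, -XX:SurvivorRatio shapes the survivor spaces, and -verbose:gc lets you verify that the "use and throw" objects really are dying young.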

    Thinking along these lines, you might be able to make your server applications perform a full GC only when the system "feels that it needs to", and not when it must (for instance, at the timed intervals that RMI's distributed GC forces).

    A side effect of optimizing like this is that one needn't worry about creating new objects - allocating a new object takes only a couple of cycles of memory management (it just moves the free pointer), and if you configure the GC to wipe such objects with the incremental GC (fast!), then memory management shouldn't become a bottleneck. (Not counting the actual initialization work - if that is costly, reusing objects would probably be a good idea...)

    KEB

  • final methods (Score:3, Informative)

    by cmburns69 ( 169686 ) on Sunday May 18, 2003 @06:24PM (#5987994) Homepage Journal
    I tested out what he says about final methods, and to my surprise I found that the final methods were consistently slower.

    Here are the results I found; the code is below:

    First test, method1 is not final
    Running method1() TIME: 4577
    Running method2() TIME: 4596
    Running method2() TIME: 4637
    Running method1() TIME: 4547
    Running method1() TIME: 4547
    Running method2() TIME: 4566
    public static void method1() AVERAGE: 4557
    public static final void method2() AVERAGE: 4599.66

    Second test, method1 is now final
    Running method1() TIME: 4557
    Running method2() TIME: 4576
    Running method2() TIME: 4537
    Running method1() TIME: 4597
    Running method1() TIME: 4636
    Running method2() TIME: 4557
    public static final void method1() AVERAGE: 4596.66
    public static void method2() AVERAGE: 4556.66

    Here is the code I used. It's ugly, but I did it the way I did to best mitigate the effects of the JVM optimizing the code:
    package benchmarks;

    public class FinalTest {

        public static int INC;
        public static final int TEST = 1000000000;

        public static final void method1() { INC++; }
        public static void method2() { INC++; }

        public static void main(String[] args) {
            long start;

            // Runs are interleaved (1, 2, 2, 1, 1, 2) so that neither
            // method gets all of the JIT warm-up benefit.
            INC = 0;
            start = System.currentTimeMillis();
            System.out.println("Running method1()");
            for (int i = 0; i < TEST; i++) method1();
            System.out.println("TIME: " + (System.currentTimeMillis() - start));

            INC = 0;
            start = System.currentTimeMillis();
            System.out.println("Running method2()");
            for (int i = 0; i < TEST; i++) method2();
            System.out.println("TIME: " + (System.currentTimeMillis() - start));

            INC = 0;
            start = System.currentTimeMillis();
            System.out.println("Running method2()");
            for (int i = 0; i < TEST; i++) method2();
            System.out.println("TIME: " + (System.currentTimeMillis() - start));

            INC = 0;
            start = System.currentTimeMillis();
            System.out.println("Running method1()");
            for (int i = 0; i < TEST; i++) method1();
            System.out.println("TIME: " + (System.currentTimeMillis() - start));

            INC = 0;
            start = System.currentTimeMillis();
            System.out.println("Running method1()");
            for (int i = 0; i < TEST; i++) method1();
            System.out.println("TIME: " + (System.currentTimeMillis() - start));

            INC = 0;
            start = System.currentTimeMillis();
            System.out.println("Running method2()");
            for (int i = 0; i < TEST; i++) method2();
            System.out.println("TIME: " + (System.currentTimeMillis() - start));
        }
    }
