
Java Performance Tuning, 2nd Ed.

cpfeifer writes "Performance has been the albatross around Java's neck for a long time. It's a popular subject when developers get together: "Don't use Vector, use ArrayList, it's more efficient." "Don't concatenate Strings, use a StringBuffer, it's more efficient." It's a chance for the experienced developers to sit around the design campfire and tell ghost stories of previous projects where they implemented their own basic data structures (strings, linked lists...) that were anywhere from 10-50% faster than the JDK implementation (and, in the grand oral tradition of tall tales, they get a little more efficient every time the story is told)." Want to kill the albatross? Read on for the rest of cpfeifer's review of O'Reilly's Java Performance Tuning, now in its 2nd edition.
Java Performance Tuning, 2nd Edition
author Jack Shirazi
pages 570
publisher O'Reilly and Associates
rating 9/10
reviewer cpfeifer
ISBN 0596003773
summary It's the most up-to-date publication dealing specifically with the performance of Java applications, and a one-of-a-kind resource.

Every developer has written a microbenchmark (a bit of code that runs something 100-1000 times in a tight loop and measures the time the supposedly "expensive operation" takes) to try to prove an argument about which way is "more efficient" based on execution time. The problem is that when running in a dynamic, managed environment like the 1.4.x JVM, there are more factors you don't control than ones you do, and it can be difficult to say whether one piece of code will be "more efficient" than another without testing actual usage patterns. The second edition of Java Performance Tuning provides substantial benchmarks (not just simple microbenchmarks) with thorough coverage of the JDK, including loops, exceptions, strings, threading, and even the underlying JVM improvements in the 1.4 VM. This book is one of a kind in its scope and completeness.
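A minimal (and deliberately naive) harness of the kind described above might look like the following sketch. The names and the warm-up count are illustrative, not from the book, and on a 1.4-era VM you only get millisecond resolution from System.currentTimeMillis():

```java
// A deliberately naive microbenchmark of string concatenation in a loop.
// On a JIT-managed VM the first runs include class loading, interpretation
// and compilation overhead, so a warm-up phase runs before the timed one.
// All names here are illustrative, not taken from the book.
public class MicroBenchmark {
    // The supposed "expensive operation": build a string one char at a time.
    static String concatTask(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s = s + 'x';                  // allocates a new String each pass
        }
        return s;
    }

    static long timeMillis(int n) {
        long start = System.currentTimeMillis();
        concatTask(n);
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            timeMillis(1000);             // warm-up: let the JIT kick in
        }
        System.out.println("1000 concats took " + timeMillis(1000) + " ms");
        // Even after warm-up, GC pauses and OS scheduling can swamp the
        // measurement -- the reviewer's point about uncontrolled factors.
    }
}
```

Even this much ceremony only reduces, rather than eliminates, the noise the review describes.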

The Gory Details


The best part of this book is that it not only tells you how fast various standard Java operations are (sorting strings, dealing with exceptions, etc.), but the author has also kept all of the timing information from the previous edition of the book. This shows you how the VM's performance has changed from version 1.1.8 up to 1.4.0, and it's very clear that things are getting better. The author also breaks out the timing information for three different flavors of the 1.4.0 JVM: mixed interpreted/compiled mode (standard), server (with HotSpot), and interpreted mode only (no runtime optimization applied).

Part 1 : Lies, Damn Lies and Statistics
The book starts off with three chapters of sage advice about the tools and process of profiling/tuning. Before you spend any time profiling, you have to have a process and a goal. Without setting goals, the tuning process will never end and it will likely never be successful.

The author outlines a general strategy that will give you a great starting point for your tuning task forces. Chapter 2 presents the profiling facilities that are available in the Java VM and how to interpret the results, while chapter 3 covers VM optimizations (different garbage collectors, memory allocation options) and compiler optimizations.

Part 2 : The Basics
Chapters 4-9 cover the nuts-and-bolts, code-level optimizations that you can implement. Chapter 4 discusses various object allocation tweaks, including lazy initialization, canonicalizing objects, and how to use the different types of references (phantom, soft, and weak) to implement priority object pooling. Chapter 5 tells you more about handling Strings in Java than you ever wanted to know. Converting numbers (floats, decimals, etc.) to Strings efficiently, string matching -- it's all here in gory detail with timings and sample code.
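As a rough sketch (not the book's code) of two of those chapter-4 techniques, lazy initialization and soft-reference-based pooling might look like this; the class and field names are hypothetical:

```java
import java.lang.ref.SoftReference;
import java.util.Properties;

// Sketch of two chapter-4 allocation tweaks: lazy initialization, and a
// soft reference that lets the garbage collector reclaim a cached object
// under memory pressure instead of holding it forever.
public class LazyConfig {
    private Properties props;            // not built until first requested

    Properties getProps() {
        if (props == null) {             // lazy initialization
            props = new Properties();
            props.setProperty("tuned", "true");
        }
        return props;
    }

    private SoftReference bufferRef;     // softly-held pooled buffer

    byte[] getBuffer() {
        byte[] buf = (bufferRef == null) ? null : (byte[]) bufferRef.get();
        if (buf == null) {               // first use, or the GC cleared it
            buf = new byte[4096];
            bufferRef = new SoftReference(buf);
        }
        return buf;
    }
}
```

A weak or phantom reference would trade retention for reclamation differently; the book's "priority object pooling" layers these reference types.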

This chapter also shows the author's depth and maturity: when presenting his algorithm to convert integers to Strings, he notes that while his implementation previously beat the pants off of Sun's, in 1.3.1/1.4.0 Sun implemented a change that now beats his code. He analyzes the new implementation and discusses why it's faster, without losing face. That is just one of many gems in this updated edition of the book. Chapter 6 covers the cost of throwing and catching exceptions, passing parameters to methods, and accessing variables of different scopes (instance vs. local) and different types (scalar vs. array). Chapter 7 covers loop optimization with a Java bent. The author offers proof that an exception-terminated loop, while bad programming style, can offer better performance than more accepted practices.
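The exception-terminated loop idiom can be sketched like this (illustrative only, and bad style, as the author himself warns):

```java
// Sketch of an exception-terminated loop: drop the explicit bound check and
// let the VM's own array-bounds check end the loop by throwing. Bad style,
// but one comparison cheaper per iteration on some VMs.
public class ExceptionLoop {
    static int sum(int[] data) {
        int total = 0;
        try {
            for (int i = 0; ; i++) {     // no i < data.length test of our own
                total += data[i];
            }
        } catch (ArrayIndexOutOfBoundsException end) {
            // running off the end of the array terminates the loop
        }
        return total;
    }
}
```

Whether this actually wins depends on the VM, which is exactly why the book pairs such idioms with measured timings.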

Chapter 8 covers IO, focusing on using the proper flavor of java.io class (stream vs. reader, buffered vs. unbuffered) to achieve the best performance for a given situation. The author also covers performance issues with object serialization (used under the hood in most Java distributed computing mechanisms) in detail and wraps up the chapter with a 12-page discussion of how best to use the "new IO" package (java.nio) introduced with Java 1.4. Sadly, the author doesn't offer a detailed timing comparison of the 1.4 NIO API against the existing IO API. Chapter 9 covers Java's native sorting implementations and how to extend their framework for your specific application.
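The buffered-vs-unbuffered point can be illustrated with a small sketch (assumed names, not the book's examples):

```java
import java.io.*;

// A minimal sketch of the buffering advice: wrap readers and writers in
// their buffered flavors so each read()/write() hits an in-memory buffer
// rather than going to the underlying file every time.
public class BufferedCopy {
    static String readAll(File f) throws IOException {
        // BufferedReader fills its buffer in bulk; a naked FileReader.read()
        // would hit the OS for every single character.
        BufferedReader in = new BufferedReader(new FileReader(f));
        StringBuffer sb = new StringBuffer();
        int c;
        while ((c = in.read()) != -1) {
            sb.append((char) c);
        }
        in.close();
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("jpt", ".txt");
        Writer out = new BufferedWriter(new FileWriter(tmp));
        out.write("buffered beats unbuffered");
        out.close();
        System.out.println(readAll(tmp));
        tmp.delete();
    }
}
```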

Part 3 : Threads, Distributed Computing and Other Topics
Chapters 10-14 cover a grab bag of topics, including threading, proper Collections use, distributed computing paradigms, and an optimization primer that covers full-life-cycle approaches to optimization. Chapter 10 does a great job of presenting threading, common threading pitfalls (deadlocks, race conditions), and how to solve them for optimal performance (e.g. proper scope of locks, etc.).
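The "proper scope of locks" advice can be sketched as follows; the Counter class is hypothetical, not from the book:

```java
// Sketch of narrowing lock scope: synchronize only the access to shared
// state, not the unrelated work around it, so other threads block for as
// short a time as possible.
public class Counter {
    private int count = 0;
    private final Object lock = new Object();

    void record(String event) {
        String normalized = event.trim().toLowerCase(); // needs no lock
        synchronized (lock) {                           // lock held only here
            count++;
        }
        System.out.println("recorded " + normalized);   // needs no lock either
    }

    int count() {
        synchronized (lock) {
            return count;
        }
    }
}
```

Synchronizing the whole record() method would work too, but would serialize the string processing and the println along with the increment.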

Chapter 11 provides a wonderful discussion about one of the most powerful parts of the JDK, the Collections API. It includes detailed timings of using ArrayList vs. LinkedList when traversing and building collections. To close the chapter, the author discusses different object caching implementations and their individual performance results.
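The ArrayList-vs-LinkedList traversal trade-off boils down to this kind of code (a sketch in the raw-type style of the 1.4-era collections; the book's timed benchmarks are more elaborate):

```java
import java.util.*;

// The traversal trade-off in miniature: get(i) is constant-time on an
// ArrayList but walks the links on a LinkedList, so indexed traversal of a
// LinkedList is quadratic overall; an Iterator steps cheaply over either.
public class ListTraversal {
    static long sumByIndex(List l) {
        long total = 0;
        for (int i = 0; i < l.size(); i++) {
            total += ((Integer) l.get(i)).intValue(); // O(1) vs O(i) per call
        }
        return total;
    }

    static long sumByIterator(List l) {
        long total = 0;
        for (Iterator it = l.iterator(); it.hasNext(); ) {
            total += ((Integer) it.next()).intValue(); // cheap on either list
        }
        return total;
    }
}
```

Both methods return the same sum; only the cost profile differs with the List implementation behind the interface.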

Chapter 12 gives some general optimization principles (with code samples) for speeding up distributed computing including techniques to minimize the amount of data transferred along with some more practical advice for designing web services and using JDBC.

Chapter 13 deals specifically with designing/architecting applications for performance. It discusses how performance should be addressed in each phase of the development cycle (analysis, design, development, deployment), and offers tips and a checklist for your performance initiatives. The puzzling thing about this chapter is why it is presented at the end of the book instead of toward the front; it makes much more sense to group it with the other process-related material up front.

Chapter 14 covers various hardware and network aspects that can impact application performance including: network topology, DNS lookups, and machine specs (CPU speed, RAM, disk).

Part 4 : J2EE Performance
Chapters 15-18 deal specifically with performance of the J2EE APIs: EJBs, JDBC, Servlets and JSPs. These chapters are essentially tips or suggested patterns (use coarse-grained EJBs, apply the Value Object pattern, etc.) rather than the very low-level performance tips and metrics provided in earlier chapters. You could say that the author is getting lazy, but the truth is that, due to the huge number of appserver/database vendor combinations, it would be very difficult to establish a meaningful performance baseline without a large testbed.

Chapter 15 is a reiteration of Chapter 1, Tuning Strategy, re-tooled with a J2EE focus. The author reiterates that a good testing strategy determines what to measure, how to measure it, and what the expectations are. From here, the author presents possible solutions including load balancing. This chapter also contains about 1.5 pages about tuning JMS, which seems to have been added to be J2EE 1.3 acronym compliant.

Chapter 16 provides excellent information about JDBC performance strategies. The author presents a proxy implementation to capture accurate profiling data and minimize changes to your code once the profiling effort is over. The author also covers data caching, batch processing and how the different transaction levels can affect JDBC performance.
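The proxy technique can be sketched with java.lang.reflect.Proxy. The Query interface below is a hypothetical stand-in for the real JDBC interfaces (Connection, Statement), since a runnable example can't assume a database:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Sketch of a profiling proxy: a dynamic proxy that times every call made
// through an interface, so application code stays untouched and the proxy
// can simply be removed once the profiling effort is over.
public class TimingProxy implements InvocationHandler {
    private final Object target;

    private TimingProxy(Object target) {
        this.target = target;
    }

    public static Object wrap(Object target, Class iface) {
        return Proxy.newProxyInstance(iface.getClassLoader(),
                new Class[] { iface }, new TimingProxy(target));
    }

    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        long start = System.currentTimeMillis();
        try {
            return m.invoke(target, args);     // delegate to the real object
        } finally {
            long ms = System.currentTimeMillis() - start;
            System.out.println(m.getName() + " took " + ms + " ms");
        }
    }

    // Hypothetical stand-in for a JDBC-ish interface.
    public interface Query {
        int execute(String sql);
    }
}
```

Because the wrapped object is used through the same interface, swapping the proxy in or out is a one-line change at the point of construction.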

Chapter 17 covers JSPs and servlets, with very little earth-shattering information. The author presents tips such as gzipping content before returning it to the client and minimizing custom tags. This chapter is easily the weakest section of the book: admittedly, it's difficult to optimize JSPs, since much of the actual running code is produced by the interpreter/compiler, but this chapter either needs to be beefed up or dropped from future editions.

Finally, chapter 18 provides a design/architecture-time approach to EJB performance. The author presents standard EJB patterns that lend themselves to squeezing greater performance out of the often-maligned EJB. The patterns include data access object, page iterator, service locator, message facade, and others. Again, there's nothing earth-shattering in this chapter. Chapter 19 is a list of resources with links to articles, books, and profiling/optimizing projects and products.

What's Bad?

Since the book was published, the 1.4.1 VM has been released with the much-anticipated concurrent garbage collector. The author mentions that he received an early version of 1.4.1 from Sun to test with; however, the text doesn't state that he used the concurrent garbage collector, so the performance of this new feature isn't covered.

The J2EE performance chapters aren't as strong as the J2SE chapters. After seeing the statistics and extensive code samples of the J2SE sections, I expected a similar treatment for J2EE. Many of the J2SE performance practices still apply to J2EE (serialization most notably, since that is how EJB, JMS, and RMI ship method parameters/results across the wire), but it would be useful to fortify these chapters with actual performance metrics.

So What's In It For Me?

This book is indispensable for the architect drafting the performance requirements/testing process, and contains sage advice for the programmer as well. It's the most up-to-date publication dealing specifically with the performance of Java applications, and a one-of-a-kind resource.


You can purchase Java Performance Tuning, 2nd Edition from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

This discussion has been archived. No new comments can be posted.

  • by chrisseaton ( 573490 ) on Wednesday April 09, 2003 @11:11AM (#5693611) Homepage
    If all these performance hacks are documented, why doesn't the compiler implement them?

    I've often found that with bytecode languages (Java, C#...) the bytecode instructions are made for the language so that the compiler can just throw them out easy peasy, but they seem to overlook the sort of optimizations that C compilers, for example, work hard to implement.
    • Do you mean the Java compiler or the JIT compiler? If you mean the JIT compiler, keep in mind that JIT compilation occurs at runtime, so this kind of analysis would be prohibitively expensive.

      In either case, I think the analysis you're asking for is the kind that's easy for a human to do, but harder for an algorithm to do automatically. I don't know what's involved, and it could probably be done, but proving that these transformations are safe is no mean feat.

    • by cmburns69 ( 169686 ) on Wednesday April 09, 2003 @11:39AM (#5693726) Homepage Journal
      With non-bytecode languages, the compiler can optimize to the environment. It can re-order code based on the fastest execution time for the platform the code is compiled for.

      Java (and other bytecode languages) were designed to run well not just on a single platform, but on a variety of platforms. So as a trade-off, you lose environment-specific optimizations at compile time.

      JIT JREs/compilers can win some of this back. They can further optimize the bytecodes at execution time because they are platform-specific.

    • by briaydemir ( 207637 ) on Wednesday April 09, 2003 @12:41PM (#5693982)

      If all these performance hacks are documented, why doesn't the compiler implement them?

      The most common reason is that the program analyses behind most performance hacks and optimizations are undecidable in general, and you want a compiler to apply only the transformations it can prove safe. It is usually much easier for a person, i.e., a human, to determine what can be done than it is for a machine to determine that exact same thing.

      Consider the following piece of code.

      boolean f(int[] a, int[] b)
      {
          int x = a[0];
          b[0] = a[0] + 2;
          int y = a[0];
          return (x == y);
      }

      Does f always return true? Only if we can prove that a and b never point to the same array. A person may be able to do this, but a machine would have great difficulty (assuming the machine could do it at all).

      So to summarize, compilers don't implement many optimization hacks because they can't prove them safe, and that is a bad thing.

      • If you could assert statements like (a != b) at the start of the code, and the compiler used these assertions, then it could act on them -- but nobody does that, and almost no compiler supports it. Visual C++ does support this trick, but I have never encountered its use in a real project.

        I'd be happy with cache prefetch hints in both Java and C#, and portable hinting in C++ (you can do it with ugly macros)
      • If someone were building a compiler that contemplated making such optimizations, wouldn't it be better to have an option to output optimization hints to the programmer? If I use a certain switch, the compiler emits another output file of notes about the source code, and what optimizations it suggests might be in order.

        Naturally, the programmer might see that since both array parameters a and b point to the same array, that this is not really a possible optimization. This realization by the programmer i
  • Definite purchase (Score:3, Informative)

    by Timesprout ( 579035 ) on Wednesday April 09, 2003 @11:13AM (#5693622)
    I have drastically cut back on my tech book purchases in recent times but this book will definitely be on my shopping list. The First edition offered many insights into not only getting the best performance from Java but also solid guidelines for when and where to apply optimisations.
    As a side note, I would disagree about performance being an albatross for Java. Well-written Java code can be highly performant, just as poorly written code in ANY language can perform slowly. Many of the performance issues associated with Java come from inexperienced developers using inappropriate methods and objects.
  • New Title (Score:3, Funny)

    by borg05 ( 161991 ) on Wednesday April 09, 2003 @11:20AM (#5693655) Homepage
    Java Performance Tuning: A course in C programming
  • by rfischer ( 95276 ) on Wednesday April 09, 2003 @11:21AM (#5693660)
    there is a difference, you know.
  • by zipwow ( 1695 ) <zipwow@g[ ]l.com ['mai' in gap]> on Wednesday April 09, 2003 @11:24AM (#5693669) Homepage Journal
    The bn.com link is broken for me, here's the correct ISBN:

    0596003773

  • by Anonymous Coward on Wednesday April 09, 2003 @11:28AM (#5693693)
    Each String is around 64 bytes of memory minimum. What a stupid decision to make such a fundamental data type so heavyweight.
    • ... because no one will ever need more than 64 bytes.
  • I have noticed my Java programs run considerably faster under the Sun Forte/One IDE. Once the Java app is on its own (especially through a browser), it slows considerably. Does anyone else have experience with this phenomenon?
  • Process (Score:3, Insightful)

    by spakka ( 606417 ) on Wednesday April 09, 2003 @11:28AM (#5693697)

    The book starts off with three chapters of sage advice about the tools and process of profiling/tuning. Before you spend any time profiling, you have to have a process and a goal. Without setting goals, the tuning process will never end and it will likely never be successful.

    No, you have to profile first. Profiling will tell you whether there is even any point in tuning, and, if so, what goals are reasonable.

    • Re:Process (Score:2, Insightful)

      by egoots ( 557276 )

      No, you have to profile first. Profiling will tell you whether there is even any point in tuning, and, if so, what goals are reasonable.

      It's a classic chicken and egg conundrum... If your program meets your performance requirements, why spend time profiling in the first place (but perhaps this is always necessary with Java apps).

      I still believe that premature optimization is way too prevalent, unnecessary, and problematic. I recommend the following approach:

      Make the program function correctly first

      If it

  • by Anonymous Coward on Wednesday April 09, 2003 @11:29AM (#5693705)
    Java has performance troubles? I thought we were all supposed to deny that. Did I miss a memo or something?
  • Java doesn't cut it (Score:3, Interesting)

    by AirLace ( 86148 ) on Wednesday April 09, 2003 @11:43AM (#5693745)
    We ported some of our internal Java business applications to C# for use with Mono, and empirical results already suggest the solution is several times faster than the Java code. The port was very easy, with each line of Java code mapping onto one line of C# or less. Porting the UI to Gtk# was more difficult, but we find the Gtk# code more maintainable and the UI, along with the Gtk+ WIMP [sourceforge.net] plugin, integrates much more nicely with Windows than Swing. We'll be investigating a switch to Linux over the next few months for some of our point-of-sale terminals as a result, and it should be easy thanks to the portability of Mono and Gtk#.

    We also ported some of our backend tools for use with Mono. Used with the newly released Mono JIT runtime, Mini [ximian.com], we've achieved some truly stunning results. It turns out that some of the optimisations in the new JIT are better than those used by GCC, so once the code is loaded in memory, it performs better than raw C code. Although I don't yet have hard numbers to back up these results (the transition is still in progress), it has to be said that Mono is the real answer to Java performance. Being Open Source, we can also contribute back to the runtime to make it better suit our needs. It also plays nicely with RedHat 9's NPTL threading implementation, which is more than I can say for the current crop of Java JREs.

    • We ported some of our internal Java business applications to C# for use with Mono, and emperical results already suggest the solution is several times faster than the Java code.


      You could have saved yourself some porting by just compiling your java code with GCJ. GCJ allows you to compile your java byte code to native executables.


      Porting the UI to Gtk# was more difficult, but we find the Gtk# code more maintainable and the UI, along with the Gtk+ WIMP [sourceforge.net] plugin integrates much more nice
      • by AirLace ( 86148 ) on Wednesday April 09, 2003 @12:33PM (#5693951)
        You could have saved yourself some porting by just compiling your java code with GCJ. GCJ allows you to compile your java byte code to native executables.

        This might become an option in a few years, but the GNU classpath is as yet not complete enough for our years. We actually didn't find gcj output that performant, despite it being compiled to native code. The JRE still beat it in many cases.

        Use SWT with Java. SWT uses Windows native widgets on Windows or GTK on Linux.

        We also investigated this. SWT is a _horrendous_ API which offers very little abstraction. You end up writing your code once for the Gtk+ target, and again for the native Windows target. It isn't really a cross-platform abstraction like WxWindows, and it's probably the reason why the Eclipse codebase is so large. You end up writing your application for each UI target platform. Gtk# runs and integrates with the platform instead, so you only write your code once.

        Either your telling a big lie or dont have your facts straight. Unless you can show hard facts your not going to sway anyone into believing interpreted code outperformed compiled.

        I did mention the results are empirical, but they're also pretty obvious from where I stand. You don't need benchmarks when something performs, in some cases, eight times faster than the original implementation. I may well put together some benchmarks and post them to mono-list or linuxtoday.com. I don't have benchmarks yet; does that make me a liar? Sigh.

        What is exactly wrong with Java's use of native threads on Linux boxes?

        It's pointless to interface with the threads layer directly when pthreads exists. It makes the runtime essentially unportable to other unices/operating systems. Mono plays nicely with the environment, so the runtime can just be compiled on any POSIX-compliant system. Linux is great, but being attached to it so firmly that your application breaks when Linus changes some internal interfaces is not.
        • by AirLace ( 86148 )
          Sorry, that should be "for our uses".
        • We also investigated this. SWT is a _horrendous_ API which offers very little abstraction. You end up writing your code once for the Gtk+ target, and again for the native Windows target. It isn't really a cross-platform abstraction like WxWindows, and it's probably the reason why the Eclipse codebase is so large.

          I've seen the Eclipse codebase, and I'd like to hold you to an explanation. The only modules that are duplicated per platform are the SWT implementations and some minor stuff also tied to the plat
        • I've been using Eclipse and SWT for quite some time.

          We also investigated this. SWT is a _horrendous_ API which offers very little abstraction. You end up writing your code once for the Gtk+ target, and again for the native Windows target.

          Complete BS !!

          SWT offers a very high level of abstraction. If you want a still higher level of abstraction, then use the jface interface.

          I've written a filesystem tool for QFS (QNX file system) and it runs without a single line of modification on QNX, windows and s

          • OK, all you "geniuses" that moderated the parent's parent up to +4 (incorrectly)...mod the parent of this post to +5 informative. :-)

            While you're at it, the parent's parent (grandparent?) could use about 4 "overrated" mods.

            TIA!

        • We actually didn't find gcj output that performant, despite it being compiled to native code.

          Really? One numeric (i.e. HPC) benchmark using gcj which was investigated in detail here on Slashdot (almabench) [coyotegulch.com] was within a few percent of the best gcc time...not bad by most folk's standards. ;-)

          We also investigated this. SWT is a _horrendous_ API which offers very little abstraction. You end up writing your code once for the Gtk+ target, and again for the native Windows target. It isn't really a cross-platf

        • I see in the time it took me to post my last reply, the parent was modded up yet again, all the way to +5. What a travesty.

          I hope people don't think Slashdot moderation ensures accuracy, it certainly doesn't. :-P

        • Since other posters have already indicated that gcj /does/ lead to better performance, I think I have a cause for your performance increase beyond "Java sux":

          Re-implementation removed the bottleneck.

          What kind of profiling did you do against your original Java application? Where was the time being spent? I've worked on some pretty high-performance java applications, and have found them to be quite scalable.

          If you're talking about GUI responsiveness (not client/server or high processing interactions), th
          • Interestingly, the original AWT used components based on native ones for just this reason, but that turned out to be problematic.

            AWT was a fairly poorly designed library. As I recall, it was designed at Sun in something like two weeks.

            At any rate, SWT seems a much better job, also using native widgets.

      • "interpreted code outperformed compiled"

        All modern JREs have a JIT compiler, which compiles frequently used functions to native code. It is possible that the JIT compiler in the JRE is better than gcj.

        From a more general point of view, it is possible for JIT compilation to optimize better than ordinary compilation. This is because the JIT compiler has access to dynamic profiling information that is not available to the "normal" compiler (though you can feed profile information from benchmarks to some "nor
    • You certainly are banking on Microsoft remaining benevolent toward Mono. I'm not sure why it's so hard for people to see that MS has absolutely nothing to gain from Mono becoming popular and widely used, except that it will draw people into the real .NET platform, which only runs under Windows.

      Mono on Linux will always lag behind the Windows-based implementations from Microsoft, and Microsoft knows it. They've opened up the runtime, but not the class libraries that make the platform useful. The result is t
  • by MSBob ( 307239 ) on Wednesday April 09, 2003 @11:44AM (#5693751)
    There are certain design decisions that were made by the java team that limit java's performance in a number of ways. Lack of stack objects comes to mind and collections that cannot store basic types.

    That said for most network centric applications java is plenty fast. Now if we only stopped short of introducing the unbelievable overhead of XML's excessive verbosity...

    • Other things I think are major problems are inability to manage memory, lack of fundamental thread synchronization primitives and predictability, massive native (C) code impedance mismatches, and the enormous program (JVM) startup overhead.

      JDK 1.5 should help with the JVM startup and threading problems. I hope someday for the ability to manage memory efficiently, at least letting me repeatedly resurrect garbage objects and get some sort of guarantees about when collection will happen.

      Larry

  • idiots.; (Score:2, Interesting)

    Why do programming languages have to be an either/or situation? Everyone here assumes that anyone who programs in Java does not know C/C++... why is that? Can't someone know multiple programming languages? I know many (too many to really list here) and find it asinine that people really think that everyone should just program in one language.
    • Re:idiots.; (Score:5, Insightful)

      by iapetus ( 24050 ) on Wednesday April 09, 2003 @12:24PM (#5693914) Homepage
      Because the sort of people who like to get involved in discussions about whether C# is 'better' than Java or Java is 'better' than Perl or crunchy peanut butter is 'better' than textured masonry paint can't cope with more than one thing at a time, and tend to apply their religious zealotry with great vigour.

      Those of us who can program in more than one language and know that sometimes it's a matter of choosing the right tool for the job (peanut butter for sandwiches, masonry paint for walls) tend to go through three stages:

      1) Try to engage in such discussions on the premise that there's actual intelligent debate going on.

      2) Discover ourselves becoming violently opposed to whatever rant we're reading at the time, writing tracts about how Java sucks when we're reading the work of a Java fanatic and drooling about the glory of Java when faced with a C++-toting moron.

      3) Either give up in disgust and let the language fanboys get on with it, or sit on the sidelines and snipe at both sides - similar to stage 2, but more consciously applied. Normally that progresses towards giving up, though, since the zealots are just too easy and predictable...
  • by brianjcain ( 622084 ) on Wednesday April 09, 2003 @11:50AM (#5693764) Journal
    "Don't concatenate Strings, use a StringBuffer, it's more efficient."

    Perhaps it is more efficient. I say, let the compiler do it for me. Code like this:
    final String foo = frob + " noz " + baz.barCount()
        + " bars found";
    is much more readable/maintainable than
    StringBuffer fooBuff = new StringBuffer();
    fooBuff.append(frob);
    fooBuff.append(" noz ");
    fooBuff.append(baz.barCount());
    fooBuff.append(" bars found");
    final String foo = fooBuff.toString();
    • by Anonymous Coward
      That's not the kind of situation that's being talked about. In fact, that case is already done by the compiler.

      final String foo = frob + " noz " + baz.barCount()
      + " bars found";

      becomes

      final String foo = new StringBuffer(frob).append(" noz ").append(baz.barCount()).append(" bars found").toString();

      The problem is when people do things like:

      String s = "";
      while (hasMoreData()) s = s + nextCharacter();

      which becomes

      String s = "";
      while (hasMoreData()) s = new StringBuffer(s).append(nextCharacter()).toString();
    • by blamanj ( 253811 ) on Wednesday April 09, 2003 @12:43PM (#5694001)
      Actually, that's exactly what the compiler does. The problem occurs in cases like this:
      String foo = "";
      while (source.hasMoreTokens()) {
          foo += source.nextToken();
      }
      where you are creating and destroying a large number of strings. In this case, using a StringBuffer is far more efficient and doesn't really harm readability.

    • Totally agree. Unless you are doing bioinformatics routines or building multimegabyte docs, you don't need the string buffer. Might as well keep the server busy while you are waiting for that 30ms query to complete.
    • Not only is the first example more readable and better maintainable, but it's much more likely to be optimized in future versions of the JVM (or compiler in the example you point out).

      Even before string concatenations were optimized in Java, I used the plus operator. Everyone knew they would optimize it one day, and it really didn't slow anything down enough for anyone to notice (even timed tests had to be run into the millions to see a calculated performance difference in Windows).

      By writing around the
  • Who cares? (Score:5, Insightful)

    by Elgon ( 234306 ) on Wednesday April 09, 2003 @12:10PM (#5693865)
    Okay,

    flippant comment, but let's think about this for a second: the majority of the time the alleged efficiency advantage is small or, as is generally the case, a pointless optimisation. Java coders seem to have a major efficiency/speed hangup - they use it to lord it over scripting programmers, but they want/lack/desire the swiftness of C. (And yes, I do program in Java.)

    To my mind, this is approaching the problem from entirely the wrong direction: CPU time and CPU power are far cheaper than developer time and designer time. Therefore, rather than use some cobbled-together hack, use the standard implementations and take the performance hit.

    This will be cheaper, probably 95% as efficient and, most importantly, be 195% easier to maintain or change at a later date. Consider the big picture rather than a single aspect.

    NB - YMMV, for certain apps, it really does make sense to break all of the above ideas and principles, but if you REALLY need it to run that fast, you should be using C anyway.

    Elgon
    • Please someone mod this up. I've been trying to say this for years, and perhaps never as eloquently. When you are managing 100K LOC (that's lines of code not library of congress), managers don't care that our programs run three seconds slower, but that they are saving heaps of money in development and maintenance.
    • Re:Who cares? (Score:4, Insightful)

      by Jord ( 547813 ) on Wednesday April 09, 2003 @12:25PM (#5693918)
      True, going back and fine-tuning to gain a 2% speed increase (for example) is a waste of time. However, the value I see in books like this is in training/teaching the developer to write more efficient code the first go around. If you get out of the habit of doing String + String + String and use StringBuffers instead, your code is more efficient from the beginning.

      That is the value I see from books like this.

      • Re:Who cares? (Score:4, Informative)

        by Randolpho ( 628485 ) on Wednesday April 09, 2003 @12:49PM (#5694038) Homepage Journal
        Perhaps you should pick your examples better. Here's an excerpt from the StringBuffer JavaDoc:
        String buffers are used by the compiler to implement the binary string concatenation operator +. For example, the code:
        x = "a" + 4 + "c"
        is compiled to the equivalent of:
        x = new StringBuffer().append( "a" ).append( 4 ).append( "c" ).toString()
        Granted, people should get in the habit of coding optimizations automatically, but in this single-expression case it's actually more efficient to write String + String + String: it compiles to the same thing, takes less time to code than typing the method calls, and is easier to read/understand.

        Which just brings me to my biggest beef about Java: no syntactic sugar. Operator overloading should be a part of Java, and bugger whatever the purists say. I want to save time typing dammit! :)
        • Re:Who cares? (Score:3, Interesting)

          by evilpenguin ( 18720 )
          I used to side with the purists. I've seen operator overloading so badly abused in some C++ programs that it is terrible.

          But I recently had to write a Java program that did financial calculations (more rare, even in business software, than you might think). You don't want to use floating point (for all the classic reasons), and, in this case, you don't want to use integers because you need power functions for interest calculations and so forth.

          The classic solution appears to be to use the BigDecimal cla
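          A minimal sketch of the BigDecimal approach being described (the figures, rate, and loop are hypothetical; note that 1.4-era BigDecimal has no pow() method, so compounding here is done by repeated multiplication, which is part of the awkwardness the poster is complaining about):

```java
import java.math.BigDecimal;

public class InterestDemo {
    public static void main(String[] args) {
        // Hypothetical figures, purely for illustration
        BigDecimal balance = new BigDecimal("1000.00");
        BigDecimal monthlyFactor = new BigDecimal("1.005"); // 0.5% per month

        // No pow() before Java 5, so compound by repeated multiplication;
        // every step is exact decimal arithmetic, unlike double
        for (int month = 0; month < 12; month++) {
            balance = balance.multiply(monthlyFactor);
        }
        // Round back to currency scale at the end
        balance = balance.setScale(2, BigDecimal.ROUND_HALF_UP);
        System.out.println(balance); // prints 1061.68
    }
}
```

          The verbosity of multiply/setScale chains for every expression is exactly where operator overloading would help.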
          • Java would take a huge leap forward in utility if you could just overload the =, +, -, *, and / operators
            Perhaps you should take a look at Jython [jython.org]. It's an implementation of Python in Java which allows you to execute Python scripts within Java code or to build Java classes with Python code. Since Python supports operator overloading and has a nice clean syntax it may well be helpful in your situation.
        • Re:Who cares? (Score:2, Insightful)

          by onash ( 599976 )
          And what I don't understand is how we can talk about Java OO-purists when primitive data types like integers aren't objects and need a wrapper to stick them in a container. That and working with Strings is what I don't like about Java.

          I'm pretty sure that a modern compiler should be able to optimize things like that easily by now. If Sun is holding on to old crap like that just because it's old, then Java is doomed to be replaced.

          but still I use Java because the IDEs like Eclipse and IntelliJ IDEA ar
        • The classes of the Java standard library are, by default, thread-safe. This means that all methods that could cause race conditions are synchronized. Unfortunately, unneeded synchronizations are a major performance hit (it depends on the thread implementation).

          So, whether you write s1 + s2 + s3 or rewrite this expression using a StringBuffer (which is, anyway, what the compiler does), you incur on most implementations a performance hit because the StringBuffer will be treated as if it could be shared bet

    • Re:Who cares? (Score:3, Interesting)

      by maraist ( 68387 )
      I'm unsatisfied by the idea that hardware is cheaper than developer time. Word Perfect allegedly attempted to make a version in Java, but scrapped it because of speed concerns.

      If your product just barely runs within an acceptable time-frame, then you are confronted with the probability that a given customer will agree with you. If a customer doesn't agree, then they will not use your product. Thus while you save money on developer time, you lose potential customers (or existing ones). Worse, late in the g
    • Therefore, rather than use some cobbled-together hack, use the standard implementations and take the performance hit.

      This will be cheaper, probably 95% as efficient and, most importantly, be 195% easier to maintain or change at a later date. Consider the big picture rather than a single aspect.

      Right on. Also, I know there are people out there building client-side Java apps that need blazing UI performance but I'd bet that 80% plus of the Java that gets written is server-side code that probably talks to

  • Albatrosses (Score:2, Interesting)

    by colin_zr ( 540279 )

    Want to kill the albatross?

    Ick.

    /. editors: please improve your literary references.

    The albatross doesn't need killing -- it's already dead. The albatross was hanging from the mariner's neck because he had killed it, and by doing so had brought bad luck upon his ship.

    Quoting from memory here, because I can't be bothered to go find my copy of the poem:

    "God save thee, ancient mariner,
    from the fiends that plague thee thus.
    Why lookst thou so?" "With my crossbow
    I shot the albatross!"

    ...

    Ah

  • isn't killing the albatross what got the ancient mariner into so much trouble?
  • by slagdogg ( 549983 ) on Wednesday April 09, 2003 @12:49PM (#5694043)
    I read the first edition of this book completely. There are some good tips for extracting a few percentage points of improved performance. However, nothing has as profound an impact as simply using a better VM ... for example, many of my applications saw 25%+ speed increases simply by switching from the 1.2.x series VM to the 1.3.x series VM. Java does a pretty good job as a language of encouraging best practices, i.e. the inclusion of a standard StringBuffer. Extreme optimization at the code level will always be limited given the high abstraction of the language. However, extreme optimization at the VM level is a very real thing, and it doesn't take a whole lot of effort for the Java programmer.
    • The virtue and problem with Java is that it's a virtual machine. A lot of operations that are primitive at the Java byte code level are eventually implemented natively (and differently!) for each different virtual machine. This means that your optimizations for VM X may turn out to be pessimizations for VM Y.

      Take threads, for instance - synchronizing primitives are cheap in a Java VM that fakes threads, more expensive in a uniprocessor machine with real threads, and still more expensive in a multi-proces
  • Java is plenty fast (Score:2, Informative)

    by ChrisRijk ( 1818 )
    I've just been testing with an FFT benchmark I have, where I have both a Java version and a C version. Using GCC 3.2 on Linux, I've yet to be able to build a faster binary than what Sun's 1.4.2 beta JVM can do. IBM's JVMs are generally best at this type of benchmark, though Sun's been catching up fast and has quite possibly passed them.

    Even with CPU specific optimisations, advanced compiler options etc, the Java version is 30-80% faster than GCC's binary. (this is on both AMD and Intel CPUs) To get anything faster
    • by be-fan ( 61476 ) on Wednesday April 09, 2003 @01:57PM (#5694654)
      The FFT benchmark is a very specific case. Once the JIT kicks in, it's not Java vs C++ anymore, it's the JVM optimizer vs the GCC one. Contrary to popular belief, the GCC optimizer is very good (check out benchmarks vs ICC at coyotegulch.com). However, the FFT benchmark is a case where the additional information available to the JIT optimizer allows it to outperform native code. The whole benchmark is so small, it probably even fits in cache, and doesn't really stress any of the performance pitfalls of the language itself. Now, if you have a larger application, that doesn't consist of a single inner loop, and meanders through a lot of varied code (i.e. most real applications), then the performance story will be very different. At that point, Java's performance faults (excessive bookkeeping overhead, object allocation/deallocation, overhead from the JVM, etc) come much more into play.
      • The FFT benchmark is a very specific case.

        Why? It's smaller than most code, but why does that inherently benefit Java?

        Once the JIT kicks in, it's not Java vs C++ anymore, it's the JVM optimizer vs the GCC one.

        That's the whole point. Unless you only care about programs where the entire execution time is a few seconds, the JVM optimisation time isn't going to be much of an issue.

        However, the FFT benchmark is a case where the additional information available to the JIT optimizer allows it to outperform
        • by be-fan ( 61476 )
          Why? It's smaller than most code, but why does that inherently benefit Java?
          >>>>>>
          The reason it inherently benefits Java is because of the characteristics of the Java language. First of all, it's a JIT language. Thus, if you have a tight inner loop, the JIT optimizer can optimize the hell out of it (even more so because it has access to runtime information that the static C++ optimizer does not) and just hand it over to the processor for execution. The JVM isn't even executed again until the
    • Troll??? The parent post is informative and correct!

      Obviously moderated by some moron who thinks that Java is a purely-interpreted language and therefore "can't possibly be faster than C++". I have news for them: Java virtual machines have been compiling down to native code for about five years. GCJ wasn't very original or very clever, and there's no logical reason why it should necessarily produce faster code than a JVM, just because it does its compiling in one go. In fact, there's reason to believe tha

  • by snatchitup ( 466222 ) on Wednesday April 09, 2003 @01:34PM (#5694423) Homepage Journal
    The bottleneck in our applications is not how fast whatever server-side language we use runs, and I imagine this is similar in most IT shops.

    Our bottleneck is how fast we can execute lots and lots of stored procedures in our SQL Server and Oracle databases.

    It really hasn't mattered if one of our coders has been terminating loops via try{}catch{}, or ending on a condition.

    The most important thing has been, "Does each line, each method, each class do what it's actually supposed to do?"

    Our bottlenecks have always been flow back and forth between different systems, including Lotus Domino, Oracle, MS SQL Server, Websphere, etc. etc.

    Java is a small player in all this... C++, C#, Fortran, Lisp would not speed this up for us.
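    For anyone who hasn't seen it, the try/catch loop-termination trick mentioned above looks like this sketch (an old folk optimization; the data and sums are made up, and the condition-based loop is what anyone should actually write):

```java
public class LoopDemo {
    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5};

        // Folk optimization: run off the end of the array and let the VM's
        // own bounds check terminate the loop
        int sum = 0;
        try {
            for (int i = 0; ; i++) {
                sum += data[i];
            }
        } catch (ArrayIndexOutOfBoundsException expected) {
            // loop is done
        }

        // The conventional version: terminate on a condition
        int sum2 = 0;
        for (int i = 0; i < data.length; i++) {
            sum2 += data[i];
        }

        System.out.println(sum + " " + sum2); // prints 15 15
    }
}
```

    Both produce the same result, which is the poster's point: correctness, not this kind of micro-trickery, is what matters when the real bottleneck is elsewhere.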

  • by Lardmonster ( 302990 ) on Wednesday April 09, 2003 @03:54PM (#5696070)
    So people write their own versions of linked lists and strings, to get up to 15% performance improvement. Well whoopie-fsck.

    How much does that extra development time cost?

    Writing one's own java.lang.String takes time. Writing routines to convert com.donkeybollocks.String to java.lang.String and back again takes time. Supporting it takes time. And time is money. Me, I'd rather spend an extra £100 on a faster processor, or a GB of RAM, and take a 25% performance improvement.

    Come on guys, one of the major wins of the OO methodology is code reuse. Time was when programmers would always have to write their own I/O routines - I thought those days were long gone. Rewriting fundamental parts of the Java API is just plain silly, unless it has a bug or a serious limitation (e.g., it's non-threadsafe).

  • by jemenake ( 595948 ) on Wednesday April 09, 2003 @07:28PM (#5697752)
    Performance has been the albatross around Java's neck for a long time.
    Every time some C/C++ snob snipes Java for being slow, this is what I tell them:

    When I write a Java program... if it's too slow today, then, in time, the problem will go away without any more effort on the part of the programmer. In a year from now, we'll certainly have faster computers, which will make up for any speed problems.

    On the other hand...

    A year from now, we will almost certainly not have CPUs that are suddenly immune from dangling pointers and memory leaks.

    In other words, there are no plausible, near-future-foreseeable advancements in computing hardware that could fix the worst problems of C/C++. Meanwhile, the near-future advancements in hardware are almost guaranteed to fix Java's worst problem.

    The same holds true for doing your computing today... regardless of what hardware is available a year from now. Personally, I'd rather have a slow program that could keep running than one that was really fast, but crashed before I could save my work.
