Java Performance Tuning, 2nd Edition
author | Jack Shirazi |
pages | 570 |
publisher | O'Reilly and Associates |
rating | 9/10 |
reviewer | cpfeifer |
ISBN | 096003773 |
summary | It's the most up to date publication dealing specifically with performance of Java applications, and is a one of a kind resource. |
Every developer has written a microbenchmark (a bit of code that does something 100-1000 times in a tight loop and measures the time the supposed "expensive operation" takes) to try to prove an argument about which way is "more efficient" based on execution time. The problem is that when running in a dynamic, managed environment like the 1.4.x JVM, there are more factors you don't control than ones you do, and it can be difficult to say whether one piece of code will be "more efficient" than another without testing actual usage patterns. The second edition of Java Performance Tuning provides substantial benchmarks (not just simple microbenchmarks) with thorough coverage of the JDK, including loops, exceptions, strings, threading, and even the underlying JVM improvements in the 1.4 VM. This book is one of a kind in its scope and completeness.
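The point about uncontrolled factors is easy to demonstrate. Here is a minimal sketch (not from the book) of the classic microbenchmark trap on a JIT VM: the first timing of a method includes compilation and warm-up cost, so running the identical code twice can report very different numbers.

```java
// Naive microbenchmark sketch: the same loop, timed twice. On a JIT VM the
// first ("cold") run includes interpretation and compilation overhead; the
// second ("warm") run measures something closer to steady-state performance.
// Method and class names are illustrative only.
public class NaiveBenchmark {
    static long timeConcat(int iterations) {
        long start = System.nanoTime();
        String s = "";
        for (int i = 0; i < iterations; i++) {
            s = s + 'x';              // the "expensive operation" under test
        }
        long elapsed = System.nanoTime() - start;
        if (s.length() != iterations) throw new AssertionError("bad result");
        return elapsed;
    }

    public static void main(String[] args) {
        long cold = timeConcat(1000);  // includes JIT warm-up cost
        long warm = timeConcat(1000);  // identical code, often much faster
        System.out.println("cold=" + cold + "ns warm=" + warm + "ns");
    }
}
```

The two printed numbers routinely differ by a large factor, which is exactly why a single tight-loop timing proves little about real usage patterns.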
The Gory Details
The best part of this book is that it not only tells you how fast various standard Java operations are (sorting strings, dealing with exceptions, etc.), but the author has kept all of the timing information from the previous edition. This shows how the VM's performance has changed from version 1.1.8 up to 1.4.0, and it's very clear that things are getting better. The author also breaks out the timing information for three different flavors of the 1.4.0 JVM: mixed interpreted/compiled mode (standard), server (with HotSpot), and interpreted mode only (no run-time optimization applied).
Part 1 : Lies, Damn Lies and Statistics
The book starts off with three chapters of sage advice about the tools and process of profiling/tuning. Before you spend any time profiling, you have to have a process and a goal. Without setting goals, the tuning process will never end and it will likely never be successful.
The author outlines a general strategy that will give you a great starting point for your tuning task forces. Chapter 2 presents the profiling facilities that are available in the Java VM and how to interpret the results, while chapter 3 covers VM optimizations (different garbage collectors, memory allocation options) and compiler optimizations.
Part 2 : The Basics
Chapters 4-9 cover the nuts-and-bolts, code-level optimizations that you can implement. Chapter 4 discusses various object allocation tweaks including lazy initialization, canonicalizing objects, and how to use the different types of references (Phantom, Soft, and Weak) to implement priority object pooling. Chapter 5 tells you more about handling Strings in Java than you ever wanted to know. Converting numbers (floats, decimals, etc.) to Strings efficiently, string matching -- it's all here in gory detail with timings and sample code.
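As a rough illustration of the reference-based pooling idea that chapter covers, here is a minimal soft-reference cache sketch. It is not the book's code, and it uses the modern java.util.function API for the loader; the point is that cached values stay reachable until the collector clears them under memory pressure.

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of a soft-reference cache (illustrative, not the book's code).
// The GC may clear a SoftReference when memory runs low, so get() must be
// prepared to recompute a value that has been collected.
public class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<>();

    public V get(K key, Function<K, V> loader) {
        SoftReference<V> ref = map.get(key);
        V value = (ref == null) ? null : ref.get(); // null if collected
        if (value == null) {
            value = loader.apply(key);              // recompute on miss
            map.put(key, new SoftReference<>(value));
        }
        return value;
    }
}
```

Weak references give the same shape with more eager collection, and phantom references let a pool be notified when an object is truly unreachable.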
This chapter also shows the author's depth and maturity; when presenting his algorithm to convert integers to Strings, he notes that while his implementation previously beat the pants off of Sun's, in 1.3.1/1.4.0 Sun implemented a change that now beats his code. He analyzes the new implementation and discusses why it's faster, without losing face. That is just one of many gems in this updated edition. Chapter 6 covers the cost of throwing and catching exceptions, passing parameters to methods, and accessing variables of different scopes (instance vs. local) and different types (scalar vs. array). Chapter 7 covers loop optimization with a Java bent. The author offers proof that an exception-terminated loop, while bad programming style, can offer better performance than more accepted practices.
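The exception-terminated loop the author benchmarks can be sketched as follows (an illustration, and, as he notes, bad style): the bounds check disappears from the loop body because the out-of-bounds access itself ends the iteration.

```java
// Two ways to sum an array (illustrative sketch). The second abuses the
// ArrayIndexOutOfBoundsException that the VM must raise anyway, removing the
// explicit i < a.length test from the loop body -- bad style, but it is the
// technique the book measures.
public class ExceptionLoop {
    static long sumWithBoundsCheck(int[] a) {
        long sum = 0;
        for (int i = 0; i < a.length; i++) sum += a[i];
        return sum;
    }

    static long sumExceptionTerminated(int[] a) {
        long sum = 0;
        try {
            for (int i = 0; ; i++) sum += a[i]; // runs off the end...
        } catch (ArrayIndexOutOfBoundsException end) {
            return sum;                         // ...and the catch ends it
        }
    }
}
```

Both methods return the same result; only the loop-termination mechanism differs.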
Chapter 8 covers IO, focusing on using the proper flavor of java.io class (stream vs. reader, buffered vs. unbuffered) to achieve the best performance for a given situation. The author also covers performance issues with object serialization (used under the hood in most Java distributed computing mechanisms) in detail and wraps up the chapter with a 12-page discussion of how best to use the "new IO" package (java.nio) introduced with Java 1.4. Sadly, the author doesn't offer a detailed timing comparison of the 1.4 NIO API against the existing IO API. Chapter 9 covers Java's native sorting implementations and how to extend their framework for your specific application.
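The buffered-vs-unbuffered point can be illustrated with a sketch (not from the book): wrapping a stream in BufferedInputStream turns most read() calls into cheap array accesses instead of calls down to the underlying source.

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Byte-at-a-time reading, with and without buffering (illustrative sketch).
// Unbuffered, every read() goes to the underlying stream; buffered, reads are
// served from an internal buffer that is refilled in large chunks.
public class BufferedCopy {
    static int countBytes(InputStream in) throws IOException {
        int count = 0;
        while (in.read() != -1) count++;  // one call per byte
        return count;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[10_000];
        int unbuffered = countBytes(new ByteArrayInputStream(data));
        int buffered = countBytes(
                new BufferedInputStream(new ByteArrayInputStream(data)));
        System.out.println(unbuffered + " bytes / " + buffered + " bytes");
    }
}
```

With a real file or socket stream rather than an in-memory array, the buffered version avoids a system call per byte, which is where the large timing differences the chapter reports come from.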
PART 3 : Threads, Distributed Computing and Other Topics
Chapters 10-14 cover a grab bag of topics, including threading, proper Collections use, distributed computing paradigms, and an optimization primer that covers full life-cycle approaches to optimization. Chapter 10 does a great job of presenting threading, common threading pitfalls (deadlocks, race conditions), and how to solve them for optimal performance (e.g. proper scope of locks, etc.).
Chapter 11 provides a wonderful discussion about one of the most powerful parts of the JDK, the Collections API. It includes detailed timings of using ArrayList vs. LinkedList when traversing and building collections. To close the chapter, the author discusses different object caching implementations and their individual performance results.
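The ArrayList-vs-LinkedList distinction the chapter times comes down to access pattern. As a sketch (assumed, not the book's benchmark code): indexed get(i) is constant-time on an ArrayList but linear on a LinkedList, so an indexed traversal of a LinkedList is quadratic overall, while an iterator traversal is linear on both.

```java
import java.util.List;

// Two traversal styles (illustrative sketch). sumByIndex is O(n) on an
// ArrayList but O(n^2) on a LinkedList, because each get(i) must walk the
// list from an end. sumByIterator is O(n) on both implementations.
public class ListTraversal {
    static long sumByIndex(List<Integer> list) {
        long sum = 0;
        for (int i = 0; i < list.size(); i++) sum += list.get(i);
        return sum;
    }

    static long sumByIterator(List<Integer> list) {
        long sum = 0;
        for (int n : list) sum += n;  // uses the list's own Iterator
        return sum;
    }
}
```

This is why the book's timings differ so sharply between the two collection classes even though the code using them looks identical.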
Chapter 12 gives some general optimization principles (with code samples) for speeding up distributed computing including techniques to minimize the amount of data transferred along with some more practical advice for designing web services and using JDBC.
Chapter 13 deals specifically with designing/architecting applications for performance. It discusses how performance should be addressed in each phase of the development cycle (analysis, design, development, deployment), and offers tips and a checklist for your performance initiatives. The puzzling thing about this chapter is why it is presented at the end of the book instead of towards the front, with all of the other process-related material. It makes much more sense to put this material together up front.
Chapter 14 covers various hardware and network aspects that can impact application performance including: network topology, DNS lookups, and machine specs (CPU speed, RAM, disk).
PART 4 : J2EE Performance
Chapters 15-18 deal with performance specifically in the J2EE APIs: EJBs, JDBC, Servlets and JSPs. These chapters are essentially tips or suggested patterns (use coarse-grained EJBs, apply the Value Object pattern, etc.) rather than the very low-level performance tips and metrics provided in earlier chapters. You could say that the author is getting lazy, but the truth is that due to the huge number of appserver/database vendor combinations, it would be very difficult to establish a meaningful performance baseline without a large testbed.
Chapter 15 is a reiteration of Chapter 1, Tuning Strategy, re-tooled with a J2EE focus. The author reiterates that a good testing strategy determines what to measure, how to measure it, and what the expectations are. From here, the author presents possible solutions including load balancing. This chapter also contains about 1.5 pages about tuning JMS, which seems to have been added to be J2EE 1.3 acronym compliant.
Chapter 16 provides excellent information about JDBC performance strategies. The author presents a proxy implementation to capture accurate profiling data and minimize changes to your code once the profiling effort is over. The author also covers data caching, batch processing and how the different transaction levels can affect JDBC performance.
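The proxy technique described can be sketched with java.lang.reflect.Proxy (this is an illustration of the general idea, not the author's implementation): an invocation handler times every call on a wrapped interface, so profiling can be bolted on and later removed without touching the calling code.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Sketch of a timing proxy (illustrative). Any interface-typed object can be
// wrapped; every call is delegated to the real target while the handler
// records how long the call took.
public class TimingProxy implements InvocationHandler {
    private final Object target;
    public volatile long lastCallNanos = -1;  // duration of the most recent call

    public TimingProxy(Object target) { this.target = target; }

    public Object proxyFor(Class<?> iface) {
        return Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] { iface }, this);
    }

    @Override
    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        long start = System.nanoTime();
        try {
            return m.invoke(target, args);             // delegate to the target
        } finally {
            lastCallNanos = System.nanoTime() - start; // record elapsed time
        }
    }
}
```

For JDBC specifically, the same pattern wraps Connection, Statement, and ResultSet so that per-query timings can be captured in one place.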
Chapter 17 covers JSPs and servlets, with very little earth-shattering information. The author presents tips such as considering GZipping the content before returning it to the client, and minimizing custom tags. This chapter is easily the weakest section of the book: admittedly, it's difficult to optimize JSPs since much of the actual running code is produced by the interpreter/compiler, but this chapter either needs to be beefed up or dropped from future editions.
Finally, chapter 18 provides a design/architecture-time approach towards EJB performance. The author presents standard EJB patterns that lend themselves towards squeezing greater performance out of the often-maligned EJB. The patterns include: data access object, page iterator, service locator, message facade, and others. Again, there's nothing earth-shattering in this chapter. Chapter 19 is a list of resources with links to articles, books and profiling/optimizing projects and products.
What's Bad?
Since the book has been published, the 1.4.1 VM has been released with the much anticipated concurrent garbage collector. The author mentions that he received an early version of 1.4.1 from Sun to test with. However, the text doesn't state that he used the concurrent garbage collector, so the performance of this new feature isn't indicated by this text.
The J2EE performance chapters aren't as strong as the J2SE chapters. After seeing the statistics and extensive code samples of the J2SE sections, I expected a similar treatment for J2EE. Many of the J2SE performance practices still apply to J2EE (serialization most notably, since that is how EJB, JMS, and RMI ship method parameters/results across the wire), but it would be useful to fortify these chapters with actual performance metrics.
So What's In It For Me?
This book is indispensable for the architect drafting the performance requirements/testing process, and contains sage advice for the programmer as well. It's the most up to date publication dealing specifically with performance of Java applications, and is a one-of-a-kind resource.
You can purchase Java Performance Tuning, 2nd Edition from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
Isn't this the compiler's job? (Score:5, Insightful)
I've often found that with bytecode languages (Java, C#...) the bytecode instructions are designed for the language so that the compiler can emit them easily, but they seem to overlook the sorts of optimizations that C compilers, for example, work hard to implement.
Re:Isn't this the compiler's job? (Score:2)
In either case, I think the analysis you're asking for is the kind that's easy for a human to do, but harder for an algorithm to do automatically. I don't know what's involved, and it could probably be done, but proving that these transformations are safe is no mean feat.
Re:Isn't this the compiler's job? (Score:5, Informative)
Java (and other bytecode languages) were designed to run well not just on a single platform, but on a variety of platforms. So as a trade-off, you lose environment-specific optimizations at compile time.
JIT JRE/compilers can work to prevent this. They can further optimize the bytecodes at execution time because they are platform specific.
Re:Isn't this the compiler's job? (Score:5, Insightful)
If all these performance hacks are documented, why doesn't the compiler implement them?
The most common reason is that most performance hacks and optimizations are not decidable, and you want a compiler to implement only decidable algorithms because those are the ones that let a compiler be deterministic. It is usually much easier for a person to determine what can be done than it is for a machine to determine the exact same thing.
Consider the following piece of code.
boolean f(int[] a, int[] b)
{
    int x = a[0];        // read a[0] before the store
    b[0] = a[0] + 2;     // if a and b alias, this write also changes a[0]
    int y = a[0];        // read a[0] again after the store
    return (x == y);     // true only if the store could not affect a[0]
}
Does f always return true? Only if we can prove that a and b never point to the same array. A person may be able to do this, but a machine would have great difficulty (assuming the machine could do it at all).
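Exercising that example makes the aliasing hazard concrete: with distinct arrays f returns true, but when both parameters refer to the same array, the write through b changes a[0] and f returns false. That is exactly the case the compiler cannot rule out statically.

```java
// The aliasing example above, packaged so the two cases can be run.
public class AliasDemo {
    static boolean f(int[] a, int[] b) {
        int x = a[0];
        b[0] = a[0] + 2;   // if a == b, this also rewrites a[0]
        int y = a[0];
        return (x == y);
    }
}
```

Calling f with two separate arrays yields true; calling it with the same array passed twice yields false.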
So to summarize, compilers don't implement many optimization hacks because then they might not be deterministic, and that is a bad thing.
Re:Isn't this the compiler's job? (Score:2)
I'd be happy with cache prefetch hints in both Java and C#, and portable hinting in C++ (you can do it with ugly macros)
Re:Isn't this the compiler's job? (Score:3, Interesting)
Naturally, the programmer might see that since both array parameters a and b point to the same array, that this is not really a possible optimization. This realization by the programmer i
Re:Isn't this the compiler's job? (Score:4, Informative)
When a program thrashes strings around, why doesn't the compiler detect that, and switch to a string buffer object to perform those operations, and then convert the final result back to a string?
Re:Isn't this the compiler's job? (Score:2, Insightful)
1. Tricky to implement properly
2. Rude
However, emitting a warning when the compiler thinks you could do a better job with a StringBuffer might be nice.
Re:Isn't this the compiler's job? (Score:2)
I don't mean it to swap the object out, just use a temporary object for the operation.
When you say
string foo = "Hello, ";
foo += "World!";
You just want foo to hold the two strings concatenated, you don't care how the compiler did it. The compiler could create a temp stringbuffer object there for that operation, converting back to string and assigning to foo.
I don't see how that's rude, because the programmer wouldn't even know that it'
Re:Isn't this the compiler's job? (Score:2, Insightful)
In this case, the compiler does the following steps:
1. Create new string "foo" with the initial value (this may already exist). ..."
2. Turn foo into a StringBuffer and append "When
3. Turn foo back into a String.
4. Turn foo into a StringBuffer and append "Or..."
5. Turn foo back into a String.
So you can see this is not
Re:Isn't this the compiler's job? (Score:2)
String/StringBuffer (Score:5, Informative)
For concatenating two strings, the concat() method can be faster than using StringBuffer, since it only needs to create a new char[] and do a (fast) arraycopy from the two internal arrays.
Also, everyone should be aware of the 1.4.1 memory leak associated with using StringBuffer's toString() and setLength() methods.
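The two routes side by side, as a sketch: for joining exactly two strings, concat() allocates one char[] and copies both operands into it, while the StringBuffer route allocates a buffer, possibly grows it, and copies again in toString().

```java
// Two ways to join exactly two strings (illustrative sketch).
public class ConcatDemo {
    static String viaConcat(String a, String b) {
        return a.concat(b);                       // one array, two arraycopies
    }

    static String viaBuffer(String a, String b) {
        return new StringBuffer(a)
                .append(b)
                .toString();                      // buffer plus a final copy
    }
}
```

Both produce the same result; the difference is purely in how many intermediate arrays get allocated and copied.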
Re:String/StringBuffer (Score:3, Interesting)
It does under the hood whenever you use + for concatenation; this is why using String + String in a loop is ineffective: You create a new StringBuffer object per iteration. The solution in this case is to declare the StringBuffer outside the loop and use append() explicitly within.
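That fix looks like this in a sketch (names are illustrative): hoist one buffer out of the loop instead of letting each `s = s + piece` allocate a fresh StringBuffer and String per iteration.

```java
// Loop concatenation, slow and fast (illustrative sketch). joinSlow compiles
// to a new StringBuffer and a new String on every pass; joinFast reuses one
// buffer and converts to a String once at the end.
public class LoopConcat {
    static String joinSlow(String[] parts) {
        String s = "";
        for (String p : parts) s = s + p;     // new buffer + String per pass
        return s;
    }

    static String joinFast(String[] parts) {
        StringBuffer buf = new StringBuffer();
        for (String p : parts) buf.append(p); // one buffer for the whole loop
        return buf.toString();
    }
}
```

Both return the same string; only the allocation behavior inside the loop differs.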
I think you're missing the point the parent was trying to make. Aren't there much bigger things to worry about than writing around bugs in the compiler?
Two years ago, everyone used StringBuffer.. today, everyone knows the us
Re:Isn't this the compiler's job? (Score:4, Interesting)
The optimizations the book proposes are all hit-or-miss adventures. Even for a programmer with intimate knowledge of the code, it is sometimes difficult to predict whether a change will help or impair performance. The compiler has even less chance of doing so correctly -- and nobody likes a compiler that slows down their code trying to optimize it.
Definite purchase (Score:3, Informative)
As a side note, I would disagree about performance being an albatross for Java. Well-written Java code can perform very well, just as poorly written code in ANY language can perform slowly. Many of the performance issues associated with Java come from inexperienced developers using inappropriate methods and objects.
New Title (Score:3, Funny)
Re:New Title (Score:2, Funny)
Writing complex enterprise systems that don't get cancelled, because they don't perform and are written by former Taxi drivers turned Java programmers: A course in C++ programming
Is this a review or a synopsis? (Score:3, Insightful)
Correct ISBN is 0596003773 (Score:5, Informative)
0596003773
Re:Correct ISBN is 0596003773 (Score:2)
Re:Correct ISBN is 0596003773 (Score:2)
Java Strings are the main problem (Score:3, Interesting)
Re:Java Strings are the main problem (Score:3, Funny)
Re:New math? (Score:3, Insightful)
Re:New math? (Score:3, Informative)
Java performance better in the Sun IDE? (Score:3, Interesting)
Re:Java performance better in the Sun IDE? (Score:2)
Re:Java performance better in the Sun IDE? (Score:2)
Process (Score:3, Insightful)
The book starts off with three chapters of sage advice about the tools and process of profiling/tuning. Before you spend any time profiling, you have to have a process and a goal. Without setting goals, the tuning process will never end and it will likely never be successful.
No, you have to profile first. Profiling will tell you whether there is even any point in tuning, and, if so, what goals are reasonable.
Re:Process (Score:2, Insightful)
No, you have to profile first. Profiling will tell you whether there is even any point in tuning, and, if so, what goals are reasonable.
It's a classic chicken and egg conundrum... If your program meets your performance requirements, why spend time profiling in the first place (but perhaps this is always necessary with Java apps).
I still believe that premature optimization is way too prevalent, unnecessary, and problematic. I recommend the following approach:
Make the program function correctly first
If it
What performance issues? (Score:5, Funny)
Java doesn't cut it (Score:3, Interesting)
We also ported some of our backend tools for use with Mono. In use with the newly released Mono JIT runtime, Mini [ximian.com], we've achieved some truly stunning results. It turns out that some of the optimisations in the new JIT are better than those used by GCC, so once the code is loaded in memory, it performs better than raw C code. Although I don't yet have hard numbers to back up these results (the transition is still in progress), it has to be said that Mono is the real answer to Java performance. Being Open Source, we can also contribute back to the runtime to make it better suit our needs. It also plays nicely with RedHat 9's NPTL threading implementation, which is more than I can say for the current crop of Java JREs.
Re:Java doesn't cut it (Score:3, Interesting)
We ported some of our internal Java business applications to C# for use with Mono, and empirical results already suggest the solution is several times faster than the Java code.
You could have saved yourself some porting by just compiling your java code with GCJ. GCJ allows you to compile your java byte code to native executables.
Porting the UI to Gtk# was more difficult, but we find the Gtk# code more maintainable and the UI, along with the Gtk+ WIMP [sourceforge.net] plugin integrates much more nice
Re:Java doesn't cut it (Score:5, Insightful)
This might become an option in a few years, but the GNU Classpath is as yet not complete enough for our needs. We actually didn't find gcj output that performant, despite its being compiled to native code. The JRE still beat it in many cases.
Use SWT with Java. SWT uses Windows native widgets on Windows or GTK on Linux.
We also investigated this. SWT is a _horrendous_ API which offers very little abstraction. You end up writing your code once for the Gtk+ target, and again for the native Windows target. It isn't really a cross-platform abstraction like WxWindows, and it's probably the reason why the Eclipse codebase is so large. You end up writing your application for each UI target platform. Gtk# runs and integrates with the platform instead, so you only write your code once.
Either you're telling a big lie or don't have your facts straight. Unless you can show hard facts, you're not going to sway anyone into believing interpreted code outperformed compiled code.
I did mention the results are empirical, but they're also pretty obvious from where I stand. You don't need benchmarks when something performs, in some cases, eight times faster than the original implementation. I may well put together some benchmarks and post them to mono-list or linuxtoday.com. I don't have benchmarks yet; does that make me a liar? Sigh.
What is exactly wrong with Java's use of native threads on Linux boxes?
It's pointless to interface with the threads layer directly when pthreads exists. It makes the runtime essentially unportable to other unices/operating systems. Mono plays nicely with the environment, so the runtime can just be compiled on any POSIX-compliant system. Linux is great, but being attached to it so firmly that your application breaks when Linus changes some internal interfaces is not.
Re:Java doesn't cut it (Score:3, Insightful)
Re:Java doesn't cut it (Score:2)
I've seen the Eclipse codebase, and I'd like to hold you to an explanation. The only modules that are duplicated per platform are the SWT implementations and some minor stuff also tied to the plat
Re:Java doesn't cut it (Score:3, Insightful)
Complete BS !!
SWT offers a very high level of abstraction. If you want a still higher level of abstraction, then use the jface interface.
I've written a filesystem tool for QFS (QNX file system) and it runs without a single line of modification on QNX, windows and s
Re:Java doesn't cut it (Score:2)
While you're at it, the parent's parent (grandparent?) could use about 4 "overrated" mods.
TIA!
Re:Java doesn't cut it (Score:2)
Really? One numeric (i.e. HPC) benchmark using gcj which was investigated in detail here on Slashdot (almabench) [coyotegulch.com] was within a few percent of the best gcc time...not bad by most folks' standards. ;-)
We also investigated this. SWT is a _horrendous_ API which offers very little abstraction. You end up writing your code once for the Gtk+ target, and again for the native Windows target. It isn't really a cross-platf
Re:Java doesn't cut it (Score:2)
I hope people don't think Slashdot moderation ensures accuracy, it certainly doesn't. :-P
Another explanation for performance increases (Score:3, Insightful)
Re-implementation removed the bottleneck.
What kind of profiling did you do against your original Java application? Where was the time being spent? I've worked on some pretty high-performance java applications, and have found them to be quite scalable.
If you're talking about GUI responsiveness (not client/server or high processing interactions), th
Re:Another explanation for performance increases (Score:2)
AWT was a fairly poorly designed library. As I recall, it was designed at Sun in something like two weeks.
At any rate, SWT seems to do a much better job, also using native widgets.
just-in-time compilation (Score:2)
All modern JREs have a JIT compiler, which compiles frequently used functions to native code. It is possible that the JIT compiler in the JRE is better than gcj.
From a more general point of view, it is possible for JIT compilation to optimize better than ordinary compilation. This is because the JIT compiler has access to dynamic profiling information that is not available to the "normal" compiler (though you can feed profile information from benchmarks to some "nor
Re:Java doesn't cut it (Score:2)
Mono on Linux will always lag behind the Windows-based implementations from Microsoft, and Microsoft knows it. They've opened up the runtime, but not the class libraries that make the platform useful. The result is t
Inherent performance issues (Score:3, Informative)
That said, for most network-centric applications Java is plenty fast. Now if only we stopped short of introducing the unbelievable overhead of XML's excessive verbosity...
Re:Inherent performance issues (Score:2)
Other things I think are major problems are inability to manage memory, lack of fundamental thread synchronization primitives and predictability, massive native (C) code impedance mismatches, and the enormous program (JVM) startup overhead.
JDK 1.5 should help with the JVM startup and threading problems. I hope someday for the ability to manage memory efficiently, at least letting me repeatedly resurrect garbage objects and get some sort of guarantees about when collection will happen.
Larry
idiots.; (Score:2, Interesting)
Re:idiots.; (Score:5, Insightful)
Those of us who can program in more than one language and know that sometimes it's a matter of choosing the right tool for the job (peanut butter for sandwiches, masonry paint for walls) tend to go through three stages:
1) Try to engage in such discussions on the premise that there's actual intelligent debate going on.
2) Discover ourselves becoming violently opposed to whatever rant we're reading at the time, writing tracts about how Java sucks when we're reading the work of a Java fanatic and drooling about the glory of Java when faced with a C++-toting moron.
3) Either give up in disgust and let the language fanboys get on with it, or sit on the sidelines and snipe at both sides - similar to stage 2, but more consciously applied. Normally that progresses towards giving up, though, since the zealots are just too easy and predictable...
More efficient != better (Score:3, Insightful)
Perhaps it is more efficient. I say, let the compiler do it for me. Code like this: is much more readable/maintainable than
Re:More efficient != better (Score:2, Insightful)
final String foo = frob + " noz " + baz.barCount()
+ " bars found";
becomes
final String foo = new StringBuffer (frob).append (" noz ").append (baz.barCount()).append (" bars found").toString ();
The problem is when people do things like:
String s = "";
while (hasMoreData ()) s = s + nextCharacter ();
which becomes
String s = "";
while (hasMoreData ()) s = new StringBuffer (s
Re:More efficient != better (Score:5, Informative)
Re:More efficient != better (Score:3, Informative)
Re:More efficient != better (Score:2)
Even before string concatenations were optimized in Java, I used the plus operator. Everyone knew they would optimize it one day, and it really didn't slow anything down enough for anyone to notice (even timed tests had to be run into the millions to see a calculated performance difference in Windows).
By writing around the
Re:The original post is wrong, anyway... (Score:2)
Rubbish. Concatenating uses stringbuffers. See this post [slashdot.org] for an example of when manually creating stringbuffers is more efficient.
Re:The original post is wrong, anyway... (Score:2)
But this is
Re:The original post is wrong, anyway... (Score:3, Informative)
Who cares? (Score:5, Insightful)
flippant comment but let's think about this for a second: The majority of the time the alleged efficiency advantage is small or, as is generally the case, a pointless optimisation. Java coders seem to have the major efficiency/speed hangup - they use it to lord it over scripting programmers but they want/lack/desire the swiftness of C. (And yes, I do program in Java.)
To my mind, this is approaching the problem from entirely the wrong direction: CPU time and CPU power are far cheaper than developer time and designer time. Therefore, rather than use some cobbled-together hack, use the standard implementations and take the performance hit.
This will be cheaper, probably 95% as efficient and, most importantly, be 195% easier to maintain or change at a later date. Consider the big picture rather than a single aspect.
NB - YMMV, for certain apps, it really does make sense to break all of the above ideas and principles, but if you REALLY need it to run that fast, you should be using C anyway.
Elgon
Re:Who cares? (Score:2)
Re:Who cares? (Score:4, Insightful)
That is the value I see from books like this.
Re:Who cares? (Score:4, Informative)
Which just brings me to my biggest beef about Java: no syntactic sugar. Operator overloading should be a part of Java, and bugger whatever the purists say. I want to save time typing dammit!
Re:Who cares? (Score:3, Interesting)
But I recently had to write a Java program that did financial calculations (more rare, even in business software, than you might think). You don't want to use floating point (for all the classic reasons), and, in this case, you don't want to use integers because you need power functions for interest calculations and so forth.
The classic solution appears to be to use the BigDecimal cla
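The BigDecimal approach alluded to above can be sketched as follows. The rate, amounts, and rounding rule here are illustrative assumptions, and setScale with a RoundingMode enum is Java 5+ API:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Sketch of exact decimal arithmetic for money (illustrative). BigDecimal
// avoids binary floating-point representation error, and the scale/rounding
// are stated explicitly rather than left to the hardware.
public class Interest {
    static BigDecimal addInterest(BigDecimal principal, BigDecimal annualRate) {
        // principal * (1 + rate), rounded to cents with the usual half-up rule
        return principal.multiply(BigDecimal.ONE.add(annualRate))
                        .setScale(2, RoundingMode.HALF_UP);
    }
}
```

Power functions for compound interest are the awkward part: BigDecimal.pow only takes integer exponents, so fractional-period calculations need extra care.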
Re:Who cares? (Score:2)
Re:Who cares? (Score:2, Insightful)
I'm pretty sure that a modern compiler should be able to optimize things like that easily by now. If Sun is just holding on to old crap like that because it's old, then Java is doomed to be replaced.
but still I use Java because the IDEs like Eclipse and IntelliJ IDEA ar
why this is inefficient in any case (Score:2)
The classes of the Java standard library are, by default, thread-safe. This means that all methods that could cause race conditions are synchronized. Unfortunately, unneeded synchronizations are a major performance hit (it depends on the thread implementation).
So, whether you write s1 + s2 + s3 or rewrite this expression using a StringBuffer (which is, anyway, what the compiler does), you incur on most implementations a performance hit because the StringBuffer will be treated as if it could be shared bet
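That synchronization cost is visible in the class declarations themselves. A small reflective check (illustrative, not from the thread) confirms that StringBuffer's append is declared synchronized, while its later unsynchronized replacement, StringBuilder (added in Java 5), is not:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

// Checks whether a public method carries the synchronized modifier.
public class SyncCheck {
    static boolean isSynchronized(Class<?> cls, String name, Class<?>... params)
            throws NoSuchMethodException {
        Method m = cls.getMethod(name, params);
        return Modifier.isSynchronized(m.getModifiers());
    }
}
```

Every call to a synchronized method pays for lock acquisition even when the object is never shared between threads, which is the per-call overhead the poster describes.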
Re:Who cares? (Score:3, Interesting)
If your product just barely runs within an acceptable time-frame, then you're confronted with the probability that a given customer will agree with you. If a customer doesn't agree, then they will not use your product. Thus while you save money on developer time, you lose potential customers (or existing ones). Worse, late in the g
Re:Who cares? (Score:2)
This will be cheaper, probably 95% as efficient and, most importantly, be 195% easier to maintain or change at a later date. Consider the big picture rather than a single aspect.
Right on. Also, I know there are people out there building client-side Java apps that need blazing UI performance but I'd bet that 80% plus of the Java that gets written is server-side code that probably talks to
Albatrosses (Score:2, Interesting)
Ick.
The albatross doesn't need killing -- it's already dead. The albatross was hanging from the mariner's neck because he had killed it, and by doing so had brought bad luck upon his ship.
Quoting from memory here, because I can't be bothered to go find my copy of the poem:
Killing the Albatross (Score:2)
It's all about the VM (Score:5, Insightful)
VMs will not be uniformly better as time goes on.. (Score:2)
Take threads, for instance - synchronizing primitives are cheap in a Java VM that fakes threads, more expensive in a uniprocessor machine with real threads, and still more expensive in a multi-proces
Java is plenty fast (Score:2, Informative)
Even with CPU specific optimisations, advanced compiler options etc, the Java version is 30-80% faster than GCC's binary. (this is on both AMD and Intel CPUs) To get anything faster
Re:Java is plenty fast (Score:5, Informative)
Re:Java is plenty fast (Score:3, Insightful)
Why? It's smaller than most code, but why does that inherently benefit Java?
Once the JIT kicks in, it's not Java vs C++ anymore, it's the JVM optimizer vs the GCC one.
That's the whole point. Unless you only care about programs where the entire execution time is a few seconds, the JVM optimisation time isn't going to be much of an issue.
However, the FFT benchmark is a case where the additional information available to the JIT optimizer allows it to outperform
Re:Java is plenty fast (Score:3, Informative)
The reason it inherently benefits Java is the characteristics of the Java language. First of all, it's a JIT-compiled language. Thus, if you have a tight inner loop, the JIT optimizer can optimize the hell out of it (even more so because it has access to runtime information that the static C++ optimizer does not) and just hand it over to the processor for execution. The JVM isn't even executed again until the
Re:Java is plenty fast (Score:2)
Obviously moderated by some moron who thinks that Java is a purely-interpreted language and therefore "can't possibly be faster than C++". I have news for them: Java virtual machines have been compiling down to native code for about five years. GCJ wasn't very original or very clever, and there's no logical reason why it should necessarily produce faster code than a JVM, just because it does its compiling in one go. In fact, there's reason to believe tha
Java performance is second priority for us (Score:3, Insightful)
Our bottleneck is how fast we can execute lots and lots of stored procedures in our MS SQL Server and Oracle databases.
It really hasn't mattered if one of our coders has been terminating loops via try{}catch{}, or ending on a condition.
The most important thing has been, "Does each line, each method, each class do what it's actually supposed to do?"
Our bottlenecks have always been flow back and forth between different systems, including Lotus Domino, Oracle, MS SQL Server, Websphere, etc. etc.
Java is a small player in all this... C++, C#, Fortran, Lisp would not speed this up for us.
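The two loop-termination idioms mentioned above can be sketched side by side. This is an illustrative contrast under my own naming; the exception-based version is the anti-pattern being described, not a recommendation.

```java
// Two ways to walk an array: the exception-abuse idiom some coders use,
// and the conventional condition-based loop. Both compute the same sum.
public class LoopTermination {
    static int sumViaException(int[] a) {
        int sum = 0;
        try {
            for (int i = 0; ; i++) {  // no bound check; rely on the AIOOBE to stop
                sum += a[i];
            }
        } catch (ArrayIndexOutOfBoundsException e) {
            // loop "terminated" by the exception
        }
        return sum;
    }

    static int sumViaCondition(int[] a) {
        int sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4};
        System.out.println(sumViaException(data)); // 10
        System.out.println(sumViaCondition(data)); // 10
    }
}
```

As the poster says, when the real bottleneck is cross-system traffic, the difference between these two is noise; the condition-based loop is still preferable simply because it states its intent.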
Re:Java performance is second priority for us (Score:2)
Blah blah performance tuning... (Score:3, Interesting)
How much does that extra development time cost?
Writing one's own java.lang.String takes time. Writing routines to convert com.donkeybollocks.String to java.lang.String and back again takes time. Supporting it takes time. And time is money. Me, I'd rather spend an extra £100 on a faster processor, or a Gb of RAM, and take a 25% performance improvement.
Come on guys, one of the major wins of the OO methodology is code reuse. Time was when programmers would always have to write their own I/O routines - I thought those days were long-gone. Rewriting fundamental parts of the Java API is just plain silly, unless it has a bug or a serious limitation (eg, it's non-threadsafe).
Not a bad albatross (Score:3, Insightful)
When I write a Java program... if it's too slow today, then, in time, the problem will go away without any more effort on the part of the programmer. In a year from now, we'll certainly have faster computers, which will make up for any speed problems.
On the other hand...
A year from now, we will almost certainly not have CPUs that are suddenly immune from dangling pointers and memory leaks.
In other words, there are no plausible, near-future-foreseeable advancements in computing hardware that could fix the worst problems of C/C++. Meanwhile, the near-future advancements in hardware are almost guaranteed to fix Java's worst problem.
The same holds true for doing your computing today... regardless of what hardware is available a year from now. Personally, I'd rather have a slow program that could keep running than one that was really fast, but crashed before I could save my work.
Insightful? (Score:4, Insightful)
Re:Web pages (Score:4, Interesting)
Insightful!? (Score:2, Insightful)
Re:Web pages (Score:2)
Now, on the *server* side, well, Java just plain rocks. Servlets+JSP (when done right, not done moronically and with overuse of EJBs when they aren't needed) is very nice indeed.
Re:Oxymoron ? (Score:3, Insightful)
That said, "slow" performing Java GUI apps are not so much the fault of the platform itself as they are the fault of the Java programmer's inability to deal efficiently with threads.
Re:Oxymoron ? (Score:5, Interesting)
Previously, the startup slowdown was due to the system having to load, verify, and link the twenty or so classes a simple program depends upon. Pjava and J2ME-CDC solved that by storing an image of the heap with the system classes already loaded, verified, and linked (and quickened) so the system was run-ready almost immediately. I wonder if the J2SE folks picked up on that? Alternatively, they could just be skipping the verify for the classes in the signed rt.jar, preverifying them offline prior to signing; the verifier always was the slow part of the process.
Your point about threads is well taken, and applies more generally to much of Java programming. Java's language and libraries make it all too easy to write architecturally-slow programs - you really still have to fully understand what you're doing in order to write a decent program, regardless of the language.
Re:Don't use Java.... (Score:3, Insightful)
The best thing about Java is the richness of the API. And the size of the documentation. C++/C should take a page from Java's book in this department.
You don't have to use the standard classes, go ahead and write the classes you need.
Jonathan
Re:Don't use Java.... (Score:2, Interesting)
Re:Don't use Java.... (Score:4, Insightful)
Well, gosh, you go right ahead and write your own replacement classes for everything that Sun has done already. What's stopping you?
That's exactly why I like Java. They have a lot of good built-in libraries that cover a wide-range of applications. I don't have to reinvent the freaking wheel every time I write an app.
Re:Sysadmins don't buy into this article. (Score:5, Interesting)
I challenge you to make a C++/C# application that is thread-safe and can scale to millions of pageviews per day without writing a ton of supporting code. With a good J2EE app server, a Java coder essentially just has to wrap his thread-unsafe code in a synchronized() statement, and he's done-- his app is now thread-safe.
Additionally, the "cross-platform doesn't matter for sysadmins" is a false statement; our CIO asked our net ops group "what would be the impact of us moving to an Intel platform?" and our sysadmins (after consulting with the coders) replied "absolutely no impact". That made our CIO very, very happy. Again, I challenge you to move your C++ apps from Solaris to Linux, or even to Windows, without any hiccup.
All of these other arguments are very specious: "I don't have enough RAM" will get you a reply of "go down to Fry's and spend $125 on another GB" every time. Processor speeds, even on Sun boxes, are getting to the point where the processor will never be a bottleneck for anything. Sure, Java won't run as fast as a natively-compiled app. Neither will perl, php, tcl, or what have you. Raw processor speed is not as important when you have a couple of GHz to play with.
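The synchronized() idiom the poster describes can be sketched as follows. The counter class and names here are hypothetical, not from any J2EE API, and the lambda syntax is modern shorthand rather than 1.4-era code; the point is only how little code the locking itself takes.

```java
// Guarding thread-unsafe state with a synchronized block: two threads
// hammer the same counter, and the lock keeps the count exact.
public class SafeCounter {
    private final Object lock = new Object();
    private int count = 0;

    public void increment() {
        synchronized (lock) {  // only one thread at a time past this point
            count++;
        }
    }

    public int get() {
        synchronized (lock) {
            return count;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SafeCounter c = new SafeCounter();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10000; i++) c.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10000; i++) c.increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // 20000 with the lock in place
    }
}
```

Without the synchronized blocks, the unguarded `count++` read-modify-write would race and the final count could come up short.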
Re:Sysadmins don't buy into this article. (Score:3, Insightful)
I disagree.
You should always try to find the best, most efficient, most cost-effective approach/solution to your problem.
If your internal time is billed out at $50 per hour, and you want to save your company money, you aren't going to spend 4 hours to create a custom garbage collector just to save another 5k of RAM-- you're going to go out and buy another stick of memory.
I agree wrt bad codin
Re:Sysadmins don't buy into this article. (Score:3, Interesting)
This reminds me how broken many (most?) corporate accounting systems are. Where I work, for a stick of RAM (or software, or whatever), it would take at least four hours spread over a couple weeks just to figure out who to submit the request to, wait for o
Re:Sysadmins don't buy into this article. (Score:3, Interesting)
- Big O matters. Optimization of constants is an expensive luxury.
- Reimplementing the wheel for the sake of marginal efficiency is a sure way to get a square and inefficient wheel.
Most algorithms in common use are provided in the standard libraries of each language. If not there, any algorithm can be implemented in any language by virtue of its Turing-completeness. This g
Re:Sysadmins don't buy into this article. (Score:2)
I agree that big-O efficiency does matter, and you can achieve it in any half-decent language. But you cannot write efficient programs on a Turing machine; implementing an algorithm on one will often add a linear factor to the running time.
Implementing bubblesort is more complex and expensive than calling Arrays.sort().
Reminds me about the first book I read about assembly code. It started explaining that performance was one of the reasons to wri
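The Arrays.sort() point above, in code. This is a deliberately plain contrast under my own naming: a hand-rolled bubblesort next to the one-line library call.

```java
import java.util.Arrays;

// One library call versus a hand-rolled bubblesort: same result,
// very different amounts of code to write, test, and maintain.
public class SortDemo {
    // Hand-written bubblesort: more code, O(n^2), and easy to get subtly wrong.
    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            for (int j = 0; j < a.length - 1 - i; j++) {
                if (a[j] > a[j + 1]) {
                    int tmp = a[j]; a[j] = a[j + 1]; a[j + 1] = tmp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 1, 4, 2, 3};
        int[] b = a.clone();
        bubbleSort(a);
        Arrays.sort(b);  // tuned O(n log n) library sort, one line
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 5]
        System.out.println(Arrays.toString(b)); // [1, 2, 3, 4, 5]
    }
}
```

The library call is not just shorter; it is also the better algorithm, which is exactly the "square wheel" argument above.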
Re:Sysadmins don't buy into this article. (Score:3, Interesting)
There's some overhead, but it's never that bad. Sure, the overhead matters, which is why there's an investment on improving VM technology, providing access to native operations, etc.
But also the worst overhead offenders are not VM issues, but application design issues: blocking I/O, thre
Re:Pre-written appendix for Java Tuning (Score:5, Informative)
Then he invents other ways to talk about the startup time without seeming to talk about the startup time (for instance, trussing Hello World results in a ton of output, but naturally that's Java starting up and loading its classes. Again, do you consider what the machine has to do to boot itself up when you're talking about C programs?). I will point out again that Java's startup time is almost irrelevant, especially in a server environment (which is what he's talking about).
The rest of the article is picking on the "jar" tool. jar is a program written in Java. Criticisms against the jar tool no more reflect on Java than criticisms against gzip reflect on C. The fact that jar doesn't do a good job of reporting errors is (A) irrelevant, because it's a developer tool and we know how to read exceptions, and (B) still more irrelevant, because how well it reports errors has nothing to do with what language it was written in. Tons of C programs have lousy error reporting as well, such as a number of Unix utilities I might name.
Further, this article is obviously very old. He's talking about Java 1.1.8, which is what, five years old now? Might as well criticize Linux by talking about obscure video driver bugs that were fixed five years ago. Obviously, that's not the article's fault for having been written so long ago, but it is the parent poster's fault for bringing it up as if it is somehow still relevant.
Re:Pre-written appendix for Java Tuning (Score:3, Funny)
Java exception verbosity is a serious problem. Many times I've heard of "java.lang" errors. The correct solution to this problem is to use C-style exception handling:
try {
    // code
}
catch (Throwable t) {
    System.err.println("segfault");
}
As you can see, Java is every bit as good as C.
Re:Pre-written appendix for Java Tuning (Score:2)
The exceptions are verbose, yes, but that's hardly a problem. Your example shows a lack of understanding, not any empirical evidence.
If you like C so well, stick with it. I think that's the best solution
Re:So (Score:2)
Re:Free Software in Java? (Score:2)
There are projects out there, just take a look on Sourceforge, but most of them are written to solve problems that Java developers have, not problems your average user has.
In the Windows world, most inexperienced developers use something like VB, which is easy to write flashy, shallow applications in.
On Linux, where most of the Free Software developers hang out, many people have problems with th
Re:Free Software in Java? (Score:2)