C Faces Java In Performance Tests 302
catseye_95051 writes "An independent programmer in the UK (he does not work for any of the Java VM makers) has done a series of direct C vs. Java speed comparisons. The results are likely to surprise some of the Java skeptics out there." Author Chris Rijk admits, "This article is not supposed to be an attempt to fully quantify the speed difference between Java and C - it's too simple and incomplete for that," but the results are nonetheless food for thought.
FOURTH? (Score:1)
What exactly was it? My only guess is that it was supposed to be a Fourth generation language...
Is the max-C really the best set of optimizations? (Score:4)
Likewise, he doesn't use the -fprofile-arcs or -fbranch-probabilities options, which I would imagine would speed up some of the code quite a bit.
Ours does both. I like this. (Score:2)
For team coding, Java just makes things easier. I can take home my piece of code and work on it on my Linux box; another person can test the code in Windows or Solaris. The final project can be run on anything. It greatly simplifies things, since you don't have to worry about platform or implementation idiosyncrasies.
Also, I think it's _REALLY_ not important what language you learn, but the overall programming concepts you retain. Once I knew C++, it was simple to learn Java, Perl, etc. Even Lisp was easy to pick up, once I understood recursion. Learning every bit of a language seems a *tremendous* waste of time, to me, especially if it's at the expense of learning how to code properly.
- A.P.
--
"One World, one Web, one Program" - Microsoft promotional ad
This is a great discussion (Score:3)
If the next generation of programmers are as inflexible and intolerant as the
Re:This is nice, but... (Score:1)
No programmer is perfect. (Score:2)
Nobody can catch everything, and to assume a "good" programmer won't write code which leaks memory is very naive. Why do you think people are trying to retrofit C and C++ with rudimentary GCs now?
- A.P.
--
"One World, one Web, one Program" - Microsoft promotional ad
Re:Some Schools... (Score:2)
A university's job is not to teach students how to fix memory leaks (an activity that cannot be avoided by newbie C programmers) but to teach them the basic concepts of programming.
The fact that "leaks" of objects "forgotten" by a sloppy programmer yet still referenced by the program are harder to detect does not make them go away.
Re:Optimal FFT was not the point (Score:1)
Re:Optimal FFT was not the point (Score:3)
/* This one is not optimized and looks like the one for the FFT */
double prod(double *x, double *y, int len)
{
    double sum = 0;
    int i;
    for (i = 0; i < len; i++)
        sum += x[i] * y[i];
    return sum;
}
/* This function is an optimized version of the previous */
double prod(double *x, double *y, int len)
{
    double sum1 = 0, sum2 = 0, sum3 = 0, sum4 = 0;
    double *end = x + len;
    while (x < end - 3)
    {
        sum1 += *x++ * *y++;
        sum2 += *x++ * *y++;
        sum3 += *x++ * *y++;
        sum4 += *x++ * *y++;
    }
    while (x < end)
        sum1 += *x++ * *y++;
    return sum1 + sum2 + sum3 + sum4;
}
I have recently used this optimization on my code and found a performance increase of about a factor of 3 (on an Athlon 500). The first version of the function has three problems:
1) The loop overhead is very expensive compared to the 2 float operations inside it.
2) Indexing is often more expensive than a simple pointer increment (though not always).
3) This one is the most important. In the first example, each sum (+=) requires the previous result of the sum to compute. The problem is that the FP ADD pipeline stalls: each addition no longer takes one cycle, but the length of the pipeline. In the second example, the use of multiple partial sums prevents that.
Also, as I said before, a good FFT coded in C can be as fast as the processor allows. The Java code can be as fast if it is good, but not twice as fast, as in the benchmarks. Because the FFT code wasn't optimized, the performance difference likely came from the loop overhead.
Re:Optimal FFT was not the point (Score:2)
FORTH is dead?!? (Score:2)
bigFORTH+MINOS [sourceforge.net]
Re:How Java Floating Point Hurts Everyone Everywhe (Score:1)
>My question is, is this analysis and the observations still valid now?
The current version of the JDK (1.3) provides two math libraries: one "fast" and one "accurate."
Jim
OT but useful info (Score:2)
Two pissy things to avoid if you want C++ code to run really fast:
Re:Optimal FFT was not the point (Score:2)
AFAIK, the double values (and probably the float) aren't cached in the L1 on the Athlon.
Also, I think I've found another problem with your FFT: the use of the "sin" function instead of tables. This function is very slow, and its performance depends a lot on the math library implementation. It is possible that the JVM implementation of the sine was faster.
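For illustration, here's a minimal sketch of the table approach (not from the article's code; SINE_TABLE_SIZE, init_sine_table and fast_sin are made-up names, and a power-of-two table length is assumed):
/* Hedged sketch: precompute the sines once instead of calling sin()
   inside the FFT loops. All names here are illustrative only. */
#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif
#define SINE_TABLE_SIZE 65536            /* assume a power of two */
static double sine_table[SINE_TABLE_SIZE];
void init_sine_table(void)
{
    int i;
    for (i = 0; i < SINE_TABLE_SIZE; i++)
        sine_table[i] = sin(2.0 * M_PI * i / SINE_TABLE_SIZE);
}
/* angle given as an index into one full period */
double fast_sin(int index)
{
    return sine_table[index & (SINE_TABLE_SIZE - 1)];
}
The one-time cost of filling the table is quickly repaid if the loops call it thousands of times.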
Re:OT but useful info (Score:1)
It can be much, much higher than that! I set my compiler to -O11 (I knew it was a good compiler cos the optimisation goes up to 11!) and my code was compiled to just 10 bytes!
It ran in only 10 ms!
I then changed one line to 'virtual' and it came out to 150K, I started running it yesterday and it still hasn't finished! That just shows how bad C++ is.
I hear that Java only has virtual functions. That must be slow as a rock! Everyone knows that C is the best language in the world, I can't see why anyone would use this object rubbish. I mean that must be the reason I hear all these software projects keep failing, they're all using Java.
And as for security, Java's been out for years and we still get viruses! What gives I ask? When are people going to learn?
Re:No programmer is perfect. (Score:2)
Any language in which a programmer has to manage the memory rather than a garbage collector is doomed to have memory leaks, _no matter how good the programmer thinks he is_.
And no language -- with or without GC -- can prevent leaks if the programmer is not careful about the lifespan of objects. The fact that physical deallocation is done by the GC does not mean that the programmer automagically becomes capable of removing all references to the object at the right time (or ever).
Nobody can catch everything, and to assume a "good" programmer won't write code which leaks memory is very naive.
I can -- as long as I am working on my own program.
Why do you think people are trying to retrofit C and C++ with rudimentary GCs now?
They don't. All garbage collection mechanisms that I have seen for C and C++ are libraries -- they don't affect the language design.
Give me a Java COMPILER! (Score:3)
Java as a language is fine. But Java on a VM just doesn't cut it for real-world apps.
I am currently developing a product series of WAP servers and gateways. A few competitors have chosen Java and they can support a maximum of 500 simultaneous users, and that with at least 256 MB RAM and 2 to 4 Pentium III 600 MHz+ processors. The C versions of similar products have no problem supporting several thousand users on a single-processor 64 MB machine. Java just ain't in the ballpark. (I should also point out that the Java VMs are not very stable and crash frequently.)
Java's specs as a language are really nice. Why don't we leave this VM stuff to the specialty apps that need it and start using Java as a COMPILED language?
Poor tests (Score:2)
While I am only vaguely familiar with Java, I am a seasoned C++ programmer. My understanding is that Java doesn't support pointers. A well designed algorithm will often do things such as walk a pointer through an array instead of indexing off of it. I can't speak for everybody, but most veteran C/C++ programmers I know use pointers extensively to create optimizations in their algorithms that simply can't be simulated with references or other constructs. It's the ability to design these sorts of things that makes C++ a more powerful language.
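To illustrate what I mean, here's a toy sketch (names made up; not claiming either version is faster on any particular compiler) of indexing vs. walking a pointer:
/* Toy illustration of pointer walking vs. indexing; both compute the same sum. */
double sum_indexed(double *a, int n)
{
    double s = 0;
    int i;
    for (i = 0; i < n; i++)
        s += a[i];             /* index off the base pointer each time */
    return s;
}
double sum_walked(double *a, int n)
{
    double s = 0;
    double *end = a + n;
    while (a < end)
        s += *a++;             /* walk the pointer through the array */
    return s;
}
(A good optimizer may well generate identical code for both, of course.)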
I look at Java as little more than a C++ that has been dumbified so a Visual Basic programmer can use it. All that said, I have to agree that the vast majority of program time is spent waiting for external dependencies like SQL servers, hard drives, system calls, etc... As such, Java is likely to produce programs just as acceptable as C++'s the vast majority of the time.
Java execution speed actually good (Score:4)
Most Java VM's are quite good at executing Java code, so the results are not all that surprising.
Java's biggest problem is in memory requirements. Metadata for classes is frequently much larger in size than both bytecodes and allocated objects. This needs to improve if Java is to become a more mainstream language.
Benchmarks misleading - Java vs C (Score:4)
1. Medium to big Java apps need 128MB-256MB of system RAM to be usable. HotSpot increases the memory footprint (it uses memory for the compiler, and both bytecode and native code stay in memory), but does not enhance every type of app. HotSpot looks great on many benchmarks (small loop-intensive apps tested on systems with plenty of memory), but for many apps it slows things down.
2. By pre-running the code for 1 second to work out how many iterations to use for 10 seconds, you're making sure that HotSpot and JITs fully kick in, without counting any of their execution overhead.
3. Contrary to what you might expect, there's no UI in the game of Life benchmark.
4. The benchmarks are set up to favour run-time optimizations by having function parameters that are constant for long periods of time (i.e. matrix size).
Java is just fine when you have tons of memory, but if your users have 64MB or less, go with VB or C++.
The benchmarks in the article have completely avoided any JVM/hotspot initialization overhead, as well as sticking to things JIT compilers are good at.
Mindcraft? (Score:2)
So it's a Mindcraft test?
;)
What about Java - native? (Score:2)
It would have been nice if he'd also tested Java compiled to native code (such as with gcj [cygnus.com]). He says that the point of his tests is to measure dynamic vs. static compilation, so his experiments would be better if that were the only variable. It would also help squash the myth that coding in Java requires using a VM at runtime.
Java chip (Score:3)
Re:Haiku (Score:2)
OS or Driver use C
ELSE Java does fine!
Java vs. C speed (Score:2)
The Life game is the only test which exercises the garbage collector a little bit, but here C doesn't look so bad.
The Fibonacci test is not so important; it proves only that HotSpot and the IBM JVM do function calls faster on the Athlon. It seems that both C compilers don't do good optimizations here for the Athlon. Pipeline stalls might be the cause.
The FFT C code uses calloc() to create the FFT matrices, where malloc() would be sufficient. For arrays with 2^16 doubles, clearing half a megabyte takes some time. A for-loop is used for copying data instead of memcpy(), while the Java code uses System.arraycopy().
The result is predictable.
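To make both points concrete, here's a rough sketch (not the article's actual code; the buffer and function names are made up):
/* Rough sketch of the two points above; re/im/src/dst/setup are made-up names. */
#include <stdlib.h>
#include <string.h>
#define N (1 << 16)
void setup(double **re, double **im, const double *src, double *dst)
{
    *re = calloc(N, sizeof(double));   /* also zeroes ~512KB the FFT may not need */
    *im = malloc(N * sizeof(double));  /* enough if every element gets written anyway */
    /* element-by-element copy, as in the article's C code:
       for (i = 0; i < N; i++) dst[i] = src[i];                    */
    memcpy(dst, src, N * sizeof(double));   /* one call, like System.arraycopy() */
}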
Running current C compilers on an Athlon is also not fair, because both compilers will not produce good code for it. The Visual C++ run didn't even have the Pentium-specific flags set.
I think that the compiler-based JVMs have come a long way. I wish some of those development resources had gone into the C++ compilers of both companies.
My conclusion: the Java performance penalty is shrinking on some platforms. Linux on x86 is one of them, thanks to IBM. I doubt that we will see a free JVM with that performance anytime soon.
Re:Java execution speed actually good (Score:3)
* C was given the choice of two not-very-good compilers: GCC and MSVC. From experience, I have seen the same code (especially math or array-intensive code) execute an order of magnitude faster when compiled with Kai CC or Portland Group CC. OTOH, Java was using the top of the line compilers and JVM (e.g. MS's JVM is well known to be much faster than even Sun's in Solaris...)
* Java had the advantage of run-time optimization. If you go to Ars Technica and read up on HP's Dynamo, you'll see how run-time optimization *alone* can give you a 15-20% improvement in speed with *compiled* binaries. Granted, run-time optimization is 'in the box' for the Java platform while, besides Dynamo, C/C++ are stuck without it.
Even if you dismiss the run-time optimization advantage as an integral part of the test, the choice of compilers *did* have a speed effect...
At any rate, I *am* a Java fan --I am just curious to see some true, fair benchmarks.
engineers never lie; we just approximate the truth.
I don't like Java (Score:4)
First, I wouldn't say it has everything one could want in an OOP language. The language feels like watered-down C++: templates (and STL), objects on the stack, const, references, and true multiple inheritance, are all missing from Java, but clearly would be useful. Yes, the absence of these features makes life easier for beginners, but it's painful to work around Java's deficiencies when you know how to use such features.
Unfortunately, Java isn't really multiplatform, either, unlike what Sun's marketing team would have you believe. Java is multiplatform in the same way that my Super Nintendo ROM is: I can play it on windows, linux, solaris, etc. I need an emulator, of course. Similarly, I need a "Java Virtual Machine" to run my Java bytecode: it's really just an emulator for a platform that doesn't exist. And if the emulator isn't ported to your favorite platform, well, tough.
But the main thing I don't like about Java is how gratuitously integrated it is. Why should the Java standard library (which is really a platform in itself) be inextricably bound with the Java language? It could easily have been made into a C++ library, since C++ has direct support for all the language features of Java. Then, they could have written the Java language/bytecode interpreter separately, and made it an option to use the Java platform. This would clearly be better for everyone (except Sun): I could use the well-designed Java APIs in my C++ project with no loss of speed.
The same thing goes for much of Java. Why does Javadoc, a program that generates documentation from comments in your code, have to be integrated with the rest of Java? It could also be used to document C/C++ code, with minor modifications.
IOW, Sun is trying to lock you into their platform in the same way that MS is with their strange APIs; except that Sun's method is much more effective. I am sticking with C++.
OK - glad to hear it! Here's another one... (Score:2)
Re:Newer Ver Of Most Programs Is Usually Slower! (Score:2)
My Criticisms of Java... (Score:3)
...never focused on the fact that it is slow. It was always the fact that they had to go and invent another language that isn't any better than C or C++. I'm all in favor of cross-platform development, but forcing us all to maintain yet another codebase, in yet another language is just a royal pain.
If Sun had produced a platform-independent C/C++ environment... wow... just imagine. We may never know how good it could have been.
Javascript: True Heir of Hotspot Technology (Score:2)
As of 8/14/99 the ratio of Javascript demand to Java demand was .304 and as of 4/6/00 it was .331
Since the dynamic optimization of HotSpot is actually in the lineage of dynamically typed languages (Smalltalk, CLOS, Self, etc.), it is poetic justice that the most widely deployed programming language, Javascript, is not only of that lineage, but is overtaking Java -- the language that made dynamic optimization fashionable despite its less-dynamic heritage of Object Pascal and Objective C.
Re:Kernel times (Score:2)
C doesn't have any particularly aesthetically pleasing IO libraries for this, primarily because by the time network programming became so ubiquitous that better frameworks were needed, much new development had moved to C++. For an example of an extremely well designed C++ networking framework, see the ACE project [wustl.edu], which supports many different single- and multi-threaded programming models in a portable fashion while still allowing you to use high-performance features (such as asynchronous IO) that are not supported on all platforms if you want to.
Re:No programmer is perfect. (Score:2)
You must not write large programs.
They are very large -- just modularized well.
Does it really matter if it's a change to the language spec or just the simple addition of a new "malloc" procedure? It's still retrofitting.
Of course it does. The beauty of C (and to some extent C++) is that the language design is not tied to some [crackpot] idea (or fad) of how resources (of whatever kind) must be managed, and libraries can implement whatever they please as long as the language can implement some basic interface to allocators and support structures (as in C) or the same interface plus the idea of objects and methods (as in C++).
Re:Optimal FFT was not the point (Score:2)
Haiku (Score:2)
They fit not in the same mold
Apples, Oranges
Re:Java execution speed actually good (Score:2)
There is a lot of evidence to show that GC can outperform manual memory management, especially in firm-realtime applications such as multimedia applications (hard-realtime systems generally do no allocation at runtime). Basically, GC allows collecting memory to be deferred out of a critical loop, or to be scheduled separately from critical processes, while free() has to be run immediately. Also, malloc()/free() need to do a lot more bookkeeping to manage the heap, reduce fragmentation, and so forth. GC can avoid some of this, and amortize the rest over multiple allocations.
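As a toy illustration of the "defer the frees out of the critical loop" idea in plain C (this is not how any real JVM or allocator works; all names are made up):
/* Toy sketch: the hot path only queues pointers; they get freed later,
   outside the critical loop. */
#include <stdlib.h>
#define MAX_DEFERRED 4096
static void *deferred[MAX_DEFERRED];
static int n_deferred = 0;
void defer_free(void *p)           /* cheap: safe to call inside the critical loop */
{
    if (n_deferred < MAX_DEFERRED)
        deferred[n_deferred++] = p;
    else
        free(p);                   /* queue full; fall back to freeing now */
}
void drain_deferred(void)          /* run between frames, outside the loop */
{
    while (n_deferred > 0)
        free(deferred[--n_deferred]);
}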
Of course, nothing is going to beat performance on a static/automatic array in C w/o bounds-checking. Java really has no way to support such a beast (even *with* bounds checking). This is one (of many) examples of Java "generalizing" to the extreme, which makes a lot of modern static (compile-time) optimizations impossible. They sort of make up for it with dynamic optimization, but as the article about Dynamo on Ars Technica indicates, the two tend to complement each other, so it would be better yet to have a language like C or C++ plus run-time optimization a la Dynamo.
Re:It's all about optimization (Score:2)
Your comments about gcc, though, are pretty much on the mark, although I was under the impression that on x86 gcc was a decent optimizer.
Kernel times (Score:4)
The results of these numeric tests surprised me, but I'd like to have seen Watcom/Borland C compilers used, as both have a reputation for superior numeric code generation to Microsoft's Visual C++ product and GCC.
A view from a sceptic. (Score:2)
I'm sceptical. I would doubt any review that puts Java even close to C in terms of raw performance of code.
Sure...
Bill, embedded software developer.
Re:It's all about optimization (Score:2)
What I think... (Score:5)
One of the easiest languages to learn (provided that you understand OOP). I tried C++, and I failed (for now). I tried Java, and it is very easy for me. And for the ease of learning, it gives me immense power. Everything anyone could ever want in a true OOP language.
It is also multiplatform... we all know about that.
The only language I can think of that comes close is VB. VB is Windows-only, (well, you have VBA in Office98 on MacOS, OK) and it doesn't give you much of OOP (inheritance, etc.).
Finally, there are a lot of people out there that will learn a language simply because it's in demand, so that they can get a lot of money paid for writing things in it, and Java wins here as well. Just go do a search on Dice.
The only thing that bothers me is that Java is now definitely being controlled by a corporation. I'm pretty glad it's not Microsoft, but I'd still rather have it controlled by an unbiased group. OTOH, without Sun's promotion and development, who knows if Java would ever rise to where it is today.
Let's just hope that the damn applets will fade out... I just hate them! Please correct me anywhere you think I'm wrong - that's what the Reply link is for.
Speed and server applications (Score:3)
Re:Java execution speed actually good (Score:2)
In a few years, the JVM and Java programs will probably become so commonplace, that it may be allowed to just lurk in the RAM. Sort of like <gasp> the way IE works in Windows. (Why did you think it opens so quickly?)
Re:Java is a FAD. (Score:2)
The only problem with Java is that the poster is right about the write once, rewrite anywhere. Java under Linux != Java under MacOS != Java under Windows != Java under Solaris. In fact on the Mac, Java can take three forms: Apple JVM, Microsoft Explorer JVM, or Netscape JVM, and they are not the same.
To say nothing of JVM versions within a given platform...
Sun tried to make a great technology, where you could have true cross-platform compatibility, and where it was easy to learn to code for it because of its similarity to other languages (most like C++ IMHO). It is not working for the same reason networking and the www are not working: vendor-introduced incompatibilities.
The only reason these technologies are thought viable at all is mainly the dominance of certain technologies (e.g. 90% of PCs are Wintel, so if you write for the MS JVM, you have 90% of the audience.)
Volano benchmark (Score:4)
You will see there that the best VM is Tower TowerJ 3.1.4 for Linux!
Second point: I have never doubted that Java on the server is a good solution now. For me, the only trouble with Java is its memory gluttony.
If some of you want to test Jsp/Servlet, here are some good open source products : java.apache.org (JSP, servlet), www.enhydra.org (JSP, servlet, EJB)
Re:Kernel times (Score:3)
I remember writing a small test program in C and identically in Java (a couple of syntax changes only). Using IBM's JDK with -O made it faster than pgcc with all the optimization flags we could think of!
Then I rewrote the program in an OO way in Java, and of course it was slow :)... but it does show that Java isn't necessarily slower than C for some tasks.
Re:Volano benchmark (Score:2)
This is nice, but... (Score:2)
Hopefully we will see an effort to optimize the Java libraries soon, for startup time as well as for speed, or else Java will become a pure Serverside language.
Re:Speed and server applications (Score:3)
Re:I don't like Java (Score:2)
Re:OK - glad to hear it! Here's another one... (Score:2)
Memory allocation (Score:4)
It's a bit unfair to blame gcc for poor memory allocation: unlike Java, memory allocation isn't built into C.
Re:Java performance comparison (Score:2)
You suggest using structures and pointers to functions to accomplish OO techniques in C. I used to do this about 12 years ago, when C++ was not portable/standardized enough to use for the project I was on. But don't fool yourself into thinking that you are getting better performance. The C constructs you talk about are largely the same thing that was produced by the CFront compiler in the first place.
The problem nowadays is that most programmers don't understand how the various features of the language they are using ultimately get implemented at the assembly level. If you use the MFC CString type (for example) as proof that C++ is slower than C, you truly are missing the point. In the case of CString, you are sacrificing performance for notational and functional convenience. If you are writing a word processor, that probably isn't the best idea. If you are writing a game that rarely deals in text, it's probably worth the slight speed hit you get for the manageability.
The bottom line is that C++ will produce code every bit as tight as C will, and it will do multi-threading every bit as well as C does. Proper multithreading has a lot more to do with algorithmic design than with the language you are using.
More often than not, C++ will actually produce faster code than C. The reason is that few programmers will go to the hassle of setting up arrays of pointers to functions in order to accomplish the polymorphism that is often needed in a program's design. They resort to switch statements and such as the easy way out.
If you blindly put 'virtual' in front of every function because the manual suggested it, and then wonder why the implementation of a simple GetValue() function is so slow when you marked it 'inline', then is it any wonder? A good knowledge of what is being produced at the assembly level will make you a MUCH better coder and largely if not completely eliminate any C++ performance gotchas.
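For anyone who hasn't seen it, here's a minimal sketch of the function-pointer dispatch being talked about (the shape type and all names are invented for illustration; this is roughly what the compiler hands you for free with virtual):
/* Hand-rolled "virtual" dispatch in C; shape, rect_area, ellipse_area are
   made-up names for illustration only. */
#include <stdio.h>
struct shape {
    double (*area)(const struct shape *self);   /* one-slot "vtable" */
    double w, h;
};
static double rect_area(const struct shape *s)
{
    return s->w * s->h;
}
static double ellipse_area(const struct shape *s)
{
    return 3.14159265358979 * s->w * s->h / 4.0;
}
int main(void)
{
    struct shape shapes[2] = {
        { rect_area,    3.0, 4.0 },
        { ellipse_area, 3.0, 4.0 },
    };
    int i;
    for (i = 0; i < 2; i++)          /* dispatch through the pointer, no switch */
        printf("%f\n", shapes[i].area(&shapes[i]));
    return 0;
}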
A Java COMPILER wouldn't help (Score:2)
Re:A view from a sceptic. (Score:2)
Re:Too good to be true (Score:5)
As far as getting the latest JDK in anything but Windoze, you can currently get Java2 v1.3 in Windoze, Solaris and Linux (with other ports on the way). The fact that they came out with the Windoze port first should be no real surprise to anyone: most folks are still using Windoze, hence there is more demand for upgrades on this OS than any other.
I've written Java stand-alone apps that are monumental in size and I've written Java server-based apps. I think that Java's main glory lies in server-side programming for web-enabled applications, but it is no slouch in the large stand-alone application market. You keep hearing people complain that Java eats up so much memory when all you want is a simple Notepad app. You need to understand what Java is doing and learn to work with/around it.
If you load a large app that utilizes many of the Swing widgets and interfaces, the memory load becomes a bit more understandable. On the large apps that I've written for Java, it has actually performed quite well (sub-second sorting and display on a 10K row table, etc).
Most of the comments that I see bashing Java are from people that have only taken a cursory pass at the language. If you try to code a Swing interface using the same paradigms as AWT (or C, C++, etc), you'll wind up with a slow monstrosity. If you code Swing the way it was intended to be coded (using the MVC architecture), you'll find that it's a remarkably powerful and full-featured GUI API.
At any rate...I'll get off my soapbox now. I really don't mean to tout Java as the be-all end-all of programming languages (it's not). But it is one of the better languages out there for the current direction of Internet-enabled programming.
Re:I don't like Java (Score:3)
The final keyword in Java also serves the purpose of a constant declarator for variables. This
"In java, in addition to Object, you need a classloader to load your code. And you get no strings, because the compiler has specific syntactic sugar for dealing with strings (why oh why did they hack this into the compiler isntead of just supporting operator overloading?) I'm sure there are more, but its been a while since I hacked java at that low a level."
If you are griping that VMs are too dependent on the Java language itself...well, tough. Sun created the VM spec FOR the Java language...not as some universal VM and byte code instruction set that anybody could write any arbitrary language on top of (although that is certainly possible). People write Java in Java...they don't write in bytecode, against the VM directly.
Also, somebody was griping that the standard libraries and default utilities from the Sun JDK were written in Java. Well... DUH. They are Java libraries and tools. They should be written in Java. Java was created for easy cross-platform development... it would be stupid then to write all the libraries and tools in some native language, then have to ALSO port all of those on top of writing a VM. With the libraries and tools written in Java itself, all VM writers have to worry about is writing a VM, and presto, the support libraries and tools are all magically available. It would be hypocritical and tragically stupid to write the supporting tools and libraries for platform-neutral Java in some platform-dependent language.
Re:Java execution speed actually good (Score:2)
Java requires more memory simply because you are loading a Java Virtual Machine.
How Java Floating Point Hurts Everyone Everywhere (Score:2)
And they give several examples where java will produce incorrect results in numerically intensive applications.
My question is, is this analysis and the observations still valid now?
Re:Java execution speed actually good (Score:2)
If you are just running a single Java applet, this is a huge overhead. However, once you have a couple of Java processes running at once, it's fairly trivial. 8Mb of overhead for 10 processes? That's about 800Kb per process - not a huge amount.
I think the biggest issues so far have just been the usual `chicken & egg' scenario: not enough people use Java for normal apps, so nobody writes Java apps since there's no market.
I must admit, I'm a Java sceptic myself; to begin with, the language was a real pig. The near-instant load-time compilation of Perl knocks Java out of the window; even C is much faster in terms of compile-run-test cycles. However, no doubt that will change in time.
The result that really surprised me, though, was Cygwin32 beating MS VC almost across the board - running on Windows! WTF?!
Dynamic Optimization (Score:3)
Nevertheless, I think dynamic optimizations are the thing to come: it costs a lot of man-hours to find ideal optimizations for code (you need to figure out the core routines, think about which optimizations make most sense for the current architecture, check those assumptions against reality), and man-hours, in contrast to CPU time, don't become much cheaper. The dynamic optimizer does all that work for you, and even optimizes for different starting conditions/parameters by looking at what is *really* taking time now.
Look at the success (regarding computing power per buck) of Transmeta's Crusoe. A dynamic optimizer can gather far more hints for optimisations (branch predictions, loop length, array sizes, memory lookups) than a static one; in the latter case the programmer has to give all the hints (compile a subroutine with the correct set of optimisations, sort the loops right, sort branches, keep in mind some ranges for parameters and how they affect loop length, for some compilers throw in compiler directives, etc.) and even has to reconsider when porting to another architecture.
So with static optimizations it's either optimization limited to the part the compiler can see at compiletime (except for very basic stuff, every decent compiler will get that matrix multiplication right) or man-hour intensive and thus costly optimization.
Re:Java performance comparison (Score:2)
Although Java still sucks... (Score:2)
Re:Why do we need Java: (Score:2)
Re:Kernel times (Score:3)
One area in which C does not offer significant benefits over Java is in the area of network server programming
I agree, but with one exception: Java does not do non-blocking I/O. Therefore you have to use tons of threads, at least one, likely two, for each connection. For a server handling thousands of connections, you can see where this gets out of hand. In Linux's case where all threads are kernel level threads, performance is back in the shitter since it has to make a new set of system calls to manage the threads. But of course, you want to use native threads so that you can take advantage of multiple processors.
If non-blocking I/O were possible, one thread and a huge select is all that is needed. Squid is a good example of a server that can handle thousands of connections using one thread. The cost here is complexity, but the reward is performance.
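For the curious, here's a rough sketch of that single-threaded select() pattern (nothing like Squid's actual code; listen_fd is assumed to be a listening socket set up elsewhere, and most error handling is omitted):
/* Rough sketch of a single-threaded select() loop. */
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>
void serve(int listen_fd)
{
    int fds[FD_SETSIZE];
    int nfds = 0, maxfd = listen_fd, i;
    char buf[4096];
    fd_set readable;
    for (;;) {
        FD_ZERO(&readable);
        FD_SET(listen_fd, &readable);
        for (i = 0; i < nfds; i++)
            FD_SET(fds[i], &readable);
        if (select(maxfd + 1, &readable, NULL, NULL, NULL) < 0)
            continue;                               /* real code would check errno */
        if (FD_ISSET(listen_fd, &readable) && nfds < FD_SETSIZE) {
            int c = accept(listen_fd, NULL, NULL);  /* new connection */
            if (c >= 0) {
                fds[nfds++] = c;
                if (c > maxfd)
                    maxfd = c;
            }
        }
        for (i = 0; i < nfds; i++) {
            if (!FD_ISSET(fds[i], &readable))
                continue;
            if (read(fds[i], buf, sizeof buf) <= 0) {   /* closed or error */
                close(fds[i]);
                fds[i--] = fds[--nfds];                 /* drop this slot */
            }
            /* else: handle the request without blocking */
        }
    }
}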
I'm not advocating non-blocking I/O. I think Java's approach makes for much simpler and more stable servers, but JVMs must make threading as lightweight as possible while still supporting SMP for performance to compete with C-based servers. I think this means supporting a mix of kernel and user level threads.
Re:OT but useful info (Score:2)
The v-table has nothing to do with multiple inheritance at all. It only has to do with virtual functions: the v-table tells the C++ program what function to call for the particular type of variable. This is why the "virtual" keyword is there: it cuts into performance.
On the other hand, multiple inheritance doesn't increase the overhead of function calls for most things -- until you use a virtual base class. At that point you'll have another "base pointer" to traverse whenever you want to call methods or access members of the base object. But of course, you can have a virtual base class even if you only use single inheritance; it's the "provision" for multiple inheritance that matters, not the use of it in fact. This is just like the virtual keyword.
Re:Java execution speed actually good (Score:2)
Danger! Deceptive Graphs. (Score:2)
Those graphs in the article are extremely deceptive, since the y-axis does not start at zero. If it did, the curves would be much closer together. As it stands, the difference between most curves is less than 30 percent, and that is nothing to get your shorts and knickers all bunched up over.
Still, it was a good article even though the sample programs are a little trivial.
Re:Java execution speed actually good (Score:2)
I don't agree. The reason Java compilation is slow is that the standard Java compiler, javac, is written in Java, so you have the overhead of loading the VM, often for each file. Try a recent version of jikes, an Open Source Java compiler written in C++ from IBM. It's at least an order of magnitude faster than javac.
The result that really surprised me, though, was Cygwin32 beating MS VC almost across the board - running on Windows! WTF?!
Yeah, this is really hard to believe. Last time I did anything with Cygwin on Windows (compile Jikes!) the Cygwin binary was much slower than the MSVC one.
It's all about optimization (Score:5)
What makes me even more suspicious is that I have a K7-500 too, and I have done some tests with a heavily optimized FFT (fftw) and I get a performance around 400 mflops. There's just no way a JVM can be 220% faster than that. So my conclusion is "with poor code and poor optimization, Java can be faster than C".
I don't want to take a position on the whole Java vs. C speed debate, but what I'm saying is that at least his FFT test is flawed.
Re:Java execution speed actually good (Score:3)
No, the Java VM doesn't have any defined time to collect. Different JVMs do it at different times.
Most have a low-priority thread that will GC either whenever it is runnable (for GCs that are interruptible), or when it has waited "long enough", or when "enough" memory is allocated. All JVMs I know of also GC when they are out of memory. Except for the Java subsets that don't GC at all (like SmartCard Java).
The one time Java doesn't GC as a rule is "when you free something", because it doesn't know when you free anything. There is no free operator. You can assign a pointer NULL, but that won't free the object unless that was the last pointer to it! Running a GC on every NULL (or other!) pointer assignment would be staggeringly expensive. Java could keep a reference count hint with each object (like Limbo does), but that has its own problems (and advantages). I don't know of any JVM that does.
Re:I don't like Java (Score:2)
Uh yeah... I guess you haven't done much C++. const can also be used to make a constant object: an object where only constant methods can be called. It can be quite handy to hand back a reference or pointer to a const object; that way the person can't change the object (it becomes read-only). This is a missing item in Java. You would need to clone the object, but assuming you are dealing with Objects (like if you are deserializing from a DB and caching them, so you need the caller to get a read-only copy), you can't assume clone is available, since String isn't cloneable...
Re:Memory allocation (Score:2)
Re:What about Java - native? (Score:2)
It actually does if you intend to use dynamic loading. In any case, Java without all its interesting features is not very interesting to me. It's not just the language that makes it attractive but also that it is a very safe environment to program in: no crashing software, no memory leaks, array bounds checking, etc. In addition, the standard library coming with Java is very good for productivity since it provides a lot of well designed functionality.
Java as we know it is not really possible without the VM, and as the article points out quite clearly, there's not much to be gained anyway by compiling statically. I haven't seen any benchmarks where a static java compiler is much faster than a regular VM (not enough to justify accepting its limitations).
I think this article smashes one myth: that dynamic compilers are inherently slower than static compilers. The C compilers were in a good position here: under development for decades and given code they should be able to run well. Yet, despite the novelty of JVMs and despite the fact that the JVM has to do bounds checking on arrays, it delivers comparable performance. The only logical conclusion is that if a dynamic compiler for C were developed it might actually be faster than statically compiled C.
I agree that it would have been nice if he'd thrown in a static Java compiler. My guess would be that there would be no surprises and that its performance would be mediocre compared to the C compilers and the JVMs. Even a static Java compiler still has to do bounds checking, and thus the static C compilers have an advantage (plus their implementations can be assumed to be a little more mature).
Re:Too good to be true (Score:2)
You actually make my point quite well without this, however.
Swing is built from the ground up on the MVC paradigm. If you do not use this paradigm when coding a Swing GUI, you'll effectively "disable" much of the work and code that has gone into Swing. No wonder people think it's a memory hog -- they're turning off half of its features.
Swing, when coded under an MVC architecture, operates quite smoothly and quickly. For those not "in the know" on MVC, let me give an example from my own experience:
In the old days of the AWT, if you wanted to display a list of data, the data was an integral part of the list widget. If you wanted to change the data that was displayed in the list, you would perform an operation on the list widget itself. Similarly, if you wanted the same data displayed in a table, you would need to replicate the data and insert it into your table.
In Swing, you use the model/view/controller architecture (MVC) to separate content from display. Actually that just accounts for model and view. The controller refers to the interaction between the data and the display. This control is typically bundled together with the view (display widget), but can be separated out rather easily.
The benefit of the MVC paradigm becomes readily apparent when you take one of the examples I used above. If you want to display the same data in a list and in a table, you don't need to replicate the data. You set the model for both of the GUI widgets to the same source. When you want to change this data, you simply change the source (the model) and both of the GUI widgets are automatically updated with the change (all Swing GUI widgets have listeners on their models that watch for changes).
With a subtle change in your coding model/paradigm, you can achieve _huge_ benefits in both coding time and performance by simply operating within the parameters that Swing was built on.
In regards to your question of using MVC with C, it would have to depend on the libraries that I was using in C. If the libraries that I was using were built to use MVC, then yes, I would consider this a better idea. If not, then no, I wouldn't use MVC. The use of MVC is really up to the individual's coding style, much like OOP in general. However, if you don't want to code using MVC, then don't choose a tool that is based on it.
If you don't like OOP, then you really don't want to program in Java. Similarly, if you don't like MVC, then you really don't want to code in Swing.
Re:My Criticisms of Java... (Score:2)
Of course, they could have put in some special checks to allow the program to determine what the standards are for the particular platform the program is running on. But C/C++ already does this (#ifdefs) and it is gross. Yes, you can write platform-independent C/C++ code, but you have to be careful to avoid certain language features, and the code tends to be a bit difficult to maintain.
Instead, Sun chose to start with a C/C++ish base, and take out a lot of the error-prone constructs (including those which cause platform dependencies). I have found that these constructs are rarely needed: occasionally it would be nice to use some of the "missing" features (I'd be especially happy to have enums back), but in nearly all cases, thinking about the problem for an additional 5 minutes will result in a solution that works just as well, and in most cases the code comes out a bit cleaner in the process.
I'm not saying that Sun did a perfect job coming up with Java. But I will say that for platform independence, Java is much better than C/C++. I'm not just talking about the basic language constructs either: JDBC (for cross-platform, multi-database SQL), Servlets (for cross-platform, multi-web-server CGI-type programs), and other APIs greatly simplify development for server-side applications. I won't speak on the client-side, since I haven't worked as much on that end of things (although the work I have done tells me that I would much rather develop a GUI in Java than some other language, especially if it has to be cross-platform).
Re:Memory allocation (Score:2)
Perhaps it should also be responsible for providing an implementation for various kernel modules, because the underlying OS's may be suboptimal. And while we're at it, let's have it compile in its own kernel, since the underlying OS's might be suboptimal.
malloc is a C standard library routine.
gcc is a compiler suite.
libc shouldn't try to compile my programs, and the C component of gcc shouldn't try to do anything other than compile my code.
J2SE 1.3 is better, but J2SE 1.4 might be sweet (Score:2)
From what I hear, there's a big wodge of effort going into improving graphics performance for 1.4, and it's already well underway. Of course, it'll be at least a year away, *sigh*
The embedded Java market seems to be taking off though...
Re:metadata (Score:2)
Lord Pixel - The cat who walks through walls
Re:How Java Floating Point Hurts Everyone Everywhe (Score:2)
Re:Java execution speed actually good (Score:2)
Of course, the reason that Jikes can be so quick is that it doesn't do dependency checking by default, and so can be a dangerous compiler if used carelessly - try jikes -depend sometime to have it do the same things javac does, and it's no longer an order of magnitude faster.
It becomes a real issue when you're trying to compile Java code from a makefile - if you don't want to generate dependency files, your only option is to use javac or compile with jikes -depend.
Re:It's all about optimization (Score:3)
A good example of this is Perl programmers moving to Python. If you do things in Python the "Perl way", typically, it performs rather poorly. Likewise, if you do it the Python way, it performs on par with Perl.
This highlights the fact that no optimizer can replace a good programmer with good design skills. It does, perhaps, highlight that you can get reasonable results with less skilled programmers using Java than you might get with C, or especially C++.
Re:Java execution speed actually good (Score:3)
Generational garbage collectors try to run a GC sweep after every X K of allocations (where X is about the size of the cache). They are quite a bit faster than most GCs that only collect when memory is critically low (memory accesses in cache are an order of magnitude faster than out-of-cache references). The downside is they GC more frequently, so rather than being "fast" for five minutes and then taking a short pause, they nick you for a few 100ms all the damn time. Of course that is also an upside: they don't feel jerky.
Generational GC also tends to pack items allocated at the same time (and still live) together, which for many programs increases the locality of reference, and helps a whole hell of a lot if the system is paging.
I don't know how many JVMs use generational GC, but since it is a 70s Lisp technology, I can't imagine they use something worse. GC hasn't been a red hot research area, but it has had a lot of good work done in the last 20 years (and a lot more before that!)
I do know many JVMs run GC sweeps periodically if there is idle CPU (get a Java profiler, and check out the activity in the GC thread).
What the original post was complaining about was the "overhead" with each object. I'm not convinced that exists. I know every object has the equivalent of a C++ vptr (four bytes). Every object type has a virtual function table (possibly shared if it doesn't define new functions, or override any of the parent's), and a small description of the data fields, and the name of the type, and the function names and such.
That's a lot of crap -- say 400 bytes. Easily more than a simple structure (say a 2D point, two 4-byte ints). If you have one point object, you probably have 100 times as much memory dedicated to describing the point object (in case you need to serialize it and send it via an RPC or something). But if you have 5000 points, the overhead of the metadata is vanishingly low (400 bytes out of 40,400 bytes, 1%). Er, except for the vptrs (4 bytes on most systems), which bring it up to 20,400 overhead bytes out of 60,400 total bytes, or about 33%.
So for very simple objects Java does have a noticeable overhead. But for less simple objects the overhead is much smaller. If you compare any Java class with a C++ class that has virtual functions, the per-instance overhead is identical. The per-class overhead is different (with Java almost certainly having more overhead), but the per-class overhead isn't interesting. There are not enough classes in most programs to make a difference (and believe it or not, with templates it's far easier to "accidentally" make 1000s of different classes in C++ than in Java).
That leaves arrays. C/C++ arrays need not have a length stored with them while Java ones do. Java is behind 4 bytes per array on that score. Relevant if you have lots of small arrays, irrelevant otherwise. Except....
You know C++ does need to know how many elements are in an array so it can call the destructor for each one (it can omit this, if there is no destructor, but I don't know of compilers that do that). So it doesn't beat Java, it ties.
...Oh, and C needs the length for dynamically allocated arrays (via malloc) so it can free them again. But it does win on static arrays.
Pretty much all of them if you make a thread that calls java.lang.runtime.gc() and then sleeps for a few seconds in a loop. Or even most of them (I think) if you merely have some idle CPU.
Re:Kernel times (Score:3)
In my opinion making a program complex for performance reasons only is a bad idea unless we're talking about long term usage of a program. Developers often forget that most of the cost of developing a product goes into maintenance. Java servlets provide you with a nice scalable architecture for serverside programming and it allows you to focus on the parts of the program that provide the functionality you need rather than performance related stuff.
Re:Kernel times (Score:2)
Servlets are usually more efficient than CGI scripts (such as Perl scripts) because they are already running and don't require the overhead of spawning a new process to handle the request. That, and JVMs such as those discussed in the article, makes Java servlets a very good alternative to Perl (and we are just talking performance here, not all the other nice features of Java such as readable source code, OO features etc.).
Visual C++ Test is a bit unusual. (Score:2)
Message from the author (Score:4)
Another post I sent in last night which quickly got rejected was this:
Unfortunately, that release came a little too late for me to do much about it, though I have quickly tested the Solaris x86 version (on the same hardware as the Windows tests), and the results are pretty much identical, though Solaris was a bit faster (but then, I was running without the desktop running, which does help).
Also coming a bit too late was results from IBM's Windows 1.2.2 JDK, which I found a bit surprising - it did worse on some tests, and better on others, though I didn't have much time to test things.
Thanks for the replies... kinda makes it all worth it - it took me about 100 hours over 4 weeks to do this. (took up a lot of my evenings)
I better re-install Linux sometime so I can test on it again... (my last install stopped working for unknown reasons)
It'll probably be some time before I update the article - first I want to finish off my MAJC article, which really is too damn big. (22,000 words... ouch).
Re:Memory allocation (Score:2)
The test claimed to use some version of gcc 2.95.mumble-mumble. 2.95 has a lot of x86 improvements (and generic improvements!) from egcs. What 2.95 doesn't have, that the dev snapshots do, is -mtune=k7 and -march=k7, which should have made the FP stuff go faster (scheduling for the PPro and running on a K7 isn't bad, but scheduling for the K7 and running on a K7 is better).
There is also some experimental code to do cache line prefetching, but I didn't follow the thread to see how it turned out (last I saw it made the streams benchmark numbers twice as good, or better, but there was some concern that it might make other things slower).
Re:C Compiler comparison (Score:2)
Re:Memory allocation (Score:2)
According to the egcs/gcc news page [gnu.org] from 2Sep99: Richard Henderson has finished merging the ia32 backend rewrite into the mainline GCC sources. The rewrite is designed to improve optimization opportunities for the Pentium II target, but also provides a cleaner way to optimize for the Pentium III, AMD-K7 and other high end ia32 targets as they appear.
That was post 2.95, and post 2.95.1, but before 2.95.2. Looking at the article it was using 2.95.2, so I assume it has the new x86 backend.
I think the issue they had with pgcc was it did a lot of x86 things at the machine-independent level of gcc, and a lot of machine-independent things were in the x86 code (like branch prediction). A lot of the pgcc optimisations are in the new x86 backend, or were properly added to the machine-independent code.
That is to say, a lot of the stuff pgcc did someone re-did and put in gcc. Not all of it. And gcc now does stuff I don't think pgcc does.
I don't know which is faster at this point, but I could believe either. After all, egcs got a whole lot of MI speedups that pgcc hasn't.
Not entirely controlled by Sun... (Score:2)
I know, I subscribed to the Java2D mailing list for a long time before they released the API. They asked the list many questions, and listened to everyone. Sometimes they shot ideas down if they thought they just were not a good idea (support for palette cycling was one example here), but at least they listened to complaints about the way the API was going and did make changes based on input. One member even went so far as to code up the whole proposed Java2D API so people could try it out in practice and make comments.
Similarly, programmers have the opportunity to voice their opinion in code which is released under the SCSL (Sun Community Source Licence). Is it really "Open Source"? No. But then again, it does give you the opportunity to fix things and talk to the developers about how things are in Java in the clearest way possible - through code. Think of it as "free speech" where only one person showed up to hear you talk.
It is still true that Java is under Sun's control,
but never before (that I know of) has a company had a product with as much outside review and commentary in development.
Re:Memory allocation (Score:3)
C++'s new tends to be a thin layer over malloc. The STL allocator wasn't designed to be faster than new/malloc, but to deal with segment issues, shared memory issues, and maybe even object permanence.
The allocator in SGI's STL (which gcc currently ships) allocates about 20 objects when you ask it to allocate, and doles them out one by one. That's for "small" objects. Anything over about half a K (or maybe 2K? I forget) goes straight through to new. This might be faster than 20 mallocs. Or not. Some mallocs are pretty good -- better than the STL allocator overhead. It does tend to reduce fragmentation, by a whole lot more than I expected.
If you don't like the default allocator, they are easy to write, and Alloc_malloc is always ready to step in. There is even a #define to ask it to be used everywhere.
Volano tests a chat server (Score:2)
Re:Java vs ? (Score:2)
Java bytecode is completely centered around the Java class system. So it has low-level primitives to handle virtual methods, Java interfaces, exceptions, and all of that. But, as a result, it's really painful to use something that can't be shoehorned into Java's class system (though it can be done with lots of "static" junk).
JPython does a REALLY beautiful job, though. I highly recommend it to anybody interested in a high level language for the JVM (www.jpython.org).
--JRZ
Re:Dynamic Optimization (Score:2)
Re:Some Schools... (Score:2)
My university switched to Java from C++ because of the perception that it is easy to learn, and because they were having problems with people using different compilers. The profs were answering more questions about compiler warts than programming. At least in Java, the warts are relatively standard across various platforms.
It's also easier for the profs to do interesting graphics assignments, small games n' stuff that are more interesting to most people than, say, printing out Fibonacci numbers.
Anyhoo, my point is, the language taught in the first two years doesn't make a lot of difference.
Dana
Optimal FFT was not the point (Score:2)
Btw, when you say "There are some things a C compiler can't optimize because of aliasing", do you literally mean it would be impossible for any (legal/correct) C compiler?
If you have a nice FFT that can easily be 'converted' to another language, I'll be happy to try it out...
Re:Kernel times (Score:2)
Re:Benchmarks (Score:2)
This is why most 'real' benchmarks are actually a suite of tests, and the final result is just an average.
Re:Java execution speed actually good (Score:2)
Well, Cygwin with an insane amount of hand-tweaked, unstable compiler optimizations beat the baseline VC++ most of the time.