C Faces Java In Performance Tests

catseye_95051 writes "An independent programmer in the UK (he does not work for any of the Java VM makers) has done a series of direct C vs. Java speed comparisons. The results are likely to surprise some of the Java skeptics out there." Author Chris Rijk admits, "This article is not supposed to be an attempt to fully quantify the speed difference between Java and C - it's too simple and incomplete for that," but the results are nonetheless food for thought.
  • I read about FOURTH in an assembly book once, mentioned in passing only as a language that had about the same appeal/hype as java.

    What exactly was it? My only guess is that it was supposed to be a Fourth generation language...
  • I'm not sure about the set of optimizations he uses for his "max-C" setting, which is supposedly GCC with all the optimization options enabled. For example, he uses -funroll-all-loops, but the info for GCC says:

    `-funroll-all-loops' Perform the optimization of loop unrolling. This is done for all loops and usually makes programs run more slowly.

    Likewise, he doesn't use the -fprofile-arcs and -fbranch-probabilities options, which I'd imagine would speed up some of the code quite a bit.

  • The freshmen courses are all C++, but the upperclass courses are taught mostly in Java. Though it might seem the exact opposite way to do things, I think it's a good idea to get people to wrap their heads around the concept of pointers *before* eliminating the need to worry about them. The upperclass courses focus less on the semantics of the language and more on proper coding style, team coding, and good OO practice -- the *important* stuff, IMO.

    For team coding, Java just makes things easier. I can take home my piece of code and work on it on my Linux box; another person can test the code in Windows or Solaris. The final project can be run on anything. It greatly simplifies things, since you don't have to worry about platform or implementation idiosyncrasies.

    Also, I think it's _REALLY_ not important what language you learn, but the overall programming concepts you retain. Once I knew C++, it was simple to learn Java, Perl, etc. Even Lisp was easy to pick up, once I understood recursion. Learning every bit of a language seems a *tremendous* waste of time to me, especially if it's at the expense of learning how to code properly.

    - A.P.
    --


    "One World, one Web, one Program" - Microsoft promotional ad

  • by Anonymous Coward on Saturday June 03, 2000 @11:10AM (#1028116)
    As someone thinking of picking up Java and adding it to my set of job skills, I must say that this discussion has been very valuable for me. I think it's fantastic that many posters here seem to loathe Java. As experience has shown me, if someone does the opposite of whatever /. recommends, they'll tend to be quite successful -- where success is defined as actually making a living, rather than shaving two microseconds off an executable's run time.

    If the next generation of programmers are as inflexible and intolerant as the /. bunch, I'm going to be employed for a long, long time.
  • Am I the only one who thinks that Sun overdid it with all the pluggable look-and-feel stuff? It's definitely very cool to be able to click a button and have your application switch from Metal to Motif to Win32, but this comes at an enormous complexity (and presumably speed) penalty. Try running a simple Swing program through an analysis tool like OptimizeIt and you'll be horrified at how many classes need to get loaded for even the simplest app. I kind of wish they'd done a straightforward implementation of Metal (which looks great, in my opinion).
  • Any language in which a programmer has to manage the memory rather than a garbage collector is doomed to have memory leaks, _no matter how good the programmer thinks he is_.

    Nobody can catch everything, and to assume a "good" programmer won't write code which leaks memory is very naive. Why do you think people are trying to retrofit C and C++ with rudimentary GCs now?
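
    Even careful code gets this wrong; a minimal sketch of the classic slip (hypothetical code, not from anyone's real project):

    #include <stdlib.h>

    int process(void)
    {
        char *a = malloc(100);
        char *b = malloc(100);
        if (a == NULL || b == NULL)
            return -1;   /* oops: whichever buffer *was* allocated leaks here */
        /* ... use a and b ... */
        free(a);
        free(b);
        return 0;
    }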

    - A.P.
    --


    "One World, one Web, one Program" - Microsoft promotional ad

  • A university's job is not to teach students how to fix memory leaks (an activity no newbie C programmer can avoid) but to teach them the basic concepts of programming.

    The fact that "leaks" of objects, "forgotten" by a sloppy programmer yet still referenced by the program, are harder to detect does not make them go away.

  • by Anonymous Coward
    One more comment on aliasing: if you tell MSC that aliasing isn't going to happen (/Oa), the speed of your FFT example improves by about 15%. Since Java does not have any usable pointers, it can always assume no aliasing. With any modern C/C++ compiler, on all existing x86 CPUs, well-written FP-heavy code will usually be bound only by FPU performance (the non-FP part is so much faster on x86...). While it is certainly possible to replicate the results in other languages/compilers, it should be practically impossible to beat them by any decent margin. The huge deviation in your FFT results shows that something is badly wrong with the code. Anyhow, an interesting article.
  • by jmv ( 93421 ) on Saturday June 03, 2000 @07:40AM (#1028121) Homepage
    There are some constructs in C that are almost impossible to optimize. Pointer arithmetic is an example. When the compiler cannot tell if two pointers point at the same place, it has to be very careful during optimization. Also, here's an example of what I mean by loop unrolling having an effect: consider these two dot product implementations:

    /* This one is not optimized and looks like the one for the FFT */
    double prod(double *x, double *y, int len)
    {
        double sum = 0;
        int i;
        for (i = 0; i < len; i++)
            sum += x[i] * y[i];
        return sum;
    }

    /* This function is an optimized version of the previous one */
    double prod(double *x, double *y, int len)
    {
        double sum1 = 0, sum2 = 0, sum3 = 0, sum4 = 0;
        double *end = x + len;
        /* this loop is unrolled by 4 */
        while (x < end - 3)
        {
            sum1 += *x++ * *y++;
            sum2 += *x++ * *y++;
            sum3 += *x++ * *y++;
            sum4 += *x++ * *y++;
        }
        /* this loop computes the remaining (non-multiple-of-four) elements */
        while (x < end)
            sum1 += *x++ * *y++;
        return sum1 + sum2 + sum3 + sum4;
    }

    I have recently used this optimization in my code and found a performance increase of about a factor of 3 (on an Athlon 500). The first version of the function has three problems:
    1) The loop overhead is very expensive compared to the two float operations inside it.
    2) The indexing is often more expensive than a simple pointer increment (though not always).
    3) This one is the most important. In the first example, each sum (+=) requires the result of the previous sum. The problem is that this stalls the FP ADD pipeline: the time taken for each addition is no longer one cycle, but the length of the pipeline. In the second example, the use of multiple partial sums prevents that.

    Also, as I said before, a good FFT coded in C can be as fast as the processor allows. The Java code can be as fast if it is good, but not twice as fast, as in the benchmarks. Because the FFT code wasn't optimized, the performance difference likely came from the loop overhead.

  • maybe you'd better tell these guys... :~) (current projects I found at SourceForge [sourceforge.net])

    • bigFORTH+MINOS [sourceforge.net]
      bigFORTH is a native code Forth for x86 processors. MINOS is a portable GUI library for X11 and Win32, written in object oriented Forth, and includes the form editor Theseus

    • Portable Forth Environment [sourceforge.net]
      the Portable Forth Environment implements the ANSI Forth Standard; it is fully written in C, the newer version has a module concept, and it is fully multithreaded. Autoconf is used. Tested in embedded environments.

    • LYFO embedded forth [sourceforge.net]
    • lsFORTH [sourceforge.net]

      Native X86 Linux Forth compiler. Will be ported to other processors eventually

    • PPCFORTH for Embedded PowerPC [sourceforge.net]
      FORTH for embedded PPC (IBM40x supported). Currently written in assembler (ASL dialect). Used for monitor or loading other programs via S-Records



  • >java's floating point arithmetic is blighted by five gratuitous mistakes

    >My question is, is this analysis and the observations still valid now?

    The current version of the JDK (1.3) provides two math libraries: one "fast" and one "accurate."

    Jim
  • This is off topic, but I think the people interested in this story will find this information useful.

    Two pissy things to avoid if you want C++ code to run really fast:

    1. Multiple inheritance is great for design and code reuse, but very bad for speed. Calling a virtual function can have 5-10 times as much overhead as a conventional call. This is because your program has to look up the function in a "v-table" to find out exactly which one is actually being used.
    2. Mixed-mode arithmetic. I don't just mean floating point vs. integer, although that is very bad too. Don't mix integer sizes in an expression, e.g. unsigned int and unsigned long. Unless space efficiency is a very high priority, the time to convert should be avoided.
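
    For instance, a contrived sketch of what "mixing" means (the numbers and names are made up):

    unsigned long mixed_mode(void)
    {
        unsigned int  small = 42;
        unsigned long big = 100000UL;
        double scale = 1.5;

        /* 'small' must be converted to unsigned long before the multiply... */
        unsigned long a = small * big;

        /* ...and here the integer is converted to double, multiplied in FP,
           and the result converted back to an integer. */
        unsigned long b = (unsigned long)(small * scale) + big;

        return a + b;
    }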




  • results suddenly change at about the point where the array size starts to exceed the Athlon's 64KB level-1 data cache

    AFAIK, the double values (and probably the float) aren't cached in the L1 on the Athlon.

    Also, I think I've found another problem with your FFT: the use of the "sin" function instead of tables. This function is very slow, and its performance depends a lot on the math library implementation. It is possible that the JVM implementation of the sine was faster.
  • 'Calling a virtual function can have 5-10 times as much overhead'

    It can be much, much higher than that! I set my compiler to -O11 (I knew it was a good compiler cos the optimisation goes up to 11!) and my code was compiled to just 10 bytes!

    It ran in only 10 ms!

    I then changed one line to 'virtual' and it came out to 150K, I started running it yesterday and it still hasn't finished! That just shows how bad C++ is.

    I hear that Java only has virtual functions. That must be slow as a rock! Everyone knows that C is the best language in the world, I can't see why anyone would use this object rubbish. I mean that must be the reason I hear all these software projects keep failing, they're all using Java.

    And as for security, Java's been out for years and we still get viruses! What gives I ask? When are people going to learn?

  • Any language in which a programmer has to manage the memory rather than a garbage collector is doomed to have memory leaks, _no matter how good the programmer thinks he is_.

    And no language -- with or without GC -- can prevent leaks if the programmer is not careful about the lifespan of objects. The fact that physical deallocation is done by the GC does not mean that the programmer automagically becomes capable of removing all references to the object at the right time (or ever).

    Nobody can catch everything, and to assume a "good" programmer won't write code which leaks memory is very naive.

    I can -- as long as I am working on my own program.

    Why do you think people are trying to retrofit C and C++ with rudimentary GCs now?

    They don't. All garbage collection mechanisms that I have seen for C and C++ are libraries -- they don't affect the language design.

  • by DamnYankee ( 18417 ) on Saturday June 03, 2000 @11:55AM (#1028132) Homepage

    Java as a language is fine. But Java on a VM just doesn't cut it for real-world apps.

    I am currently developing a product series of WAP servers and gateways. A few competitors have chosen Java, and they can support a maximum of 500 simultaneous users, and that with at least 256 MB RAM and 2 to 4 Pentium III 600 MHz+ processors. The C versions of similar products have no problem supporting several thousand users on a single-processor 64 MB machine. Java just ain't in the ballpark. (I should also point out that the Java VMs are not very stable and crash frequently.)

    Java's specs as a language are really nice. Why don't we leave this VM stuff to the specialty apps that need it and start using Java as a COMPILED language?

  • I think it's unfair to compare the two languages using test programs that are algorithmically identical between them. Any language can translate simple math and array functions into nearly equivalent assembly. The differences between the languages come from the types of constructs that can be produced.

    While I am only vaguely familiar with Java, I am a seasoned C++ programmer. My understanding is that Java doesn't support pointers. A well designed algorithm will often do things such as walk a pointer through an array instead of indexing off of it. I can't speak for everybody, but most veteran C/C++ programmers I know use pointers extensively to create optimizations in their algorithms that simply can't be simulated with references or other constructs. It's the ability to design these sorts of things that makes C++ a more powerful language.
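
    For example, the textbook form of the idiom (my sketch, not code from the article):

    /* Indexed: each access recomputes base + i * sizeof(double). */
    double sum_indexed(const double *a, int len)
    {
        double sum = 0;
        int i;
        for (i = 0; i < len; i++)
            sum += a[i];
        return sum;
    }

    /* Pointer walk: one increment per element. */
    double sum_walked(const double *a, int len)
    {
        double sum = 0;
        const double *end = a + len;
        while (a < end)
            sum += *a++;
        return sum;
    }

    (Whether the pointer version is actually faster depends on the compiler; many will generate the same code for both.)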

    I look at Java as little more than a C++ that has been dumbified so a Visual Basic programmer can use it. All that said, I have to agree that the vast majority of program time is spent waiting for external dependencies like SQL servers, hard drives, system calls, etc... As such, Java is likely to produce programs just as acceptable as C++'s the vast majority of the time.
  • by Waltzing Matilda ( 21248 ) on Saturday June 03, 2000 @03:29AM (#1028142)

    Most Java VMs are quite good at executing Java code, so the results are not all that surprising.

    Java's biggest problem is in memory requirements. Metadata for classes is frequently much larger in size than both bytecodes and allocated objects. This needs to improve if Java is to become a more mainstream language.

  • by GodSpiral ( 167039 ) on Saturday June 03, 2000 @08:23AM (#1028143)
    There are a few things they've done that make Java look better than it does in real-world situations.

    1. Medium-to-big Java apps need 128MB-256MB of system RAM to be usable. HotSpot increases the memory footprint (it uses memory for the compiler, and both bytecode and native code are kept in memory), but does not enhance every type of app. HotSpot looks great on many benchmarks (small loop-intensive apps tested on systems with plenty of memory), but for many apps it slows things down.

    2. By pre-running the code for 1 second to determine how many iterations to use for 10 seconds, you're making sure that HotSpot and the JITs fully kick in, without counting any of their execution overhead.

    3. Contrary to what you might expect, there's no UI in the game of Life benchmark.

    4. The benchmarks are set up to favour run-time optimizations by having function parameters that are constant for long periods of time (e.g. matrix size).

    Java is just fine when you have tons of memory, but if your users have 64MB or less, go with VB or C++.

    The benchmarks in the article have completely avoided any JVM/HotSpot initialization overhead, as well as sticking to things JIT compilers are good at.
  • . . .it's too simple and incomplete for that.

    So it's a Mindcraft test?
    ;)
    ___

  • It would have been nice if he'd also tested Java compiled to native code (such as with gcj [cygnus.com]). He says the point of his tests is to measure dynamic vs. static compilation, so his experiments would be better if that were the only variable. It would also help squash the myth that coding in Java requires using a VM at runtime.

  • by ch-chuck ( 9622 ) on Saturday June 03, 2000 @03:37AM (#1028147) Homepage
    Patriot [ptsc.com] Sci. has a Java processor - actually it was originally a FORTH-in-silicon processor, but was 'retargeted' since FORTH bit the big one and became as relevant as Latin. I'd like to try one out, since the RTX2010 [intersil.com] only comes in (expensive) radiation-hardened form.
  • IF Program equals
    OS or Driver use C
    ELSE Java does fine!
  • Benchmarks are HotSpot-friendly. The same functions are called again and again.

    The Life game is the only test that exercises the garbage collector a little, but here C doesn't look so bad.

    The Fibonacci test is not so important; it proves only that HotSpot and the IBM JVM do function calls faster on the Athlon. It seems that neither C compiler optimizes well here for the Athlon. Pipeline stalls might be the cause.

    The FFT C code uses calloc() to create the FFT matrices. malloc() would be sufficient. For arrays of 2^16 doubles, the clearing of half a megabyte takes some time. A for-loop is used for copying data instead of memcpy(). The Java code uses System.arraycopy() :-).
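
    Roughly, the difference (my sketch of the pattern, not the article's code; error checks omitted):

    #include <stdlib.h>
    #include <string.h>

    #define N 65536  /* 2^16 doubles, the article's largest size */

    /* What the benchmark reportedly does: calloc() zeroes half a megabyte
       that the element-by-element loop then immediately overwrites. */
    double *copy_slow(const double *src)
    {
        double *dst = calloc(N, sizeof(double));
        int i;
        for (i = 0; i < N; i++)
            dst[i] = src[i];
        return dst;
    }

    /* The cheaper equivalent: plain malloc() plus one block copy. */
    double *copy_fast(const double *src)
    {
        double *dst = malloc(N * sizeof(double));
        memcpy(dst, src, N * sizeof(double));
        return dst;
    }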

    The result is predictable.

    Running current C compilers on an Athlon is also not fair, because neither compiler will produce good code for it. Visual C++ didn't even have the Pentium-specific flags.

    I think the compiler-based JVMs have come a long way. I wish some of those development resources had gone into the C++ compilers of both companies.

    My conclusion: the Java performance penalty is shrinking on some platforms. Linux on x86 is one of them -- thank IBM. I doubt that we will see a free JVM with that performance anytime soon.
  • by costas ( 38724 ) on Saturday June 03, 2000 @08:29AM (#1028155) Homepage
    Yeah, but this test was comparing apples to oranges:
    * C was given the choice of two not-very-good compilers: GCC and MSVC. From experience, I have seen the same code (especially math- or array-intensive code) execute an order of magnitude faster when compiled with Kai CC or Portland Group CC. OTOH, Java was using top-of-the-line compilers and JVMs (e.g. MS's JVM is well known to be much faster than even Sun's on Solaris...)
    * Java had the advantage of run-time optimization. If you go to Ars Technica and read up on HP's Dynamo, you'll see how run-time optimization *alone* can give you a 15-20% improvement in speed with *compiled* binaries. Granted, run-time optimization is 'in the box' for the Java platform while, besides Dynamo, C/C++ are stuck without it.

    Even if you dismiss the run-time optimization advantage as an integral part of the test, the choice of compilers *did* have a speed effect...

    At any rate, I *am* a Java fan --I am just curious to see some true, fair benchmarks.

    engineers never lie; we just approximate the truth.
  • by Broccolist ( 52333 ) on Saturday June 03, 2000 @08:31AM (#1028159)
    Maybe Java is really popular right now, but personally, I don't like it.

    First, I wouldn't say it has everything one could want in an OOP language. The language feels like watered-down C++: templates (and STL), objects on the stack, const, references, and true multiple inheritance, are all missing from Java, but clearly would be useful. Yes, the absence of these features makes life easier for beginners, but it's painful to work around Java's deficiencies when you know how to use such features.

    Unfortunately, Java isn't really multiplatform, either, unlike what Sun's marketing team would have you believe. Java is multiplatform in the same way that my Super Nintendo ROM is: I can play it on Windows, Linux, Solaris, etc. I need an emulator, of course. Similarly, I need a "Java Virtual Machine" to run my Java bytecode: it's really just an emulator for a platform that doesn't exist. And if the emulator isn't ported to your favorite platform, well, tough.

    But the main thing I don't like about Java is how gratuitously integrated it is. Why should the Java standard library (which is really a platform in itself) be inextricably bound with the Java language? It could easily have been made into a C++ library, since C++ has direct support for all the language features of Java. Then, they could have written the Java language/bytecode interpreter separately, and made it an option to use the Java platform. This would clearly be better for everyone (except Sun): I could use the well-designed Java APIs in my C++ project with no loss of speed.

    The same thing goes for much of Java. Why does Javadoc, a program that generates documentation from comments in your code, have to be integrated with the rest of Java? It could also be used to document C/C++ code, with minor modifications.

    IOW, Sun is trying to lock you into their platform in the same way that MS is with their strange APIs; except that Sun's method is much more effective. I am sticking with C++.

  • would love to get my hands on Smart Firmware [codegen.com] - imagine switching on your PC and having it boot FORTH from ROM, a la Sun workstations [ultralinux.org]. Sweet. Demo for Linux available.
  • Really, you use an old Mac to do website stuff? Incredible. I just buy new hardware. I am performance-obsessed too, but I wasn't talking about a pixel-level editing program. Neither was I talking about operating-system-level stuff like GNOME. (Whether Linux people like it or not, for all intents and purposes, the DE is part of the OS.) I was talking about the 3D renderers, the office apps, the web browsers, the photo editors, and the video editors. The meaty stuff that people use every day.

    Sure, there are those who have powerful hardware but use older programs for the sheer speed, but unhappily, they are a rare case. My point was that developers know what kind of CPUs their stuff will run on (at least in the mainstream; I don't know about UNIX) because people tend to run modern programs on new hardware. I am sure that the developer of that plain X app had no idea how powerful a PIII would be or how to optimize for it. But that's okay, because 99% of the time, nobody will use that old X app on a PIII.

    The point is that the compiler doesn't really have to worry about optimizing for future architectures (in the same instruction set) because the program will rarely be used on those architectures. The post I was responding to thought that dynamic compilation would win out because, when the K8 or K9 comes out, software written for the older CPUs would be optimized better for these new CPUs. A static compiler can't do this, because it can only optimize for CPUs that are known at the time. I was saying that it doesn't matter, because people probably won't be using that app when the K8 or K9 comes out. Even then, the software won't need optimization, because it will have been written to run well on an older CPU.
  • by istartedi ( 132515 ) on Saturday June 03, 2000 @01:21PM (#1028177) Journal

    ...never focused on the fact that it is slow. It was always the fact that they had to go and invent another language that isn't any better than C or C++. I'm all in favor of cross-platform development, but forcing us all to maintain yet another codebase, in yet another language is just a royal pain.

    If Sun had produced a platform-independent C/C++ environment... wow... just imagine. We may never know how good it could have been.

  • First some data on trends in demand for language skills from Randall J. Burns (www.outlander.com) extracted from www.dice.com:

    As of 8/14/99 the ratio of Javascript demand to Java demand was .304 and as of 4/6/00 it was .331

    Since the dynamic optimization of HotSpot is actually in the lineage of dynamically typed languages (Smalltalk, CLOS, Self, etc.), it is poetic justice that the most widely deployed programming language, JavaScript, is not only of that lineage, but is overtaking Java -- the language that made dynamic optimization fashionable despite its less-dynamic heritage of Object Pascal and Objective-C.

  • Actually I always thought that while Java was OK for network client programming, it sucked for server programming due to the mentioned lack of non-blocking IO, as well as the lack of true destructors (finalize doesn't) which makes resource management impossible to do in an OO fashion.

    C doesn't have any particularly aesthetically pleasing I/O libraries for this, primarily because by the time network programming became so ubiquitous that better frameworks were needed, much new development had moved to C++. For an example of an extremely well-designed C++ networking framework, see the ACE project [wustl.edu], which supports many different single- and multi-threaded programming models in a portable fashion while still allowing you to use high-performance features (such as asynchronous I/O) that are not supported on all platforms, if you want to.

  • You must not write large programs.

    They are very large -- just modularized well.

    Does it really matter if it's a change to the language spec or just the simple addition of a new "malloc" procedure? It's still retrofitting.

    Of course it does. The beauty of C (and to some extent C++) is that the language design is not tied to some [crackpot] idea (or fad) of how resources (of whatever kind) must be managed; libraries can implement whatever they please, as long as the language provides some basic interface to allocators and support structures (as in C), or that same interface plus the idea of objects and methods (as in C++).

  • Note that C now has a "restrict" type qualifier, specific to pointers, which addresses the very optimisation problem you describe. If a pointer is restrict-qualified, it is a hint to the compiler that the objects pointed to by that pointer cannot and will not be referenced by any other pointer.
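
    For example (a sketch, assuming a C99-capable compiler):

    /* Without 'restrict', the compiler must assume out[] may overlap a[] or
       b[] and reload after every store; with it, it is free to unroll and
       reorder the loop aggressively. */
    void vadd(double * restrict out, const double * restrict a,
              const double * restrict b, int len)
    {
        int i;
        for (i = 0; i < len; i++)
            out[i] = a[i] + b[i];
    }
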
  • by 575 ( 195442 )
    Test C and Java?
    They fit not in the same mold
    Apples, Oranges
  • I think (hope!) some of the newer JVMs are better, but no, early JVMs used mark-and-sweep. This is one of my main complaints about Java -- the hype has always far exceeded the technical capabilities. Even now, I think Java's (or Java implementations') memory management is lagging.

    There is a lot of evidence to show that GC would outperform manual memory management, especially in firm-realtime applications such as multimedia (hard-realtime systems generally do no allocation at runtime). Basically, GC allows collecting memory to be deferred out of a critical loop, or to be scheduled separately from critical processes, while free() has to run immediately. Also, malloc()/free() need to do a lot more bookkeeping to manage the heap, reduce fragmentation, and so forth. GC can avoid some of this, and amortize the rest over multiple allocations.

    Of course, nothing is going to beat the performance of a static/automatic array in C w/o bounds-checking. Java really has no way to support such a beast (even *with* bounds checking). This is one of many examples of Java "generalizing" to the extreme, which makes a lot of modern static (compile-time) optimizations impossible. They sort of make up for it with dynamic optimization, but as the article about Dynamo on Ars Technica indicates, the two tend to complement each other, so it would be better yet to have a language like C or C++ plus run-time optimization a la Dynamo.
  • Actually, I believe Mandrake no longer uses pgcc because of the instability (someone correct me if I'm wrong). If pgcc barfs on you during compile, that's one thing. If it causes wrong answers or segfaults during program runtime, that's completely another (in response to another poster).

    Your comments about gcc, though, are pretty much on the mark, although I was under the impression that on x86 gcc was a decent optimizer.
  • by sql*kitten ( 1359 ) on Saturday June 03, 2000 @03:39AM (#1028206)
    One area in which C does not offer significant benefits over Java is in the area of network server programming, where the code spends most of its time executing system calls, rather than processing logic in userland.

    The results of these numeric tests surprised me, but I'd like to have seen Watcom/Borland C compilers used, as both have a reputation for superior numeric code generation to Microsoft's Visual C++ product and GCC.

    • Java requires a JVM interpreter, C does not require anything of the sort.
    • C is compiled to the target CPU's own machine code. Java needs an extra translation layer at run time.
    • The C integer types change with the CPU, so the "int" type is always the fastest integer about. Java integer types are fixed. (If you want integers of a certain size in C, use the new C99 types; see the sketch below.)
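
    A quick sketch of those C99 types (assuming a compiler that ships <stdint.h>):

    #include <stdint.h>

    int          native;   /* whatever width the CPU handles fastest */
    int32_t      exact;    /* exactly 32 bits, or a compile error */
    int_fast16_t quick;    /* at least 16 bits, whichever width is fastest */
    uint8_t      octet;    /* exactly 8 bits, unsigned */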

    I'm sceptical. I would doubt any review that puts Java even close to C in terms of raw performance of code.

    Sure...

    • Java is (supposedly) easier.
    • Java code runs anywhere a JVM has been written. (BTW, my old 6502 driven 32k BBC micro had a C compiler. Could Java be run on this?)
    • The Java standard library includes a GUI.
    • Java is OO, but C can do OO with structs, pointers to functions and a bit of help from the pre-processor (see the sketch after this list).
    • Java is sexy.
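
    For the curious, a minimal sketch of that structs-and-function-pointers OO (my example, nothing standard):

    #include <stdio.h>

    /* A "class" with one virtual method: a struct carrying a function pointer. */
    struct shape {
        double (*area)(const struct shape *self);
    };

    struct circle {
        struct shape base;   /* "inherits" by embedding the base struct first */
        double r;
    };

    static double circle_area(const struct shape *self)
    {
        const struct circle *c = (const struct circle *)self;
        return 3.14159 * c->r * c->r;
    }

    int main(void)
    {
        struct circle c = { { circle_area }, 2.0 };
        /* Dynamic dispatch, C style -- the same indirect call a C++ vtable makes. */
        printf("area = %f\n", c.base.area(&c.base));
        return 0;
    }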

    Bill, embedded software developer.

  • For an example of loop unrolling in C++ code, see the Matrix Template Library. Pretty cool.
  • by pen ( 7191 ) on Saturday June 03, 2000 @03:48AM (#1028211)
    Speed or no speed, Java is becoming one of those languages... you know what I'm talking about. One of the languages, along with C, C++, etc. Why?

    One of the easiest languages to learn (provided that you understand OOP). I tried C++, and I failed (for now). I tried Java, and it is very easy for me. And for the ease of learning, it gives me immense power. Everything anyone could ever want in a true OOP language.

    It is also multiplatform... we all know about that.

    The only language I can think of that comes close is VB. VB is Windows-only (well, you have VBA in Office 98 on MacOS, OK), and it doesn't give you much OOP (inheritance, etc.).

    Finally, there are a lot of people out there that will learn a language simply because it's in demand, so that they can get a lot of money paid for writing things in it, and Java wins here as well. Just go do a search on Dice.

    The only thing that bothers me is that Java is now definitely being controlled by a corporation. I'm pretty glad it's not Microsoft, but I'd still rather have it controlled by an unbiased group. OTOH, without Sun's promotion and development, who knows if Java would ever rise to where it is today.

    Let's just hope that the damn applets will fade out... I just hate them! Please correct me anywhere you think I'm wrong - that's what the Reply link is for.

    --

  • by pieterh ( 196118 ) on Saturday June 03, 2000 @03:49AM (#1028215) Homepage
    In my experience, aside from gross overheads such as those imposed by interpreted languages, the actual performance of server-side business applications depends primarily on database access. Put simply: the cost of database access is so high that it wipes out subtle shades of difference between language A and language B. The key to building fast applications is the choice of good tools, and an understanding of the impact of SQL clauses like ORDER BY. The relative speed of compiled Java over C would be a very bad basis for choosing the language for an important project.
  • Agreed. Loading a little tiny proggie or applet will often eat 8MB of RAM. However, don't forget that the entire JVM is loaded.

    In a few years, the JVM and Java programs will probably become so commonplace, that it may be allowed to just lurk in the RAM. Sort of like <gasp> the way IE works in Windows. (Why did you think it opens so quickly?)

    --

  • The only problem with Java is that the poster is right about the write once, rewrite anywhere. Java under Linux != Java under MacOS != Java under Windows != Java under Solaris. In fact, on the Mac, Java can take three forms - Apple JVM, Microsoft Explorer JVM, or Netscape JVM - and they are not the same.

    To say nothing of JVM versions within a given platform...

    Sun tried to make a great technology, where you could have true cross-platform compatibility, and where it was easy to learn to code for it because of its similarity to other languages (most like C++, IMHO). It is not working for the same reason networking and the www are not working: vendor-introduced incompatibilities.

    The only reason these technologies are thought viable at all is mainly the dominance of certain technologies (e.g. 90% of PCs are Wintel, so if you write for the MS JVM, you have 90% of the audience).

  • by Krakus Irus ( 149295 ) on Saturday June 03, 2000 @03:59AM (#1028225)
    In order to help you choose the best VM, you can check the Volano Benchmark [volano.com].
    You will see there that the best VM is Tower's TowerJ 3.1.4 for Linux!
    Second point: I never doubted that Java on the server is a good solution now. For me, the only trouble with Java now is its memory gluttony.
    If some of you want to test JSP/servlets, here are some good open-source products: java.apache.org (JSP, servlets), www.enhydra.org (JSP, servlets, EJB)
  • by anonymous moderator ( 30165 ) on Saturday June 03, 2000 @04:01AM (#1028226) Homepage
    One thing I have found is that if you port a mathematical program from C to Java, it may actually speed up!

    I remember writing a small test program in C and identically in Java (a couple of syntax changes only). Using IBM's JDK with -O made it faster than pgcc with all the optimization flags we could think of!

    Then I rewrote the program in an OO way in Java, and of course it was slow :)... but it does show that Java isn't necessarily slower than C for some tasks.
  • No wonder TowerJ came out fastest: it is a static compiler!

  • As everyone who has done any real amount of Java coding knows, and as the article also points out, there are still speed issues when it comes to the widget sets and other basic libraries. The Swing library is so large that it takes several seconds to start a simple notepad-style app.

    Hopefully we will see an effort to optimize the Java libraries soon, for startup time as well as for speed, or else Java will become a purely server-side language.

  • by Arg! ( 181859 ) on Saturday June 03, 2000 @04:13AM (#1028231)
    Indeed! Don't waste money on Oracle unless you're going to get a good DBA as well!

    About Java, I find that it's very modern in relation to C/C++... which is actually very refreshing. Object-oriented programming with Java seems very natural, which is in stark contrast to the against-the-grain feel of C++. The use of Unicode over ASCII is, I think, another more "modern" advantage of Java (even though it may imply some performance loss due to the increased size of character and string datatypes).

    The "write once, run anywhere" properties of Java certainly get the most press, but I think a lot of it is actually downplayed. Sure, Java is not the first language to be cross-platform without recompilation, but I think it's probably the most successful. Being able to take a servlet from a Solaris system and run it flawlessly on Windows NT is pretty impressive, I must admit. I think this is a major reason for Java's success on the server. While the public may be content with platform-bound applications, developers have grown tired of endlessly porting code. The web is a cross-platform, "write once, run anywhere" application, so why can't the applications that power the web do the same?

    Seriously, though, if I sound like a University professor endlessly extolling the virtues of Java and OOP, please tell me to stop! ;)
  • by Anonymous Coward
    "First, I wouldn't say it has everything one could want in an OOP language. The language feels like watered-down C++: templates (and STL), objects on the stack, const, references, and true multiple inheritance are all missing from Java, but clearly would be useful. Yes, the absence of these features makes life easier for beginners, but it's painful to work around Java's deficiencies when you know how to use such features."

    Templates have no use other than creating generic objects whose parameters include primitives. The wrappers for the primitives (int, float, etc.) in Java handle this fairly smoothly. Everything else you can do with templates you can do using object references. I have NEVER seen a good use of multiple inheritance except for pure virtual classes, which Java has in interfaces. OTOH, I have seen a lot of absolutely disastrous uses of MI (Microsoft COM). As for const, ever heard of the final keyword?

    "Unfortunately, Java isn't really multiplatform, either, unlike what Sun's marketing team would have you believe. Java is multiplatform in the same way that my Super Nintendo ROM is: I can play it on Windows, Linux, Solaris, etc. I need an emulator, of course. Similarly, I need a "Java Virtual Machine" to run my Java bytecode: it's really just an emulator for a platform that doesn't exist. And if the emulator isn't ported to your favorite platform, well, tough."

    Ah, but the difference is that the Nintendo emulator is not designed to be multiplatform. In many respects, Java is. If you know anything about processor architecture and you look at the JVM spec, you see that they went with the simplest machine design possible so that it could be easily implemented on other platforms. Just because it hasn't been ported to your favorite architecture doesn't mean it isn't multiplatform. The JVM spec is there for anyone who wants to read it and implement it on any platform they like. BTW, when's the last time you saw an N64 emulator for the Mac or Irix?

    "But the main thing I don't like about Java is how gratuitously integrated it is. Why should the Java standard library (which is really a platform in itself) be inextricably bound with the Java language? It could easily have been made into a C++ library, since C++ has direct support for all the language features of Java. Then, they could have written the Java language/bytecode interpreter separately, and made it an option to use the Java platform. This would clearly be better for everyone (except Sun): I could use the well-designed Java APIs in my C++ project with no loss of speed."

    Why is stdio.h so gratuitously integrated with C? Or iostream.h with C++? Because making things standard made it a lot easier for people to learn the language and write code. You don't have to use iostream.h if you don't want to. You don't have to use java.lang if you don't want to, either (actually, you'd need java.lang.Object, but that's about it).
  • My G3 Powerbook does that all the time. It's called Open Firmware. Kinda like a BIOS on steroids.
  • by wowbagger ( 69688 ) on Saturday June 03, 2000 @04:24AM (#1028244) Homepage Journal
    One of the interesting comments in the article was about "gcc's default dynamic memory allocation": technically this isn't gcc's memory allocator, but libc's. I wonder (a) whether the Cygnus malloc/free routines are as good as those under glibc2, and (b) whether there aren't better routines out there.

    It's a bit unfair to blame gcc for poor memory allocation: unlike in Java, memory allocation isn't built into C.

  • You're the second person who has suggested, in a roundabout way, that C++ is slower than C (this is a reply to both posts). There are certainly constructs you can make in C++ that will incur a fair amount of overhead; however, remember that C++ started out as a precompiler (Cfront) that translated it into C.

    You suggest using structures and pointers to functions to accomplish OO techniques in C. I used to do this about 12 years ago, when C++ was not portable/standardized enough to use for the project I was on. But don't fool yourself into thinking that you are getting better performance. The C constructs you describe are largely the same thing that the Cfront compiler produced in the first place.

    The problem nowadays is that most programmers don't understand how the various features of the language they are using ultimately get implemented at the assembly level. If you use the MFC CString type (for example) as proof that C++ is slower than C, you truly are missing the point. In the case of CString, you are sacrificing performance for notational and functional convenience. If you are writing a word processor, that probably isn't the best idea. If you are writing a game that rarely deals in text, it's probably worth the slight speed hit you get for the manageability.

    The bottom line is that C++ will produce code every bit as tight as C's, and it will do multithreading every bit as well as C does. Proper multithreading has a lot more to do with algorithmic design than with the language you are using.

    More often than not, C++ will actually produce faster code than C. The reason is that few programmers will go to the hassle of setting up arrays of pointers to functions to accomplish the polymorphism that a program's design often needs. They resort to switch statements and such as the easy way out.

    If you blindly put 'virtual' in front of every function because the manual suggested it, and then wonder why the implementation of a simple GetValue() function is so slow when you marked it 'inline', is it any wonder? A good knowledge of what is being produced at the assembly level will make you a MUCH better coder and largely, if not completely, eliminate any C++ performance gotchas.
  • The problem with writing servers and gateways in Java is that there is no way to do non-blocking I/O, or select(2), in the Java libraries. Instead, you need (at least) one thread per fd, which is often memory-expensive. VolanoMark shows this off, and the fastest Java VM for Volano (TowerJ) is the one with the most scalable thread implementation. There is a proposal in progress (java.nio) to fix this, but who knows when it will see the light of day.
  • I strongly agree. Comparing C and Java is "apples and oranges". They are two completely different languages, created for two distinctly different purposes. Some people seem to have forgotten this. There are so many factors that this comparison neglects that it's hard to take the results seriously.
  • by Jackrabbit ( 82738 ) on Saturday June 03, 2000 @04:50AM (#1028274) Homepage
    Look, I code Java for a living. I don't want to be a Java-evangelist (Javangelist?), but I've got a few problems with your post.

    As far as getting the latest JDK in anything but Windoze, you can currently get Java2 v1.3 in Windoze, Solaris and Linux (with other ports on the way). The fact that they came out with the Windoze port first should be no real surprise to anyone: most folks are still using Windoze, hence there is more demand for upgrades on this OS than any other.

    I've written Java stand-alone apps that are monumental in size and I've written Java server-based apps. I think that Java's main glory lies in server-side programming for web-enabled applications, but it is no slouch in the large stand-alone application market. You keep hearing people complain that Java eats up so much memory when all you want is a simple Notepad app. You need to understand what Java is doing and learn to work with/around it.

    If you load a large app that utilizes many of the Swing widgets and interfaces, the memory load becomes a bit more understandable. On the large apps that I've written for Java, it has actually performed quite well (sub-second sorting and display on a 10K row table, etc).

    Most of the comments that I see bashing Java are from people that have only taken a cursory pass at the language. If you try to code a Swing interface using the same paradigms as AWT (or C, C++, etc), you'll wind up with a slow monstrosity. If you code Swing the way it was intended to be coded (using the MVC architecture), you'll find that it's a remarkably powerful and full-featured GUI API.

    At any rate...I'll get off my soapbox now. I really don't mean to tout Java as the be-all end-all of programming languages (it's not). But it is one of the better languages out there for the current direction of Internet-enabled programming.
  • by Hard_Code ( 49548 ) on Saturday June 03, 2000 @07:03PM (#1028275)
    "do you even know C++? const in C++ has nothing to do with java's final directive. final is the opposite of virtual in C++."

    The final keyword in Java also serves the purpose of a constant declarator for variables. This /is/ what const is used for in C++, right? You /can/ declare constants in Java. Is there something else special that "const" does which you are saying Java does not support?

    "In java, in addition to Object, you need a classloader to load your code. And you get no strings, because the compiler has specific syntactic sugar for dealing with strings (why oh why did they hack this into the compiler instead of just supporting operator overloading?) I'm sure there are more, but it's been a while since I hacked java at that low a level."

    If you are griping that VMs are too dependent on the Java language itself... well, tough. Sun created the VM spec FOR the Java language... not as some universal VM and bytecode instruction set that anybody could write any arbitrary language on top of (although that is certainly possible). People write Java in Java... they don't write in bytecode, against the VM directly.

    Also, somebody was griping that the standard libraries and default utilities from the Sun JDK were written in Java. Well... DUH. They are Java libraries and tools. They should be written in Java. Java was created for easy cross-platform development... it would be stupid then to write all the libraries and tools in some native language, then have to ALSO port all those on top of writing a VM. With the libraries and tools written in Java itself, all VM writers have to worry about is writing a VM, and presto, the support libraries and tools are all magically available. It would be hypocritical and tragically stupid to write the supporting tools and libraries for platform-neutral Java in some platform-dependent language.
  • Java requires more memory simply because you are loading a Java Virtual Machine.

    • You have to bring in the overhead of another operating system environment. Imagine loading another instance of Linux every time you had to launch a program.
    • When a JVM is launched, the maximum amount of memory it can use is set. Since the JVM uses garbage collection, it will eventually use all of this memory, whether it needs it or not. Most JVMs don't clean up after themselves until they have used up most of their available memory. This way garbage collection is more efficient since it is done less often.
    While Java's program code can be larger, I think it is only a small part of the memory usage most people see. Personally, I want a JVM that will use only as much memory as it needs, plus a "buffer" so it doesn't have to garbage collect every time I free something. It would probably just need to garbage collect every few seconds, whether it needed to or not. It would sacrifice some performance for memory; therefore, it would be best on the client side and useless on the server side. Am I missing an existing one that does this?
  • Catchy subject, no? I read this article [berkeley.edu] about a year ago and found it fascinating. Sorry I can't cut and paste, because it's a PDF document, but it can be summarised as:
    java's floating point arithmetic is blighted by five gratuitous mistakes

    And they give several examples where java will produce incorrect results in numerically intensive applications.

    My question is, is this analysis and the observations still valid now?

  • Java's biggest problem is in memory requirements. Metadata for classes is frequently much larger in size than both bytecodes and allocated objects. This needs to improve if Java is to become a more mainstream language.

    If you are just running a single Java applet, this is a huge overhead. However, once you have a couple of Java processes running at once, it's fairly trivial. 8MB of overhead for 10 processes? That's about 800KB per process - not a huge amount.

    I think the biggest issues so far have just been the usual `chicken & egg' scenario: not enough people use Java for normal apps, so nobody writes Java apps since there's no market.

    I must admit, I'm a Java sceptic myself; to begin with, the language was a real pig. The near-instant load-time compilation of Perl knocks Java out of the window; even C is much faster in terms of compile-run-test cycles. However, no doubt that will change in time.

    The result that really surprised me, though, was Cygwin32 beating MS VC almost across the board - running on Windows! WTF?!

  • by gotan ( 60103 ) on Saturday June 03, 2000 @05:05AM (#1028294) Homepage
    I can't estimate to what extent dynamic optimization, as opposed to good static optimization, accounts for the results in the article.

    Nevertheless I think dynamic optimizations are the thing to come: it costs a lot of man-hours to find ideal optimizations for code (you need to figure out the core routines, think about which optimizations make the most sense for the current architecture, and check those assumptions against reality), and man-hours, in contrast to CPU time, don't get much cheaper. The dynamic optimizer does all that work for you, and even optimizes for different starting conditions/parameters by looking at what is *really* taking time now.

    Look at the success (regarding computing power per buck) of Transmeta's Crusoe. A dynamic optimizer can gather far more hints for optimizations (branch predictions, loop lengths, array sizes, memory lookups) than a static one; in the latter case the programmer has to give all the hints (compile a subroutine with the correct set of optimizations, sort the loops right, sort branches, keep in mind some ranges for parameters and how they affect loop length, for some compilers throw in compiler directives, etc.) and even has to reconsider it all when porting to another architecture.

    So with static optimization it's either optimization limited to what the compiler can see at compile time (except for very basic stuff; every decent compiler will get that matrix multiplication right) or man-hour-intensive and thus costly optimization.
  • Your comment on multi-threading is very true. If you write a visual app with a thread doing the hard work and a thread handling the interface, you're not doing yourself much good. One of the nice things about the way Be sets up their libraries is that everything is multi-threaded, and they usually help you multi-thread things in an intelligent way, so as to actually speed up the code on multiple processors. Silly C programmers.
  • ... I'd have to congratulate whoever came up with the coffee cup to attract all those coders :)
  • Uh, dude? Most companies who want to make a profit on their software don't want to release their code to the public without licensing fees. Open source is not nearly as profitable as closed source. Support and advertisements are piddly shit compared to licensing fees. And to help you out: Java is a good language for portability because there is no porting. If you want it to run quickly on your processor, you just compile the code for your system. C++ and C are portable, but across platforms you need to make library changes and such. Java needs none of that. And of course you can always compile a Java applet as just bytecode, which will be run on a machine's JVM. If we could get some JVM modules in our kernels, things would be awfully nice...
  • by garver ( 30881 ) on Saturday June 03, 2000 @05:18AM (#1028308)

    One area in which C does not offer significant benefits over Java is in the area of network server programming

    I agree, but with one exception: Java does not do non-blocking I/O. Therefore you have to use tons of threads -- at least one, likely two, for each connection. For a server handling thousands of connections, you can see where this gets out of hand. In Linux's case, where all threads are kernel-level threads, performance goes back in the shitter, since the kernel has to make a whole new set of system calls to manage the threads. But of course, you want to use native threads so that you can take advantage of multiple processors.

    If non-blocking I/O were possible, one thread and a huge select() would be all that's needed. Squid is a good example of a server that can handle thousands of connections using one thread. The cost here is complexity, but the reward is performance.
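
    The core of that pattern, sketched in C (assumes 'listen_fd' is already bound and listening; error handling and client bookkeeping mostly omitted):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    void serve(int listen_fd)
    {
        int client[FD_SETSIZE];
        int nclients = 0, maxfd, i;
        char buf[4096];
        fd_set fds;

        for (;;) {
            FD_ZERO(&fds);
            FD_SET(listen_fd, &fds);
            maxfd = listen_fd;
            for (i = 0; i < nclients; i++) {
                FD_SET(client[i], &fds);
                if (client[i] > maxfd)
                    maxfd = client[i];
            }
            /* One blocking call watches every connection at once. */
            if (select(maxfd + 1, &fds, NULL, NULL, NULL) <= 0)
                continue;
            if (FD_ISSET(listen_fd, &fds) && nclients < FD_SETSIZE - 1)
                client[nclients++] = accept(listen_fd, NULL, NULL);
            for (i = 0; i < nclients; i++)
                if (FD_ISSET(client[i], &fds))
                    if (read(client[i], buf, sizeof buf) <= 0)
                        close(client[i]);   /* removing it from client[] omitted */
        }
    }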

    I'm not advocating non-blocking I/O. I think Java's approach makes for much simpler and more stable servers, but JVMs must make threading as lightweight as possible, while still supporting SMP, for performance to compete with C-based servers. I think this means supporting a mix of kernel- and user-level threads.

  • 1. Multiple inheritance is great for design and code reuse, but very bad for speed. Calling a virtual function can have 5-10 times as much overhead as a conventional call. This is because your program has to look up the function in a "v-table" to find out exactly which one is actually being used.

    The v-table has nothing to do with multiple inheritance at all. It only has to do with virtual functions: the v-table tells the C++ program which function to call for the particular type of variable. This is why the "virtual" keyword is there: it cuts into performance.

    On the other hand, multiple inheritance doesn't increase the overhead of function calls for most things -- until you use a virtual base. At that point you'll have another "base pointer" to traverse whenever you want to call methods or access members of the base object. But of course, you can have a virtual base class even if you only use single inheritance; it's the "provision" for multiple inheritance that matters, not the actual use of it. This is just like the virtual keyword.

  • No Java VM garbage collects every time you free something. Where'd you get that idea? GC happens on a thread at regular intervals, or when it's out of memory.
  • Hi,

    Those graphs in the article are extremely deceptive, since the y-axis does not start at zero. If it did, the curves would be much closer together. As it stands, the difference between most curves is less than 30 percent, and that is nothing to get your shorts and knickers all bunched up over.

    Still, it was a good article even though the sample programs are a little trivial.
  • even C is much faster in terms of compile-run-test cycles. However, no doubt that will change in time.

    I don't agree. The reason Java compilation is slow is that the standard Java compiler, javac, is written in Java, so you have the overhead of loading the VM, often for each file. Try a recent version of Jikes, an open-source Java compiler from IBM written in C++. It's at least an order of magnitude faster than javac.

    The result that really surprised me, though, was Cygwin32 beating MS VC almost across the board - running on Windows! WTF?!

    Yeah, this is really hard to believe. Last time I did anything with Cygwin on Windows (compiling Jikes!), the Cygwin binary was much slower than the MSVC one.

  • by jmv ( 93421 ) on Saturday June 03, 2000 @05:40AM (#1028327) Homepage
    I can't comment on the other tests, but I've looked at the FFT code and can say it is very badly optimized. There are some things a C compiler can't optimize because of aliasing, but a Java compiler can. There are ways to code these kinds of things so they can work, for example by doing explicit loop unrolling. In the FFT code he had, the FPU pipeline would always be stalled, because lots of loops have only 4 multiplications and 2 additions.

    What makes me even more suspicious is that I have a K7-500 too, and I have done some tests with a heavily optimized FFT (FFTW) and get a performance of around 400 Mflops. There's just no way a JVM can be 220% faster than that. So my conclusion is: "with poor code and poor optimization, Java can be faster than C".

    I don't want to take a position on the whole Java-vs-C speed debate, but what I'm saying is that at least his FFT test is flawed.
  • by stripes ( 3681 ) on Saturday June 03, 2000 @05:41AM (#1028328) Homepage Journal
    No Java VM garbage collects every time you free something. Where'd you get that idea? GC happens on a thread at regular intervals, or when it's out of memory.

    No Java VM doesn't have any defined tome to collect. Diffrent JVMs do it at diffrent times.

    Most have a low-priority thread that will GC either whenever it is runnable (for GCs that are interruptible), or when it has waited "long enough", or when "enough" memory has been allocated. All JVMs I know of also GC when they are out of memory. Except for the Java subsets that don't GC at all (like SmartCard Java).

    The one time Java doesn't GC as a rule is "when you free something", because it doesn't know when you free anything. There is no free operator. You can assign a pointer NULL, but that won't free the object unless that is the last pointer to it! Running a GC on every NULL (or other!) pointer assignment would be staggeringly expensive. Java could keep a reference-count hint with each object (like Limbo does), but that has its own problems (and advantages). I don't know of any JVM that does.
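
    A minimal sketch of that point, with made-up names - assigning null drops one reference, it doesn't free anything:

        import java.util.ArrayList;
        import java.util.List;

        public class NoFree {
            public static void main(String[] args) {
                List keeper = new ArrayList();
                StringBuffer obj = new StringBuffer("still alive");
                keeper.add(obj);

                obj = null; // drops ONE reference; the object is NOT
                            // freed, because 'keeper' still reaches it
                System.out.println(keeper.get(0)); // prints "still alive"

                keeper.clear(); // now unreachable - reclaimed whenever
                                // the collector next runs, not at any
                                // defined point
            }
        }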

  • "The final keyword in Java also serves the purspose of constant declarator for variables. This /is/ what const is used for in C++, right? You /can/ declare constants in Java. Is there something else special "const" does that you are saying Java does not support? "

    uh yeah... I guess you haven't done much C++. const can also be used to make a constant object: an object where only const methods can be called. It can be quite handy to hand back a reference or pointer to a const object; that way the caller can't change the object (it becomes read-only). This is a missing item in Java. You would need to clone the object, but assuming you are dealing with Objects (like if you are deserializing from a DB and caching them, so you need the caller to get a read-only copy), you can't assume clone is available, since String isn't cloneable...
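
    A minimal sketch of the difference (hypothetical names) - final pins the reference, not the object, which is exactly what a C++ const object would forbid:

        public class FinalIsNotConst {
            public static void main(String[] args) {
                final StringBuffer sb = new StringBuffer("hello");
                sb.append(", world");   // legal: the object stays mutable
                // sb = new StringBuffer(); // illegal: can't rebind a final
                System.out.println(sb); // prints "hello, world"
            }
        }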
  • I'd be curious about the C++ allocation. I think some STL classes have a special allocator that's supposed to be faster than malloc (is new using the same allocator as malloc?).
  • "It would also help squash the myth that coding in Java requires using a VM at runtime"

    It actually does if you intend to use dynamic loading. In any case, Java without all its interesting features is not very interesting to me. It's not just the language that makes it attractive but also that it is a very safe environment to program in: no crashing software, no memory leaks, array bounds checking, etc. In addition, the standard library coming with Java is very good for productivity since it provides a lot of well designed functionality.

    Java as we know it is not really possible without the VM, and as the article points out quite clearly, there's not much to be gained anyway by compiling statically. I haven't seen any benchmarks where a static java compiler is much faster than a regular VM (not enough to justify accepting its limitations).

    I think this article smashes one myth: that dynamic compilers are slower than static compilers. The C compilers were in a good position here: they have been under development for decades, and this is code they should be able to run well. Yet, despite the novelty of JVMs and despite the fact that the JVM has to do bounds checking on arrays, it delivers comparable performance. The only logical conclusion is that if a dynamic compiler for C were developed it might actually be faster than statically compiled C.

    I agree that it would have been nice if he'd thrown in a static Java compiler. My guess would be that there would be no surprises and that its performance would be mediocre compared to the C compilers and the JVMs. Even a static Java compiler still has to do bounds checking, so the static C compilers have an advantage (plus their implementations can be assumed to be a little more mature).
  • Sorry about the mention of C/C++ when I was talking about coding paradigms for GUIs. I'm not a C/C++ coder by trade, so I'll gladly rescind my inclusion of them in my statement.

    You actually make my point quite well without this, however.

    Swing is built from the ground up on the MVC paradigm. If you do not use this paradigm when coding a Swing GUI, you'll effectively "disable" much of the work and code that has gone into Swing. No wonder people think it's a memory hog -- they're turning off half of its features.

    Swing, when coded under an MVC architecture, operates quite smoothly and quickly. For those not "in the know" on MVC, let me give an example from my own experience:

    In the old days of the AWT, if you wanted to display a list of data, the data was an integral part of the list widget. If you wanted to change the data that was displayed in the list, you would perform an operation on the list widget itself. Similarly, if you wanted the same data displayed in a table, you would need to replicate the data and insert it into your table.

    In Swing, you use the model/view/controller architecture (MVC) to separate content from display. Actually that just accounts for model and view. The controller refers to the interaction between the data and the display. This control is typically bundled together with the view (display widget), but can be separated out rather easily.

    The benefit of the MVC paradigm becomes readily apparent when you take one of the examples I used above. If you want to display the same data in a list and in a table, you don't need to replicate the data. You set the model for both of the GUI widgets to the same source. When you want to change this data, you simply change the source (the model) and both of the GUI widgets are automatically updated with the change (all Swing GUI widgets have listeners on their models that watch for changes).

    With a subtle change in your coding model/paradigm, you can achieve _huge_ benefits in both coding time and performance by simply operating within the parameters that Swing was built on. (There's a minimal sketch at the end of this post.)

    In regards to your question of using MVC with C, it would have to depend on the libraries that I was using in C. If the libraries that I was using were built to use MVC, then yes, I would consider this a better idea. If not, then no, I wouldn't use MVC. The use of MVC is really up to the individual's coding style, much like OOP in general. However, if you don't want to code using MVC, then don't choose a tool that is based on it.

    If you don't like OOP, then you really don't want to program in Java. Similarly, if you don't like MVC, then you really don't want to code in Swing.
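
    Here's the minimal sketch promised above (widget layout and names are made up for illustration) - one model, two views, and a single change to the model updates both:

        import javax.swing.*;

        public class SharedModel {
            public static void main(String[] args) {
                DefaultListModel model = new DefaultListModel();
                model.addElement("first");
                model.addElement("second");

                // Two independent views over the same model.
                JList viewA = new JList(model);
                JList viewB = new JList(model);

                JFrame frame = new JFrame("One model, two views");
                frame.getContentPane().add(new JScrollPane(viewA), "West");
                frame.getContentPane().add(new JScrollPane(viewB), "East");
                frame.setSize(300, 150);
                frame.setVisible(true);

                // Change the model once; both lists repaint themselves,
                // because each view listens to its model for changes.
                // (Strictly, Swing updates belong on the event thread.)
                model.addElement("third");
            }
        }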

    --------------------------
  • Yep, a platform-independent C/C++ environment would have been great. Except it wouldn't have been platform-independent. C/C++ depend too much on being close to the hardware. Pointer arithmetic and endian-dependent code come to mind (see the sketch at the end of this post). Use of this type of functionality introduces dependencies on the platform the code is running on.

    Of course, they could have put in some special checks to allow the program to determine what the standards are for the particular platform the program is running on. But C/C++ already does this (#ifdefs) and it is gross. Yes, you can write platform-independent C/C++ code, but you have to be careful to avoid certain language features, and the code tends to be a bit difficult to maintain.

    Instead, Sun chose to start with a C/C++ish base, and take out a lot of the error-prone constructs (including those which cause platform dependencies). I have found that these constructs are rarely needed: occasionally it would be nice to use some of the "missing" features (I'd be especially happy to have enums back), but in nearly all cases, thinking about the problem for an additional 5 minutes will result in a solution that works just as well, and in most cases the code comes out a bit cleaner in the process.

    I'm not saying that Sun did a perfect job coming up with Java. But I will say that for platform independence, Java is much better than C/C++. I'm not just talking about the basic language constructs either: JDBC (for cross-platform, multi-database SQL), Servlets (for cross-platform, multi-web-server CGI-type programs), and other APIs greatly simplify development for server-side applications. I won't speak on the client-side, since I haven't worked as much on that end of things (although the work I have done tells me that I would much rather develop a GUI in Java than some other language, especially if it has to be cross-platform).
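
    The sketch mentioned above: Java's DataOutput is specified to write big-endian values on every platform, so byte-order code needs no #ifdef. (Class name is made up for illustration.)

        import java.io.*;

        public class DefinedOrder {
            public static void main(String[] args) throws IOException {
                ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                DataOutputStream out = new DataOutputStream(bytes);

                // writeInt is defined to emit 4 bytes, high byte first,
                // on every platform - x86, SPARC, whatever.
                out.writeInt(0xCAFEBABE);
                out.close();

                byte[] b = bytes.toByteArray();
                for (int i = 0; i < b.length; i++) {
                    System.out.print(Integer.toHexString(b[i] & 0xff) + " ");
                }
                System.out.println(); // always prints: ca fe ba be
            }
        }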
  • And so you would have a C compiler modify a programmer's intent, and implement a memory management system (either on top of malloc, or kernel interfaces) itself?

    Perhaps it should also be responsible for providing an implementation for various kernel modules, because the underlying OS's may be suboptimal. And while we're at it, let's have it compile in its own kernel, since the underlying OS's might be suboptimal.

    malloc is a C standard library routine.
    gcc is a compiler suite.

    libc shouldn't try to compile my programs, and the C component of gcc shouldn't try to do anything other than compile my code.
  • 1.2 was slow. (From what I hear, this is basically because a load of developers went to Sun and said "we can't live without X, Y, Z, etc." and doing all that slowed things down a lot.) 1.3 managed to recover from this - ie back to 1.1 levels.

    From what I hear, there's a big wodge of effort going into improving graphics performance for 1.4, and it's already well underway. Of course, it'll be at least a year away, *sigh*

    The embedded Java market seems to be taking off though...

  • Certainly the metadata for each class is huge, often bigger than the code itself. Still, both Sun and (believe it or not) Apple are putting in work designed to only load the parts of JAR files that are actually used - I believe (though this is vague and from memory) by memory-mapping the file and providing a good index so classes are loaded on demand. (Some of this may already be in Java 1.3.) The other strategy is to enable the sharing of metadata more effectively, or so I'm told, but I'm not sure what is meant by that... Smart people *are* working on the memory overhead. Sun is claiming up to 40% reduction in memory for the new HotSpot in JDK 1.3, but by all accounts 5%..20% is the common range. Still, a 20% memory saving is very good.
    Lord Pixel - The cat who walks through walls
  • I'll check it out fully sometime, but some things are definitely out of date. (The authors also seem to have a flair for the dramatic.) I haven't done a great deal of FP work in Java, but I'm quite happy with the results.
  • I don't agree. The reason Java compilation is slow is that the standard Java compiler, javac, is written in Java, so you have the overhead of loading the VM, often for each file. Try a recent version of jikes, an Open Source Java compiler written in C++ from IBM. It's at least an order of magnitude faster than javac.

    Of course, the reason that Jikes can be so quick is that it doesn't do dependency checking by default, and so can be a dangerous compiler if used carelessly - try jikes -depend sometime to have it do the same things javac does, and it's no longer an order of magnitude faster.

    It becomes a real issue when you're trying to compile Java code from a makefile - if you don't want to generate dependency files, your only option is to use javac or compile with jikes -depend.
  • by GooberToo ( 74388 ) on Saturday June 03, 2000 @06:10AM (#1028372)
    You do a wonderful job of highlighting why it's hard to get meaningful benchmarks for applications when they're presented using different languages. Some languages work well with certain constructs and, as such, can optimize a specific implementation rather well. Likewise, the same construct may not be optimizable at all in another language.
    A good example of this is Perl programmers moving to Python. If you do things in Python the "Perl way", typically, it performs rather poorly. Likewise, if you do it the Python way, it performs on par with Perl.
    This highlights the fact that no optimizer can replace a good programmer with good design skills. It does, perhaps, suggest that you can get reasonable results with less skilled programmers using Java than you might get with C, or especially C++.
  • by stripes ( 3681 ) on Saturday June 03, 2000 @06:13AM (#1028373) Homepage Journal
    When a JVM is launched, the maximum amount of memory it can use is set. Since the JVM uses garbage collection, it will eventually use all of this memory, whether it needs it or not. Most JVMs don't clean up after themselves until they have used up most of their available memory. This way garbage collection is more efficient since it is done less often.

    Generational garbage collectors try to run a GC sweep after every X K of allocations (where X is about the size of the cache). They are quite a bit faster than most GCs that only collect when memory is critically low (memory accesses in cache are an order of magnitude faster than out-of-cache references). The downside is they GC more frequently, so rather than being "fast" for five minutes and then taking a short pause, they nick you for a few 100ms all the damn time. Of course that is also an upside: they don't feel jerky.

    Generational GC also tends to pack items allocated at the same time (and still live) together, which for many programs increases the locality of reference, and helps a whole hell of a lot if the system is paging.

    I don't know how many JVMs use generational GC, but since it is a 70s Lisp technology, I can't imagine they use something worse. GC hasn't been a red hot research area, but it has had a lot of good work done in the last 20 years (and a lot more before that!)

    I do know many JVMs run GC sweeps periodically if there is idle CPU (get a Java profiler, and check out the activity in the GC thread).

    What the original post was complaining about was the "overhead" with each object. I'm not convinced that exists. I know every object has the equivalent of a C++ vptr (four bytes). Every object type has a virtual function table (possibly shared if it doesn't define new functions, or override any of the parent's), and a small description of the data fields, and the name of the type, and the function names and such.

    That's a lot of crap -- say 400 bytes. Easily more than a simple structure (say a 2D point: two 4-byte ints). If you have one point object, you probably have 100 times as much memory dedicated to describing the point object (in case you need to serialize it and send it via an RPC or something). But if you have 5000 points, the overhead of the metadata is vanishingly low (400 bytes out of 40,400 bytes, 1%). Er, except for the vptrs (4 bytes each on most systems), which bring it up to 20,400 overhead bytes out of 60,400 total bytes, or about 33%.

    So for very simple objects Java does have a noticeable overhead. But for less simple objects the overhead is much smaller. If you compare any Java class with a C++ class that has virtual functions, the per-instance overhead is identical. The per-class overhead is different (with Java almost certainly having more overhead), but the per-class overhead isn't interesting. There are not enough classes in most programs to make a difference (and believe it or not, with templates it's far easier to "accidentally" make 1000s of different classes in C++ than in Java).

    That leaves arrays. C/C++ arrays need not have a length stored with them while Java ones do. Java is behind 4 bytes per array on that score. Relevant if you have lots of small arrays, irrelevant otherwise. Except....

    You know C++ does need to know how many elements are in an array so it can call the destructor for each one (it can omit this, if there is no destructor, but I don't know of compilers that do that). So it doesn't beat Java, it ties.

    ...Oh, and C needs the length for dynamically allocated arrays (via malloc) so it can free them again. But it does win on static arrays.

    Personally, I want a JVM that will only use as much memory as it needs plus a "buffer" so it doesn't have to garbage collect every time I free something. It would probably just need to garbage collect every few seconds whether it needed it or not. It would sacrifice some performance for memory; therefore, it would be best on the client side and useless on the server side. Am I missing an existing one that does this?

    Pretty much all of them if you make a thread that calls java.lang.System.gc() and then sleeps for a few seconds in a loop. Or even most of them (I think) if you merely have some idle CPU.
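
    For example, something like this hypothetical sketch (remember System.gc() is only a hint to the VM, not a command):

        public class PeriodicGC implements Runnable {
            public void run() {
                while (true) {
                    System.gc(); // politely suggest a collection now
                    try {
                        Thread.sleep(5000); // then nap for five seconds
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }

            public static void main(String[] args) {
                Thread t = new Thread(new PeriodicGC());
                t.setPriority(Thread.MIN_PRIORITY);
                t.setDaemon(true); // don't keep the VM alive for this
                t.start();
                // ... the rest of the application runs as normal ...
            }
        }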

  • by jilles ( 20976 ) on Saturday June 03, 2000 @06:16AM (#1028375) Homepage
    Of course, having multiple threads makes Java programs really scalable. On a server you should use servlets. Servlets are actually quite efficient and have load balancing built in. You also avoid the non-blocking I/O problem this way, since the web server (Apache for instance) takes care of the connections and passes the request to the servlet. Servlets are not spawned for each request but typically are reused for new requests. That and the ability to scale to multiple CPUs make Java very suitable for server-side network programming.

    In my opinion making a program complex for performance reasons only is a bad idea unless we're talking about long term usage of a program. Developers often forget that most of the cost of developing a product goes into maintenance. Java servlets provide you with a nice scalable architecture for serverside programming and it allows you to focus on the parts of the program that provide the functionality you need rather than performance related stuff.
  • I should add some stuff to this:
    Servlets are usually more efficient than CGI scripts (such as Perl scripts) because they are already running and don't require the overhead of spawning a new process to handle the request. That, plus JVMs such as those discussed in the article, makes Java servlets a very good alternative to Perl (and we are just talking performance here, not all the other nice features of Java such as readable source code, OO features etc.).
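
    For anyone who hasn't seen one, a minimal servlet sketch (class name made up; assumes the standard javax.servlet API). The container loads one instance and reuses it for every request - no per-request process spawn as with plain CGI:

        import java.io.*;
        import javax.servlet.*;
        import javax.servlet.http.*;

        public class HelloServlet extends HttpServlet {
            public void doGet(HttpServletRequest req,
                              HttpServletResponse res)
                    throws ServletException, IOException {
                res.setContentType("text/html");
                PrintWriter out = res.getWriter();
                out.println("<html><body>Hello from a servlet</body></html>");
            }
        }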
  • The poor showing of Visual C++ was very surprising, given that Visual is held to be one of the fastest x86 compilers. The fastest x86 compiler at the moment is Intel's own, and it would have been interesting to see how it would fare on the benchmark. I have done performance tests between Visual's compiler and Intel's on graphics apps on a PII, and Visual's is rarely more than 5-10% behind Intel's. A future extension to this could be to test this code with Intel's compiler, since it is downloadable on the web (full but time-limited) and easily pluggable into Visual. I suspect that the weakness of the Visual compiler may have something to do with Microsoft not optimizing as much for a non-Intel CPU, and the recent K7 optimizations gcc has gotten.
  • by ChrisRijk ( 1818 ) on Saturday June 03, 2000 @06:23AM (#1028380)
    Hi, I actually posted this to Slashdot last night... and it's still sitting in the pending queue, heh.

    Another post I sent in last night which quickly got rejected was this:

    • Although Sun released Java 2 Standard Edition 1.3 for Windows a few weeks ago, there hadn't even been a beta from Sun for either Linux or Solaris - until tonight.
    • J2SE 1.3 for Linux Beta [sun.com] (for x86), which also includes HotSpot Server. This was with the help of the Blackdown guys, though the credits [sun.com] are in a somewhat obscure place. J2SE 1.3 for Solaris Beta [sun.com] for SPARC and x86 (includes HotSpot Server) was also announced today. In the future, releases for all platforms will be at the same time.

    Unfortunately, that release came a little too late for me to do much with, though I have quickly tested the Solaris x86 version (on the same hardware as the Windows tests), and the results are pretty much identical, though Solaris was a bit faster (but then, I was running without the desktop, which does help).

    Also coming a bit too late were results from IBM's Windows 1.2.2 JDK, which I found a bit surprising - it did worse on some tests, and better on others, though I didn't have much time to test things.

    Thanks for the replies... kinda makes it all worth it - it took me about 100 hours over 4 weeks to do this. (took up a lot of my evenings)

    I better re-install Linux sometime so I can test on it again... (my last install stopped working for unknown reasons)

    It'll probably be some time before I update the article - first I want to finish off my MAJC article, which really is too damn big. (22,000 words... ouch).

  • Gcc i386 optimizations have gotten much better lately. They recently integrated a new i386 backend (I don't think that was used in these tests, though.)

    The test claimed to use some version of gcc 2.95.mumble-mumble. 2.95 has a lot of x86 improvements (and generic improvements!) from egcs. What 2.95 doesn't have, but the dev snapshots do, is -mtune=k7 and -march=k7, which should have made the FP stuff go faster (scheduling for the PPro and running on a K7 isn't bad, but scheduling for the K7 and running on a K7 is better).

    There is also some experimental code to do cache-line prefetching, but I didn't follow the thread to see how it turned out (last I saw it made the streams benchmark numbers twice as good, or better, but there was some concern that it might make other things slower).

  • It would be a great deal faster since Visual C++ 6.0 Standard doesn't do any optimization.
  • AFAIK, 2.95 also does not have the new i386 backend yet. However, I've seen no benchmarks on that, so I really don't know how it does.

    According to the egcs/gcc news page [gnu.org] from 2Sep99: Richard Henderson has finished merging the ia32 backend rewrite into the mainline GCC sources. The rewrite is designed to improve optimization opportunities for the Pentium II target, but also provides a cleaner way to optimize for the Pentium III, AMD-K7 and other high end ia32 targets as they appear.

    That was post 2.95, and post 2.95.1, but before 2.95.2. Looking at the article it was using 2.95.2, so I assume it has the new x86 backend.

    2.95.mumble-mumble also lacks most of the (deemed unstable or too hackish) optimizations from pgcc.

    I think the issue they had with pgcc was that it did a lot of x86 things at the machine-independent level of gcc, and a lot of machine-independent things were in the x86 code (like branch prediction). A lot of the pgcc optimizations are in the new x86 backend, or were properly added to the machine-independent code.

    That is to say, a lot of the stuff pgcc did, someone re-did and put in gcc. Not all of it. And gcc now does stuff I don't think pgcc does.

    I don't know which is faster at this point, but I could believe either. After all, egcs got a whole lot of MI speedups that pgcc hasn't.

  • Many people think Java is controlled by Sun. However, that is not entirely the case - for just about any area of Java you care to name, there are mailing lists and discussion forums [sun.com] where they discuss developments of various API's, and they really listen!

    I know; I subscribed to the Java2D mailing list for a long time before they released the API. They asked the list many questions, and listened to everyone. Sometimes they shot ideas down if they thought they just were not a good idea (support for palette cycling was one example here), but at least they listened to complaints about the way the API was going and did make changes based on input. One member even went so far as to code up the whole proposed Java2D API so people could try it out in practice and make comments.

    Similarly, programmers have the opportunity to voice their opinion in code which is released under the SCSL (Sun Community Source License). Is it really "Open Source"? No. But then again, it does give you the opportunity to fix things and talk to the developers about how things are in Java in the clearest way possible - through code. Think of it as "free speech" where only one person showed up to hear you talk.

    It is still true that Java is under Sun's control, but never before (that I know of) has a company had a product with as much outside review and commentary in development.

  • by stripes ( 3681 ) on Saturday June 03, 2000 @06:29AM (#1028391) Homepage Journal
    I'd be curious about the C++ allocation. I think some STL classes have a special allocator that's supposed to be faster than malloc (is new using the same allocator as malloc?).

    C++'s new tends to be a thin layer over malloc. The STL allocator wasn't designed to be faster than new/malloc, but to deal with segment issues, shared memory issues, and maybe even object permanence.

    The allocator in SGI's STL (which gcc currently ships) allocates about 20 objects when you ask it to allocate one, and doles them out one by one. That's for "small" objects. Anything over about half a K (or maybe 2K? I forget) goes straight through to new. This might be faster than 20 mallocs. Or not. Some mallocs are pretty good - better than the STL allocator's overhead. It does tend to reduce fragmentation, by a whole lot more than I expected.

    If you don't like the default allocator, they are easy to write, and Alloc_malloc is always ready to step in. There is even a #define to ask for it to be used everywhere.

  • If you're actually running a simple chat server, then the results are okay, but Volano doesn't actually test Java optimisations very well, as it spends most of its time in pre-compiled static code, including the OS kernel.
  • No, there are definitely limitations in the bytecode. I wrote a compiler to take another language (Tiger, it's a toy language, kind of like Pascal, I guess) to Java bytecode. It was definitely like fitting a square peg into. . . well, you know what I mean.
    Java bytecode is completely centered around the Java class system. So it has low-level primitives to handle virtual methods, Java interfaces, exceptions, and all of that. But, as a result, it's really painful to use something that can't be shoehorned into Java's class system (though it can be done with lots of "static" junk).
    JPython does a REALLY beautiful job, though. I highly recommend it to anybody interested in a high level language for the JVM (www.jpython.org).
    --JRZ
  • I don't think you realize that you aren't talking about dynamic optimization, you're talking about optimizing compilers in general. A dynamically optimizing compiler has a few edges over a statically optimizing compiler, but in general, the static compiler wins. Think about the differences in hardware architecture. Aside from something like an OS, a developer already knows what CPUs their program will be used on. If I make a performance-critical program now, say a 3D renderer, I know that it will be used on PIIIs, PIIs, Athlons, Willamette, and maybe K8. That's it. People will not still be using my program on K9s and K10s. Sure, some people use older programs, but think about it: in programs where performance really matters, do you use anything older than two years?
  • Well... I used Pascal in my first couple of years, and I don't seem to have been hampered by it. Thing is, your courses should be teaching you programming concepts... "How to program", not "How to program in Language X". If you know one language reasonably well, it is generally pretty easy to jump between languages. Unless you go to an odd-ball like APL :)

    My university switched to Java from C++ because of the perception that it is easy to learn, and because they were having problems with people using different compilers. The profs were answering more questions about compiler warts than programming. At least in Java, the warts are relatively standard across various platforms :)

    It's also easier for the profs to do interesting graphics assignments, small games n' stuff that are more interesting to most people than, say, printing out Fibonacci numbers.

    Anyhoo, my point is, the language taught in the first two years doesn't make a lot of difference.

    Dana
  • I actually spent about 5 hours trying to find a nice FFT in C that I could easily convert to Java. In the end I found an FFT routine written in Java (though the author ported it from a C routine) and converted it to C. However, I was more interested in testing the compiler/optimiser and generic language differences than trying to come up with a really good FFT.

    Btw, when you say "There are some things a C compiler can't optimize because of aliasing", do you literally mean it would be impossible for any (legal/correct) C compiler?

    If you have a nice FFT that can easily be 'converted' to another language, I'll be happy to try it out...

  • Ever heard of mod_perl (you know, they use it on that obscure Slashdot site), FastCGI, or a few other ways to avoid respawning the Perl scripts? Besides, I think the original poster was talking about writing servers rather than doing web server programming. I don't see how Java servlets can help you write something like Apache. In my experience, non-blocking I/O is always more efficient under heavy loads than multithreading. It makes code more complex, though.
  • It's even worse than that - some tests show one on top at the start, and someone else on top at the end... Very easy for press-release writers to selectively choose what to show.

    This is why most 'real' benchmarks are actually a suite of tests, and the final result is just an average.

  • The result that really surprised me, though, was Cygwin32 beating MS VC almost across the board - running on Windows! WTF?!

    Well, Cygwin with an insane amount of hand-tweaked, unstable compiler optimizations beat the baseline VC++ most of the time.
