Java Programming

JIT vs AOT Compilation 57

jg21 writes "This article on "Penguin-Driven" JVMs takes a look at the performance of Java GUI applications based on the JFC/Swing API, and contends that JIT-powered JVMs can't match a JVM with an ahead-of-time compiler ported to the Linux/x86 platform. With AOT compilation, says the CTO who wrote the piece, real-world Swing applications performed perceivably faster. One is left wondering: will we now see the 'microbenchmark war' carried into the Linux camp?"
  • by Muda69 ( 718162 ) on Friday November 05, 2004 @04:04PM (#10737031)
    IMO, this is the way to go. Dynamic compilation is a mix of interpretation and translation: it defers compiling a body of code until it believes compilation will pay off in the program's running time. Dynamic compilation has the same benefits as JIT compilation, but mitigates the compile times and pauses by limiting the amount of code it actually compiles. Additionally, dynamic compilation can take advantage of runtime characteristics of the program itself, allowing it to perform optimizations like monomorphic inlining. (Although it wouldn't be fair to omit the fact that AOT compilers could potentially make use of feedback/runtime profiling to achieve some of the same characteristics.)
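    For illustration, here is a minimal sketch of the kind of call site monomorphic inlining targets (the class names are made up for the example, not taken from the article): when only one implementation of an interface has been loaded, a dynamic compiler can inline the virtual call behind a cheap class check, and deoptimize if a second implementation ever shows up.

      // Monomorphic call-site sketch (hypothetical example).
      interface Shape {
          double area();
      }

      // If Circle is the only Shape implementation the VM has seen, the
      // s.area() call below is monomorphic: the compiler can inline
      // Circle.area() into the loop, guarded by a cheap class check.
      class Circle implements Shape {
          private final double r;
          Circle(double r) { this.r = r; }
          public double area() { return Math.PI * r * r; }
      }

      public class MonomorphicDemo {
          static double total(Shape[] shapes) {
              double sum = 0;
              for (Shape s : shapes) {
                  sum += s.area(); // virtual call, but effectively monomorphic here
              }
              return sum;
          }

          public static void main(String[] args) {
              Shape[] shapes = new Shape[100000];
              for (int i = 0; i < shapes.length; i++) shapes[i] = new Circle(i);
              System.out.println(total(shapes));
          }
      }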
    • by mmusson ( 753678 ) on Friday November 05, 2004 @04:14PM (#10737151)
      One of the points in the article is that the JIT compilers did not do well with Swing code because it did not have hot spots. Any kind of after-the-fact compilation is not going to perform well if it can't identify the parts of the code that need to be compiled.
      • I'm surprised that JREs don't ship with class libraries precompiled into native libraries.

        I'm curious how much performance could be gained from a highly optimized precompiled class library.
        • The Java Advanced Imaging API, which is part of the class library, ships with a pure-Java implementation and some native implementations. This is done purely for performance reasons, because the pure Java version does not perform well.

          In fairness to Java, I think this is because that library is not written well. My company made its own imaging routines that cover part of its functionality, and they are substantially faster.

    • 1998 called (Score:5, Funny)

      by p3d0 ( 42270 ) on Friday November 05, 2004 @05:02PM (#10737704)
      They want their idea back.
      • Let me clarify (Score:5, Informative)

        by p3d0 ( 42270 ) on Friday November 05, 2004 @05:43PM (#10738045)
        If you can show me a commercial JIT compiler that doesn't already do this, I'll eat my hat. What you call "dynamic compilation" is so routine and mundane that when people talk about "JIT compilers" these days, that is exactly what they are talking about. Nobody blindly compiles on the first invocation of a method any more.

        What you have mentioned is not the crux of the problem. I can't say much more because I do confidential work on IBM's JIT compiler.

    • by Anonymous Coward
      So far, all dynamic compilation advocates have managed to demonstrate is programs that - given a sufficiently long runtime, and we're talking weeks, not minutes - are able, through the use of dynamic optimisation, to match the speed of traditionally compiled software.

      That's quite impressive, considering that programs executing in virtual machines are generally at least an order of magnitude slower than machine code. But it doesn't make it a superior option, I'm afraid.

      Not yet, anyway...
    • This is exactly what Sun's client VM does - it'll only compile a block of code if it is executed multiple times; the server VM compiles much more aggressively, though.
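      For reference, the two modes are picked on the command line, and the client VM's compile threshold is tunable (the flags are real HotSpot options; treat the number as an illustrative default rather than gospel):

        java -client MyApp                     # interpret first, compile only hot methods
        java -server MyApp                     # optimize aggressively, at the cost of warm-up
        java -XX:CompileThreshold=1500 MyApp   # roughly, invocations before a method is compiled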
  • ibm's daisy (Score:4, Interesting)

    by ophix ( 680455 ) on Friday November 05, 2004 @04:38PM (#10737424) Homepage
    Doesn't IBM have a project named DAISY that is a JIT VM running on top of the very machine it emulates?

    I seem to remember reading that they were able to achieve up to a 25% performance increase by doing so (by taking advantage of run-time profiling of the executable).

    I might be mistaken, though.
    • Next time I will RTFA before posting (when the resnet here decides I can open the article's website).

      After finally being able to look at the article, I realize I was mistaken about what they meant by AOT compilation (I thought they meant a native compile from source, not what they actually mean).
    • ``Doesn't IBM have a project named DAISY that is a JIT VM running on top of the very machine it emulates?''

      No. DAISY [ibm.com] is a project that emulates other architectures (x86, PowerPC, and also the JVM) on top of a VLIW processor.
    • HP's Dynamo.
  • by 12357bd ( 686909 ) on Friday November 05, 2004 @04:39PM (#10737441)

    Why not spend our ever-increasing computing power to create 'introspective' programs? I mean programs that dynamically examine and change program behaviour, looking for 'better' execution modes.

    VMs are a perfect environment for that kind of programming; i.e., why not a JVM that dynamically adjusts/perfects its bytecode execution methods on a program-by-program basis?
    The article spots something old: optimizing is not a one-size-fits-all matter; what is good for a given case is bad for another.

    As our processing power increases, we can achieve that 'programming about programming' level of indirection.

    • ``why not a JVM that dynamically adjusts/perfects its bytecode execution methods''

      The HotSpot JVM [sun.com] actually does this.
      • We need to go further and examine local execution instances. I.e., I daily run a given program (say jEdit), but I am only using/executing 50% of the code, and usually in a strongly patterned way. What about a JVM that detects that kind of behaviour and adapts itself?

        That's why I am talking about 'introspective' programming: programs that continuously examine how other programs (and they themselves) are run, and extract/apply useful information.

    • Why not spend our ever increasing computing power to create 'introspective' programs?
      We do. That's what modern JVMs do.
      • We do. That's what modern JVMs do.

        I don't think so.
        The current level of introspection is low, very low; in fact, systems, and not only VMs, should be designed with introspection in mind to provide a satisfactory environment for adaptive behaviour.
        Think about usage patterns: maybe I am wrong, but no current VM execution model adapts program execution to running/historical characteristics. We are still simply 'running' programs; the system/VM is not designed to learn/find the best way to run those programs at each ex

        • We've got this for normal compilers already, in the form of profile-based compilation. JIT does it per instance. I don't know for sure that any current JIT has a way of saving that sort of information/state across invocations, but it seems like a really obvious extension.
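          For reference, this is what that flow looks like with a conventional ahead-of-time compiler (the GCC flags are real; the file names are made up):

            gcc -O2 -fprofile-generate -o wc wc.c   # build an instrumented binary
            ./wc < training-input.txt               # training run records branch/call profiles
            gcc -O2 -fprofile-use -o wc wc.c        # rebuild, optimizing with the recorded profile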
        • by p3d0 ( 42270 )

          Think about usage patterns: maybe I am wrong, but no current VM execution model adapts program execution to running/historical characteristics.

          Yep, I think you're wrong. Look up "value profiling" for instance. It means finding values that are effectively constant on a particular run of a program, and recompiling that program on the assumption that the value will indeed be constant.

          For instance, imagine the UNIX "wc" (word count) program were written in Java. An advanced JIT compiler could tell, for in
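          As a concrete sketch of the idea (hypothetical code, not from the post): if profiling shows a parameter never changes on a given run, the VM could recompile the method with that branch folded away, guarded by a check that falls back to the general version if the assumption ever breaks.

            // Value-profiling sketch (hypothetical). On a run of "wc -l",
            // countLines is effectively constant (true), so a specializing
            // compiler could drop the test and the else-branch entirely.
            public class WordCount {
                static long count(byte[] input, boolean countLines) {
                    long n = 0;
                    for (byte b : input) {
                        if (countLines) {
                            if (b == '\n') n++;   // line count
                        } else {
                            if (b == ' ') n++;    // crude word-boundary count, for illustration
                        }
                    }
                    return n;
                }

                public static void main(String[] args) {
                    byte[] data = "one two\nthree\n".getBytes();
                    System.out.println(count(data, true)); // prints 2
                }
            }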

          • I think what he's saying is caching some of that information across program invocations, so that the next time you invoke "wc -l" the JVM looks at its JIT cache and pulls up (depending on what's available or practical) the profiling information it has from the last run, or the actual compiled code.

            This seems obvious, and I wonder why it's not widespread. Maybe it is and I just don't know it? Maybe the work of figuring out whether or not your cache is stale outweighs the benefits? Taken to its logical concl

            • As you say, you end up with a slippery slope that leads to AOT compilation, which would be fine, except that it makes you ask the question, why are we jitting in the first place? At that point, whatever answers you come up with often work against keeping persistent data between runs of the same application.

              However, it's certainly not impossible in principle, and I'm sure there are profitable points on the JIT-versus-AOT spectrum that have yet to be explored.

              • As you say, you end up with a slippery slope that leads to AOT compilation, which would be fine, except that it makes you ask the question, why are we jitting in the first place?

                Although "Just In Time" does imply that compilation is only done immediately prior to execution, that is only accurate of the early implementations. Though newer VMs still say JIT, I think JIT has come to mean "dealing with a canonical representation that is not directly executable." Caching is a transparent optimization in

            • Well, the main idea is that currently a running program instance is a sum of factors. I.e., let's suppose a Java program: we have the bytecode, the JVM, the program parameters, and the system state.

              Introspection adds three things: a) the ability to inspect/analyze/record current or past running instances, b) the ability to try/apply different execution models, and c) predictive behaviour to try/use/evaluate the results.

              Current optimizing techniques are still 'static'; we learn what is good or bad to use he

              • if a given program is usually called to parse a big file, adjust allocation/prefetching to optimize that process
                I think the OS usually does a good job with that. Do you have any reason to think that there's an opportunity here?
    • Yeah, it's called 'LISP'.
  • Why Not Both? (Score:5, Interesting)

    by RAMMS+EIN ( 578166 ) on Friday November 05, 2004 @04:49PM (#10737550) Homepage Journal
    Why compare JIT against AOT? Why not have both?

    AOT compilation makes for fast start-up time and fast run time. JIT advocates claim that it can lead to better performance, as more optimizations can be performed with run-time information. So why not combine the two? Compile it before the first run, and further optimize it at run time where appropriate. That way, you get the best of both worlds.
    • Re:Why Not Both? (Score:3, Informative)

      by breadbot ( 147896 )

      That's an interesting suggestion. I think it would be tough to pull off because, in order to detect hot spots, you need performance metrics. Compiled code that generates metrics on itself (for example, running with embedded profiling tools) is usually very slow. To take advantage of pre-compiling, you'd basically need a profiler with as little speed penalty as possible.

      Also, it may turn out to be easier to recompile based on the original bytecode rather than the compiled machine code. Which would mean

      • It's not as difficult as it may seem. You just ship the bytecode, as with Java. Then the receiver compiles the code before running it. You can then run a run-time optimizer similar to the one in the Java HotSpot VM alongside the program. You could even save the run-time-optimized version of the code. Or leave out the run-time optimizer, if you prefer.

        Native machine code may not be a lot harder to optimize than JVM bytecode (which also consists of low-level instructions). And if it is, you can always keep a
    • Re:Why Not Both? (Score:3, Insightful)

      by OmniVector ( 569062 )
      This is what I have been clamoring for for years. Why don't we just cache JIT compilation information somewhere, and upon launching an application, check a hash of the system characteristics (memory, CPU, other information that bears on JIT compilation output), and if the hash hasn't changed, just use the pre-JIT-compiled binary? And even if it DID change and the platform is still the same, why not keep the pre-JIT-compiled information and redo the JIT compilation in the background?

      something mu
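      As an illustration of the cache key the parent describes (a hypothetical sketch; no shipping JVM works this way, and the choice of properties is made up):

        import java.security.MessageDigest;

        // Hypothetical JIT-cache key: hash the properties that affect code
        // generation; cached compiled code is reused only while the key matches.
        public class JitCacheKey {
            static String key() throws Exception {
                String platform = System.getProperty("os.name") + "|"
                        + System.getProperty("os.arch") + "|"
                        + System.getProperty("java.vm.version") + "|"
                        + Runtime.getRuntime().availableProcessors();
                byte[] digest = MessageDigest.getInstance("SHA-1")
                        .digest(platform.getBytes("UTF-8"));
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) hex.append(String.format("%02x", b));
                return hex.toString();
            }

            public static void main(String[] args) throws Exception {
                System.out.println(key()); // same machine + VM -> same key -> cache hit
            }
        }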
      • Makes sense, and I've been asking the same question...

        Another thing I would like to see is AOT-compiled class libraries that can be inlined by modifying the actual machine code. Viruses have been playing with relative addresses for some time, so why not Java?

        Honestly though, I think the reason they haven't done this is the disk space. That's a lot of space to be using.

        On a side note, I think IBM's JVM does have the class library precompiled. Too bad I can't really use it very much because
      • Microsoft provides this for .NET, in the form of the "ngen" tool [microsoft.com]. Although it isn't commonly used by developers (nor does MS really encourage them to), when you install the .NET Framework, the standard libraries are "ngen-ed".

        (FWIW, my company tried ngen to speed up our app's startup time, and it made surprisingly little impact.)
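        For reference, the tool is driven from the command line (the exact verbs vary by Framework version; 1.x takes the assembly directly, later versions use explicit subcommands):

          ngen MyApp.exe            # .NET 1.x: precompile an assembly into the native image cache
          ngen install MyApp.exe    # later versions: same idea, with an explicit 'install' verb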

    • Re:Why Not Both? (Score:2, Informative)

      by Anonymous Coward
      You might want to look at LLVM [uiuc.edu]. One of the ideas behind the project is to utilize information from the entire lifetime of the program to perform optimizations.
  • Regardless of whether you use AOT or JIT compilation, if you're compiling anyway, what's the point in Java bytecode? I mean not why it exists, but why is it like machine code? Wouldn't it be more compact, quicker to interpret, and easier to optimize if it were higher level (say, like parse trees detailing the method invocations)?
  • by Sentry21 ( 8183 ) on Friday November 05, 2004 @10:17PM (#10739897) Journal
    When will the TLA madness end? Oh, the humanity! At least provide definitions, for those god-fearing folk who may be interested but not up-to-date.

    JVM: Java Virtual Machine, the virtual environment that every Java bytecode program runs within, abstracting real hardware for the program in question.

    GUI: Graphical User Interface

    JFC: Java Foundation Classes - the basic classes that are provided to developers upon which, or rather, with which, to build their programs.

    API: Application Programming Interface, a defined way for software to interface with other software (i.e. to make library calls)

    JIT: Just-In-Time compilation; compiles the program as it runs, for the machine it is running on, to avoid the poor performance of interpreting every instruction each time it is executed

    AOT: Mentioned in the article text, it means Ahead of Time. For details, read the linked story.

    CTO: Chief Technology Officer. Name given to an executive in charge of new or current technology.

    Now that you know what is going on, RTFA.

    --Dan
  • I don't like the article.

    For several reasons I think they cheated, and they recast old terms as new ones for no reason (except publicity). E.g., they mix up AOT with static compilation.

    AOT is an extension of JIT compilation, not compilation done ahead of the first execution.

    A VM based on a JIT compiler always loads only bytecode, and compiles the bytecode to native code on the fly, depending on several algorithms/heuristics.

    An AOT compiler does JUST THE SAME on the first invocation of the byte code
  • This discussion is interesting in that we've got some actual knowledgeable people responding to what is essentially a marketing article disguised as a technical one.

    Without this, I would have partially believed the article (though even I noted that they did not report results for HotSpot -server).
  • JET seems reasonably priced, at least. It has been pointed out elsewhere that while the GNU JRE is not finished, the compiler is. GNU GCJ is a replacement for javac that can generate either native code or bytecodes. And since SWT is open source, you should be able to write completely open-source Java apps now, or at some point in the future. Also, native compilation solves the problem of decompilation of bytecodes. The legal status of all this is way beyond the scope of this post. O'Reilly has a good book
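    For example, GCJ can go either way from the same source (the gcj flags are real; the file name is made up):

      gcj -C Hello.java                      # emit Hello.class bytecode, like javac
      gcj --main=Hello -o hello Hello.java   # compile straight to a native executable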
