
Inside Mozilla's New JavaScript JIT Compiler

Posted by CmdrTaco
from the no-not-peanut-butter dept.
An anonymous reader writes "IonMonkey is the name of Mozilla's new JavaScript JIT compiler, which aims to enable many new optimizations in the SpiderMonkey JavaScript engine. InfoQ had a small Q&A with lead developer David Anderson about this new development that could bring significant improvements to products that use the SpiderMonkey engine, like Firefox, Thunderbird, Adobe Acrobat, MongoDB and more. This new JIT infrastructure will feature an SSA compiler intermediate representation, which will facilitate advanced optimizations such as type specialization, function inlining, linear-scan register allocation, dead-code elimination, and loop-invariant code motion."
  • by TaoPhoenix (980487) <TaoPhoenix@yahoo.com> on Monday May 02, 2011 @08:46AM (#35999590) Journal

What's different between IonMonkey and JaegerMonkey (which I thought was the latest engine)?

    • Re: (Score:3, Funny)

      -JaegerMonkey was dreamt up during a night spent consuming JaegerMiester shooters.
      -IronMonkey was designed the morning after, behind iron bars in the slammer, following the JaegerMonkey "JaegerMiester" release party.

      • by Zwets (645911)
        Yeah, it's funny. It also contains a pet peeve of mine: don't apply the i/e rule of thumb to proper names! If you're not sure about the correct order, a quick google will help. Jaegermeister, Einstein, Heinlein, etc. Helpful Wikipedia reference [wikipedia.org] :-)
    • by BZ (40346) on Monday May 02, 2011 @09:04AM (#35999798)

      This part is one major difference "Clean, textbook IR so optimization passes can be separated and pipelined with well-known algorithms."

The current JM goes somewhat directly from SpiderMonkey bytecode to assembly, which means any optimizations that happen in the process are ad-hoc (and typically don't happen at all; even basic things like CSE aren't done in JM right now).

      • by Trepidity (597)

Do textbook IR optimizations make a big difference with JS code, though? My impression is that the big gains in JS performance don't have much to do with traditional C-compiler optimizations like loop optimization, and have a lot more to do with optimizations derived from the compiling-dynamic-languages literature, like type specialization and class inference.

        V8 also goes pretty directly to asm, for example, and gets some big wins from techniques developed in Smalltalk compilers, like compiling code that ass

        • by BZ (40346) on Monday May 02, 2011 @11:35AM (#36001386)

The big gains from JS interpreters to most of the current crop of JS JIT compilers (excluding Crankshaft and Tracemonkey, and possibly excluding Carakan's second-pass compiler) come from class inference, type specialization, ICs, etc. That work is already done. The result is about 30x slower than C code (gcc -O2, say) on numeric manipulation.

          To get better performance than that, you need to do more complicated optimizations. For example, Crankshaft does loop-invariant code motion, Crankshaft and Tracemonkey both do common subexpression elimination, both do more complicated register allocation than the baseline jits, both do inlining, both do dead code elimination. For a lot of code the difference is not that big because there are big enough bottlenecks elsewhere. For some types of code (e.g. numeric manipulation) it's a factor of 2-3 difference easily, getting into the range of gcc -O0. And that's without doing the really interesting things like range analysis (so you can eliminate overflow guards), flow analysis so you know when you _really_ need to sync the stack instead of syncing after every arithmetic operation, and so forth.
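To make the loop-invariant code motion mentioned above concrete, here is a small hand-written sketch (all names are illustrative, not engine internals); an optimizing JIT effectively rewrites the first form into the second:

```javascript
// Before: `scale * scale` is recomputed on every iteration even
// though it never changes inside the loop (it is loop-invariant).
function sumScaledNaive(arr, scale) {
  var total = 0;
  for (var i = 0; i < arr.length; i++) {
    total += arr[i] * (scale * scale);
  }
  return total;
}

// After: what LICM effectively does -- the invariant expression is
// hoisted out of the loop so it is evaluated exactly once.
function sumScaledHoisted(arr, scale) {
  var total = 0;
  var sq = scale * scale; // hoisted loop-invariant computation
  for (var i = 0; i < arr.length; i++) {
    total += arr[i] * sq;
  }
  return total;
}
```

Both forms compute the same result; the point is that a compiler with a real IR can prove the hoist is safe and do it automatically.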

          For code that wants to do image or audio manipulation on the web, the textbook IR optimizations make a gigantic difference. That assumes that you already have the dynamic-language problems of property lookup and whatnot solved via the techniques Smalltalk and Self pioneered.

          One end goal here is pretty explicitly to be able to match C or Java in performance on numeric code, at least in Mozilla's case.

          • by Trepidity (597)

            I was under the impression that the IR optimizations are mostly what makes the difference between gcc -O0 and gcc -O2, and isn't that a minor speed gap compared to the still-existing gap between JS and C that doesn't have all these IR optimizations enabled? I think most people would be overjoyed if JS had performance like C code compiled with gcc -O0, so doesn't that point to different kind of optimizations to target? Or is the argument that bridging the remaining performance gap with unoptimized C is no lo

            • by BZ (40346)

              > I was under the impression that the IR optimizations
              > are mostly what makes the difference between gcc
              > -O0 and gcc -O2,

              Depends on your code, but yes.

              > isn't that a minor speed gap compared to the
              > still-existing gap between JS and C that doesn't
              > have all these IR optimizations enabled?

              No.

              > I think most people would be overjoyed if JS had
              > performance like C code compiled with gcc -O0

              On numeric code, Tracemonkey and Crankshaft have about the same performance as gcc -O0 in my measu

              • by Trepidity (597)

                Hmm, interesting. It's been about a year since I've done any benchmarking of JS stuff, but at the time my attempts at some numerical algorithms in javascript were on the order of 10x-1000x slower than the unoptimized C equivalent. Have JS JITs really improved to the point where a typical double-nested loop over a 2d array doing some standard image-processing kernel will now run at the same speed as gcc -O0? That certainly wasn't the performance I was getting!
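For reference, the kind of double-nested image-processing loop being discussed looks roughly like this minimal sketch (a 3x3 box blur over a flat grayscale array; all names are illustrative):

```javascript
// A minimal 3x3 box blur over a 2-D grayscale image stored as a flat
// array -- the sort of numeric kernel where JIT quality shows up.
function boxBlur(src, width, height) {
  var dst = new Array(width * height);
  for (var y = 0; y < height; y++) {
    for (var x = 0; x < width; x++) {
      var sum = 0, count = 0;
      // Average the pixel with its in-bounds neighbors.
      for (var dy = -1; dy <= 1; dy++) {
        for (var dx = -1; dx <= 1; dx++) {
          var nx = x + dx, ny = y + dy;
          if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
            sum += src[ny * width + nx];
            count++;
          }
        }
      }
      dst[y * width + x] = sum / count;
    }
  }
  return dst;
}
```

Code like this stresses bounds checks, overflow guards, and register allocation, which is why the IR-level optimizations matter so much here.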

                • by BZ (40346)

It really depends on the code. In particular, in both Tracemonkey and Crankshaft it's quite possible to end up with code that can't be compiled with the optimizing compiler and then falls back to a less-optimizing JIT (JM and the V8 baseline compiler respectively).

                  If you have an example of the sort of code we're talking about here, I'd be happy to see what numbers for it look like.

        • Yeah - V8 is really unbelievable for some applications - so much faster (10x) than any of the other JS interpreters (or should we call them compilers at this point?). In other cases it's not really that much, if any, better, but some of the shit those google JS dudes are doing is pretty impressive.

          As much as I resent Google sometimes, in a lot of cases they really do have the smartest people in the room.

    • by Millennium (2451)

      IonMonkey compiles from SpiderMonkey's interpretation to an intermediate representation.

      JaegerMonkey currently compiles from SpiderMonkey's interpretation to bytecode, which then gets passed to TraceMonkey. It will be changed to compile from IonMonkey's intermediate representation to bytecode instead, though it will still pass that to TraceMonkey.

    • by kmoser (1469707)
      It's monkeys all the way down.
  • LLVM (Score:3, Interesting)

    by nhaehnle (1844580) on Monday May 02, 2011 @09:27AM (#36000046)

I'd be interested to hear why the Mozilla developers don't use an existing compiler framework like LLVM, which already implements many advanced optimization techniques. I am aware that JavaScript-specific problems need to be dealt with anyway, but it seems like they could save themselves a lot of hassle in things like register allocation etc. Those are less interesting problems that are handled quite well by existing tools like LLVM, so why not use that?

    • Re:LLVM (Score:5, Informative)

      by BZ (40346) on Monday May 02, 2011 @09:31AM (#36000080)

      http://blog.mozilla.com/dmandelin/2011/04/22/mozilla-javascript-2011/ [mozilla.com] has some discussion about LLVM in the comments. The short summary is that the LLVM register allocator, for example, is very slow; when doing ahead-of-time compilation you don't care much, but for a JIT that could really hurt. There are various other similar issues with LLVM as it stands.

      • So why is the solution to roll your own, rather than fix LLVM or develop a more direct competitor to it? All of the issues mentioned so far seem to be with the current implementation, not with the fundamental idea.

        • by BZ (40346)

          No idea; you'll have to ask someone familiar with both LLVM and the needs of a JIT for that....

        • Re: (Score:3, Informative)

          by lucian1900 (1698922)
          Because it's easier. All the projects that tried to write a good JIT with LLVM came to the same conclusion: LLVM sucks for JITs. That's both because of implementation issues and design choices. LLVM is a huge, complex C++ program. No one's going to fix it. Nanojit and the register allocators floating around are competitors.
        • by dzfoo (772245)

          Isn't "rolling your own" the same as "developing a more direct competitor to it"?

          • by jesser (77961)

"Developing a competitor to LLVM" would imply creating something generic enough to be a backend for many programming languages. Maybe something geared toward dynamic rather than static languages, and tuned better for fast compilation, but still comparable in scope to LLVM.

            • In other words, something like Parrot.

              I mean, I get that Parrot itself probably isn't a good choice, but I still wonder why everyone is so busy reinventing wheels independently and in such a language-specific manner.

        • So why is the solution to roll your own, rather than fix LLVM or develop a more direct competitor to it? All of the issues mentioned so far seem to be with the current implementation, not with the fundamental idea.

          The goals are somewhat different. LLVM is great for optimizing statically typed code, so it will do e.g. licm that takes into account that kind of code. The IR that would be useful for a JS engine would be higher-level, so it could do licm on higher-level constructs.

          The issue is sort of similar to why dynamic languages don't run very fast on the JVM or .NET. They have very efficient JITs, but they are tuned for the level that makes sense for languages like Java and C#, not JavaScript or Python or Ruby. A

      • Sounds like they don't know how to use LLVM then. It has a small selection of different register allocators, and you can pick the fast one if you care about JIT time more than you care about execution time.
    • by Animats (122034)

      I'd be interested to hear why the Mozilla developers don't use an existing compiler framework like LLVM...

LLVM is intended for more static languages than Javascript. The Unladen Swallow [wikipedia.org] project tried to use it for Python, and that was a failure.

  • by Anonymous Coward

Damn, Mozilla changes JIT engines like every year. Why should we believe this one will last any longer than all the others they have tried?

    Javascript is a poorly designed language that is hard to JIT and I imagine that's why we keep seeing people trying to redo things better than the previous JIT engine. It's damn near impossible though, just micro-fractional improvements and a whole lot of wasted effort.

    Meanwhile things like LuaJIT [luajit.org] offer a Javascript-like scripting environment with damn near the performa

    • by codepunk (167897)

LuaJIT is pretty darn quick, but then again node.js running on Google's V8 engine is right there with it.

    • Mozilla doesn't change engines, they just give a different name to each version.

      By the way, Julien Danjou's Why not Lua [danjou.info].

      • by BZ (40346)

        Not quite. More precisely they add new engines, and give them names, then use whichever one does the job best (or is likely to do the job best).

    • by BZ (40346)

      > Damn, Mozilla changes JIT engines like every year

Unlike Chrome, which has now had two (more actually, but two that they publicly named) jit engines between December 2008 and December 2010?

> Why should we believe this one will last any longer
> than all the others they have tried?

      What does "last longer" mean? Mozilla is still using the very first jit they did (tracemonkey); they're just not trying to use it for code it's not well-suited for.

      > just micro-fractional improvements

      If a factor of 2-3 i

    • by Millennium (2451)

Damn, Mozilla changes JIT engines like every year. Why should we believe this one will last any longer than all the others they have tried?

      That's not actually what's happening, though Mozilla isn't helping matters with all the confusing names.

      What's actually going on is that Mozilla is essentially implementing different parts of a longer pipeline. Even as recently as FF4, SpiderMonkey is still present at the front of that pipeline, and TraceMonkey is still present at the end (actually nanoJIT is at the very end, but it's not named after a monkey so we won't count it here). JaegerMonkey, IonMonkey, and all the other monkeys go in the middle.

  • by hey (83763) on Monday May 02, 2011 @09:58AM (#36000366) Journal

Now that JavaScript is so fast, perhaps other interpreted languages will be "compiled" to JS. I am thinking of Perl 6 for one.

  • by ferongr (1929434) on Monday May 02, 2011 @10:12AM (#36000468)

    Didn't Mozilla cry bloody murder when IE9 was discovered to perform dead-code elimination claiming it was "cheating" because it made some outdated JS benchmarks finish in 0 time?

    • by BZ (40346) on Monday May 02, 2011 @10:20AM (#36000566)

      No, they just pointed out that IE's dead-code elimination was somewhat narrowly tailored to the Sunspider benchmark. That's quite different from "crying bloody murder".

Tracemonkey does dead-code elimination right now; it has for years. It's just that it's a generic infrastructure, so you don't usually get artifacts with code being eliminated only if you write your script just so. On the other hand, you also don't usually get large chunks of web page dead code eliminated, because that wasn't a design goal. Keep in mind that a lot of the dead code JS JIT compilers eliminate is code they auto-generated themselves (e.g. type guards if it turns out the type doesn't matter, stack syncs if it turns out that no one uses the value, and so forth), not code the JS author wrote.

      • by ferongr (1929434)

Narrowly pointing it out is one thing; having their lead "evangelist", Asa Dotzler, scream condemning words and make absurd claims is a different thing.

In any case, I haven't seen any proof that the dead-code eliminator is "somewhat narrowly tailored for Sunspider". It could just be that it's quite aggressive, so any code that doesn't touch the DOM or change any variable (like calculating 1M digits of pi and sending it to null) gets eliminated.

        • by BZ (40346) on Monday May 02, 2011 @10:43AM (#36000806)

          > Narrowly pointing out and having their lead
          > "evangelist", Asa Dotzler

          Uh... I think you're confused about Asa's role here. But anyway...

          > I haven't seen any proof that the dead-code
          > eliminator is "somewhat narrowly tailored for
          > Sunspider"

Well, it only eliminated code that restricted itself to the set of operations used in the Sunspider function in question. This is pretty clearly described at http://blog.mozilla.com/rob-sayre/2010/11/17/dead-code-elimination-for-beginners/ [mozilla.com]

For example, it eliminated code that included |TargetAngle > CurrAngle| but not code that was otherwise identical except for the direction of the comparison. Addition and subtraction were eliminated while multiplication, division, and % were not.

          If it eliminated "any code that doesn't touch the DOM" that would be a perfectly good general-purpose eliminator. That's not what IE9 was doing, though.

          There was the side issue that it also produced buggy code; that's been fixed since, so at least IE9 final doesn't dead-code eliminate live code.

I was rather curious about the 'dead code' elimination... That would seem to me to be one of the first things to solve: if nothing is pointing to that particular code portion, simply do not compile it. I must be missing something where that would not be one of the first targets to eliminate.
      • by BZ (40346)

        Turns out, figuring out whether code in JS is dead is hard. See http://blog.mozilla.com/rob-sayre/2010/11/17/dead-code-elimination-for-beginners/ [mozilla.com] for some examples of things that look dead but aren't.

      • That's not what dead code elimination does. It removes code that does not affect the program's result. For example, if you have i++ and you then don't use i before it goes out of scope, this is a dead expression and the compiler can remove it. In a language as dynamic as JavaScript, it's quite hard to do because you have to make sure that the expression that you eliminate doesn't have any side effects before you eliminate it, and for nontrivial JavaScript expressions determining whether something has sid
        • by _0xd0ad (1974778)

          For example, if you have i++ and you then don't use i before it goes out of scope, this is a dead expression and the compiler can remove it.

          Fine, then you can tell me what the result of the following code should be, I assume?

for (var n = 0; n < 100; n ++) {
    i ++;
}

          Now what if I tell you that I had originally defined i like this:

          var i = { valueOf: function() { var a = Math.floor(Math.random()*100); document.write(a); return a; } };

          • by _0xd0ad (1974778)

Of course, the variable definition for i would obviously have to go in the opening statement of the for-loop (along with the definition of the loop counter n) in order to demonstrate the variable going out of scope without being used after the i ++ statement.

      • by Jonner (189691)

I was rather curious about the 'dead code' elimination... That would seem to me to be one of the first things to solve: if nothing is pointing to that particular code portion, simply do not compile it. I must be missing something where that would not be one of the first targets to eliminate.

        Dead code elimination is certainly a worthy goal, though not necessarily easy in a highly dynamic language like Javascript.

    • by Ant P. (974313)

      In IE9 beta adding a single useless variable assignment to some modern benchmarks made them take several orders of magnitude longer. An unused variable sounds like the sort of thing that should be optimised out by DCE, but here it's obviously enough to trip up the thing they use to detect common benchmarks and cheat using built-in precompiled code.
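The kind of perturbation described can be sketched like this (illustrative code, not the actual benchmark): a general-purpose DCE pass treats both versions the same, while a matcher keyed to the exact original source no longer recognizes the second.

```javascript
// Benchmark-style loop: nothing observable depends on `acc`, so a
// general dead-code eliminator could remove the whole body.
function bench() {
  var acc = 0;
  for (var i = 0; i < 1000; i++) {
    acc += Math.sin(i);
  }
}

// Same loop plus one assignment to a variable that is never read
// afterwards -- the "single useless variable assignment". A general
// DCE pass is unaffected; a benchmark-detector keyed to the original
// source text would fall back to the slow path.
function benchPerturbed() {
  var acc = 0;
  var unused = 0;
  for (var i = 0; i < 1000; i++) {
    acc += Math.sin(i);
    unused = acc;
  }
}
```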

I think it's time that Javascript and HTML get transmitted in a pre-compiled format, like Java. I'm sure the compiled file will be smaller than its mark-up counterpart, and would run faster in the browser since the browser won't have to analyze the mark-up before "compiling it" and rendering it. Plus, it might help people code their web-pages to standards if the compilers won't compile their half-assed HTML.
    • That would be a tremendous step backward.
    • No, what would happen is we'd end up with sites compiled for some obscure language-version, which nothing but one browser half-understands. Oh, and the compiler's buggy also.

      As it is now, when there's a critical error in some obscure website's code, the problem can be diagnosed, and either fixed or worked around with a Greasemonkey script.
      (Had to do that last night: Some educational login page had a id='txtpassword' field, which they called as getElementById('txtPassword'). It worked in IE, but not in anyth

I wonder how JavaScript engines optimize hash table lookups. Especially since it is faster to access a member of an object than to access a variable of an outer function. It seems to me that the former requires a hash table lookup with a string key while the latter suffices with a pointer into the closure of an outer function and the offset of the variable.

    • by LUH 3418 (1429407)
They don't use hash maps to represent objects. Both SpiderMonkey and V8 group objects into pseudo-classes based on the set of properties they have. The technique was actually pioneered by Self many years ago.

      V8 calls this "hidden classes": http://code.google.com/apis/v8/design.html [google.com]

      As for closures, they can be represented in a multitude of ways... Some more efficient than others.
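The hidden-class idea can be sketched in ordinary JS (the comments describe the engine's internal view, which the script itself never sees):

```javascript
// Objects that acquire the same properties in the same order share
// one hidden class, so a property access like `p.x` can compile down
// to a fixed-offset load once an inline cache has warmed up.
function Point(x, y) {
  this.x = x; // hidden class transition: {} -> {x}
  this.y = y; // hidden class transition: {x} -> {x, y}
}

var a = new Point(1, 2);
var b = new Point(3, 4); // same hidden class as `a`

// Adding a property later gives `c` a different hidden class; call
// sites that see both shapes become polymorphic and typically slower.
var c = new Point(5, 6);
c.z = 7;
```

This is why no string-keyed hash lookup is needed on the fast path for property access.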
  • and I still cannot run a mozilla javascript environment inside IE, Opera, or Chrome.

    You think I'm joking but I'm plain serious. Why are we so dependent on each particular javascript implementation?

    • by jesser (77961)

      Because each browser's JS engine is closely tied with its DOM, which is in turn closely tied with its layout engine. Otherwise it would be difficult to have efficient DOM calls and complete garbage collection.

      • Okay so take it a step further...

        The question now becomes: why can't I run mozilla's DOM and layout engine inside IE, Opera, or Chrome or any other browser?

        So your html could start like:
        <HTML>
        <HEAD>
        <META browser="my_browser.so" /> ... etc ...

        Where "my_browser.so" is a platform-independent shared object file which you may have compiled yourself (from Mozilla's code).

        See the advantage? Your javascript, DOM manipulation, css layout, etcetera will work on ANY browser. No more cross-browser mis

        • by Anonymous Coward

          This is basically what Chrome Frame does. It's a bad idea. Why is it a bad idea? Well, either "my_browser.so" is downloadable from the internets, or it isn't. If it is, that's a security hole you can drive a truck through (basically the same problem as ActiveX controls). If it isn't, then everyone whose operating system doesn't ship "mshtml.dll" is screwed.
