
Inside Mozilla's New JavaScript JIT Compiler

Posted by CmdrTaco
from the no-not-peanut-butter dept.
An anonymous reader writes "IonMonkey is the name of Mozilla's new JavaScript JIT compiler, which aims to enable many new optimizations in the SpiderMonkey JavaScript engine. InfoQ had a short Q&A with lead developer David Anderson about this new development, which could bring significant improvements to products that use the SpiderMonkey engine, such as Firefox, Thunderbird, Adobe Acrobat, MongoDB, and more. The new JIT infrastructure will feature an SSA compiler intermediate representation, which will facilitate advanced optimizations such as type specialization, function inlining, linear-scan register allocation, dead-code elimination, and loop-invariant code motion."
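To make one of the listed passes concrete, here is a hand-worked sketch of loop-invariant code motion in plain JavaScript. The functions and names are invented for illustration; the real optimization is applied to the compiler's IR, not to source text:

```javascript
// Invented example: what loop-invariant code motion (LICM) effectively
// does to a hot loop.
function scaleAll(values, factor) {
  const out = [];
  for (let i = 0; i < values.length; i++) {
    // Without LICM, `factor * 2` is recomputed on every iteration.
    out.push(values[i] * (factor * 2));
  }
  return out;
}

// What the optimizer conceptually produces: the invariant expression
// is hoisted out of the loop and computed once.
function scaleAllHoisted(values, factor) {
  const out = [];
  const k = factor * 2; // hoisted loop-invariant expression
  for (let i = 0; i < values.length; i++) {
    out.push(values[i] * k);
  }
  return out;
}
```

Both versions compute the same result; the second simply does less work per iteration, which is exactly the kind of rewrite an SSA-based pass can perform mechanically.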
  • by BZ (40346) on Monday May 02, 2011 @10:04AM (#35999798)

    This part is one major difference: "Clean, textbook IR so optimization passes can be separated and pipelined with well-known algorithms."

    The current JM goes more or less directly from SpiderMonkey bytecode to assembly, which means any optimizations that happen in the process are ad hoc (and typically don't happen at all; even basic things like CSE aren't done in JM right now).
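For readers unfamiliar with CSE, the transformation can be sketched by hand in JavaScript (an invented example; a JIT performs this on its IR, not on source text):

```javascript
// Invented example: common subexpression elimination (CSE) done by hand.
// A compiler with a clean IR can apply this rewrite automatically.
function distanceSquaredNaive(ax, ay, bx, by) {
  // (bx - ax) and (by - ay) each appear twice; without CSE the
  // generated code computes each subtraction twice.
  return (bx - ax) * (bx - ax) + (by - ay) * (by - ay);
}

function distanceSquaredCSE(ax, ay, bx, by) {
  const dx = bx - ax; // each subexpression computed exactly once
  const dy = by - ay;
  return dx * dx + dy * dy;
}
```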

  • Re:LLVM (Score:5, Informative)

    by BZ (40346) on Monday May 02, 2011 @10:31AM (#36000080)

    http://blog.mozilla.com/dmandelin/2011/04/22/mozilla-javascript-2011/ [mozilla.com] has some discussion about LLVM in the comments. The short summary is that the LLVM register allocator, for example, is very slow; when doing ahead-of-time compilation you don't care much, but for a JIT that could really hurt. There are various other similar issues with LLVM as it stands.

  • by xouumalperxe (815707) on Monday May 02, 2011 @10:44AM (#36000192)
    It's a good question, but version numbers would really imply the wrong thing here. SpiderMonkey is the "root" JavaScript implementation. TraceMonkey adds trace trees to SpiderMonkey, which apparently means it JIT-compiles a type-specialised version of a function when it detects that the function is "hot" (executes very often). JaegerMonkey takes a more traditional "JIT compile everything" approach, but apparently also keeps the tracing feature in the backend to further optimise hot paths. From skimming TFA, IonMonkey adds to this a first pass that translates all the code into an intermediate representation that makes further optimisations easier. So in reality, all the FooMonkeys seem, to me, to be closer to large-scale plugins for SpiderMonkey than new engine versions per se.
  • Re:LLVM (Score:3, Informative)

    by lucian1900 (1698922) on Monday May 02, 2011 @10:56AM (#36000342)
    Because it's easier. All the projects that tried to write a good JIT with LLVM came to the same conclusion: LLVM sucks for JITs. That's both because of implementation issues and design choices. LLVM is a huge, complex C++ program. No one's going to fix it. Nanojit and the register allocators floating around are competitors.
  • by ferongr (1929434) on Monday May 02, 2011 @11:12AM (#36000468)

    Didn't Mozilla cry bloody murder when IE9 was discovered to perform dead-code elimination claiming it was "cheating" because it made some outdated JS benchmarks finish in 0 time?

  • by Anonymous Coward on Monday May 02, 2011 @11:15AM (#36000504)

    Yup. Python is translated to JavaScript in Pyjamas. http://pyjs.org

  • by BZ (40346) on Monday May 02, 2011 @11:20AM (#36000566)

    No, they just pointed out that IE's dead-code elimination was somewhat narrowly tailored to the Sunspider benchmark. That's quite different from "crying bloody murder".

    Tracemonkey does dead-code elimination right now; it has for years. It's just that it's a generic infrastructure, so you don't usually get artifacts with code being eliminated only if you write your script just so. On the other hand, you also don't usually get large chunks of web page dead code eliminated, because that wasn't a design goal. Keep in mind that a lot of the dead code JS JIT compilers eliminate is code they auto-generated themselves (e.g. type guards if it turns out the type doesn't matter, stack syncs if it turns out that no one uses the value, and so forth), not code the JS author wrote.
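A rough sketch of what a JIT-generated type guard amounts to, written out as ordinary JavaScript for readability (the function and the `deopt` callback are invented; real guards live in generated machine code, not source). If later analysis proves the inputs are always int32, the guard is dead code and can be eliminated:

```javascript
// Invented sketch: a type-specialized add with a guard that bails out
// ("deoptimizes") when its int32 assumption is violated.
function addCompiled(a, b, deopt) {
  // Guard: specialize for integer inputs. If the compiler can prove
  // all callers pass integers, this whole check is dead code.
  if (!Number.isInteger(a) || !Number.isInteger(b)) {
    return deopt(a, b); // bail out to the slower generic path
  }
  return (a + b) | 0; // fast int32 path
}
```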

  • by Anonymous Coward on Monday May 02, 2011 @11:22AM (#36000596)

    Check out CoffeeScript, Objective-J, Pyjamas, Skulpt.

  • by BZ (40346) on Monday May 02, 2011 @11:43AM (#36000806)

    > Narrowly pointing out and having their lead
    > "evangelist", Asa Dotzler

    Uh... I think you're confused about Asa's role here. But anyway...

    > I haven't seen any proof that the dead-code
    > eliminator is "somewhat narrowly tailored for
    > Sunspider"

    Well, it only eliminated code that restricted itself to the set of operations used in the Sunspider function in question. This is pretty clearly described at http://blog.mozilla.com/rob-sayre/2010/11/17/dead-code-elimination-for-beginners/ [mozilla.com]

    For example, it eliminated code that included |TargetAngle > CurrAngle| but not code that was otherwise identical but said |CurrAngle < TargetAngle|. Addition and subtraction were eliminated, while multiplication, division, and % were not.

    If it eliminated "any code that doesn't touch the DOM" that would be a perfectly good general-purpose eliminator. That's not what IE9 was doing, though.

    There was the side issue that it also produced buggy code; that's been fixed since, so at least IE9 final doesn't dead-code eliminate live code.
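For reference, the code shape under discussion looks roughly like this (a loose sketch modeled on the variable names quoted above; the constants and loop body are arbitrary). The loop's result is never used, so a general-purpose dead-code eliminator may legitimately remove the whole thing:

```javascript
// Invented sketch of the benchmark-style code shape: the loop has no
// observable effect, so it is eligible for dead-code elimination.
function spin(iterations) {
  const TargetAngle = 28.027;
  let CurrAngle = 0;
  for (let i = 0; i < iterations; i++) {
    // A general DCE pass removes this regardless of how the comparison
    // is spelled; the complaint was that IE9's pass reportedly only
    // fired on comparisons written exactly like the benchmark's.
    if (TargetAngle > CurrAngle) {
      CurrAngle += 0.1;
    }
  }
  return 0; // CurrAngle is never observed
}
```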

  • by BZ (40346) on Monday May 02, 2011 @12:35PM (#36001386)

    The big gains from JS interpreters to most of the current crop of JS JIT compilers (excluding Crankshaft and Tracemonkey, and possibly excluding Carakan's second-pass compiler) come from class inference, type specialization, ICs (inline caches), etc. That work is already done. The result is about 30x slower than C code (gcc -O2, say) on numeric manipulation.

    To get better performance than that, you need to do more complicated optimizations. For example, Crankshaft does loop-invariant code motion; Crankshaft and Tracemonkey both do common subexpression elimination, more complicated register allocation than the baseline JITs, inlining, and dead-code elimination. For a lot of code the difference is not that big, because there are big enough bottlenecks elsewhere. For some types of code (e.g. numeric manipulation) it's easily a factor of 2-3 difference, getting into the range of gcc -O0. And that's without doing the really interesting things like range analysis (so you can eliminate overflow guards), or flow analysis so you know when you _really_ need to sync the stack instead of syncing after every arithmetic operation, and so forth.
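An invented example of the overflow-guard point: JS numbers are doubles, so a JIT's fast int32 arithmetic must normally guard against results overflowing int32. Range analysis can prove some of those guards unnecessary:

```javascript
// Invented sketch: a loop where range analysis pays off.
function sumTo(n) {
  let sum = 0;
  for (let i = 0; i < n; i++) {
    // Without analysis, each `i + 1` (the loop increment) and `sum + i`
    // gets an int32 overflow check in the generated code. If the
    // compiler proves 0 <= i < n with n bounded, the increment's
    // overflow guard can be dropped and a plain add emitted.
    sum += i;
  }
  return sum;
}
```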

    For code that wants to do image or audio manipulation on the web, the textbook IR optimizations make a gigantic difference. That assumes that you already have the dynamic-language problems of property lookup and whatnot solved via the techniques Smalltalk and Self pioneered.

    One end goal here is pretty explicitly to be able to match C or Java in performance on numeric code, at least in Mozilla's case.
