
WebKit Unifies JavaScript Compilation With LLVM Optimizer 170

An anonymous reader tips this post at Webkit.org: "Just a decade ago, JavaScript – the programming language used to drive web page interactions – was thought to be too slow for serious application development. But thanks to continuous optimization efforts, it's now possible to write sophisticated, high-performance applications – even graphics-intensive games – using portable standards-compliant JavaScript and HTML5. This post describes a new advancement in JavaScript optimization: the WebKit project has unified its existing JavaScript compilation infrastructure with the state-of-the-art LLVM optimizer. This will allow JavaScript programs to leverage sophisticated optimizations that were previously only available to native applications written in languages like C++ or Objective-C. ... I'm happy to report that our LLVM-based just-in-time (JIT) compiler, dubbed the FTL – short for Fourth Tier LLVM – has been enabled by default on the Mac and iOS ports. This post summarizes the FTL engineering that was undertaken over the past year. It first reviews how WebKit's JIT compilers worked prior to the FTL. Then it describes the FTL architecture along with how we solved some of the fundamental challenges of using LLVM as a dynamic language JIT. Finally, this post shows how the FTL enables a couple of JavaScript-specific optimizations."
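
For the curious, the tier-up idea the post describes can be sketched in a few lines of C++. Everything below (names, thresholds) is invented for illustration and does not reflect WebKit's actual data structures or heuristics; the essence is that hot functions are promoted from the interpreter through cheaper JITs up to the LLVM-backed fourth tier as their execution counts grow:

    #include <cstdint>

    enum class Tier { Interpreter, Baseline, DFG, FTL };

    struct CodeBlock {
        Tier tier = Tier::Interpreter;
        std::uint64_t executionCount = 0;
    };

    // Hotter code earns a more aggressively optimizing (and more
    // expensive to produce) tier; thresholds here are hypothetical.
    static Tier desiredTier(std::uint64_t count) {
        if (count > 100000) return Tier::FTL;      // LLVM-backed fourth tier
        if (count > 1000)   return Tier::DFG;
        if (count > 100)    return Tier::Baseline;
        return Tier::Interpreter;
    }

    void onFunctionEntry(CodeBlock& block) {
        ++block.executionCount;
        Tier wanted = desiredTier(block.executionCount);
        if (wanted != block.tier) {
            // A real engine compiles in the background and switches a
            // running function via on-stack replacement (OSR).
            block.tier = wanted;
        }
    }
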
  • Additional benchmarks for Safari with this technology vs. Chrome with its JavaScript acceleration would be appreciated; is this a closing of the speed gap, a move ahead, or a lateral move (i.e. faster in some areas than Chrome, slower in others)?

    Also: the purpose Chrome has had in adopting JavaScript acceleration is that Google's web properties are JavaScript heavy, and accelerating JavaScript gives them a better overall user experience for Google Docs, GMail, and similar Google products. Was this a "compe

    • by Pieroxy ( 222434 )

      are other sites now following Google's lead, with increasingly sophisticated in-browser programs written in JavaScript?

      Do you even have to ask? Do you never go out on the internet?

      ALL websites carry more JavaScript than they did last year, and this is not a new trend. GMail merely pioneered (in the webmail space, that is) the single-page webapp where everything you do is driven by JavaScript and the DOM.

      • are other sites now following Google's lead, with increasingly sophisticated in-browser programs written in JavaScript?

        Do you even have to ask? Do you never go out on the internet?

        ALL websites carry more JavaScript than they did last year, and this is not a new trend. GMail merely pioneered (in the webmail space, that is) the single-page webapp where everything you do is driven by JavaScript and the DOM.

        Actually, I *do* have to ask. One of Google's blind spots in building very active JavaScript UIs is the fundamental assumption that everyone has the same RTT they do, sitting on Google's fiber-optic backbone, with very high bandwidth to go with it. Not all companies developing web sites have that blind spot, and I think that it contributes significantly to Google's willingness to make certain assumptions.

        I'd say that, other than companies that obviously aren't going to go anywhere, Google

        • by Pieroxy ( 222434 )

          I can tell you one thing: when GMail was released, it was the *fastest* webmail out there, bar none, especially on a slow network connection. So no, JavaScript doesn't mean high bandwidth, and I most certainly don't think that Google has that blind spot.

  • by StripedCow ( 776465 ) on Wednesday May 14, 2014 @05:26AM (#46997289)

    Cool stuff, but until one can write an optimized javascript virtual machine in it, I don't consider this project finished.

    • Cool stuff, but until one can write an optimized javascript virtual machine in it, I don't consider this project finished.

      Done. [github.com]

  • by Hognoxious ( 631665 ) on Wednesday May 14, 2014 @05:41AM (#46997321) Homepage Journal

    Just a decade ago, JavaScript – the programming language used to drive web page interactions – was thought to be too slow for serious application development.

    Of course now we know better.

    It's just too shit.

    • Re: (Score:2, Funny)

      That never stopped serious enterprise application development.
    • by jellomizer ( 103300 ) on Wednesday May 14, 2014 @11:36AM (#46999517)

      JavaScript has a lot of faults... However, there isn't a good standard to replace it. The core browser engines, Trident (IE), Gecko (Firefox), and WebKit (Chrome/Safari), all support JavaScript. Everything else needs plugins that may work in some browsers but not all.

      The alternative would be not using web pages for development at all, which sounds good until you run into problems with platform compatibility, deployment, and upgrades. Or you have middle-ground approaches such as Flash, ActiveX, and Java applets, which have their own issues.

      Until we can get the three core engines to support another unified language, we will be stuck with JavaScript.

      • by narcc ( 412956 )

        JavaScript has a lot of faults.

        Yeah. New, constructor functions, and now classes. Thankfully, the language is otherwise well designed and flexible enough that you can avoid those warts.

        Of course, every language has a lot of faults. What would you consider to be a better replacement?

  • by Required Snark ( 1702878 ) on Wednesday May 14, 2014 @06:22AM (#46997413)
    I read the article, and it's clear that they are trading space for speed. At every step they create multiple versions of JavaScript code, each with a specific optimization goal. As far as I can tell, they are not garbage collected as long as the code is in use, because at any point they can switch from a more optimized version to a less optimized version.

    Not only do they have many copies of the code around, they also keep a lot of information about how each version behaves as well as mapping structures so they can switch between the versions while they are running.

    I infer that this means a lot of code bloat. I have no sense of how this memory usage compares to the memory used for DOM storage and the like. Does anyone know if code memory is a significant contributor to the overall memory footprint of a WebKit based browser?

    Using Firefox on the Mac, I experience an ever-increasing memory footprint if I keep the browser running for a long period of time, i.e. over the course of a few days. This is partly laziness, but it also reflects how I use online references. Often I have multiple pages open at the same time as references, and I don't want to close them until I finish what I am working on (typically coding). After I have found a lot of relevant information online, I really don't want to end the browser session and then restore everything when I return to work.

    So how will WebKit perform in this environment? How does it compare to Chrome and Firefox for memory usage? If something besides FF didn't suffer from memory bloat I might consider changing. Any experience or benchmarks would be interesting to hear about.
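
    For a sense of what that bookkeeping might look like, here is a sketch in C++ of the kind of per-function state the article implies. The names are invented for illustration; they are not WebKit's actual structures:

    #include <cstddef>
    #include <map>
    #include <vector>

    struct MachineCode { std::vector<unsigned char> bytes; };

    // Maps a point in optimized code back to the equivalent point in less
    // optimized code, so execution can bail out in the middle of a function.
    using OSRExitMap = std::map<std::size_t, std::size_t>;

    struct CompiledFunction {
        MachineCode baseline, dfg, ftl;  // several machine-code copies live at once
        OSRExitMap dfgExits, ftlExits;   // per-version exit metadata
        std::vector<int> typeProfile;    // recorded type info feeding the optimizer
    };
    // None of this can be freed while the function might still run: a failed
    // speculation can fall back to a less optimized copy at any time.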

    • I really don't want to end the browser session and then restore everything when I return to work.

      If restarting Firefox and reloading all the same pages again is enough to significantly drop the memory consumption, that implies Firefox is leaking memory.

      It doesn't sound to me like laziness on your part is to blame (one shouldn't have to constantly be restarting applications to work around memory leaks), nor your browsing habits.

      • If restarting Firefox and reloading all the same pages again is enough to significantly drop the memory consumption, that implies Firefox is leaking memory.

        Not necessarily. It could be memory fragmentation or in-memory caching. (Not knowing anything about FF in particular, but remembering a friend who was a developer for another web engine talking about these issues.)

      • by tlhIngan ( 30335 )

        I really don't want to end the browser session and then restore everything when I return to work.

        If restarting Firefox and reloading all the same pages again is enough to significantly drop the memory consumption, that implies Firefox is leaking memory.

        Actually, Firefox does another optimization on reload that speeds things up - it doesn't restore the entire session. Close Firefox with 10 tabs open and when you restore it, Firefox only reloads the last visible tab. The others are merely placeholders with p

      • If restarting Firefox and reloading all the same pages again is enough to significantly drop the memory consumption, that implies Firefox is leaking memory.

        Not really; it implies something related to Firefox is leaking.

        Over the past decade, by far the largest memory leaks with firefox have come from extensions. Adblockers can be particularly bad culprits here.

    • I am no specialist but I believe that code size these days is negligible compared to data size (unless we are talking about low-memory embedded systems.) Even if you keep multiple copies of the application code around, you only keep one copy of the application data.

    • My guess is that the amount of space this takes is comparable to a couple of full-sized pngs that the graphics guy forgot to scale down.

  • Just a decade ago. (Score:5, Insightful)

    by serviscope_minor ( 664417 ) on Wednesday May 14, 2014 @06:29AM (#46997439) Journal

    "Just a decade ago, JavaScript â" the programming language used to drive web page interactions â" was thought to be too slow for serious application development. ... and now after 10 years of increases in CPU speed, increase in amounts of RAM and fevered development in optimizing it's now got to the stage where new Javascript kinds sorta competes with C++ on a ten year old machine.

    As before, people will still write screeds on how it's really as fast as C++ this time, honest.

    Interestingly, there's one language out there which regularly benchmarks better than C++, and it's called FORTRAN (suck it, '95ers). It's also one of the few languages where you never see long articles on micro-benchmarks claiming that *this* year it's as fast as C++.

    Anyway, yeah, in some inner loop micro benchmarks where there's really good information available to the compiler, the dynamic languages (including Java) benchmark similarly to the native ones.

    Basically it's all about aliasing and consistency. The more independence assumptions the compiler can make about what doesn't alias, the better the optimizations it can perform. This folds into consistency: if you have an inner loop diddling a bunch of floats, then thanks to the aliasing rules the loop bounds are usually provably fixed, allowing lots of nice optimizations.

    FORTRAN does that really, really, really, REALLY well. C++ does it pretty well. C99 does it a bit better than C++11, though in practice pretty much all C++ implementations support the C99ism as an extension. Everything else is much worse, which means that a much smaller range of things can be optimized well.

    And then on to memory. Firstly, garbage-collected languages can only get to within a factor of 2 (or so) of a manually managed language's memory use before the computational cost of GC starts to dominate [citation needed]. Really, there are papers studying this. This has an impact on speed because it can hurt cache locality, and it also affects scaling on large datasets.

    There's also the fact that memory allocation in C++ (and languages with a similar machine model) is largely free: unless you hammer the free store, it's all done on the stack, which means the memory allocation is pre-computed at compile time as part of the function prolog. Sure, some of the dynamic languages can approach such things with escape analysis, but they can never do as well, for the same reason C++ doesn't do as well as FORTRAN. With C++ you promise to never let local variables escape and the compiler can fuck you if you lie. With the more dynamic languages, the compiler has to figure it out to be correct.

    Now interestingly, things like the simple inner loops of naive matrix multiplication can be optimized quite well in other languages now, because compilers are getting quite good at that. However, not much of my stuff is that simple. If you have bigger, more complicated inner loops, like you often get in computer vision for example (my job), then C++ really shines.

    This is why I've never been much of a fan of the "do the logic in python and the inner loop in C" style of things. Unless your inner loop is really simple, you have to do complex things in a not very expressive language otherwise it's slow.

    These improvements sort of make that model obsolete: if you have a really simple inner loop, they can optimize as well as C++. It's everything else where they can't.

    TL;DR

    Aliasing.
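
    To make that concrete, here is a minimal sketch, assuming GCC/Clang's __restrict extension (the C99ism mentioned above). With the qualifiers, the compiler may keep *n in a register and vectorize the loop; without them, every store to out[] could in principle modify *n, forcing a reload on every iteration:

    // With __restrict, the loop bound *n is provably invariant and
    // out/in are provably disjoint, so the loop can be vectorized.
    void scale(float* __restrict out, const float* __restrict in,
               const int* __restrict n) {
        for (int i = 0; i < *n; ++i)
            out[i] = 2.0f * in[i];
    }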

    • The interesting point is that with these new JavaScript improvements, it is now possible to target your language of choice to the browser. So you could write FORTRAN if you like, and have it run in the browser. Or Python, or Haskell.

    • Interestingly, there's one language out there which regularly benchmarks better than C++, and it's called FORTRAN

      It doesn't in real life.
      In practice, most of the production code written in C++ performs better than code written to solve the same problems in FORTRAN.

      I own a business dedicated to optimizing numerical and scientific code (including computer vision, your job), and we can beat FORTRAN any day. For naive code, it might be slightly better, but when you care about performance, you're going to want t

      • It doesn't in real life.
        In practice, most of the production code written in C++ performs better than code written to solve the same problems in FORTRAN.

        Depends. C++ is more expressive, certainly. That makes it faster to write and therefore one can spend more time on tuning the algorithms. Interestingly, this is also the claim of Walter Bright about D: it's not that it's inherently faster, but the expressivity allows you to get faster algorithms developed more quickly. Implying of course that with constraine

        • I gather the $$$ commercial compilers are quite a bit ahead of GCC

          Of course when I'm speaking of FORTRAN compilers, I mean Intel and SGI.
          And they're not even really better than GCC or Clang in many cases.

          FORTRAN wins (last time I checked) more of the funny language shootout benchmarks than any other single language.

          It wins when you compare trivial syntactically equivalent code.
          The point is that when you use C++, you can do more than that.

          However, I do sometimes notice missing optimizations in C++. Especiall

          • Of course when I'm speaking of FORTRAN compilers, I mean Intel and SGI. And they're not even really better than GCC or Clang in many cases.

            Oh, huh. Well, I guess the gap has closed then.

            It wins when you compare trivial syntactically equivalent code. The point is that when you use C++, you can do more than that.

            Yep: other languages pull close for trivial code. Fortran is one of the few capable of beating C++ for trivial code. The reason I like C++ so much though is it makes the non-trivial code both easy to

            • It's not just that: even without redundant loads, the optimizer needs to know about aliasing.

              Just write your code in a way where the compiler does not need to guess about what aliases what.
              Just load it once explicitly, problem solved.

              i.e. instead of


              out1[i] = in[i];
              out2[i] = in[i];

              write


              T value = in[i];
              out1[i] = value;
              out2[i] = value;

              This way, even if the compiler doesn't know that in and out1 don't alias, you only load once and not twice.
              This is the kind of optimization (it's called scalarization) that

              • This way, even if the compiler doesn't know that in and out1 don't alias, you only load once and not twice.

                Sure, but that's the most trivial example.

                If you have:

                out[i] = in[i]

                If out and in don't alias, it can batch them up and e.g. use some SSE things to copy faster.

                You should vectorize, unroll and pipeline explicitly if you really want it to happen.

                Why? Often the compiler does a decent job, and I'd rather not do the compiler's job for it.

                In C++ it's not hard with the proper tools, and it can be done generi
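
                As a concrete instance of the "use some SSE things" point above, a hand-vectorized copy in C++ (a sketch only: it assumes x86 SSE and a length divisible by 4; real code needs a scalar tail loop and alignment handling):

                #include <immintrin.h>

                // Copies n floats, 4 at a time, with explicit SSE intrinsics.
                void copyFloats(float* out, const float* in, int n) {
                    for (int i = 0; i < n; i += 4) {
                        __m128 v = _mm_loadu_ps(in + i);  // load 4 floats
                        _mm_storeu_ps(out + i, v);        // store 4 floats
                    }
                }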

                • If you have:

                  out[i] = in[i]

                  If out and in don't alias, it can batch them up and e.g. use some SSE things to copy faster.

                  I didn't realize this sort of thing prevented vectorization; that might indeed be true.

                  You can still avoid the problem by writing it like this (verbose; you might want to investigate another option):


                  T in0 = in[i];
                  T in1 = in[i+1];
                  T in2 = in[i+2];
                  T in3 = in[i+3];
                  out[i] = in0;
                  out[i+1] = in1;
                  out[i+2] = in2;
                  out[i+3] = in3;

                  Why? often the compiler does a decent job, and I'd rather not do the compiler's

    • by jbolden ( 176878 )

      You have your order a bit wrong. The articles in the 1950s were written about how Fortran in practice wasn't much slower than assembler. After Fortran, the move was to try to do for general programming (including systems programming) what Fortran did for numerical computation; ALGOL-60 came out of that movement. C evolved, through many more steps, from ALGOL-60. Fortran isn't as fast as C; C is as fast as Fortran.

      As for the rest of your post... Of course languages designed not to compromise performance

    • by sribe ( 304414 )

      And then on to memory. Firstly, garbage collected languages can only get with a factor of 2 for memory use (or so) before the computational cost of GC starts to dominate [citation needed]. Really, there are papers studying this. This has an impact on speed because it can make the cache coherency worse, and does also affect scaling on large datasets.

      IIRC, you're wrong about the factor of 2; it's actually much worse than that.

      Other than that, yes, all good reasons that JavaScript is not now fast, but is instead now less incredibly slow... And don't forget all the inlining that you get with C++ and templates...

    • I was developing before object oriented development took over.

      I still don't see any great advantage to OO development, and lots of disadvantages.

      • by Viol8 ( 599362 )

        The only true advantage of OO as far as C++ is concerned is that you don't have to manually manage arrays of function pointers in a struct for inheritance. Other than that, it's just a slightly cleaner syntax. Most of the other advantages are overstated and usually advocated by people who can't program any other way.

        • RAII is a pretty huge advantage, even in the absence of exceptions. I wish this were something that C would tackle in its own simpler way (e.g. by introducing scope guards into the language).

          • by Viol8 ( 599362 )

            Unless everything is on the stack or you use the STL for everything, you have to manually manage resources in the con/destructor anyway, so it's not that big a deal. As for exceptions - they're worse than gotos. Used and abused by people too lazy to do proper value checks. "What will happen? Who cares, just let it throw." Bah.

            • Unless everything is on the stack or you use the STL for everything, you have to manually manage resources in the con/destructor anyway, so it's not that big a deal.

              In a well-written C (or C++) program, the proportion of stack-allocated objects and subobjects should be very significant.

              Anyway, the point of RAII is not memory management. It's about having a single definitive place where you put the code that frees your resource - in the case of scope guards, an explicit call to free() or fclose() etc. The point is that you only write it once, and then any control transfer that leaves the guarded scope (such as an early return with an error code) will run that
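
              A minimal sketch of that idea in C++ (illustrative only, not from any particular codebase): the fclose() is written exactly once, and every exit from the scope, early return included, runs it:

              #include <cstdio>

              struct File {
                  std::FILE* f;
                  explicit File(const char* path) : f(std::fopen(path, "r")) {}
                  ~File() { if (f) std::fclose(f); }  // the one and only close
                  File(const File&) = delete;         // single owner
                  File& operator=(const File&) = delete;
              };

              int firstByte(const char* path) {
                  File file(path);
                  if (!file.f)
                      return -1;             // early return: destructor still runs
                  return std::fgetc(file.f); // normal return: file closed on scope exit
              }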

              • by Viol8 ( 599362 )

                In most large programs I've worked on, the majority of heap resources were global because they're used in a multitude of areas, not just down one particular code path, so freeing them was done ad hoc. C++ requires RAII because of some people's obsession with using objects all the time regardless of whether it's necessary: e.g. std::string instead of char*, vector instead of an array even if the max size is already known beforehand. And so on. They don't seem to realise that constant allocation and dealloca

    • In most applications, only a really small part of the code actually yields a significant overall performance gain from having all these optimizations available. Unfortunately, writing an application is often an all-or-nothing game: either you write everything in language A or everything in language B. The reasons vary, but in the old days things were a little better in this regard; you could write C/Pascal and put assembly code right in the middle of it to get the time-critical parts handled better, thes

      • by spiralx ( 97066 )

        Civ 4 does some of the AI in Python, IIRC mostly evaluating heuristics for moves, but most of it is C++. The SDK for customising the AI came out about six months after the main SDK; it wasn't originally designed to be exposed to Python.

        But I agree that the hybrid approach is a good one, especially as I feel you're overstating the cost of using two different languages together - neither Python nor Lua are very hard to integrate with C/C++ at all, even without tools like SWIG that automate a lot of the boiler

  • LLVM is also the back end behind the Julia Programming Language, and its thoughtful use by the Julia guys has made that language blindingly fast compared to R, Matlab, Python etc. etc., while still "officially" being an interpreted language. So yes, why not?
    • In what sense is Julia "officially an interpreted language"?

      Just because it has a REPL doesn't make it interpreted.

      • Julia is a high-level dynamic programming language designed to address the requirements of high-performance numerical and scientific computing while also being effective for general purpose programming. Julia's core is implemented in C and C++, its parser in Scheme, and the LLVM compiler framework is used for just-in-time generation of machine code.

        Source: wikipedia entry on "Julia programming language" [wikipedia.org], called on May 15, 08h56m CEST

        Walks like an interpreted language, looks like an interpreted language, quacks like an interpreted language... could it be an interpreted language?

        • I don't see any mention of the word "interpreted" on that page. It does say "dynamic" (which in this case means dynamically typed, though it has optional static typing with inference, as well), but that does not make it interpreted. In fact, the webpage is pretty specific about what they do:

          "Julia’s LLVM-based just-in-time (JIT) compiler combined with the language’s design allow it to approach and often match the performance of C."

          And no, "JIT-compiled" is not the same as "interpreted", either.

          O

          • Granted, it does not say "interpreted" on that page. Hence the "walks like... looks like... quacks like" analogy. In that sense, Java is "interpreted" as well, albeit via a bytecode interpreter. The interesting thing about Julia is the assembly code generated after the first run of any routine: compare the first run's performance with that of later runs, and after a couple of runs you can see incredible performance gains.
  • When will they implement the "unnecessary" things, like proper support for integers, strings, etc.?
    • Probably never. All incentive to replace JavaScript with a serious programming language was killed by giving the existing JavaScript these more powerful capabilities (even though it is still painful to use).

  • Seems like we're trying awfully hard to not notice that this is an Apple project [infoworld.com].
  • ...doesn't mean you should.

    >> it's now possible to write sophisticated, high-performance applications

    Why couldn't we have gotten a real object oriented language with real classes instead of souping up this painful nested-function oddity that was originally only meant to add a little eye candy to 90s-era web pages?!?! OK, yeah, I get it: https://xkcd.com/927/ [xkcd.com]. But come on! Javascript sucks! Now it sucks with more power! And there is less incentive to ever replace it with something that doesn't suck!

    Could

    • No, but in 5 years NodeJS will likely be the most prevalent platform for WebApps -- why develop the backend in Ruby, Python, PHP or Java when you can create your whole stack with Javascript?

      I hated the choice of ECMAScript 3.1 (now called ECMAScript 5) over ECMAScript 4 [wikipedia.org], which had things like static typing, classes, modules and the like, but the ECMAScript 4 proposal lives on as Harmony, which is slated to be dubbed ECMAScript 6 with its draft finalized by EOY 2014, but who knows when or if it will ever be a

      • In 5 years, node.js will likely be in the same place as RoR is today relative to 5 years ago.

        • That's a possibility, but on the other hand, NodeJS has several advantages that RoR doesn't have:
          -- Clusters across Cores
          -- Extremely light footprint
          -- Can be quickly downloaded, installed and deployed across a variety of devices
          -- Can write a full-stack in a single language
          -- Multiple form factors: create shell scripts and services with equal ease
          -- No webserver dependency, allowing a single instance to create an HTTP Service and a Socket service with Socket.io in a single app layer

          I'm not saying it's not p

          • Thing is, I think the present popularity of node has less to do with the things that you've listed, and more to do with it being perceived as "cool". This is further reinforced when you look at the kind of stuff that floats around in the npm registry.

            Also, I'm not particularly convinced of the utility of a single-language stack in the case where isolation boundaries are strong and strict, as is the case between server and client in web apps. About the only thing that I can think of being consistently sharea

    • by narcc ( 412956 )

      Why couldn't we have gotten a real object oriented language with real classes

      Because sometimes we win. You'll have a hard time finding anyone who will advocate classes over prototypes. Just google "classes vs prototypes" and you'll find more than enough to convince you.

      Stop trying to treat JavaScript like it's Java and actually learn the language. You'll quickly discover that JavaScript doesn't suck.

      • You'll have a hard time finding anyone who will advocate classes over prototypes.

        You have a hard time finding 99% of developers that tried both things?

        Just google "classes vs prototypes" and you'll find more than enough to convince you.

        Your methodology is flawed. The vast majority of those articles online are written by proponents of prototype-based OO to justify its existence. Proponents of class-based OO don't need to justify its existence, because it's a design proven by decades of experience, and there's no feeling of insecurity attached to it. Simply put, guys who like classes don't need to prove to the world that they are special little snowflakes who are just horribly

        • by narcc ( 412956 )

          Nice non-argument. The truth, of course, is that prototypes are simpler and more "powerful". You can try to argue against that, but you'll fail miserably.

          If you don't like the google search, please, provide something that makes the opposite case. You'll find it impossible.

  • The real question. (Score:4, Insightful)

    by MouseTheLuckyDog ( 2752443 ) on Wednesday May 14, 2014 @11:34AM (#46999501)

    What idiot would want JavaScript for application development?

    • Re: (Score:3, Informative)

      by ahoffer0 ( 1372847 )

      What idiot would want JavaScript for application development?

      Me. I am one of the idiots. I like prototypes over classes, and I like first-class functions. I'm in good company. Other idiots include Google, DataHero, Facebook, Dow Jones, and Uber.

      • by jez9999 ( 618189 )

        What about Firefox OS? "Everything's a web app."

      • What idiot would want JavaScript for application development?

        Me. I am one of the idiots. I like prototypes over classes, and I like first-class functions. I'm in good company. Other idiots include Google, DataHero, Facebook, Dow Jones, and Uber.

        Though this be madness, yet there is method in 't.

        Will you walk out of the air, my lord?

    • by MoronGames ( 632186 ) <cam@henlin.gmail@com> on Wednesday May 14, 2014 @01:12PM (#47000625) Journal
      I do. I work for a small company and we write all of our tablet apps in JavaScript and wrap them in Cordova. It lets us deploy on iOS, Android, and Windows all at once with minimal effort and cost, which is great for a small dev team. It even makes testing a breeze, because I can write all of my functional tests in CasperJS and be reasonably sure that everything is working right prior to handing things off to the test team. Much easier than messing with some junk software like Appium and writing tests for each mobile platform individually.
    • The main problem with JavaScript is that it's hard to debug. If you can get past that, it's a pretty OK language.

      The runtimes are so variable that I'd think you'd need to really standardize on one runtime and stick with it forever.

      There are times where being able to pass functions around in objects, etc is kind of handy, especially since you can pass them in as json at runtime. Again, that makes debugging really hard. If you persist them you can have objects that look the same with totally different implementatio

  • Would it have hurt the submitter to spend a few more seconds expanding the first use of LLVM, just like JIT was?

    sheesh...

    • LLVM is a popular project with many years of history by now, not some newfangled thing that appeared yesterday. A typical /. reader would be expected to know what it is without the submitter explaining it, just like nobody explains what Linux or the JVM is.

  • In physics, FTL = faster than light. Nice pun. However, the sheer horrendous complexity of the system described in the blog post indicates everything that is wrong with JavaScript as "the assembly language of the Web". Why, oh why, haven't we replaced JavaScript with something cleaner, more robust and more efficient? It's 2014, people.
