Firefox Mozilla Programming

Asm.js Gets Faster (289 comments)

mikejuk writes "Asm.js is a subset of standard JavaScript that is simple enough for JavaScript engines to optimize. Now Mozilla claims that with some new improvements it is at worst only 1.5 times slower than native code. How and why? The problem with JavaScript as an assembly language is that it doesn't support the range of datatypes that are needed for optimization. This is good for human programmers because they can simply use a numeric variable and not worry about the difference between int, int32, float, float32 or float64. JavaScript always uses float64 and this provides maximum precision, but not always maximum efficiency. The big single improvement that Mozilla has made to its SpiderMonkey engine is to add a float32 numeric type to asm.js. This allows float32 arithmetic in a C/C++ program to be translated directly into float32 arithmetic in asm.js. It is also backed up by an earlier float32 optimization introduced into Firefox that benefits JavaScript more generally. Benchmarks show that Firefox f32, i.e. with the float32 type, while still nearly always slower than native code, is now approaching the typical speed range of native code. Mozilla thinks this isn't the last speed improvement it can squeeze from JavaScript. So who needs native code now?"
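For the curious, the float32 arithmetic rides on Math.fround, which rounds a double to the nearest single-precision value; wrapping each intermediate result in it reproduces C 'float' semantics, which is what lets the engine emit single-precision instructions. A rough sketch of the idea (the dot3 function here is invented for illustration):

    var f = Math.fround;

    function dot3(ax, ay, az, bx, by, bz) {
        // every intermediate result is rounded back to float32, matching C 'float' arithmetic
        return f(f(f(ax * bx) + f(ay * by)) + f(az * bz));
    }

    dot3(f(0.1), f(0.2), f(0.3), f(1.5), f(2.5), f(3.5)); // ~1.7, computed in single precision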
  • by Anonymous Coward on Sunday December 22, 2013 @05:27PM (#45762631)

    Umm, anyone who wants their code to not run substantially slower. Seriously, do you front end programmers really think nobody does numerical simulations or other performance-sensitive work? In my line of work, I'd kill for the opportunity to make my code 1.5 times faster!

    • by gagol ( 583737 )
      Also, the major downside of JavaScript that I have found to hinder performance in ambitious projects is memory management. In cases where I need gigantic arrays, performance slows to a crawl as the data grows. Can anyone knowledgeable shed some light on this aspect of asm.js?
      • If "as data grows" you mean the array itself is growing, then that will happen with any language: resizing an array requires allocating a new memory area, and copying the data over to the new one. No clue if javascript has any other limitation regarding big arrays.
        • Re: (Score:2, Informative)

          by Anonymous Coward

          Tracing garbage collected languages will always be slower because:

          1) The tracing adds a ridiculous amount of unnecessary work; and

          2) While allocation is at best O(N) for GC'd and regular programs alike, deallocation can be (and often is) made O(1) using memory pools in C and C++ programs, something that can't be done in GC'd languages because the collector doesn't understand semantic interdependencies.

          For ref-counted collectors #2 still applies.

          Unless and until some unforeseen, miraculous breakthrough happens in langu
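          For what it's worth, asm.js-style code sidesteps the collector by managing its own heap inside one big typed array, which recovers the pool trick in a GC'd language; a toy sketch (the helper names here are made up, not any real asm.js API):

            // Toy bump allocator over a single ArrayBuffer, in the spirit of how
            // Emscripten-compiled code manages its own heap (typed-array views go on top).
            var HEAP = new ArrayBuffer(16 * 1024 * 1024);
            var top = 0;

            function alloc(bytes) {             // O(1) allocation: just advance a pointer
                var ptr = top;
                top = (top + bytes + 7) & ~7;   // keep 8-byte alignment
                return ptr;
            }

            function resetHeap() {              // O(1) "free everything", pool-style
                top = 0;
            }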

          • by dshk ( 838175 ) on Sunday December 22, 2013 @07:21PM (#45763371)

            deallocation can be (and often is) made O(1) using memory pools in C and C++ programs, something that can't be done in GC'd languages

            I believe current Java (not Javascript!) virtual machines do exactly this. They do escape analysis, and free a complete block of objects in a single step. This works out of the box, there is no need for memory pools or any other special constructs.

            • by Trepidity ( 597 )

              Good Lisp compilers have long done this as well. If the compiler can determine that certain variables don't escape the current execution context, it can even stack-allocate them rather than put them on the heap and have to GC them at all. You can help the compiler out by promising that's the case with the dynamic-extent declaration.

            • by gl4ss ( 559668 )

              and I keep hearing that you haven't needed to call System.gc() in over a decade... yet calling it has been essential in some programs so that things don't go to shit (apart from those VMs which crashed on it, heh; I should say that this hasn't been on desktop Java, so YMMV).

              that being said, yeah, you don't need to free the memory yourself, but if you're working on real-life apps running on limited hardware and limited software... you're still better off knowing _when_ you have caused things to be so that t

        • This is one of those things that's not necessarily true (though it usually is in a managed-memory environment). If you have access to the OS's memory facilities and know your memory usage pattern, you can actually reserve a lot of virtual memory pages in advance. Only the committed ones consume physical memory -- in other words, you can expand your array by committing subsequent pages. And since we haven't gotten to the point where computers actually have 2^64 bytes of memory, we don't need t
    • Re: (Score:3, Funny)

      by Anonymous Coward

      You mean I can't write that OS in JavaScript for my CS degree?

    • by Sycraft-fu ( 314770 ) on Sunday December 22, 2013 @06:25PM (#45763067)

      I get pissed when I hear programmers say "Oh memory is cheap, we don't need to optimize!" Yes, you do. In the server world these days we usually don't run things on physical hardware; we run them in a VM. The fewer resources a given VM uses, the more VMs we can pack on a system. So if you have some crap code that gobbles up tons of memory, that is memory that can't go to other things.

      It is seriously like some programmers can't think outside the confines of their own system/setup. They have 16GB of RAM on their desktop, so they write some sprawling mess that uses 4GB. They don't think this is an issue; after all, "16GB was super cheap!" Heck, they'll look at a server and see 256GB in it and say "Why are you worried?" I'm worried because your code doesn't get its own 256GB server; it gets to share that with 100, 200, or even more other things. I want to pack in services as efficiently as possible.

      The less CPU, memory, disk, etc. a given program uses, the more a system can do; or, conversely, the less powerful the system needs to be. For a single-user system, like an end-user computer, it would always be nice if we could make them less powerful, because that means less power hungry. If we could make everything run 1.5 times as fast, what that would really mean is we could cut CPU power by that amount and not affect the user experience. That means longer battery life, less heat, less waste, smaller devices, etc.

      • I get pissed when I hear programmers say "Oh memory is cheap, we don't need to optimize!"

        Preach on, brother. I'd love to see how some of these guys would function in the embedded world, where you often get 1K of flash and 128 bytes of RAM to work with.
        • Re: (Score:3, Informative)

          Just fine, at least I do. Just different sets of optimizations to keep in mind, as well as different expectations. I don't think any reasonable person would approach the two problems the same way, but it all boils down to basic computer science.

          Light up pin 1 when the ADC says voltage is dropping which indicates that pressure is too low on the other side of the PPC. Compare that to indexing a few gigs of text into a search engine. Completely different goals, completely different expectations. I'm not master

          • Just fine, at least I do.

            That's great to hear. Seriously, I'm not being snarky. It's nice to see more guys that can deal with different environments well. You're absolutely right that it's not rocket science, but a lot of guys don't get past the abstract model of the machine that their dev environment/language provides for them.
      • by bberens ( 965711 ) on Sunday December 22, 2013 @09:25PM (#45763929)
        That depends a lot on what you're doing. It costs about $125/hr for me to optimize my code. Every 8 hours spent optimizing is $1k down the drain and you can buy an entry level server for that price. If it's running on one or two VMs it's probably not worth my company's time to optimize it vs buying more hardware. Conversely if I'm Google a 1% optimization could mean literally thousands of servers. I mean, don't get me wrong. I pay attention to memory allocation and CPU optimization as I'm coding, but only to the level of not making unnecessarily egregious use of resources. Pretty much anything beyond that is a waste for most of the projects I work on.
        • by bondsbw ( 888959 )

          And my stance is that I first make sure it is correct, then I worry about optimization.

          I know too many projects that were done in the reverse. Sure, we have a slick and fast system, but only when it works. It's kind of like saying /dev/null is a fast database [mongodb-is-web-scale.com].

        • You have to price out all the details. The question isn't what the server costs to buy. It is what it costs to buy, what support on it costs, both in terms of a warranty and sysadmin time, what physical space costs, or is available, what power and cooling cost, and what kind of reliability you need.

          A "cheap" server can end up being not so cheap in many cases. A cheap server is great right up until the point where it fails, and then there is no way to fix it or restore it in a reasonable amount of time.

          You c

      • by zippthorne ( 748122 ) on Sunday December 22, 2013 @09:56PM (#45764055) Journal

        Another question is why we need to duplicate an entire operating system to encapsulate applications. If you have 100 things that need to run on a machine why should you need to also run 100 entire operating systems? Something is wrong with the way we're designing servers.

        • by Pinhedd ( 1661735 ) on Monday December 23, 2013 @04:06AM (#45765261)

          Ideal virtual machines are indistinguishable from networked servers. Most x86 VMMs don't quite reach this level of isolation, but the VMMs used on IBM's PowerPC based servers and mainframes do.

          From the perspective of system security, a single compromised application risks exposing to an attacker data used by other applications which would normally be outside of the scope of the compromised application. Most of these issues can be addressed through some simple best practices such as proper use of chroot and user restrictions, but those do not scale well and do not address usability concerns. A good example is the shared hosting that grew dominant in the early 2000s while x86 virtualization was still in its infancy. It was common to see web servers with dozens if not hundreds of amateur websites running on them at once. For performance reasons a web server would have read access to all of the web data; a vulnerability in one website allowing arbitrary script execution would allow an attacker to read data belonging to other websites on the same server.

          From the perspective of users, a system designed to run 100 applications from 20 different working groups does not provide a lot of room for rapid reconfiguration. Shared resource conflicts, version conflicts, permissions, mounts, network access, etc... it gets extremely messy extremely quickly. Addressing this requires a lot of administrative overhead and every additional person that is given root privileges is an additional person that can bring the entire system down.

          Virtual machines, on the other hand, give every user their own playground, including full administrative privileges, in which they can screw around to their heart's content without the possibility of screwing up anything else or compromising anything that is not a part of their environment. Everyone gets to be a god in their own little sandbox.

          Now, that doesn't mean that the entire operating system needs to be duplicated for every single application. Certain elements such as the kernel and drivers can be factored out and applied to all environments. Solaris provides OS level virtualization in which a single kernel can manage multiple fully independent "zones" for a great deal of reduced overhead. Linux Containers is a very similar approach that has garnered some recent attention.

      • I get pissed when I hear programmers act like raw performance is the sole measurement of the quality of a language.

    • by suy ( 1908306 )

      Umm, anyone who wants their code to not run substantially slower. Seriously, do you front end programmers really think nobody does numerical simulations or other performance-sensitive work? In my line of work, I'd kill for the opportunity to make my code 1.5 times faster!

      I'm surprised by your answer on so many levels. First, I thought the people doing scientific calculations were scientists who many times (not always, of course) are only used to Matlab, Mathematica or even Excel. Second, obviously we will need native code, as well as interpreted, functional, and lots of code in domain-specific areas (what the heck... I spent this Sunday morning writing in VimL, a language so stupid that it can't copy a file without reading the whole contents into memory or invoking system(), but I

      • by Shinobi ( 19308 )

        "I'm surprised by your answer at so many levels. First, I thought the guys doing scientific calculations were scientists that many times (not always of course) are only used to Matlab, Mathematica or even Excel. "

        That's why quite a few supercomputing facilities offer software porting/"translation" services... Some of the projects I've done over the years have been freelance contracts to rewrite a program from one stack to another.

        Because if they can get something from MatLab/mathematica into Fortran which e

    • by LWATCDR ( 28044 )

      Well computers are so fast you can just throw more hardware at the problem.
      Just kidding. People do not understand that one of the reasons people still use Fortran for HPC apps is that there are a lot of really good optimized libraries, and that Fortran is really easy to optimize.
      What I do not get from the story is why JavaScript's use of floats is a problem. Last I heard, floating point using SSE or OpenCL is often as fast as integer code. I could be wrong because most of the projects I work

    • Remember kids - all programming is high-performance computing!

    • In my line of work, I'd kill for the opportunity to make my code 1.5 times faster!

      Then wait a year and buy a new computer.

  • Part of what I do is writing high-performance code in both "native" and various scripting languages. I just completed yet another comparison for our product, looking for the best-performing scripting language for a specific task. Lua with a just-in-time compiler came in first. It is *only* about 10x slower than native code produced by gcc with moderate optimization, when measured on a variety of numeric benchmarks. It is considerably slower when dealing with strings and more complex data types, as expected.

    • asm.js is not ordinary JavaScript though; it's a specific subset with restricted coding rules that allow the compiler to do its stuff. General JS will be just as slow as any of the other scripting languages.

    • If LuaJIT 2.x was *ten times* slower than C for you, perhaps you were doing something wrong?
    • Re: (Score:2, Informative)

      by Anonymous Coward

      Asm.js is not JavaScript. It's Mozilla's way of hacking together a very specific optimization for JS-targeting compilers such as Emscripten, because they don't want to adopt the sane route of just implementing PPAPI and Google's Native Client sandbox, both of which don't work well with single-process browsers. From a developer perspective it's fairly trivial to target both Asm.js and PNaCl (Google's Native Client except with LLVM bytecodes), or target one and write a polyfill for the other. In either case,

  • Update the ECMA standard to include a set of features that is meant to be used when compiling a better language (C#, C++, Java, Perl, Python, LISP, Haskell, x86 asm, MUMPS, pig-latin, etc.) down to JavaScript so it will run in any browser at native code speed. Then I think everyone will be happy.

    • by sribe ( 304414 )

      Update the ECMA standard to include a set of features that is meant to be used when compiling a better language (C#, C++, Java, Perl, Python, LISP, Haskell, x86 asm, MUMPS, pig-latin, etc.) down to JavaScript so it will run in any browser. Then I think everyone will be happy.

      Fixed that for you ;-)

    • You want NaCl, not ECMA.

      The thing about asm.js is that it's available -- just like JavaScript, which is prevalent not because it's really any good (it's a horrible design from a performance point of view). asm.js was created to improve Emscripten speed (Emscripten compiles C/C++ down to JavaScript). The subset of JS used will run as regular JavaScript, and ridiculously, sometimes running the code on the regular JS interpreter will execute it faster than the asm.js code. I don't like Emscripten. A language divorced of the protot

      • by narcc ( 412956 )

        A language divorced of the prototype hogwash and weird 'this' scoping of JS (which causes OOP headaches) would be nicer.

        You're still trying to treat it like Java or C++. That "prototype hogwash" is undoubtedly the more powerful and flexible approach. The "weird 'this' scoping" makes perfect sense once you have a basic understanding of the language.
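        A tiny illustration of both points (plain JavaScript, nothing framework-specific; the Counter type is made up):

          function Counter() { this.n = 0; }
          Counter.prototype.bump = function () { return ++this.n; }; // shared via the prototype chain

          var c = new Counter();
          c.bump();                  // 1 -- 'this' is c because c is the receiver at the call site
          var loose = c.bump;
          // loose();                // here 'this' would be the global object (or undefined in strict mode)
          var bound = c.bump.bind(c);
          bound();                   // 2 -- bind() fixes the receiver explicitly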

        This is what Javascript was meant to provide to Java, hence its name.

        Wow, no. Not even close.

        I probably shouldn't hold it against you. From the remainder of your post, it appears that you've cracked. Get some sleep.

  • by ndykman ( 659315 ) on Sunday December 22, 2013 @05:45PM (#45762745)

    The idea of compiling to JavaScript has been done a lot. Microsoft Labs had a project to take CLR code and compile it down to JavaScript. It was abandoned as too slow. I'm sure improvements to the JavaScript engines have made it more feasible. But, as noted, the lack of certain native types and immutable data types (e.g. DateTime) forces a ton of static analysis to be done just to figure out that, hey, that variable could be a plain integer. And it has to be conservative. It's much easier to just have integers and be done with it.

    If there is such an insistence on making the web an application platform for everything, then I think at some point you just have to standardize on a VM. Yes, yet another one. So, you can use Dart, JavaScript, Scheme, C#, Java, whatever there's a compiler for. Treat the DOM as core API and enjoy.

    Personally, I just hope people realize that operating systems have been doing this well for years, that sandboxing isn't unique to web browsers and that "native" applications are actually a good thing.

    The mobile app thing gives me a bit of hope, but it's sad that people seem unwilling to download an installer from the web from a trusted source and install it. I find it a bit strange that people are turning to the web to solve a problem (the widespread propagation of viruses and other malware) that the web itself greatly expanded.

    And what people are surprised by is a bit baffling as well. A web browser isn't magic. Being impressed that you can run Doom in a modern web browser is missing the forest for the trees. I have been able to run Doom on my computer for ages. Having to visit a URL to do so isn't a major change or improvement, despite the technical accomplishments that went into it.

    • The "everything must be a web site" mentality has confused the hell out of me for ages as well.

      For some time I assumed it was probably for portability, but while writing a game I'm sure no one cares about [multiplayermapeditor.com], I found that portability actually isn't all that difficult. The number of #ifdefs involved to make it compile for both Windows and Linux (and FreeBSD, though I haven't compiled for it in about a year) amounts to a very minor part of the code, and probably wouldn't have been even that much if I were more of

  • by sribe ( 304414 ) on Sunday December 22, 2013 @05:45PM (#45762747)

    Anybody who writes performance-sensitive code other than trivial contrived benchmarks.

    • Actually this has nothing to do with JavaScript.

      ASM.js is a Java-VM-like technology which compiles code in C++ and Java to JavaScript, which then uses a JIT compiler.

      This could be very useful for things like Citrix or crappy ActiveX-like functions being made available to mobile devices and browsers. Useful, yes, but like Java and ActiveX, a big fucking security hole if not done right.

      I think ASM.JS is dead in the water as Google refuses to support it. Google wants you to use Dart and Google Chrome apps t

      • Emscripten is a technology that compiles C++ code to JavaScript. Asm.js is a restricted subset of JavaScript that is much easier for the compiler to handle.

        Google wants you to use quite a few different things, probably the 'best' is NaCl (native client) which runs practically native code in a sandboxed runtime.

      • Asm.js uses ahead-of-time compilation, not just-in-time.

    • by fermion ( 181285 )
      Theoretically, no one with a well-designed browser. When coding, one of the biggest performance hits is communication with the human interface and the file interface. This is basic in computing. It is why the 6502 was fast even though it was slow, and it was why the Mac. If you want a fast computer, keep the UI routines low level and keep as much stuff as possible off the bus.

      The second rule is that often only a very small percentage of the code must be optimized. The rest is not going to run that often, maybe only once.

      S

    • Performance sensitivity is not the issue here. The argument against JavaScript has always been "no, because it's slow." This at least opens the door to options other than Silverlight, ActiveX, applets, and whatever other binary junk people want to run in my browser.

      If you even considered for one minute running number crunching like Seti@home or other distributed computing, or bitcoin mining, or anything else that would tax your cooling and cost you energy, in JavaScript, sign yourself up for the looney bin.

      In

  • Take a compact binary encoding of a CPU instruction set, convert it to text, run it through gzip, ungzip it, translate it back to binary instructions for a CPU.

    Am I the only one who is wondering what the hell is going on? Distribute a binary already!

  • Since asm.js isn't sugared predication [slashdot.org], there isn't a natural formalism to make assertions (declarations) about variables such as the set of values they can take on. Not all of these assertions need to be checked at run time. Some can be used at compile time to infer the most efficient implementation for certain use cases, and even generate different implementations for different use cases.
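    For reference, the only "declarations" asm.js has today are its coercion annotations; a minimal sketch of the idiom (the module and its scale function are invented, but the "use asm" / |0 / fround / heap-view shapes follow the published asm.js spec):

      function MyModule(stdlib, foreign, heap) {
          "use asm";
          var fround = stdlib.Math.fround;
          var HEAPF32 = new stdlib.Float32Array(heap);

          function scale(ptr, n, factor) {
              ptr = ptr | 0;               // |0 marks an int32 parameter
              n = n | 0;
              factor = fround(factor);     // fround(...) marks the new float32 type
              var i = 0;                   // int32 local
              var v = fround(0);           // float32 local
              for (i = 0; (i | 0) < (n | 0); i = (i + 1) | 0) {
                  v = fround(HEAPF32[(ptr + (i << 2)) >> 2]);
                  HEAPF32[(ptr + (i << 2)) >> 2] = fround(v * factor);
              }
          }

          return { scale: scale };
      }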
  • by K. S. Kyosuke ( 729550 ) on Sunday December 22, 2013 @05:50PM (#45762805)

    JavaScript always uses float64 and this provides maximum precision, but not always maximum efficiency.

    That doesn't make too much sense to me; I thought typed arrays have always had 32-bit floats as an option. Why should 32-bit storage (on-heap typed arrays) and 64-bit registers (scalar values) be significantly slower than 32-bit storage and 32-bit registers? I thought the performance discrepancy between singles and doubles in CPU float units disappeared a decade and a half ago. (Or are single-to-double loads and double-to-single stores simply somehow significantly slower?)

    • More 32-bit floats will fit in your CPU's data cache than 64-bit floats (a 32 KB L1 data cache holds 8192 of the former but only 4096 of the latter).
      • That's irrelevant, because the caches still reflect the main memory's contents, which, as I outlined, contains 32-bit floats in my scenario. It's the load to registers that gives you those 64-bit in-register scalars. The worst thing that could happen to you is temporarily spilling a 64-bit register or two onto the stack, if you run out of them, but that still seems like a negligible issue, what with 16 FP scalar registers at your disposal on AMD64. The amortized overhead cost of the occasional 64-bit spills
    • JavaScript defaults to C's 'double' (64-bit) instead of C's 'float' (32-bit). There are three things that make JavaScript dog slow for floating-point operations:

      1) It loads float64
      2) It does all calculations in 64-bit precision
      3) It writes float64

      The loading and storing are not that much more expensive than with float32.

      However, most of the time an app can get away with float32 -- it doesn't need the full precision of float64. CPUs manipulate float64 more slowly than float32, e.g. sqrt(), trig functions, etc.

      The biggest strength Javascript

      • I understand what you're pointing to, but the loads and stores are 32-bit if you use a Float32Array.

        CPUs manipulate float64 more slowly than float32.

        Uhm, no. They don't. Not for the vast majority of FP ops retired. For example, additions, subtractions, and multiplications have the same latency on recent AMD CPUs, while division is 27 vs. 24 cycles of maximum(!) latency for double vs. single (and even then, they seem to have the same minimum throughput anyway). Again, amortize the cost over the total execution time interleaved with other instructions. Don't te

    • I assume that it used to translate float64 JavaScript variables to float32 C variables (or vice versa), and now they get to keep them as-is. I think all the graphics calls use float32, so maybe that's where the performance boost comes from.

      It sounds as if what asm.js needs is strong typing for all variables; maybe it would be easier for the compiler to convert the code then.

      But both the above are just guesses. Personally I'd like to see a standardised language designed for high-performance web browser code, o

    • by Pinhedd ( 1661735 ) on Sunday December 22, 2013 @07:32PM (#45763419)

      Take a look at the image at the following link

      http://www.anandtech.com/show/6355/intels-haswell-architecture/8 [anandtech.com]

      That's the back end of the Haswell microarchitecture. Note that 4 of the 8 execution ports have integer ALUs on them, allowing up to 4 scalar integer operations to begin execution every cycle (including multiplication). Two of these are on the same ports as the vector integer units, which can be exploited to perform an obnoxious amount of integer math at once. There are only two scalar FP units, one for multiplication on port 0 and one for addition on port 1.

      The same FP hardware is used to perform scalar, vector, and fused FP operations, but taking advantage of this requires a compiler that is smart enough to exploit those instructions only in the presence of a Haswell microprocessor, and fast enough to do it quickly. Exploiting multiple identical execution units in a dynamically scheduled machine requires no extra effort on the part of the compiler.

      Microprocessors used in PCs have always been very integer-heavy for practical reasons (heavy FP resources just aren't needed for most applications), and mobile devices are even more integer-heavy for power-consumption reasons.

      Using FP64 for all data types is obnoxiously lazy and it makes me want to strangle front end developers.

    • by reanjr ( 588767 )

      The only difference I can think of is that the generated code will be larger because 64-bit operations take more bytes. This may have an effect on caching and branching.

  • If the (admittedly stupid) last sentence of the submission hadn't been present, would you still be complaining? They're improving the speed of js by doing some interesting stuff - what's wrong with that?

  • by SoftwareArtist ( 1472499 ) on Sunday December 22, 2013 @06:33PM (#45763097)

    Let's correct that sentence:

    "Now Mozilla claims that with some new improvements it is at worst only 1.5 times slower than single threaded, non-vectorized native code."

    In other words, it's only 1.5 times slower than native code that you haven't made any serious effort to optimize. Hey, I think it's great they're improving the performance of Javascript. But this is still nowhere close to what you can do in native code when you actually care about performance.

  • I wish they'd stop slowly and painfully going through the intermediate steps and standardize a JavaScript bytecode representation. Then JavaScript wouldn't be any slower than native code, and might even be faster in some situations (due to runtime optimizations, if the Java folks are to be believed).

    Why on earth are we still only transferring Javascript as text? It doesn't really help security. Is obfuscated Javascript any easier to read than decompiled bytecode?

  • Maximum precision? (Score:5, Informative)

    by raddan ( 519638 ) * on Sunday December 22, 2013 @06:48PM (#45763197)
    Let's just open up my handy Javascript console in Chrome...

    (0.1 + 0.2) == 0.3
    false

    It doesn't matter how many bits you use in floating point. It is always an approximation. And in base-2 floating point, the above will never be true.

    If they're saying that JavaScript is within 1.5x of native code, they're cherry-picking the results. There's a reason why people who care have a rich set of numeric datatypes [ibm.com].
    • That's only true for fractions.
      1+2 == 3 is always going to be true.
      As is 123456789 + 987654321 == 1111111110

      You can absolutely express a 32 bit integer in a double with no approximation ever.
      We rely on this in our lua scripting as a matter of fact.

      http://asmjs.org/spec/latest/ [asmjs.org] relies on this for the 32 bit integer parts of their spec.

      Actually, you can go a lot further than 2^32 - all the way up to 2^53-1.
      This is taken advantage of in non-asm.js places. See:
      http://bitcoinmagazine.com/7781/satoshis-genius-u [bitcoinmagazine.com]
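      A quick console check of that limit (plain JS, nothing asm.js-specific):

      Math.pow(2, 53) - 1 === 9007199254740991   // true: every integer up to here is exact
      Math.pow(2, 53) + 1 === Math.pow(2, 53)    // true: past 2^53, odd integers can no longer be represented
      (123456789 + 987654321) === 1111111110     // true: 32-bit-range arithmetic is exact in doubles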

      • Oh, and for asm.js (and for JS JITs in general, when they are sure of the types), floating point is not used if the number is known to be an int, so there's another win beyond the main one from the article.

  • How about making the Canvas tag not suck?
    Seriously, you can give it the simplest test (draw an image every frame, or fill a rectangle), and it will run unacceptably slowly. For comparison, a simple Win32 program that changes the background color of a maximized window once per frame has very low CPU usage, but a simple JavaScript program that flashes a canvas red and blue every other frame pegs the CPU at 100% if the canvas is 1024x768.
    Until the canvas isn't the bottleneck anymore, Javascript graphical prog
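    For reference, the kind of test being described looks roughly like this (a sketch; the canvas size matches the one mentioned above, and requestAnimationFrame is assumed for frame scheduling):

      // Flash a 1024x768 canvas between red and blue once per frame.
      var canvas = document.createElement('canvas');
      canvas.width = 1024;
      canvas.height = 768;
      document.body.appendChild(canvas);

      var ctx = canvas.getContext('2d');
      var red = false;

      function frame() {
          ctx.fillStyle = red ? '#f00' : '#00f';
          ctx.fillRect(0, 0, canvas.width, canvas.height);
          red = !red;
          requestAnimationFrame(frame);
      }
      requestAnimationFrame(frame);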

    • Sounds like the WebGL behind one's browser is using software rendering because hardware acceleration has been blacklisted. (Or at least that's the case for my onboard Intel GPU)

      A previous rich-client browser platform, AWT/Swing (which bit the dust for a variety of reasons), had OpenGL/DirectX accelerated drawing pipelines for nearly a decade for Java2D, Java3D & jogl.

      Yet in 2013 we're still stuck with unstable graphics drivers. Hmmm...

  • People implementing JavaScript engines, among many others. You would instantly lament it if every bit of machine code deployed in the world were suddenly to reimplement itself in your beloved JavaScript.
