Asm.js Gets Faster
mikejuk writes "Asm.js is a subset of standard JavaScript that is simple enough for JavaScript engines to optimize. Now Mozilla claims that with some new improvements it is at worst only 1.5 times slower than native code. How and why? The problem with JavaScript as an assembly language is that it doesn't support the range of datatypes that are needed for optimization. This is good for human programmers because they can simply use a numeric variable and not worry about the difference between int, int32, float, float32 or float64. JavaScript always uses float64, which provides maximum precision but not always maximum efficiency. The big single improvement that Mozilla has made to its SpiderMonkey engine is to add a float32 numeric type to asm.js. This allows float32 arithmetic in a C/C++ program to be translated directly into float32 arithmetic in asm.js. This is also backed up by an earlier float32 optimization introduced into Firefox that benefits JavaScript more generally. Benchmarks show that although Firefox with the float32 type (f32) is still nearly always slower than native code, it is now approaching the typical speed range of native code. Mozilla thinks this isn't the last speed improvement it can squeeze from JavaScript. So who needs native code now?"
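For the curious, the float32 path the summary describes looks roughly like this in asm.js-style code. Math.fround is the standard coercion the compiler leans on; the module below is only an illustrative sketch (the names are made up), and it runs as ordinary JavaScript whether or not a given engine validates it as asm.js:

function PhysicsModule(stdlib) {
    "use asm";
    var fround = stdlib.Math.fround;    // float32 coercion from the standard library
    function scale(x, k) {
        x = fround(x);                  // annotate parameters as float32
        k = fround(k);
        return fround(x * k);           // keep the result in single precision
    }
    return { scale: scale };
}
var scale = PhysicsModule({ Math: Math }).scale;   // runs as plain JS either way
scale(1.5, 2.0);                                   // 3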
"So who needs native code now?" (Score:5, Informative)
Umm, anyone who wants their code to not run substantially slower. Seriously, do you front end programmers really think nobody does numerical simulations or other performance-sensitive work? In my line of work, I'd kill for the opportunity to make my code 1.5 times faster!
Re: (Score:3)
Re: (Score:2)
Re: (Score:2, Informative)
Tracing garbage collected languages will always be slower because:
1) The tracing adds a ridiculous amount of unnecessary work; and
2) While allocation is at best O(N) for GC'd and regular programs alike, deallocation can be (and often is) made O(1) using memory pools in C and C++ programs (see the sketch after this comment), something that can't be done in GC'd languages because the collector doesn't understand semantic interdependencies.
For ref-counted collectors #2 still applies.
Unless and until some unforeseen, miraculous breakthrough happens in langu
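To illustrate the pool idea the parent describes, here is a minimal sketch in JavaScript over a typed-array heap, which is essentially the trick Emscripten-compiled code plays with its one big heap; the names are hypothetical. Releasing everything allocated from the pool is a single pointer reset, i.e. the O(1) deallocation in question:

// Hypothetical bump-pointer arena: hand out slices of one pre-allocated heap.
function Arena(sizeInBytes) {
    var heap = new ArrayBuffer(sizeInBytes);
    var top = 0;
    return {
        allocFloat32: function (count) {            // carve out a float32 slice
            var view = new Float32Array(heap, top, count);
            top += count * 4;
            return view;
        },
        reset: function () { top = 0; }             // "free" everything at once: O(1)
    };
}

var frame = Arena(1 << 20);
var positions = frame.allocFloat32(1024);
// ... use positions for the duration of one frame/request ...
frame.reset();                                      // all allocations discarded in one step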
Re:"So who needs native code now?" (Score:5, Informative)
deallocation can (and often is) made O(1) using memory pools in C and C++ programs, something that can't be done in GCd languages
I believe current Java (not Javascript!) virtual machines do exactly this. They do escape analysis, and free a complete block of objects in a single step. This works out of the box, there is no need for memory pools or any other special constructs.
Re: (Score:2)
Good Lisp compilers have long done this as well. If the compiler can determine that certain variables don't escape the current execution context, it can even stack-allocate them rather than put them on the heap and have to GC them at all. You can help the compiler out by promising that's the case with the dynamic-extent declaration.
Re: (Score:2)
and I keep hearing that you haven't needed to call System.gc(); in over a decade... yet, calling it has been essential in some programs so that things don't go to shit (apart from those VMs which crashed on it, heh; I should say that this hasn't been on desktop Java, so YMMV).
that being said, yeah, you don't need to free the memory yourself but if you're working in limited, real life apps running on limited hw and limited sw.. you're still better off knowing _when_ you have caused things to be so that t
Re: (Score:2)
Re: (Score:3, Funny)
You mean I can't write that OS in JavaScript for my CS degree?
Re: (Score:3)
Or anything running in a VM (Score:5, Informative)
I get pissed when I hear programmers say "Oh memory is cheap, we don't need to optimize!" Yes you do. In the server world these days we usually don't run things on physical hardware; we run them in a VM. The fewer resources a given VM uses, the more VMs we can pack on a system. So if you have some crap code that gobbles up tons of memory, that is memory that can't go to other things.
It is seriously like some programmers can't think outside the confines of their own system/setup. They have 16GB of RAM on their desktop so they write some sprawling mess that uses 4GB. They don't think this is an issue, after all "16GB was super cheap!" Heck, they'll look at a server and see 256GB in it and say "Why are you worried?" I'm worried because your code doesn't get its own 256GB server, it gets to share that with 100, 200, or even more other things. I want to pack in services as efficiently as possible.
The less CPU, memory, disk, etc. a given program uses, the more a system can do, or, conversely, the less powerful the system needs to be. For a single-user system, like an end user computer, it would always be nice if we could make them less powerful, because that means less power hungry. If we could make everything run 1.5 times as fast, what that would really mean is we could cut CPU power by that amount and not affect the user experience. That means longer battery life, less heat, less waste, smaller devices, etc, etc.
Re: (Score:3)
Preach on, brother. I'd love to see how some of these guys would function in the embedded world, where you often get 1K of flash and 128 bytes of RAM to work with.
Re: (Score:3, Informative)
Just fine, at least I do. Just different sets of optimizations to keep in mind, as well as different expectations. I don't think any reasonable person would approach the two problems the same way, but it all boils down to basic computer science.
Light up pin 1 when the ADC says voltage is dropping which indicates that pressure is too low on the other side of the PPC. Compare that to indexing a few gigs of text into a search engine. Completely different goals, completely different expectations. I'm not master
Re: (Score:2)
That's great to hear. Seriously, I'm not being snarky. It's nice to see more guys that can deal with different environments well. You're absolutely right that it's not rocket science, but a lot of guys don't get past the abstract model of the machine that their dev environment/language provides for them.
Re:Or anything running in a VM (Score:4, Insightful)
Re: (Score:2)
And my stance is that I first make sure it is correct, then I worry about optimization.
I know too many projects that were done in the reverse. Sure, we have a slick and fast system, but only when it works. It's kind of like saying /dev/null is a fast database [mongodb-is-web-scale.com].
Ya well (Score:2)
You have to price out all the details. The question isn't what the server costs to buy. It is what it costs to buy, what support on it costs, both in terms of a warranty and sysadmin time, what physical space costs, or is available, what power and cooling cost, and what kind of reliability you need.
A "cheap" server can end up being not so cheap in many cases. A cheap server is great right up until the point where it fails, and then there is no way to fix it or restore it in a reasonable amount of time.
You c
Re:Or anything running in a VM (Score:5, Insightful)
Another question is why we need to duplicate an entire operating system to encapsulate applications. If you have 100 things that need to run on a machine why should you need to also run 100 entire operating systems? Something is wrong with the way we're designing servers.
Re:Or anything running in a VM (Score:4, Informative)
Ideal virtual machines are indistinguishable from networked servers. Most x86 VMMs don't quite reach this level of isolation, but the VMMs used on IBM's PowerPC based servers and mainframes do.
From the perspective of system security, a single compromised application risks exposing to an attacker data used by other applications which would normally be outside of the scope of the compromised application. Most of these issues can be addressed through some simple best practices such as proper use of chroot and user restrictions, but those do not scale well and do not address usability concerns. A good example is the shared hosting that grew dominant in the early 2000s while x86 virtualization was still in its infancy. It was common to see web servers with dozens if not hundreds of amateur websites running on them at once. For performance reasons a web server would have read access to all of the web data; a vulnerability in one website allowing arbitrary script execution would allow an attacker to read data belonging to other websites on the same server.
From the perspective of users, a system designed to run 100 applications from 20 different working groups does not provide a lot of room for rapid reconfiguration. Shared resource conflicts, version conflicts, permissions, mounts, network access, etc... it gets extremely messy extremely quickly. Addressing this requires a lot of administrative overhead and every additional person that is given root privileges is an additional person that can bring the entire system down.
Virtual machines on the other hand give every user their own playground, including full administrative privileges, in which they can screw around to their heart's content without the possibility of screwing up anything else or compromising anything that is not a part of their environment. Everyone gets to be a god in their own little sandbox.
Now, that doesn't mean that the entire operating system needs to be duplicated for every single application. Certain elements such as the kernel and drivers can be factored out and applied to all environments. Solaris provides OS level virtualization in which a single kernel can manage multiple fully independent "zones" for a great deal of reduced overhead. Linux Containers is a very similar approach that has garnered some recent attention.
Re: (Score:2)
I get pissed when I hear programmers act like raw performance is the sole measurement of the quality of a language.
Re: (Score:2)
Umm, anyone who wants their code to not run substantially slower. Seriously, do you front end programmers really think nobody does numerical simulations or other performance-sensitive work? In my line of work, I'd kill for the opportunity to make my code 1.5 times faster!
I'm surprised by your answer at so many levels. First, I thought the guys doing scientific calculations were scientists who many times (not always, of course) are only used to Matlab, Mathematica or even Excel. Second, obviously we will need native code, as well as interpreted, functional, and lots of code in domain specific areas (what the heck... I spent this Sunday morning writing in VimL, a language so stupid that it can't copy a file without reading the whole contents into memory or invoking system(), but I
Re: (Score:2)
"I'm surprised by your answer at so many levels. First, I thought the guys doing scientific calculations were scientists that many times (not always of course) are only used to Matlab, Mathematica or even Excel. "
That's why quite a few supercomputing facilities offer software porting/"translation" services... Some of the projects I've done over the years have been freelance contracts to rewrite a program from one stack to another.
Because if they can get something from MatLab/mathematica into Fortran which e
Re: (Score:2)
The facilities are not Stockholm University's CS departments, nor even Stockholm University's. They just rent time in these facilities, yet demand things, as if they owned them.
Re: (Score:2)
Well computers are so fast you can just throw more hardware at the problem.
Just kidding. People do not understand that one of the reasons people still use Fortran for HPC apps is the fact that there are a lot of really good optimized libraries and the fact that Fortran is really easy to optimize.
What I do not get from the story is why JavaScript's use of floats is a problem. Last I heard, floating point using SSE or OpenCL is often as fast as integer code. I could be wrong because most of the projects I work
Re: (Score:2)
Remember kids - all programming is high-performance computing!
Re: (Score:2)
In my line of work, I'd kill for the opportunity to make my code 1.5 times faster!
Then wait a year and buy a new computer.
Re:"So who needs native code now?" (Score:5, Insightful)
What a bizarre statement. Of course it's programming. It may not be very elegant programming, but then again, the bulk of C code I've seen in my years in the business isn't terribly elegant either.
Re: (Score:3)
Some of the most elegant code that I've seen has been with web scripting languages.
$('img').bind('mouseenter mouseleave', function() {
    $(this).attr({
        src: $(this).attr('data-other-src'),
        'data-other-src': $(this).attr('src')
    });
});
Shameless plug [stackoverflow.com]
Try doing that in C
Re: (Score:3)
What makes you think this can't be done in C (at least C with apple's block extension)?
99% of the "elegance" of this is the particular framework in use, not the language.
Re: (Score:2)
It can obviously be done in C but I doubt it would be done with the same elegance.
The framework is written in the same language, JavaScript, which further emphasizes my point.
Re: (Score:3, Insightful)
Re: (Score:2)
Fail.
That's not valid Asm.js
jQuery doesn't work in Asm.js
Asm.js isn't Javascript. It's a statically typed language that looks like a subset of Javascript. There isn't even DOM support.
Re: (Score:2, Insightful)
Try doing WHAT in C? The idiot who wrote that code doesn't know how to comment!
Re:"So who needs native code now?" (Score:4, Interesting)
Some of the most elegant code that I've seen has been with web scripting languages. $('img').bind('mouseenter mouseleave', function() { $(this).attr({ src: $(this).attr('data-other-src') , 'data-other-src': $(this).attr('src') }) }); Shameless plug [stackoverflow.com]
Try doing that in C
In modern C++:
img.mouseenter = img.mouseleave = [&]() { std::swap(img.attributes["src"], img.attributes["data-other-src"]); };
Of course it requires that the C++ programmer hasn't been too damaged by exposure to Java and doesn't try to pointlessly ruin the language by using getters and setters instead of real property access.
Re:"So who needs native code now?" (Score:5, Insightful)
Correct this to any real programmer. I'm tired of web designers being conflated with actual programmers; scripting isn't programming.
Heh....typing isn't programming. If you aren't connecting patch wires on an accumulator bank, it's not worth doing. You get more efficiency that way too!
Re: (Score:2)
Except duh, websites are not software, "websites" are an experience made up of literally dozens of components on servers, clients, browsers, intermediate networking equipment, routers, modems, etc that all contain software - only one small piece of which is usually written in Javascript/HTML.
Re:don't we know it (Score:4, Informative)
Websites are no less than distributed applications. If you had been paying attention you would have noticed that website development has gotten a lot more rigorous than in the old days.
Re: (Score:2)
Wait. are you talking to *me* or just making a statement? Because that's pretty much exactly what I said...
Re: (Score:2)
I was referring to you. Pardon me if I misconstrued your meaning. Since websites/applications ARE software I am not sure what you meant.
Re: (Score:2)
Instead of overgeneralizing, let's specify the subject. As was said above, "scripting isn't programming". This ignores many of the huge libraries written in scripts, such as ASM.js, which were nowhere near trivial to crap out.
A few lines of scripting isn't programming, but it qualifies as "website development". Also included is a fully separated 3-tier, pre-compiled architecture with persistence and unit tests.
Websites are frequently way less than distributed applications. Speaking of websites as if th
Re: (Score:2)
No, scripters are still scripters. It's the designers that are getting better at their jobs.
Suspect even at -O0 -g (Score:2)
Part of what I do is writing high performance code both in "native" and various scripting languages. I just completed yet another comparison for our product, looking for the best performing scripting language for a specific task. Lua with a "just in time" compiler came in first. It is *only* about x10 slower than native code produced by gcc with moderate optimization, when measured on variety of numeric benchmarks. It is considerably slower when dealing with strings and more complex data types, as expected.
Re: (Score:3)
asm.js is not javascript though, it's a specific subset with restricted coding rules that allow the compiler to do its stuff. General JS will be just as slow as any of the other script languages.
Re: (Score:3)
Re: (Score:2)
Chances are that for many interesting applications (graph-based stream and image processing, anyone?), this won't be slower than C, and perhaps faster in many cases
Benchmarks? I've heard lots of theories about why language X might beat C, but rarely have I seen it backed by benchmarks. Even the "for certain cases" thing rarely works out.
Re: (Score:2)
That's because:
Re: (Score:2)
Re: (Score:2, Informative)
Asm.js is not JavaScript. It's Mozilla's way of hacking together a very specific optimization for JS-targeting compilers such as Emscripten, because they don't want to adopt the sane route of just implementing PPAPI and Google's Native Client sandbox, both of which don't work well with single-process browsers. From a developer perspective it's fairly trivial to target both Asm.js and PNaCl (Google's Native Client except with LLVM bytecodes), or target one and write a polyfill for the other. In either case,
Re: (Score:2)
Re: (Score:2)
It is a lie, or maybe they don't know what "at worst" means.
When I first read the summary, I seriously wondered if they meant "at best".
Re: (Score:2)
Maybe they're writing from the perspective of C fans rather than javascript fans.
Re: (Score:2)
Javascript only has one number type. Floating point.
64bit floating point provides maximum precision, when you've only got the choice of 32 or 64.
Update the ecma standard (Score:2)
Update the ecma standard to include a set of features that is meant to be used when compiling a better language (C#, C++, Java, Perl, Python, LISP, Haskell, x86 asm, MUMPS, pig-latin, etc) down to javascript so it will run in any browser at native code speed. Then I think everyone will be happy.
Re: (Score:3)
Update the ecma standard to include a set of features that is meant to be used when compiling a better language (C#, C++, Java, Perl, Python, LISP, Haskell, x86 asm, MUMPS, pig-latin, etc) down to javascript so it will run in any browser. Then I think everyone will be happy.
Fixed that for you ;-)
Re: (Score:2)
You want NaCL, not ECMA.
The thing about ASM.js is that it's available, just like Javascript is prevalent not because it's really any good (it's a horrible design from a performance view). ASM.js was created to improve Emscripten speed (it compiles C/C++ down to Javascript). The subset of JS used will run as regular javascript, and ridiculously, sometimes running the code on the regular JS interpreter will execute it faster than the ASM.js code. I don't like Emscripten. A language divorced of the protot
Re: (Score:2)
A language divorced of the prototype hogwash and weird 'this' scoping of JS which causes (OOP headaches) would be nicer.
You're still trying to treat it like Java or C++. That "prototype hogwash" is undoubtedly the more powerful and flexible approach. The "weird 'this' scoping" makes perfect sense once you have a basic understanding of the language.
This is what Javascript was meant to provide to Java, hence its name.
Wow, no. Not even close.
I probably shouldn't hold it against you. From the remainder of your post, it appears that you've cracked. Get some sleep.
Javascript as a Virtual Machine Representation (Score:5, Interesting)
The idea of compiling to JavaScript has been done a lot. Microsoft Labs had a project to take CLR code and compile it down to JavaScript. It was abandoned as too slow. I'm sure improvements to the JavaScript engines have made it more feasible. But, as noted, the lack of certain native types and immutable data types (e.g. DateTime) forces a ton of static analysis to be done just to figure out that, hey, that variable could be a plain integer. And it has to be conservative. Much easier to just have integers and be done with it.
If there is such an insistence on making the web an application platform for everything, then I think at some point you just have to standardize on a VM. Yes, yet another one. So, you can use Dart, JavaScript, Scheme, C#, Java, whatever there's a compiler for. Treat the DOM as core API and enjoy.
Personally, I just hope people realize that operating systems have been doing this well for years, that sandboxing isn't unique to web browsers and that "native" applications are actually a good thing.
The mobile app thing gives me a bit of hope, but it's sad that people seem unwilling to download an installer from the web from a trusted source and install it. I find it a bit strange that people are turning to the web to solve a problem that the web itself greatly expanded: the widespread propagation of viruses and other malware.
And what people are surprised by is a bit baffling as well. A web browser isn't magic. Being impressed that you can run Doom in a modern web browser is missing the forest for the trees. I have been able to run Doom on my computer for ages now. Having to visit a URL to do so isn't a major actual change or improvement, despite the technical accomplishments that went into it.
Everything Must Be a Web Site (Score:2)
The "everything must be a web site" mentality has confused the hell out of me for ages as well.
For some time I assumed it was probably for portability, but while writing a game I'm sure no one cares about [multiplayermapeditor.com], I found that portability actually isn't all that difficult. The number of #ifdefs involved to make it compile for both Windows and Linux (and FreeBSD, though I haven't compiled for it in about a year) amounts to a very minor part of the code, and probably wouldn't have been even that much if I were more of
Re: (Score:2)
Agreed, the barriers are political, not technical. And yea, the DOM could be replaced as well.
So who needs native code now? (Score:4, Interesting)
Anybody who writes performance-sensitive code other than trivial contrived benchmarks.
Re: (Score:2)
Actually this has nothing to do with javascript.
ASM.js is a Java-VM-like technology which compiles code in C++ and Java to javascript which then uses a JIT compiler.
This could be very useful for things like Citrix or crappy ActiveX-like functions being made available to mobile devices and browsers. Useful yes, but like Java and ActiveX, a big fucking security hole if not done right.
I think ASM.JS is dead in the water as Google refuses to support it. Google wants you to use Dart and Google Chrome apps t
Re: (Score:3)
Emscripten is a technology that compiles code in C++ to javascript. Asm.js is a restricted subset of javascript that is much easier for the compiler to handle.
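To make "restricted subset" concrete, here is a hand-written sketch of what asm.js code looks like (real asm.js is normally emitted by Emscripten rather than written by hand, and the module name here is made up):

function AsmModule(stdlib, foreign, heap) {
    "use asm";
    function add(a, b) {
        a = a | 0;              // the |0 coercion declares a as int32
        b = b | 0;
        return (a + b) | 0;     // and coerces the result back to int32
    }
    return { add: add };
}
// Runs as plain JavaScript even in engines that don't validate asm.js.
AsmModule(this, {}, new ArrayBuffer(0x10000)).add(2, 3);   // 5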
Google wants you to use quite a few different things, probably the 'best' is NaCl (native client) which runs practically native code in a sandboxed runtime.
Re: (Score:2)
Asm.js uses ahead-of-time compilation, not just-in-time.
Re: (Score:2)
The second rule is that often only a very small percentage of the code must be optimized. The rest is not going to run that often, maybe only once.
S
Re: (Score:2)
Performance sensitive is not the issue here. The argument against JavaScript has always been "no, because it's slow." This at least opens the door for options other than Silverlight, ActiveX, Applets, and whatever other binary junk people want to run in my browser.
If you even considered for one minute running number crunching like Seti@home or other distributed computing, or bitcoin mining, or anything else that would tax your cooling and cost you energy, in JavaScript, sign yourself up for the looney bin.
In
Fail (Score:2)
Take a compact binary encoding of a CPU instruction set, convert it to text, run it through gzip, ungzip it, translate it back to binary instructions for a CPU.
Am I the only one who is wondering what the hell is going on? Distribute a binary already!
Portability and partitioning (Score:2)
Re: (Score:2)
Which CPU architecture do we decide that the web relies on?
Re: (Score:2)
Sugar predication to assert at compile time (Score:2)
64-bit computation vs. 64-bit storage (Score:4, Interesting)
JavaScript always uses float64 and this provides maximum precision, but not always maximum efficiency.
That doesn't make too much sense to me; I thought that typed arrays have always had 32-bit floats as an option. Why should 32-bit storage (on-heap typed arrays) and 64-bit registers (scalar values) be significantly slower than 32-bit storage and 32-bit registers? I thought the performance discrepancy between singles and doubles in CPU float units disappeared a decade and a half ago? (Or are single-to-double loads and double-to-single stores somehow significantly slower?)
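For reference, the split being asked about shows up in a couple of lines of ordinary JavaScript (illustrative only): a typed array gives 32-bit storage, but any value read out of it becomes a float64 the moment it lands in a variable, and only the Math.fround form gives the engine permission to keep the whole chain in single precision.

var samples = new Float32Array(1024);            // 32-bit storage on the heap
var x = samples[0] * 0.5;                        // computed and held as float64
samples[1] = x;                                  // rounded back to float32 on store
samples[1] = Math.fround(samples[0] * 0.5);      // result is provably identical in float32,
                                                 // so the engine may use single-precision math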
Size of floats in cache (Score:2)
Re: (Score:2)
Re: (Score:2)
Javascript defaults to C's 'double' (64-bit) instead of C's 'float' (32-bit). There are 3 things that make Javascript dog slow for floating-point operations:
1) Load float64
2) Does all calculations in 64-bit precision
3) Write float64
The loading & storing are not that much more then float32.
However, most of the time an app can get away with float32 -- it doesn't need the full precision of float64. CPUs manipulate float64 slower then float32. i.e. sqrt(), trig functions, etc.
The biggest strength Javascript
Re: (Score:2)
CPUs manipulate float64 slower than float32.
Uhm, no. They don't. Not the vast majority of FP ops retired. For example, additions, subtractions, and multiplications have the same latency on recent AMD CPUs, while division is 27 vs. 24 cycles of maximum(!) latency for double vs. single (and even then, they seem to have the same minimum throughput anyway). Again, amortize the cost over the total execution time interleaved with other instructions. Don't te
Re: (Score:2)
I assume that it used to translate float64 javascript variables to float32 C variables (or vice versa) and now they get to keep them as-is. I think all the graphics calls use float32, so maybe that's where the performance boost comes from.
it sounds as if what asm.js needs is strong typing for all variables. maybe it'd be easier for the compiler to convert the code then.
But both the above are just guesses. Personally I'd like to see a standardised language designed for high-performance web browser code, o
Re:64-bit computation vs. 64-bit storage (Score:5, Informative)
Take a look at the image at the following link
http://www.anandtech.com/show/6355/intels-haswell-architecture/8 [anandtech.com]
That's the backend of the Haswell microarchitecture. Note that 4 of the 8 execution ports have integer ALUs on them, allowing for up to 4 scalar integer operations to begin execution every cycle (including multiplication). Two of these are on the same port as a vector integer unit, which can be exploited for an obnoxious amount of integer math to be performed at once. There are only two scalar FP units, one for multiplication on port-0 and one for addition on port-1.
The same FP hardware is used to perform scalar, vector, and fused FP operations, but taking advantage of this requires a compiler that is smart enough to exploit those instructions only when a Haswell microprocessor is present, and fast enough to do it quickly. Exploiting multiple identical execution units in a dynamically scheduled machine requires no extra effort on the part of the compiler.
Microprocessors used in PCs have always been very integer heavy for practical reasons (heavy floating point just isn't needed for most applications), and mobile devices are even more integer heavy for power consumption reasons.
Using FP64 for all data types is obnoxiously lazy and it makes me want to strangle front end developers.
Re: (Score:2)
The only difference I can think of is that the generated code will be larger because 64-bit operations take more bytes. This may have an effect on caching and branching.
whiney bitches take note (Score:2)
If the (admittedly stupid) last sentence of the submission hadn't been present, would you still be complaining? They're improving the speed of js by doing some interesting stuff - what's wrong with that?
Not really true (Score:3)
Let's correct that sentence:
"Now Mozilla claims that with some new improvements it is at worst only 1.5 times slower than single threaded, non-vectorized native code."
In other words, it's only 1.5 times slower than native code that you haven't made any serious effort to optimize. Hey, I think it's great they're improving the performance of Javascript. But this is still nowhere close to what you can do in native code when you actually care about performance.
Standardize Javascript bytecode already (Score:2)
I wish they'd stop slowly and painfully going through the intermediate steps and just standardize a Javascript bytecode representation. Then javascript wouldn't be any slower than native code. It might even be faster in some situations (due to runtime optimizations, if the Java folks are to be believed).
Why on earth are we still only transferring Javascript as text? It doesn't really help security. Is obfuscated Javascript any easier to read than decompiled bytecode?
Maximum precision? (Score:5, Informative)
(0.1 + 0.2) == 0.3
false
It doesn't matter how many bits you use in floating point. It is always an approximation. And in base-2 floating point, the above will never be true.
If they're saying that JavaScript is within 1.5x of native code, they're cherry-picking the results. There's a reason why people who care have a rich set of numeric datatypes [ibm.com].
Re: (Score:2)
That's only true for fractions.
1+2 == 3 is always going to be true.
As is 123456789 + 987654321 == 1111111110
You can absolutely express a 32 bit integer in a double with no approximation ever.
We rely on this in our lua scripting as a matter of fact.
http://asmjs.org/spec/latest/ [asmjs.org] relies on this for the 32 bit integer parts of their spec.
Actually, you can go a lot further than 2^32 - all the way up to 2^53-1.
This is taken advantage of in non-asm.js places. See:
http://bitcoinmagazine.com/7781/satoshis-genius-u [bitcoinmagazine.com]
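Put concretely (an illustrative snippet, not from the spec):

var MAX_EXACT = Math.pow(2, 53) - 1;             // 9007199254740991
(123456789 + 987654321) === 1111111110;          // true: well inside the exact range
(MAX_EXACT + 1) === (MAX_EXACT + 2);             // true: past 2^53 a double can no longer
                                                 // represent every integer exactly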
Re: (Score:2)
Oh, and for asm.js (and for the JS JITs in general when they are sure of the types), floating point is not used if the number is known to be an int, so there's another win outside of the main one from the article.
How about making the Canvas tag not suck (Score:2)
How about making the Canvas tag not suck?
Seriously, you can give it the most simple test (draw an image every frame, or fill a rectangle), and it will run unacceptably slowly. For comparison, a simple Win32 program that changes the background color of a maximized window once per frame has very low CPU usage, but a simple Javascript program that flashes a canvas red and blue every other frame uses 100% CPU if the canvas is 1024x768.
Until the canvas isn't the bottleneck anymore, Javascript graphical prog
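For reference, the kind of trivial benchmark being described is nothing more than this sketch (it assumes a 1024x768 <canvas id="c"> on the page; the id is made up):

var ctx = document.getElementById('c').getContext('2d');
var red = false;
(function frame() {
    ctx.fillStyle = red ? '#f00' : '#00f';
    ctx.fillRect(0, 0, 1024, 768);      // repaint the whole canvas every frame
    red = !red;
    requestAnimationFrame(frame);
})();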
Re: (Score:2)
Sounds like the WebGL behind one's browser is using software rendering because hardware acceleration has been blacklisted. (Or at least that's the case for my onboard Intel GPU)
A previous rich-client browser platform, AWT/Swing (which bit the dust for a variety of reasons), had OpenGL/DirectX accelerated drawing pipelines for nearly a decade for Java2D, Java3D & jogl.
Yet in 2013 we're still stuck with unstable graphics drivers. Hmmm...
Here's who needs native code: (Score:2)
Re: (Score:2)
actually, COM was designed with Word and Excel in mind, so you could embed spreadsheets in documents. It could have been a lot better - being based on DCE RPC, but a) the DCE guys refused to licence it to Microsoft for anything resembling sense, and b) Microsoft dev division created their equivalent so it's naturally overcomplicated.
It worked well (as well as anything will) with VB, as long as you followed a couple of rules, I don't see that as a problem - everything has some restrictions, even VB :-)
Re: (Score:2, Funny)
The person writing the javascript engine.
Unless it's implemented in asm.js
It's JavaScript engines all the way down, man...
Re: (Score:2)
The person who needs to use threads.
Or the person needing access to special functions such as page fault interrupts.
Or the person porting python/Haskell/your favorite language.
Re: (Score:2)
Electrolysis (Score:2)
Re: (Score:2)
However, even IE 8 supports multithreading and having a different thread for each tab. Chrome has had this since 1.0. Web workers require this and security requires this.
Uhm, no, security requires separate processes, not separate threads.
Shit, on a 6-core CPU I've had for 3 years now, I should not have 30+ tabs using one damn CPU?!
True, but on the other hand, Flash can only hose 1 core and the performance of the browser, leaving all other processes able to share 5/6 of your CPU power. With process-per-tab, now Flash can hose all your cores and bring the whole box to its knees.
Re: (Score:2)
Threading has nothing to do with security.
Running entirely different processes for each tab is done for stability.
Re: (Score:2)
Yes it is, see strictfp
If you don't specify strictfp, the JVM is allowed to use higher precision data types in intermediate calculations, leading to greater precision and better performance at the expense of IEEE conformity.
If you do, you get IEEE 754 compliance.
Re: Cents as an integer (Score:2)
Re: (Score:2)
Until you get into the tens of millions of dollars, 32-bit integers are fine for counting currency down to the cent. What kind of "(monetary) transaction processing system" were you talking about?
Well, let's see, in 2007 there were over 200,000 companies in the U.S. with over $10,000,000 in receipts... (http://www.census.gov/econ/smallbus.html)
Re: Cents as an integer (Score:2)
Re: (Score:3)
Not mainframes, more clusters. All the stuff used for reconciliation and position management uses pure integer and fixed point maths. Market makers like pushing floating point numbers around, but that's just for quick profit calculations when doing the trades. At the end of the day, everything's reconciled with precise maths.
Re: (Score:2)
Jane Street runs OCaml! :-)