Mozilla, Cloudflare, Facebook and Others Propose BinaryAST For Faster JavaScript Load Times

Developers at Mozilla, Facebook, Cloudflare, and elsewhere have been drafting "BinaryAST" as a new over-the-wire format for JavaScript. From a report: BinaryAST is a binary representation of the original JavaScript code and associated data structures, intended to speed up parsing of the code at page load time compared to shipping the JavaScript source itself. The binary abstract syntax tree format should lead to faster script loading across all web devices. Numbers related today by Cloudflare range from a 4% to 13% drop in load times compared to parsing conventional JavaScript source. With a "lazified" approach that skips unused functions, the parsing time saved can be upwards of 98%. You can read more about it here.
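As a rough illustration of the idea (the node names and JSON shape below are invented for this example; the actual BinaryAST encoding is a binary serialization defined by the proposal), this is the kind of tree a parser has to build from source before the engine can do anything else:

// Illustrative sketch only, not the real BinaryAST node names or layout.
// Source as shipped today:
//   function add(a, b) { return a + b; }
// Roughly the tree the parser derives from those characters:
const astSketch = {
  type: "FunctionDeclaration",
  name: "add",
  params: ["a", "b"],
  body: [{
    type: "ReturnStatement",
    argument: { type: "BinaryExpression", operator: "+", left: "a", right: "b" }
  }]
};

BinaryAST would ship (a compact binary form of) such a tree over the wire, so the engine can skip re-deriving it from raw source text on a cold load.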
  • NIH syndrome? (Score:5, Insightful)

    by Suren Enfiajyan ( 4600031 ) on Friday May 17, 2019 @11:54AM (#58609226)
    Isn't WebAssembly addressing this issue (among many others)?
    • Re:NIH syndrome? (Score:4, Informative)

      by Junta ( 36770 ) on Friday May 17, 2019 @12:05PM (#58609324)

      Webassembly would require a developer to do some sort of work to target it.

      This is simply parsing and reformatting existing javascript into a more machine-friendly format.

      WebAssembly should in theory give better performance beyond the parsing, but this will be easier to implement.

      I'm not 100% sure the parse performance will be noticeable much of the time. For me the page size and occasionally brain-dead algorithms are the only noticeable slowdowns.

      • Also, considering that it's important only for the first few page loads (since browsers can cache the compiled JS/wasm code), this is largely moot.
        • Re:NIH syndrome? (Score:4, Interesting)

          by Suren Enfiajyan ( 4600031 ) on Friday May 17, 2019 @12:14PM (#58609378)
          Also, V8 7.4 can interpret JS code without compiling it: https://v8.dev/blog/jitless [v8.dev]. This makes the question even more debatable.
          • by Junta ( 36770 )

            The linked development is slower than the current approach. It's intended for compatibility with environments that prevent applications from writing to executable memory.

            Whether the parsed syntax tree came from 'natural' javascript or a reformulated AST format, the compile or interpret decision is made after the phase this proposal intends to improve.

            Again, I'm skeptical that the optimization will be noticeable, but on the other hand the concept as described could be very low effort for a web developer to support.

            • Not only that; the Ignition interpreter is also intended to boost page load speed and reduce memory consumption. https://v8.dev/blog/ignition-i... [v8.dev]
              • My point is that it isn't really necessary to fully compile a huge JS file to execute a tiny part of it. I guess if page load time becomes a real concern, a new hint attribute might be supported in script elements instead.
                • by Junta ( 36770 )

                  Of course, if that code analysis happens before the file is even transferred, it could cut out some of the bandwidth causes of slow page loads.

                    if that code analysis happens before the file is even transferred, it could cut out some of the bandwidth causes of slow page loads

                    https://v8.dev/blog/v8-release... [v8.dev]

                    • by Junta ( 36770 )

                      That is still transferring the files in their entirety. It's just saying that there was formerly a block between 'script transferred' and 'script parsed/compiled' as it needed a portion of the main thread's time.

                      If something were parsing and analyzing enough to find dead paths, and thus not only doing the parsing part but also discarding dead code, it would save on bandwidth (e.g. someone pulls in a 3 MB JavaScript file, but dead-code analysis ascertains that only 400 bytes of it are actually in any calling path,

                    • I understand. Anyways, this is more like another topic. My opinion is that BinaryAST is closer to Wasm and its support is questionable since Wasm is quite young, immature and has a long way to improve in both speed and functionality [slashdot.org].
                    • by Junta ( 36770 )

                      While true, the question is whether or not W3C would ever dare let wasm access the DOM. Currently wasm's reach is more restricted than JavaScript's, so a technology to optimize JavaScript may hold value against the background of wasm being so limited in access.

                      Anyway, to each their own.

                    • That is still transferring the files in their entirety.

                      Is it? Generally bytecode is more compact than source code, and that goes even for bytecode that preserves the structure of the source code almost completely, like Smalltalk bytecode, for example.

                    • by Junta ( 36770 )

                      I was referring to the V8 improvements on javascript parsing, not to this BinaryAST initiative.

                      The suggestion was that V8's improvements eliminated the potential upside of a re-formatted BinaryAST, or at least closed enough of the gap that JS+wasm renders the suggestion pointless.

                      I tend to agree that the parse benefits may be small, but if code analysis results in chucking a lot of the superfluous content, it could have value.

      • I'm not 100% sure the parse performance will be noticeable much of the time.

        Apparently it's a 4% drop in load time. So if your page takes 300 milliseconds to load, you can expect a speedup of 12 milliseconds.

        12 milliseconds is less than the natural variance in TCP/IP transport and connection setup, so essentially the difference will be a rounding error. Seems like something that is worth adding a bunch of extra code to every browser for, code that increases binary size, provides an extra interface for hacking, complicates the web and makes source code more closed. Seems like a reaso

        • Exactly: 4% - 13% translates to "insignificant to marginal" in the real world.

          So the real question is "why is there this push to promote an obfuscation technique?"

          I wonder?

      • ...

        For me the page size and occasionally brain-dead algorithms are the only noticeable slowdowns.

        Don't forget latency while the page is doing Ajax stuff.

      • GNOME extensions do some of that "ahead of time" parsing. Read the JS once, execute many times.
    • by mlyle ( 148697 )

      No. Webassembly can't touch the DOM, etc. It just exists for high-performance lumps within page source.

      Indeed, asm.js, which became WebAssembly, is from Mozilla, and this is a Mozilla proposal... so it's hardly NIH syndrome. :P

      • Webassembly can't touch the DOM

        Not yet, but that doesn't mean it never will. There is actually a host bindings proposal [scottlogic.com].

        • Which will, over time, keep adding features until it no longer has any advantage over, say, BinaryAST in performance, just because there is too much stuff to support.
          WebAssembly is good when you just have to do a lot of calculations. The DOM, however, is more about telling the browser and the OS: OK, I need you to draw these controls; the browser tells the OS what it needs to write, the OS overwrites some memory, and the video card picks that up and displays it on the screen.

          My two bigg

            • Which will, over time, keep adding features until it no longer has any advantage over, say, BinaryAST in performance, just because there is too much stuff to support.

            Well, by this logic, adding a new separate bytecode format like BinaryAST might also slow down the browser. It might also cause extra maintenance costs.

            Just as an example: JavaScript had significantly fewer features, say, 15 years ago, and that older JS would be compiled much faster in modern browsers with their modern optimizations than today's JS is. But a thing to consider is that JS compilation/execution wasn't perfectly optimized back then; today's parsing algorithms are better and keep improving. Of course

    • Re:NIH syndrome? (Score:5, Informative)

      by larkost ( 79011 ) on Friday May 17, 2019 @12:11PM (#58609358)

      No, WebAssembly is essentially a whole different runtime than JavaScript and is much more limited in what it can do. Usually you have a JavaScript program that calls into a WebAssembly portion to handle calculation-heavy sections. The results from those are then acted on in JavaScript (e.g.: updating what the user sees).

      This will pre-digest the JavaScript, so it is not flowing over-the-wire as pure text but rather closer to what the JavaScript engine is actually going to use. That pre-digestion also has some chance of involving some optimizations (e.g.: dead code stripping) that could be very useful in some cases.

      This would not really affect what you would do in WebAssembly at all.
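      As a minimal sketch of that split (the module name "heavy.wasm" and its exported sumSquares() function are made-up names for illustration, not any real API), the number crunching happens in WebAssembly and plain JavaScript stays in charge of the page:

      // Hypothetical example: "heavy.wasm" and sumSquares() are invented names.
      async function showResult() {
        // Load and instantiate the calculation-heavy module compiled to wasm.
        const { instance } = await WebAssembly.instantiateStreaming(fetch("heavy.wasm"));
        // Do the heavy math in wasm...
        const total = instance.exports.sumSquares(1_000_000);
        // ...then act on the result in JavaScript, e.g. updating what the user sees.
        document.querySelector("#result").textContent = total;
      }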

      • "That pre-digestion also has some chance of involving some optimizations (e.g.: dead code stripping) that could be very useful in some cases."

        How does this work with obfuscation where the library purposely injects dead code to help deter reverse engineering?

        • Well, I guess you'll be able to decompile the code into a readable format, and if the optimization is that good, reverse engineering will be that much easier.

        • by larkost ( 79011 )

          Obfuscation in JavaScript is annoying at best, and this is not going to meaningfully change that. Someone who decides to reverse-engineer an obfuscated JavaScript library is just going to go through the code method by method with a refactoring tool, renaming things as they figure each one out. Relatively quickly (hours? at worst days) they are going to have pretty much gotten everything meaningful done. Dead code, by its very nature, will just not be important.

      • No, WebAssembly is essentially a whole different runtime than JavaScript and is much more limited in what it can do.

        Eventually WebAssembly will be able to access the DOM.

    • by Luthair ( 847766 )
      No, Mozilla did both.
    • Isn't WebAssembly addressing this issue (among many others)?

      As others have said, WA can't touch the DOM, so it's for other things.

      I worked on a project a couple of days ago that's here:

      https://github.com/mdchaney/qu... [github.com]

      This is based on quirc by Daniel Beer of New Zealand, and a modification done by Joshua Koo of Singapore called "quirc.js". Koo's version made quirc work with Emscripten, which compiles C code to Javascript. It's really fast, and can decode a QR code in an image in under 50ms. Note that's not a pretty QR code - it can be skewed.

      Koo wrote very little

    • Not necessarily; WebAssembly is more like IBM AS/400 / iSeries / System i intermediate code; this sounds much more like Smalltalk bytecode. WebAssembly is a Bring Your Own Runtime type of thing.
  • And here was me thinking Mozilla was all about the open web, someone needs to go kill Berners-Lee, so we can get him a-rolling.

    • That stopped with 57.
      They have been chasing Google's coattails ever since.
    • I'm not sure what you mean by that statement. BinaryAST is just a way to bypass the cold start of the JS engine by passing a binary representation of the abstract syntax tree that's ready for the engine to use right away, rather than having it parse source and create an AST on the client side. You'd still need the JavaScript to process everything that would happen post-startup.

      In an incredibly simple way of thinking about it: when JS gets to your machine, it gets compiled and cached to speed up the next ne

  • How long will it take for the first coin miner to be written in AST?
    • by bussdriver ( 620565 ) on Friday May 17, 2019 @01:12PM (#58609792)

      Binary does not mean EXE. Way too many "nerds" on here are confusing the word binary with machine-language compiled binary executables.

      This is simply the output of the JS parser packed into a binary form, not far from the gzip-compressed HTTP data; much of your JS is already sent to you "in binary" that way.

      Just as you have to process JS source to make it pretty again after they uglify it, it will likely be easier to turn the AST back into pretty-formatted JS than it currently is to fix compacted JS source (a sketch follows at the end of this comment).

      WHY IT IS NOT NEEDED: the JS parser is fast, optimized and machine-native already. This is a small constant speed-up at best, which would have mattered more in the '90s when computers were so much slower. By the time this is widely supported, CPU speeds will have gone beyond it.

      REAL PROBLEM: JS bloat, which will grow to the tolerance level of users. A 200% speed boost won't matter for long at all.

      How can a CS major not see that this optimization is a waste of manpower and adds more complexity just for LOAD time?
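      To illustrate the pretty-printing point above: once you hold a tree rather than compacted characters, formatted output is just a tree walk. The node shape here is invented for the example (it is not the BinaryAST spec), and original identifier names only survive if the code wasn't minified before parsing.

      // Toy pretty-printer over a made-up AST shape, not the real BinaryAST format.
      function print(node, indent = "") {
        switch (node.type) {
          case "FunctionDeclaration":
            return indent + "function " + node.name + "(" + node.params.join(", ") + ") {\n"
              + node.body.map(n => print(n, indent + "  ")).join("\n") + "\n" + indent + "}";
          case "ReturnStatement":
            return indent + "return " + print(node.argument) + ";";
          case "BinaryExpression":
            return print(node.left) + " " + node.operator + " " + print(node.right);
          case "Identifier":
            return node.name;
          default:
            throw new Error("unhandled node type: " + node.type);
        }
      }

      console.log(print({
        type: "FunctionDeclaration", name: "add", params: ["a", "b"],
        body: [{
          type: "ReturnStatement",
          argument: {
            type: "BinaryExpression", operator: "+",
            left: { type: "Identifier", name: "a" },
            right: { type: "Identifier", name: "b" }
          }
        }]
      }));
      // Prints:
      // function add(a, b) {
      //   return a + b;
      // }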

      • by Anonymous Coward on Friday May 17, 2019 @01:24PM (#58609880)

        There's actually a decent chance it would be less efficient at compression than GZipping the source, since it's just the syntax tree, and certain tokens that would be the same in source represent different syntactical elements.

        The real purpose of this isn't to speed anything up. Even the stated speedups are pretty ridiculous.

        The real reason is to provide another layer of obfuscation: to prevent people (and tools) from poking into the JavaScript and pulling out how the code works (to "protect" IP) and, more importantly as far as the people pushing this are concerned, to make it harder for ad blockers to strip out ad and tracking code. When all the ad code is merged together into a single blob of binary AST with symbol names stripped, your ad blocker can no longer tell the difference between ad and tracking code and the code required to remove the divs that hide the actual page content until the ads have loaded.

        This is another tool in the war against ad blockers, nothing more.

        • 1) A binary AST could compress better because it's context-aware compression, and gzip would still be applied at the HTTP level on top of the binary. Minified JS wouldn't benefit much, possibly even gain. Why somebody would skip HTTP gzip if they've already set it up for JS, I'd like to know...

          2) An AST would actually be easier to work with than minified JavaScript. The minifiers I've used already rename every symbol they can into a few characters! The code is obfuscated already.

          Pretty code tools in browsers like Firefox already ha
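          A minimal Node.js sketch of point 1 (zlib is built in; the byte values below are stand-ins, not a real BinaryAST payload): HTTP-level gzip applies to the response body whether it is JS text or a binary format, so shipping binary doesn't mean giving up transport compression, and which input gzips smaller depends entirely on the actual payload.

          const zlib = require("zlib");

          // Stand-in payloads for comparison; neither is a real BinaryAST encoding.
          const minifiedSource = Buffer.from("function add(a,b){return a+b}");
          const binaryPayload = Buffer.from([0x01, 0x03, 0x61, 0x64, 0x64, 0x02]);

          console.log("gzipped source bytes:", zlib.gzipSync(minifiedSource).length);
          console.log("gzipped binary bytes:", zlib.gzipSync(binaryPayload).length);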

      • Isn't that kind of pointless? Isn't most data already compressed in a variety of ways before it's sent down the wire?
    • Or hacking the client-side BinaryAST p-code interpreter. Just another attack surface.
  • Other Option (Score:5, Insightful)

    by chill ( 34294 ) on Friday May 17, 2019 @11:59AM (#58609266) Journal

    Or websites could, you know, stop spewing over 2 MB of JavaScript to render 5 kB of text.

    • The operators of those websites would argue that the sponsor message delivery functionality of the 2 MB of JavaScript covers the cost of researching and writing the 5 kB of text. How would you propose to cover that cost instead?

      • by deKernel ( 65640 )

        So you are trying to justify laziness by including vast amounts of unused code in hopes that things just work with no unintended consequences?
        I really hope I misread your statement.

        • Say you have 100 different advertisers interested in evaluating a viewer's interests, and you want to award each ad unit to the advertiser that will pay the most to reach that viewer. These advertisers' scripts average 20 kB. Add them up and you're already at 2 MB.

          • by deKernel ( 65640 )

            Or...or....are you ready for this.....decide on the server just which advertiser you are "awarding" this click and send down only that script. That way, you don't bloat the crap out of the download because of laziness and ease.

              decide on the server just which advertiser you are "awarding" this click and send down only that script.

              In the 2010s, most advertising on major websites is based on an interest profile collected by tracking a user across multiple websites. This is because ads based on a viewer's cross-site interest profile pay out roughly three times the cost per thousand views (CPM) or click (CPC) compared to purely context-based advertising. (Source: "An Empirical Analysis of the Value of Information Sharing in the Market for Online Content" [politico.com] by Howard Beales and Jeffrey A. Eisenach)

              This cross-site interest profile isn't sto

              • by deKernel ( 65640 )

                And now we get to the real cause of this: the advertising model is broken at its core. No wonder the internet is a pile of crap compared to 20 years ago and sites have slowed to a crawl, even though bandwidth is greater and browsers have orders of magnitude more CPU and RAM available.

      • by chill ( 34294 )

        I use an ad blocker. If I can't get into a site without having to disable it, I do without. Sites I can't do without, or really want their content, I pay for. I currently pay for subscriptions/donations to: The Intercept, The Guardian, ProPublica, The New York Times, Ars Technica, Bleeping Computer, and Wired, as well as half-a-dozen service sites like mynoise.com.

        Considering how many web sites that make money by delivering sponsor messages don't produce original content, and only aggregate or repost other con

        • by tepples ( 727027 )

          I currently pay for subscriptions/donations to: The Intercept, The Guardian, ProPublica, The New York Times, Ars Technica, Bleeping Computer, and Wired, as well as half-a-dozen service sites like mynoise.com.

          But what steps do you take to determine, for any given article, whether the publisher of that article deserves to be added to this list of currently thirteen? I doubt most readers can justify putting in a credit card number to begin a $10 per month recurring subscription just to read a single article on a given site.

          • by chill ( 34294 )

            Mostly it is if I find myself going back to a site for more articles. I'm not going to pay for one-offs, but if they consistently have content I want then I'll subscribe.

            • by tepples ( 727027 )

              How would you determine whether sites with a harder paywall "consistently have content I want"? In the case of (say) The Wall Street Journal, would it involve having seen several such articles in your news aggregator over the past month?

              • by chill ( 34294 )

                Yes, or a free trial period (Washington Post), or partial articles like the WSJ does. Consistently hook me with the first two paragraphs, or three free articles a month, and I'll pay.

                That being said, I used to have a WSJ subscription but cancelled it. While they have good articles, I go to WSJ looking for business news and information. Their Editorial Board kept delving into partisan political bullshit, so I cancelled. If I want politics, I'm not going to look for it in a business newspaper, I'll go to the

      • Shut down and go out of business. "Web developer" should be a punchline we can all laugh at.

      • by Junta ( 36770 )

        I don't think ad frameworks are the lion's share of his grievance.

        As far as I've seen, it's the 'import a half dozen frameworks in their entirety, just so you can use a simple 2-3 line function in each one' sort of behavior.

        Of course, now that I think about it, if the tech encourages a site to do a processing step that may detect and omit so much of those crap frameworks in transfer, I guess I could see how it could help some.

        • I agree. Look at an ad-free site like a college's: you'll still need to pull more than a few MB of data to see the front page. Granted, most of it is in images, but the various JS libraries in use add up quickly.

      • by Anonymous Coward

        How would you propose to cover that cost instead?

        Capitalist found.

        You do realize that there was a time on the internet before Google, Facebook, Twitter, and the like turned it into a mass-marketing frenzy, correct? A time when the server admins paid the costs out of pocket, out of a love for their readers and the community that they led. You know, a time before the Eternal September and all of its vitriol was unleashed upon it. Good times.

        Simply stated: If it's worth keeping around, it will remain regardless

        • by tepples ( 727027 )

          A time when the server admins paid the costs out of pocket, out of a love for their readers and the community that they led.

          For the purposes of this comment, I will refer to such websites as "hobby sites."

          The Internet existed before ads were the vast majority of its content, and it will exist long after the ads are gone.

          The Internet existed before September. The home Internet did not. Before the introduction of advertising-supported websites, Internet access was primarily found in universities, not at home. The only exposure to "Internet" was that users of AOL, CompuServe, and the like could exchange email with university Internet users.

          If all advertising-supported websites were to cease to exist, this would leave hobby sites, subscription sit

    • by Luthair ( 847766 )
      How are 9000 advertising networks going to track you if they don't load their JavaScript?
    • The from-the-same-site javascript code, or all the third-party-site advertisements and web trackers?

      Speeding up a site's javascript code is like protecting an overburdened table by brushing the sawdust off but leaving the cast-iron anvil in place.
       

    • Those are all libraries, loaded once and cached on the client. Sloppy and inefficient sure, but not that bad.

      What you were thinking of is the 2 MB of cookies that sites load up and pass with every request these days.

    • There are alternative versions of Facebook that require almost no memory or CPU time. m.facebook.com is the best known, but there is also another prefix that is a bit broken but uses around 100k. I forget what it is, though.

      www.facebook.com can easily use several hundred megabytes and be exceptionally slow. I know it does a lot of things, e.g. built-in Messenger.

      But I'm guessing that people here could make an equivalent that uses 95% less memory and is a lot faster too.

    • by Anonymous Coward

      I use Decentraleyes to help with that a little, but yeah it's ridiculous. These shit-for-brains millennial web "developers" (most likely using drag and drop components that plug together) don't know that JavaScript isn't a necessity to display text, images, audio, video, etc., or for page formatting. Web pages that work without JavaScript are faster, more efficient, and respect visitors' time and security.

  • How about just using less JavaScript?

    I have an old computer that I still use from time to time to access the web. It's a Pentium IV with 2 GB of RAM. Many webpages, even very complex ones, work quite well. But every once in a while I'll run across a web page that brings the entire machine to a crawl as it runs copious amounts of JavaScript. There's really no need for so much JavaScript in most of these pages. Developers have just forgotten how to do anything without including every framework under the sun.

  • Yeah, because the "first" thing we want is some random binary downloaded from some shady site. What could possibly go wrong? /s

    Are we going to get AntiVirus scanning for BinaryAST now?

    • by vux984 ( 928602 ) on Friday May 17, 2019 @12:05PM (#58609328)

      Near as I can tell, there is zero functional difference between this and downloading 'regular' javascript.

      If you reject both, that's fine, but it's silly to draw a distinction here.

      • I can at least read 'regular' javascript without having to de-compile it first.

        UnknownSoldier is right that something is going to have to scan it prior to letting it loose on a system, which most likely will make it slower than just plain-jane JavaScript.

        • by Junta ( 36770 )

          Yep, perfectly readable regular javascript, like:

          var c6=ck(cF(c7,c9,c5,c8),db,"event");bw(c6,bA,da)}function bZ(c5,c8,c6,c9){var c7=ck("search="+t(c5)+(c8?"&search_cat="+t(c8):"")+(J(c6)?"&search_count="+c6:""),c9,"sitesearch");bw(c7,bA)}function cJ(c5,c8,c7){var c6=ck("idgoal="+c5+(c8?"&revenue="+c8:""),c7,"goal");bw(c6,bA)}function cQ(c8,c5,dc,db,c7){var da=c5+"="+t(bT(c8));var c6=cm(c7,"click",c8);if(c6){da+="&"+c6}var c9=ck(da,dc,"link");bw(c9,bA,db)}function bL(c6,c5){if(c6!==""){return c6+c5.charAt(0).toUpperCase()+c5.slice(1)}return c5}function b8(da){var c9,c5,c8=["","webkit","ms","moz"],c7;if(!a7){for(c5=0;c50){c6+="&"}else{c6+="?"}var c7=bj();c6=F(c6,aq,c7);ab.setAnyAttribute(c5,"href",c6)}function av(c8){var c9=ab.getAttributeValueFromNode(c8,"href");if(!c9){return false}c9=String(c9);var c6=c9.indexOf("//")===0||c9.indexOf("http://")===0||c9.indexOf("https://")===0;if(!c6){return false}var c5=c8.pathname||cc(c8.href);var c7=(c8.hostname||d(c8.href)).toLowerCase();if(an(c7,c5)){if(!cx(cK,L(c7))){return true}return false}return false}function cw(c5){var c6=cZ(c5);if(c6&&c6.type){c6.href=p(c6.href);cQ(c6.href,c6.type,undefined,null,c5);return}if(cD){c5=ap(c5);if(av(c5)){ca(c5)}}}function cn(){return G.all&&!G.addEventListener}function cL(c5){var c7=c5.which;var c6=(typeof c5.button);if(!c7&&c6!=="undefined"){if(cn()){if(c5.button&1){c7=1}else{if(c5.button&2){c7=3}else{if(c5.button&4){c7=2}}}}else{if(c5.button===0||c5.button==="0"){c7=1}else{if(c5.button&1){c7=2}else{if(c5.button&2){c7=3}}}}}return c7}function bK(c5){switch(cL(c5)){case 1:return"left";
          case 2:return"middle";case 3:return"right"}}function aU(c5){return c5.target||c5.srcElement}function aw(c5){return function(c8){c8=c8||T.event;var c7=bK(c8);var c9=aU(c8);if(c8.type==="click"){var c6=false;if(c5&&c7==="middle"){c6=true}if(c9&&!c6){cw(c9)}}else{if(c8.type==="mousedown"){if(c7==="middle"&&c9){aK=c7;bs=c9}else{aK=bs=null}}else{if(c8.type==="mouseup"){if(c7===aK&&c9===bs){cw(c9)}aK=bs=null}else{if(c8.type==="contextmenu"){cw(c9)}}}}}}function am(c7,c6){var c5=typeof c6;if(c5==="undefined"){c6=true}aj(c7,"click",aw(c6),false);if(c6){aj(c7,"mouseup",aw(c6),false);aj(c7,"mousedown",aw(c6),false);aj(c7,"contextmenu",aw(c6),false)}}function bu(c7,c9){al=true;var c8,c6=aT(br,"ignore"),da=G.links,c5=null,db=null;if(da){for(c8=0;c8

          Which is but a small bit of Slashdot-served JavaScript. A parsed AST of the above would be about the same level of readability, and both require some software to make them vaguely human-friendly again. Since this proposal is simply to trim and reformat the AST of the *exact* same JavaScript, it's hard to imagine security risks in what the code is allowed to do (though one could imagine a format having fields overrun or preambles overstate their boundaries, but in this day and age one would hope the software ingesting it would be guarded against such shenanigans).

  • There are lots of advantages to keeping JavaScript readable and in ASCII format. It is easier to scan for malicious stuff. Similar to QR codes, where the address is opaque: one has to trust more. Look at history to see how much trouble has come with binary formats, unless it is a very transparent compression scheme which can be reversed. Many image file formats, PDF, and Flash SWF suffered from that. Yes, there were decoders which would render SWF back into ActionScript, but not everything was caught and lots of
  • Downloading mystery-binaries from the interweb, what could possibly go wrong with that?

    We should really just compile every web site into its own giant binary blob and download that. If a character on a page changes then you could just download it again.

    Stay tuned for "index.exe" coming to a browser near you! (1.2Tb, approx load time 71 minutes; thank you for your patience.)

    • No problems with "index.exe", if I can store it in the Registry.
    • we'll just helpfully pre-load the index.exe on all internet connected computers with our Writing Online Redundant Multitasker, they'll thank us later

    • but it's not a binary; it's super-minified JS text that is packed into a binary format rather than ASCII.

      And with gzip encoding, you are already downloading 'binaries'...

        but it's not a binary; it's super-minified JS text that is packed into a binary format rather than ASCII

        So in other words, a binary. When you take something and pack it into a binary format, that's a binary, hello?

  • Now website-based code will be running on my computer, and I will not be able to have any idea what it is doing [to me].
  • ... trying to figure out more efficient malware delivery mechanisms?
  • Seems like this is proposing mapping symbols and structures to binary tokens, which is pretty much what gzip and the like will in effect do in a content-agnostic manner.

    Is it really better to specifically transform JavaScript into a format that needs a special decoder?

    • by Anonymous Coward

      The point is that the parser does a lot of work which can be done beforehand. For example, if a parser comes across 100+i, it turns the characters 1, 0, 0 into a binary representation of 100 that is suitable for calculation. This conversion from text to number takes time. It's not primarily about the space. The binary representation of 100 is likely bigger than three bytes.
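      A toy version of that text-to-number step (nothing engine-specific, just the idea of work a pre-parsed format could ship already done):

      // Convert a run of digit characters into a numeric value, as a tokenizer would.
      function readNumberLiteral(src, pos) {
        let value = 0;
        while (pos < src.length && src[pos] >= "0" && src[pos] <= "9") {
          value = value * 10 + (src.charCodeAt(pos) - 48); // 48 === "0".charCodeAt(0)
          pos++;
        }
        return { value, nextPos: pos };
      }

      console.log(readNumberLiteral("100+i", 0)); // { value: 100, nextPos: 3 }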

  • by Anonymous Coward

    They still have developers over there?

  • by Anonymous Coward

    What we need are more frameworks!

    a webpage should need dribble, to interface with drabble, in order to enable frabble to use grabble, to interface with druffle, which works with puffle, to enable huffle to fluffle the socksifier.

    we have the memory, we have the disk space, we have the cores, lets use them!

    If text isn't dancing, being highlighted, traced, tracked, then the program or webpage is just trivial.

    with just a few GB of memory, and a few GB of disk space, a web page can be rendered in beautiful 32-bit

  • ... the problem of a website's code that pegs my CPU at 100%? No? Then ... don't care. Does it fix the problem of lazy web site designers that don't specify image sizes in tags to pre-allocate space on the page, to avoid making the page content jump all over the place as images slowly dribble in? No? Then ... don't care. (And don't get me started on the developers who take huge images directly from the DSLR and jam them into small image spaces on the page instead of properly thumbnailing them to cut

      ... the problem of a website's code that pegs my CPU at 100%? No?

      Yes, actually. The main thing they're saving here is Javascript parsing time.

      • by rnturn ( 11092 )

        ``The main thing they're saving here is Javascript parsing time.''

        So how long is the Javascript parsing supposed to take? On some sites, all four cores are pegged at 100% until I leave the site. Sure seems to me that it's the Javascript itself, not the size of the code being run, that's the problem. Making the code smaller so it will load faster only reduces the time before my CPUs start heading for 100% utilization.

  • How about using less Javascript?
  • The solution to webpages bloated with advertising and tracking scripts is not figuring out how to allow for even more bullshit to be loaded.
