Google To Host Ajax Libraries

ruphus13 writes "Hosting and managing a pile of Ajax libraries, even when working with MooTools, Dojo, or script.aculo.us, can be quite cumbersome, especially as the libraries get updated along with your code. In addition, many sites now use these libraries, and end users currently have to download each library once per site. Google will now provide hosted versions of these libraries, so sites can simply reference Google's hosted copy. From the article, 'The thing is, what if multiple sites are using Prototype 1.6? Because browsers cache files according to their URL, there is no way for your browser to realize that it is downloading the same file multiple times. And thus, if you visit 30 sites that use Prototype, then your browser will download prototype.js 30 times. Today, Google announced a partial solution to this problem that seems obvious in retrospect: Google is now offering the "Google Ajax Libraries API," which allows sites to download five well-known Ajax libraries (Dojo, Prototype, Scriptaculous, Mootools, and jQuery) from Google. This will only work if many sites decide to use Google's copies of the JavaScript libraries; if only one site does so, then there will be no real speed improvement. There is, of course, something of a privacy violation here, in that Google will now be able to keep track of which users are entering various non-Google Web pages.' Will developers adopt this, or is it easy enough to simply keep hosting one more file themselves?"
  • nothing new here (Score:5, Informative)

    by Yurka ( 468420 ) on Wednesday May 28, 2008 @10:32AM (#23570411) Homepage
    The DTD files for the basic XML schemas had been hosted centrally at Netscape and w3.org since forever. No one cares or, indeed, notices (until they go down [slashdot.org], that is).
  • how about removing 70% of the code which just encodes/decodes from XML

    Done [json.org]. What's next on your list?

    (For those of you unfamiliar with JSON, it's basically a textual representation of JavaScript. e.g.

    {
    name: "Jenny",
    phone: "867-5309"
    }
    If you trust the server, you can read that with a simple "var obj = eval('('+json+')');". If you don't trust the server, it's still easy to parse with very little code.)
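
    A minimal sketch of both options - eval for a source you trust, and a real JSON parser for anything else (JSON.parse here assumes json2.js or a browser with native JSON support):

    var json = '{ "name": "Jenny", "phone": "867-5309" }';

    // Trusted source: evaluate the text directly as a JavaScript expression.
    var obj = eval('(' + json + ')');

    // Untrusted source: use a real parser, which accepts data but not code.
    var safeObj = JSON.parse(json);

    alert(obj.name + ' / ' + safeObj.phone);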
  • by Anonymous Coward on Wednesday May 28, 2008 @11:21AM (#23571067)

    Couldn't you design a modular AJAX system that would bring in functions as they are needed? That way, someone visiting just a couple pages wouldn't have to download the entire library.
    Qooxdoo [qooxdoo.org] does this. While developing you download the entire framework, but when you are ready to release you run a makefile which creates a streamlined .js file with only the methods/classes your application UI needs; it also trims whitespace, renames variables to save space, etc.

    Fun package to work with too.
  • by dalmaer ( 725759 ) on Wednesday May 28, 2008 @11:32AM (#23571195)
    I understand that people like to jump onto privacy, but there are a couple of things to think about here:

    - We have a privacy policy that you can check out.
    - There isn't much information we can actually get here, because:
      a) The goal is to have the JavaScript files cached regularly, so as you go to other sites the browser will read the library from the cache and never have to hit Google!
      b) If we can get browsers to work with the system, they can likewise do more optimistic caching, which again means not having to go to Google.
      c) The referrer data is just from the page itself that loaded the JavaScript. If you think about it, if you included prototype.js anyway then we could get that information via the spider... but it isn't of interest.

    We are a for-profit company, but we also want to make the Web a better, faster place, as that helps our business right there. The more people on the Web, the more people doing searches, and thus the better we can monetize. Hopefully as we continue to roll out services, we will continue to prove ourselves and keep the trust levels high with you, the developers.

    Cheers,
    Dion
    Google Developer Programs
    Ajaxian.com
  • by maxume ( 22995 ) on Wednesday May 28, 2008 @11:36AM (#23571241)
    They encourage use of the loader, but they aren't requiring it; there are direct URLs for accessing the libraries:

    http://code.google.com/apis/ajaxlibs/documentation/index.html#AjaxLibraries [google.com]
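
    A plain script tag pointing at one of those direct URLs is all it takes; the jQuery path below is illustrative, so check it against the documentation page above:

    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script>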
  • by thrillseeker ( 518224 ) on Wednesday May 28, 2008 @11:45AM (#23571373)
    Because the hundred other pages the visitor went to that session are also demanding their own copy of the library be downloaded. It's not your bandwidth this saves (or only trivially) - it's the end user's download, and parse, of the same code for each of the dozen sites he visits that use the same library. The libraries Google has initially chosen are extremely popular - i.e., there are good odds that you have a dozen copies in your browser cache right now - each download of which made your browsing experience that much slower.
  • by Tumbarumba ( 74816 ) on Wednesday May 28, 2008 @11:47AM (#23571409) Homepage

    The file is indeed Javascript and it's called "urchin.js" (nice name eh?).
    "urchin.js" is the old name for the script. Google encourages webmasters to upgrade to the new ga.js, which has pretty much the same functionality, but some other enhancements. Both those scripts feed data into the same reports. If you're interested, you can see what the scripts is doing by looking at http://www.google-analytics.com/ga.js [google-analytics.com]. It's pretty compact JavaScript, and I haven't gone through it to work out what it's doing. Personally, I use it on the website for my wife's children's shoe shop [lillifoot.co.uk]. From my point of view, the reports I get out of Google Analytics are excellent, and really help me optimise the website for keywords and navigation. I will admit though, that it is a little creepy about Google capturing the surfing habits of people in that way.
  • by lemnar ( 642664 ) on Wednesday May 28, 2008 @11:47AM (#23571415) Homepage
    AJAX systems are modular - at least some of them are, somewhat. Scriptaculous, for example, can be loaded with only certain functions.

    "With Google hosting everything," you get to use exactly the version you want - google.load('jquery', '1.2.3') or google.load('jquery', '1.2'), which will get you the highest version '1'.2 available - currently 1.2.6. Furthermore, you can still load your own custom code or modifications after the fact.

    For those concerned about privacy: yes, they keep stats - they even do some of it client-side - after libraries are loaded, a call is made to http://www.google.com/uds/stats [google.com] with a list of which libraries you loaded. However, the loader is the exact same loader you would use with other Google JavaScript APIs anyway. It started out as a way to load the Search API and the Maps API: google.load('maps', '2') and/or google.load('search', '1').

    Google's claim of providing good distributed hosting of compressed and cacheable versions of the libraries aside, the loader does a few useful things in its own right. It deals with versioning, letting you decide the granularity of versions you want to load while letting them deal with updates. It also sets up a callback function that actually fires after the DOM is loaded in IE, Safari, Opera, and Firefox, and after the entire page is loaded in any other browsers. They also provide convenience functions: google_exportSymbol will let you write your code in a non-global scope, and then put the 'public' interfaces into the global scope.

    Finally, you can inject your own libraries into their loader. After the jsapi script tag, include your own, set google.loader.googleApisBase to point to your own server, and call google.loader.rpl with a bit of JSON defining your libraries' names, locations, and versions. Subsequent calls to google.load('mylib', 'version') will work as expected.
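
    For reference, a minimal sketch of the loader pattern described above (the version string and the callback body are illustrative, not taken from the thread):

    <script type="text/javascript" src="http://www.google.com/jsapi"></script>
    <script type="text/javascript">
      // Ask for the 1.2.x line of jQuery; the loader resolves it to the
      // newest matching release Google hosts.
      google.load('jquery', '1.2');

      // Runs once the requested libraries (and the DOM) are ready.
      google.setOnLoadCallback(function() {
        alert('Loaded jQuery ' + jQuery.fn.jquery);
      });
    </script>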
  • by richardtallent ( 309050 ) on Wednesday May 28, 2008 @12:10PM (#23571819) Homepage
    These are static releases of library code written by others.

    Google would only be able to execute Javascript on your user's page if they modified the source code of the library you were loading from them. Which would be a BIG no-no.

    (Google does have a "loader" function available, but also just allows you to include the libraries via a traditional script tag to a static URL.)

    Otherwise, cookies are NOT cross-domain and wouldn't be passed with the HTTP request, unless you were silly enough to CNAME your "js.mysite.com" to "ajax.googleapis.com".
  • by mckinnsb ( 984522 ) on Wednesday May 28, 2008 @12:18PM (#23571957)
    Being a JavaScript programmer myself, I was wondering which post to reply to. I guess this one suits. There are a lot of issues I'd like to tackle here.

    Yeah, but what if Google decides that nobody is using these -- or they can't legally host them for whatever reason -- or they just decide that they don't want to do this anymore?

    Then you go back to including a script tag in the header of your HTML document. All of these frameworks are free, and they will likely remain free even though some are sponsored (MooTools). The speed improvement exists, but is moderate to minimal, since a packed full-inclusion build of MooTools runs you about 32 KB. That's 32 KB the user won't need to download again and again, but it's still just 32 KB.

    Think broader. What happens when: * Google decides to wrap more than just the promised functionality into it? For example, maybe "display a button" turns into "display a button and report usage stats"?

    Even if the code is compressed and obfuscated, this "wrapped functionality" would become *glaringly* obvious to any JavaScript programmer using Firebug. The news would most likely spread like wildfire, especially on /. The only thing they could track, without users really being monitored, is which IP addresses are requesting the mooTools.js (for example) file. They could measure its popularity, but not *necessarily* what sites those users are visiting. Granted, I haven't looked at the API yet, but if it's just a .js file hosted on a Google server, there isn't really much they can monitor that they don't already. Analytics provides them with *tons* more information. To be honest, this just seems like a professional courtesy.

    There is a key point here I hope was visible - JavaScript is loaded on delivery and executed client-side. They can't really sneak much past you, and any effort to do so would still remain visible in some way (you would see the NET traffic).

    * Google gets hacked and malicious Javascript is included?

    Interesting, but I haven't seen Google hacked yet - not saying it can't happen, but I've not seen it. There is more of a chance of someone intercepting the expected .js file and passing you a different one - however, keep in mind that if you are doing *anything* that requires any level of security whatsoever in JS, well, you have other, deeper fundamental problems.

    But, yes- you're right. This is a scary new dependency. For a company full of PhD geniuses supposedly Doing No Evil, nobody at Google seems to understand how dangerous they are to the health of the web. In fact, I'd suggest they do, and they don't care- because they seem hell-bent on making everything on the web touch/use/rely upon Google in some way. This is no exception. A lot of folks don't even realize how Google is slowly weaning open-source projects into relying on them, too (with Google Summer of Code.)

    It is a dependency, but it's not that scary. Open source projects have always been, for better or worse, more or less reliant on outside momentum to keep them going. Joomla has conferences (that you have to pay to attend), MySQL is backed by Sun, Ubuntu has paid support. The fact that Google is willing to pay for kids to work on open source projects of their choosing (granted, from Google's selection) is not necessarily a form of control above and beyond the ordinary influence of capital. If I had millions of dollars, I would probably donate to open source projects myself - and they might be ones of my choosing - but I couldn't consider myself to be controlling them so much as helping them grow.

    This is really nothing more than a professional courtesy offered by Google. They are right - it's dumb for everyone to download the same files over and over again.

    Furthermore, someone made a post earlier talking about JS programmers needing to learn how to use "modularity". We

  • Re:Dependency hell? (Score:3, Informative)

    by Spaceman40 ( 565797 ) <[gro.mca] [ta] [sknilb]> on Wednesday May 28, 2008 @12:37PM (#23572261) Homepage Journal
    Read, and be enlightened:

    The versioning system allows your application to specify a desired version with as much precision as it needs. By dropping version fields, you end up wildcarding a field. -- google.load() versioning [google.com]
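
    In other words (version numbers purely for illustration):

    google.load('jquery', '1.2.6'); // exactly 1.2.6
    google.load('jquery', '1.2');   // newest 1.2.x that Google hosts
    google.load('jquery', '1');     // newest 1.x that Google hosts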
  • by joelwyland ( 984685 ) on Wednesday May 28, 2008 @01:52PM (#23573451)

    No, seriously: as far as I can tell, nothing Google does is unrelated to data mining. What am I missing?

    How about SketchUp [google.com]? It's a 3D modeling application they offer for free. You don't have to interact with Google at all to use it (apart from the actual download).
  • Re:Umm, no (Score:5, Informative)

    by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Wednesday May 28, 2008 @02:53PM (#23574371) Homepage Journal

    Some of us are still stuck on, gasp, dial-up. And loading an external JavaScript file is STILL far slower than inlining it.

    Your grasp of the web sucks. Here's what happens on the second page you load on that site:

    1. Send a request for the page.
    2. Read the page; see that it needs a JavaScript file.
    3. See that the JavaScript file is already cached locally.
    4. Finish loading the page, and let people interact with it.

    I use maybe 20KB of JavaScript in parts of my site. Why tack an extra 20KB onto each and every pageload, meaning that each takes about another 4 seconds for someone on dialup? To satisfy the screwed-up sense of purity for some premature optimization fan who doesn't really understand the issues involved? No thanks. My site is optimized for real conditions.
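
    (The reason step 3 works is plain HTTP caching: the first response for the script carries headers telling the browser how long it may reuse its local copy without asking again. The headers below are an illustrative sketch, not from any particular server:)

    HTTP/1.1 200 OK
    Content-Type: application/x-javascript
    Cache-Control: public, max-age=31536000
    Expires: Thu, 28 May 2009 10:32:00 GMT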

  • Re:Umm, no (Score:5, Informative)

    by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Wednesday May 28, 2008 @04:10PM (#23575543) Homepage Journal

    Step 3 is always "send a new request".

    Nope. You're flat-out, demonstrably wrong. Try watching an Apache log sometime. You see a visitor load a page, all its JavaScript, and all of its images. Then you see them load another page, and this time they only fetch the new HTML. There are no other GETs or HEADs - just the HTML.

    Inlining script isn't hard, either:

    Of course not. The issue is whether it's a good idea (it's not), not whether it's easy (it is).

  • Re:Umm, no (Score:3, Informative)

    by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Wednesday May 28, 2008 @05:35PM (#23576931) Homepage Journal

    However, I do have this piece of software called a "web browser" and another called a "packet sniffer".

    You have another one called "spyware", or perhaps "rootkit". Your experiment, conducted here on Ubuntu 8.04 with Wireshark 1.0.0, Firefox 3.0b5, and Konqueror 3.5.9, shows exactly the results I described and nothing resembling the results you invented to "prove" your point.

    Oh, hey, look, it DOES make a request for external JavaScript files EVERY SINGLE PAGE LOAD, just like I said it does! Sure, it gets a 304 response, but you've still got extra overhead and latency for NOTHING.

    304s would show up in my Apache logs, but they don't. Of course not! My browsers aren't making them.

    And as someone else pointed out, it's faster to just pull it from the page than to load it from the cache in the first place.

    As they incorrectly pointed out. Let's add some more facts to the discussion.

    First, this (almost never incurred) overhead is much smaller than inlining proponents want to claim:

    $ printf 'HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n' | nc -q1 example.com 80 | wc -c
    610

    Second, given an average HTML size of 20KB, an average JavaScript size of 20KB, and a 56K modem (which will get about 5KB/s on a good day), loading n pages will take:

    time(inline) = 40 * n / 5, or about 80 seconds for 10 pages
    time(external) = ((20 * n) + 20) / 5, or about 44 seconds for 10 pages
    time(external + fictional 304 overhead) = (((20 + .6) * n) + 20) / 5, or about 45.2 seconds per 10 pages

    Care to explain which part of that makes inline JavaScript faster, particularly for the dialup users y'all are claiming to save from the evils of external files?

  • by Kalriath ( 849904 ) * on Wednesday May 28, 2008 @07:32PM (#23578651)

    The real bullshit is that you used to be able to BUY urchin fairly reasonably, and host it on your own server, and get nearly the same reports, and analytics, and without having to give up your data, and letting google track all your visitors from site to site.
    You still can. They're also working on Urchin 5 beta, which is essentially the new Google Analytics in a box. It's still frigging expensive unless you get it via a hosting provider though (I get the older version of Urchin for $5/month through my DC)

  • by MisterBlueSky ( 1213526 ) on Wednesday May 28, 2008 @08:15PM (#23579165)

    It's pretty compact JavaScript,
    urchin.js is plain ol' hand-written JavaScript. ga.js is created with Google's Java-to-JavaScript compiler, GWT (http://code.google.com/webtoolkit/ [google.com]), which generates the JavaScript from Java source.

    and I haven't gone through it to work out what it's doing.
    Good. Save yourself the trouble, because you won't succeed. It is, to put it mildly, not optimized for human readers. :)
