Google To Host Ajax Libraries
ruphus13 writes "So, hosting and managing a ton of Ajax libraries, even when working with MooTools, Dojo or Scriptaculous, can be quite cumbersome, especially as they get updated along with your code. In addition, several sites now use these libraries, and the end user has to download the library again for each site. Google will now provide hosted versions of these libraries, so sites can simply reference Google's hosted copy. From the article, 'The thing is, what if multiple sites are using Prototype 1.6? Because browsers cache files according to their URL, there is no way for your browser to realize that it is downloading the same file multiple times. And thus, if you visit 30 sites that use Prototype, then your browser will download prototype.js 30 times.
Today, Google announced a partial solution to this problem that seems obvious in retrospect: Google is now offering the "Google Ajax Libraries API," which allows sites to download five well-known Ajax libraries (Dojo, Prototype, Scriptaculous, Mootools, and jQuery) from Google. This will only work if many sites decide to use Google's copies of the JavaScript libraries; if only one site does so, then there will be no real speed improvement.
There is, of course, something of a privacy violation here, in that Google will now be able to keep track of which users are entering various non-Google Web pages.' Will users adopt this, or is it easy enough to simply host an additional file?"
nothing new here (Score:5, Informative)
Re:No good reason for this... (Score:3, Informative)
Done [json.org]. What's next on your list?
(For those of you unfamiliar with JSON, it's basically a textual representation of JavaScript. e.g. If you trust the server, you can read that with a simple "var obj = eval('('+json+')');". If you don't trust the server, it's still easy to parse with very little code.)
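A minimal sketch of both routes (the variable and field names are just placeholders; json2.js from json.org supplies JSON.parse where the browser lacks it natively):

    var jsonText = '{"name": "example"}';   // stand-in for the server's response
    var obj = eval('(' + jsonText + ')');   // fine only if you trust the server
    var safeObj = JSON.parse(jsonText);     // safer: a real parser (native or via json2.js)
    alert(safeObj.name);                    // -> "example"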
Re:Only a partial solution (Score:1, Informative)
Fun package to work with too.
Re:solution in search of a problem (Score:5, Informative)
Re:solution in search of a problem (Score:2, Informative)
http://code.google.com/apis/ajaxlibs/documentation/index.html#AjaxLibraries [google.com]
Re:Speaking as a JQuery user... (Score:3, Informative)
Re:solution in search of a problem (Score:4, Informative)
Re:Only a partial solution (Score:3, Informative)
"With Google hosting everything," you get to use exactly the version you want - google.load('jquery', '1.2.3') or google.load('jquery', '1.2'), which will get you the highest version '1'.2 available - currently 1.2.6. Furthermore, you can still load your own custom code or modifications after the fact.
For those concerned about privacy: yes they keep stats - they even do some of it client side - after libraries are loaded, a call is made to http://www.google.com/uds/stats [google.com] with a list of which libraries you loaded. However, the loader is also the same exact loader you would use if you were using other Google JavaScript APIs anyways. It started out as a way to load the Search API and the Maps API: google.load('maps', '2') and/or google.load('search', '1').
Google's claim of providing good distributed hosting of compressed and cacheable versions of the libraries aside, the loader does a few useful things in its own right. It deals with versioning, letting you decide to what granularity of version you want to load, and letting them deal with updates. It also sets up a callback function that actually fires after the DOM is loaded in IE, Safari, Opera, and Firefox, and after the entire page has loaded in any other browser. They also provide convenience functions: google_exportSymbol will let you write your code in a non-global scope, and then put the 'public' interfaces into the global scope.
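A rough sketch of that pattern (the library name and version are just examples, google.setOnLoadCallback is assumed to be the loader's ready hook, and the element id is hypothetical):

    <script src="http://www.google.com/jsapi"></script>
    <script>
      // Ask Google's loader for the newest 1.2.x build of jQuery.
      google.load('jquery', '1.2');
      // Run our code once the library (and the DOM) is ready.
      google.setOnLoadCallback(function() {
        $('#status').text('Loaded jQuery ' + $.fn.jquery);  // hypothetical element id
      });
    </script>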
Finally, you can inject your own libraries into their loader. After the jsapi script tag, include your own, set google.loader.googleApisBase to point to your own server, and call google.loader.rpl with a bit of JSON defining your libraries' names, locations, and versions. Subsequent calls to google.load('mylib', 'version') will work as expected.
Re:Cross-Site Scripting by Definition (Score:3, Informative)
Google would only be able to execute Javascript on your user's page if they modified the source code of the library you were loading from them. Which would be a BIG no-no.
(Google does have a "loader" function available, but also just allows you to include the libraries via a traditional script tag to a static URL.)
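In other words, the no-loader route is just a plain, cacheable script tag (the URL here follows the ajax.googleapis.com layout mentioned below, with the version chosen purely as an example):

    <!-- Static include of Google's hosted copy; no loader, nothing else executes. -->
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script>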
Otherwise, cookies are NOT cross-domain and wouldn't be passed with the HTTP request, unless you were silly enough to CNAME your "js.mysite.com" to "ajax.googleapis.com".
Re:dependence on Google is but one problem (Score:5, Informative)
Then you go back to including a script tag in the header of your HTML document. All of these frameworks are free, and they will likely remain free even though some are sponsored (MooTools). The speed improvement exists, but it is moderate to minimal, since a packed full-inclusion build of MooTools runs you about 32KB. That's 32KB the user won't need to download again and again, but it's still just 32KB.
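One sketch of keeping that escape hatch open (not from the article; the local path is hypothetical) is to test for the library after asking Google for it and fall back to your own copy:

    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script>
    <script>
      // If Google's copy didn't load (or the service disappears), fall back to a self-hosted file.
      if (typeof jQuery === 'undefined') {
        document.write('<script src="/js/jquery-1.2.6.min.js"><\/script>');  // hypothetical local path
      }
    </script>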
Even if the code is compressed and obfuscated, this "wrapped functionality" would become *glaringly* obvious to any javascript programmer using Firebug. The news would most likely spread like wildfire. Especially on /.
The only thing they could track without really being monitored is which IP addresses are requesting the mooTools.js (for example) file. They could measure its popularity, but not *necessarily* which sites those users are visiting. Granted, I haven't looked at the API yet, but if it's just a .js file hosted on a Google server, there isn't really much they can monitor that they don't already. Analytics provides them with *tons* more information. To be honest, this just seems like a professional courtesy.
There is a key point here I hope was visible - JavaScript is totally load-on-delivery, executed client side. They can't really sneak much past you, and any effort to do so would still be visible in some way (you would see it in the Net traffic).
Interesting, but I haven't seen Google hacked yet - not saying it can't happen, but I've not seen it. There is more of a chance of someone intercepting the expected .js file and passing a different one - however, keep in mind that if you are doing *anything* that requires any level of security whatsoever in JS, well, you have other, deeper fundamental problems.
It is a dependency, but it's not that scary. Open source projects have always been, for better or worse, more or less reliant on outside momentum to keep them going. Joomla has conferences (that you have to pay to attend), MySQL is owned by Sun, Ubuntu has pay-for support. The fact that Google is willing to pay for kids to work on open source projects of their choosing (granted, from Google's selection) is not necessarily a form of control above and beyond the ordinary influence of capital. If I had millions of dollars, I would probably donate to open source projects myself - and they might be ones of my choosing - but I couldn't really consider myself to be controlling them so much as helping them grow.
This is really nothing more than a professional courtesy offered by Google. They are right - it's dumb for everyone to download the same files over and over again.
Furthermore, someone made a post earlier talking about JS programmers needing to learn how to use "modularity". We
Re:Dependency hell? (Score:3, Informative)
Re:solution in search of a problem (Score:3, Informative)
No, seriously: as far as I can tell, nothing Google does is unrelated to data mining. What am I missing?
Re:Umm, no (Score:5, Informative)
Your grasp of the web sucks. Here's what happens on the second page you load on that site: the browser fetches the new HTML, and the JavaScript and images come straight out of its cache without being requested again.
I use maybe 20KB of JavaScript in parts of my site. Why tack an extra 20KB onto each and every pageload, meaning that each takes about another 4 seconds for someone on dialup? To satisfy the screwed-up sense of purity for some premature optimization fan who doesn't really understand the issues involved? No thanks. My site is optimized for real conditions.
Re:Umm, no (Score:5, Informative)
Nope. You're flat-out, demonstrably wrong. Try watching an Apache log sometime. You see a visitor load a page, all its JavaScript, and all of its images. Then you see them load another page, and this time they only fetch the new HTML. There are no other GETs or HEADs - just the HTML.
Of course not. The issue is whether it's a good idea (it's not), not whether it's easy (it is).
Re:Umm, no (Score:3, Informative)
You have another one called "spyware", or perhaps "rootkit". Your experiment, conducted here on Ubuntu 8.04 with Wireshark 1.0.0, Firefox 3.0b5, and Konqueror 3.5.9, shows exactly the results I described and nothing resembling the results you invented to "prove" your point.
304s would show up in my Apache logs, but they don't. Of course not: my browsers aren't making the conditional requests that would produce them.
As they incorrectly pointed out. Let's add some more facts to the discussion.
First, this (almost never incurred) overhead is much smaller than inlining proponents want to claim: a conditional GET that comes back 304 Not Modified is just a header exchange, well under a kilobyte, rather than a re-download of the whole script.
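For illustration, the entire "overhead" is a hypothetical exchange along these lines (host, path, and dates invented for the example):

    GET /js/prototype.js HTTP/1.1
    Host: www.example.com
    If-Modified-Since: Sat, 24 May 2008 00:00:00 GMT

    HTTP/1.1 304 Not Modified
    Date: Tue, 27 May 2008 12:00:00 GMT

No 20KB body crosses the wire; the browser just keeps using its cached copy.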
Second, given an average HTML size of 20KB, an average JavaScript size of 20KB, and a 56K modem (which will get about 5KB/s on a good day), loading n pages will take:
time(inline) = (40 * n) / 5, or about 80 seconds for 10 pages
time(external) = ((20 * n) + 20) / 5, or about 44 seconds for 10 pages
time(external + fictional 304 overhead) = (((20 + .6) * n) + 20) / 5, or about 45.2 seconds for 10 pages
Care to explain which part of that makes inline JavaScript faster, particularly for the dialup users y'all are claiming to save from the evils of external files?
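Plugging the same numbers into a quick sketch (same assumptions as above: 20KB of HTML and 20KB of JavaScript per page, 5KB/s, and roughly 0.6KB for a conditional-GET round trip) reproduces those figures:

    // Seconds to load n pages at 5 KB/s, with 20KB of HTML per page and 20KB of JS total.
    function inlineTime(n)   { return (40 * n) / 5; }                 // JS re-sent with every page
    function externalTime(n) { return ((20 * n) + 20) / 5; }          // JS fetched once, then cached
    function with304Time(n)  { return (((20 + 0.6) * n) + 20) / 5; }  // cached, plus a 304 check per page

    console.log(inlineTime(10), externalTime(10), with304Time(10));   // 80, 44, 45.2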
Re:solution in search of a problem (Score:2, Informative)
Re:solution in search of a problem (Score:2, Informative)