Google To Host Ajax Libraries 285
ruphus13 writes "So, hosting and managing a ton of Ajax libraries, even when working with MooTools, Dojo, or Scriptaculous, can be quite cumbersome, especially as they get updated along with your code. In addition, several sites now use these libraries, and the end-user has to download the library each time. Google will now provide hosted versions of these libraries, so sites can simply reference Google's hosted copy. From the article, 'The thing is, what if multiple sites are using Prototype 1.6? Because browsers cache files according to their URL, there is no way for your browser to realize that it is downloading the same file multiple times. And thus, if you visit 30 sites that use Prototype, then your browser will download prototype.js 30 times.
Today, Google announced a partial solution to this problem that seems obvious in retrospect: Google is now offering the "Google Ajax Libraries API," which allows sites to download five well-known Ajax libraries (Dojo, Prototype, Scriptaculous, Mootools, and jQuery) from Google. This will only work if many sites decide to use Google's copies of the JavaScript libraries; if only one site does so, then there will be no real speed improvement.
There is, of course, something of a privacy violation here, in that Google will now be able to keep track of which users are entering various non-Google Web pages.' Will users adopt this, or is it easy enough to simply host an additional file?"
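For reference, switching to the hosted copies amounts to pointing a script tag at Google's servers instead of your own. Something along these lines (the jQuery path and version here are illustrative; use whatever Google actually hosts):

<script type="text/javascript"
        src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script>
<script type="text/javascript">
  // Your own code loads afterwards and uses the (hopefully already cached) library.
  $(document).ready(function() {
    $("body").addClass("js-ready");
  });
</script>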
solution in search of a problem (Score:5, Insightful)
Re:solution in search of a problem (Score:5, Interesting)
This is very similar to the purpose of the already-existing google-analytics.com. I block this site in my hosts file (among others) and I take other measures because I feel that if a corporation wants to take my data and profit from it, they first need to negotiate with me. Since Google is not going to do that, I refuse to contribute my data. To the folks who say "well how else are they supposed to make money" I say that I am not responsible for the success of someone else's business model, they are free to deny me access to their search engine if they so choose, and I would also point out that Google is not exactly struggling to turn a profit.
The "something of a privacy violation" mentioned in the summary seems to be the specific purpose.
Re: (Score:3, Interesting)
Re: (Score:2)
Re:solution in search of a problem (Score:5, Interesting)
The file is indeed Javascript and it's called "urchin.js" (nice name eh?). Personally, I use the hosts file because I don't care to even have my IP address showing up in their access logs. This isn't necessarily because I think that would be a bad thing, but it's because I don't see what benefit there would be for me and, as others have mentioned, the additional DNS query and traffic that would take place could only slow down the rendering of a given Web page.
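For anyone who wants to do the same, the hosts-file trick is just pointing the analytics hostnames at localhost (Unix-style /etc/hosts shown; on Windows the file lives under system32\drivers\etc):

# /etc/hosts -- send Google Analytics lookups nowhere
127.0.0.1   www.google-analytics.com
127.0.0.1   ssl.google-analytics.com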
I also use NoScript, AdBlock, RefControl and others. RefControl is nice because the HTTP Referrer is another way that sites can track your browsing; before Google Analytics it was common for many pages to include a one-pixel graphic from a common third-party host for this reason. Just bear in mind that some sites (especially some shopping-cart systems) legitimately use the referrer so you may need to add those sites to RefControl's exception list in order to shop there, as the default is to populate the referrer with the site's own homepage no matter what the actual referrer would have been.
Re: (Score:2)
Re:solution in search of a problem (Score:4, Informative)
Re: (Score:3, Interesting)
However, it's not just Google that's grabbing that kind of information. Anyone
Re: (Score:3, Interesting)
Everything from your age, race, language, skin colour, religion, marital status, where you work, what you drive, your education level, your income level, where you went to school, who your friends are, where you vacation, where you shop, your taste in movies, your taste in porn, your taste in books, your politics, whether you have kids... they might even know your face.
It's a surveillance wet dream.
It's not that they are specifically tracking *you*; they are tracking your 'type'. It's even more insidious. Ever think about the case where it *is* true? That you are that predictable, and actually do fit pretty close to one of a handful of templates? It's a truism that we are individuals, but what if that is less the case than instinct and pride want to allow?
Re:solution in search of a problem (Score:5, Interesting)
We use analytics. We use it almost exclusively to improve the experience of our customers. We don't care how many people come to our site. We care how many buy... and we have internal reports for that. What we do care about is:
How many people are not using IE. (We found it was worth making sure all pages worked on most all of them.)
How many people are at 1280*1024 or over.
We dropped the notion that we needed to program for 800*600, thereby letting people use more of those big ass screens they buy.
Where are most of the people located?
We now have an east coast and west coast server.
What pages are most viewed?
We made them easier to get to.
Who doesn't have flash?
It was 2.08%, but I'm still not going to let them put flash on my site.
Re: (Score:2)
Re:solution in search of a problem (Score:5, Insightful)
The bandwidth cost should be small since Google uses these libraries already and the whole idea is to improve browser caching. The maintenance cost of hosting static content shouldn't be that high, either. I mean, really.
Since the labor, hardware, and bandwidth costs all seem to be low, Google wouldn't be under pressure to make the investment pay. Google hosts lots of things that don't benefit them directly and from which they gain no real advantage except image. Despite being a data-mining machine, Google does a lot of truly altruistic stuff.
Re:solution in search of a problem (Score:5, Insightful)
Low cost != no cost. While you definitely have a point about their corporate image, I can't help but say that recognizing a company as a data-mining machine as you have accurately done, and then assuming (and that's what this is, an assumption) an altruistic motive when they take an action that has a strong data-mining component, is, well, a bit naive. I'm not saying that altruism could not be the case and that profit must be the sole explanation (that would also be an assumption); what I am saying is that given the lack of hard evidence, one of those is a lot more likely.
Re: (Score:2)
Like what?
No, seriously: as far as I can tell, nothing Google does is unrelated to data mining. What am I missing?
Re: (Score:3, Informative)
No, seriously: as far as I can tell, nothing Google does is unrelated to data mining. What am I missing?
Re: (Score:2)
Google bought Sketch Up in order to drive people to Google Maps. Although it's technically true that you don't have to use Google Maps with Sketch Up, Google's goal is to make you want to.
Re:solution in search of a problem (Score:5, Insightful)
You may refuse to give them your data, but if I had the ability, Apache would refuse to give you my data until you eased off on the attitude.
Re: (Score:3, Funny)
Re: (Score:2, Interesting)
Re:solution in search of a problem (Score:5, Insightful)
When you visit a website, the site owner is well within their rights to record that visit. To assert otherwise is an extremist view that needs popular and legislative buy-in before it can in any way be validated. The negotiation is between Google and website owners.
If you want to think of your HTTP requests as your data, then you'd probably best get off the Internet entirely. No one is ever going to pay you for it.
Also:
Red herring. No one says that. No one even thinks about that. Frankly there are far more important privacy concerns out there than the collection of HTTP data.
Re: (Score:2)
In what way was I dramatic? I dislike the side-effects of a business model, so I choose not to contribute to what I dislike. I wasn't claiming that any of this is the end of the world or some significant threat to society (which would constitute being dramatic), I was explaining how and why I choose not to participate.
There's less difference than you think (Score:2)
We are not talking about the HTTP access logs of a site that I visit. We are talking about data shared with third parties for marketing purposes. This data does not materialize out of thin air; it requires my participation. So long as this is the case, I am well within my rights to decline to participate. To he who claims I used a red herring, please do explain what's wrong with that?
The red herring is whether you're actually preventing any use of data. You're concerned that GA shares the data with Google as well as the site owner. But once your visit is logged at the server, the site owner can share that data with whoever the heck they feel like. The data in HTTP server logs are widely understood to be the property of the site owner, and they are free to do whatever they want with them.
The only difference is that it would not be obvious to you when it happens. If you think otherwise yo
Re: (Score:3, Insightful)
Yup. I have no way to stop them, afterall.
Nope. If the website owner wanted to transmit information to Google, he can do so by having his server contact Google, or by dumping his logs to Google.
Instead, if the website owner sends code to my browser to give information to Google, I am within my rights to refuse to do so.
Alternatively, the website owner in qu
Re:solution in search of a problem (Score:5, Informative)
Re: (Score:2)
How valuable is one data point per year? (Score:4, Insightful)
In fact, the cache headers specify that the JS libs don't expire for A YEAR, so Google will only see the first site you visit with X library Y version in an entire year.
Is this information really that valuable?
Mind you, this assumes you're hard-coding the google-hosted URLs to the JS libs, instead of using http://www.google.com/jsapi [google.com] -- but that's a perfectly valid and supported approach.
If you use their tools to wildcard the library version, etc. etc. then they get a ping each time that JSAPI script is loaded (again, no huge amount of info for them, but still you can decide whether you want the extra functionality or not).
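Roughly, the two styles look like this (module names and version strings follow Google's docs; treat the exact paths as a sketch):

<!-- Style 1: hard-coded static URL, cacheable for a year -->
<script src="http://ajax.googleapis.com/ajax/libs/prototype/1.6.0.2/prototype.js"></script>

<!-- Style 2: the jsapi loader, which gets a ping from your visitors on every page load -->
<script src="http://www.google.com/jsapi"></script>
<script type="text/javascript">
  google.load("prototype", "1.6");  // wildcard: loads the latest available 1.6.x
</script>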
Network Caching Makes this Moot (Score:2, Interesting)
Moreover, if this idea catches on, Web browsers will begin shipping with these well-known URIs preinstalled, perhaps even with optimized versions of the scripts that cut out all the IE6 cruft. What is really needed to make this work is a high bandwidth, high availabil
Re: (Score:2)
The idea, as far as I can tell, is to improve browser caching, not just distribution.
If a lot of sites that use prototype.js all refer to it by the same URL, chances are greater that a given client will already have cached the file before it hits your site. Therefore, you don't have to serve as much data and your users don't have to keep dozens of copies of the same file in their caches, and sites load marginally faster for everyone on the first hit only.
Plus Google gets even more tracking data with
Re: (Score:2)
If you're a web site worried about javascript library hosting, caching and such, this will help, a bit. Mostly to banish an annoyance.
If, on the other hand, you're a famous search engine who'd love to know more about who uses what javascripting libraries on which sites
Re:solution in search of a problem (Score:5, Insightful)
Someone could even put up a site called googlenoise.com or whatever, with the sole purpose of loading the useful versions of the library into the cache from the same place.
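Such a page wouldn't need to be much more than a handful of script tags pulling the popular versions from the canonical URLs (the versions here are guesses at what Google currently hosts):

<!-- pre-warm the browser cache with the commonly referenced copies -->
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script>
<script src="http://ajax.googleapis.com/ajax/libs/prototype/1.6.0.2/prototype.js"></script>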
Re: (Score:2)
Re: (Score:2, Informative)
http://code.google.com/apis/ajaxlibs/documentation/index.html#AjaxLibraries [google.com]
Re: (Score:2)
One thing I did notice -- that API JS has headers to avoid caching, so I think it will be reloaded on each page.
So I certainly have no plans to use it... but the straight-access URLs are easy to find and the concept is very good, so I'll probably adopt that!
http://code.google.com/apis/ajaxlibs/documentation/#AjaxLibraries [google.com]
Re: (Score:2)
Sure, but this is *cached* content (Score:2)
This is different from analytics and ad servers for that reason -- those are NEVER cached because they want every browser to hit them, every time.
Re: (Score:2, Insightful)
No good reason for this... (Score:2, Insightful)
If you want to improve the speed of downloading, how about removing the 70% of the code that just encodes/decodes XML, and using simple, efficient delimiters instead? I was a fan of Xajax, but I had to re-write it from scratch... XML is too verbose when you control both endpoints.
It is not a problem to host an additional file, and this only gives Google more information than they need... absolutely no good reason for this.
Re: (Score:2)
Re: (Score:3, Informative)
Done [json.org]. What's next on your list?
(For those of you unfamiliar with JSON, it's basically a textual representation of JavaScript. e.g.
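a hypothetical record, say:

var json = '{ "user": "ruphus13", "comments": 42, "subscriber": false }';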
If you trust the server, you can read that with a simple "var obj = eval('('+json+')');". If you don't trust the server, it's still easy to parse with very little code.)
Re: (Score:2)
And if you still want to use jQuery for other JavaScript interface joy, it can handle JSON natively [jquery.com]. (Other frameworks probably do too, I just happen to be a fan of jQuery.)
Re: (Score:2)
Actually this [jquery.com] is a better example.
Don't use "eval" in Javascript for input (Score:2)
This was a dumb feature in Javascript. In LISP, there's the "reader", which takes in a string and generates an S-expression, and there's "eval", which runs an S-expression through the interpreter. The "reader" is safe to run on hostile data, but "eval" is not. In Javascript, "eval" takes in a string and runs it as code. Not safe on hostile data.
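A rough sketch of the difference, assuming the json2.js parser from json.org is loaded (browsers don't ship a native JSON.parse yet):

// Unsafe: runs whatever the server sent, code and all.
var obj = eval('(' + json + ')');

// Safer: json2.js validates the string and throws on anything that isn't plain data.
try {
  var obj2 = JSON.parse(json);
} catch (e) {
  // reject malformed or malicious input instead of executing it
}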
JSON is a huge security hole if read with "eval". Better libraries try to wrap "eval" with protective code that looks for "bad stuff" in the input. Some such
Well doh (Score:4, Insightful)
Re: (Score:2)
Re: (Score:2)
A bit of code (unless I'm missing something) is going to be smaller than your average image. What's the gain?
Other than for google of course.
Only a partial solution (Score:5, Insightful)
Re: (Score:2)
That isn't too much of a problem. You can include the Google version first and then override any function or object by simply redeclaring it.
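A minimal sketch of that pattern (someLibraryFunction is a placeholder, not a real Prototype symbol):

<script src="http://ajax.googleapis.com/ajax/libs/prototype/1.6.0.2/prototype.js"></script>
<script type="text/javascript">
  // Later definitions win: this replaces the library's version of the function.
  window.someLibraryFunction = function() {
    // your customized behavior here
  };
</script>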
Re:Only a partial solution (Score:5, Insightful)
Maybe it is possible to get TOO modular. Several problems with that:
1. With many little files comes many little requests. If the web server is not properly set up, then the overhead these individual requests causes really slows the transmission of the page. Usually, it's faster to have everything in one big file than to have the same number of kilobytes in many smaller files.
2. From a development point of view, I use several JS bits that require this or that library. I don't know why or what functions it needs. And I really don't care; I have my own stuff I want to worry about. I don't want to go digging through someone else's code (that already works) to figure out what functions they don't need.
3. If I do custom work where file size is a major factor or if I only use one function from the library, I guess then I'll just modify as I see fit and host on my own site.
I think what Google is doing is great, but I can't really use it for my sites (they're all secure). So unless I want that little warning message to come up, I won't be using it.
Re: (Score:3, Interesting)
But more specifically, my sites are military and all the content must come from trusted military web servers (better if it's the same one for all the content).
Beware the overhead. (Score:5, Insightful)
Additionally, if you're using compression, it is likely that one large file will compress more effectively than a collection of smaller files. (You *are* using compression, aren't you?)
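For Apache users that usually means mod_deflate; a single line in the server config or a .htaccess does it (assuming the module is enabled):

# Compress HTML, CSS and JavaScript responses on the fly
AddOutputFilterByType DEFLATE text/html text/css application/x-javascript application/javascript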
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Informative)
"With Google hosting everything," you get to use exactly the version you want - google.load('jquery', '1.2.3') or google.load('jquery', '1.2'), which will get you the highest version '1'.2 available - currently 1.2.6. Furthermore, you can still load your own custom code or modifications after the fact.
For those concerned about privacy: yes they keep stats - they e
Re: (Score:2)
It exists. It's called mooTools. The Javascript programmer can decide which functions/objects/classes they need on any individual web page and download a packed version of the library that suits only those particular needs.
nothing new here (Score:5, Informative)
Yabbut (Score:3, Interesting)
I like Google too -- and this is nice of them -- but I like the idea of a website being as self-sufficient as possible (not relying on other servers, which introduce extra single-points-of-failure into the process.)
At the risk of sounding like an old curmudgeon, whatever happened to good ol' HTML?
dependence on Google is but one problem (Score:5, Interesting)
Yeah, but what if Google decides that nobody is using these -- or they can't legally host them for whatever reason -- or they just decide that they don't want to do this anymore?
Think broader. What happens when:
But, yes- you're right. This is a scary new dependency. For a company full of PhD geniuses supposedly Doing No Evil, nobody at Google seems to understand how dangerous they are to the health of the web. In fact, I'd suggest they do, and they don't care- because they seem hell-bent on making everything on the web touch/use/rely upon Google in some way. This is no exception.
A lot of folks don't even realize how Google is slowly getting open-source projects to rely on it, too (with Google Summer of Code).
Re: (Score:3, Insightful)
I'm *so* scared.
Google is supporting web apps and offering to host the nasty boring bits that need strong caching. How very evil of them.
And if Google is hacked, we're ALL screwed a hundred different ways. The average web developer *using* these libraries is more likely to have a vulnerable server than Google.
Re:dependence on Google is but one problem (Score:5, Informative)
Then you go back to including a script tag in the header of your HTML document. All of these frameworks are free. They will likely remain free even though some are sponsored (mooTools). The speed improvement exists, but is moderate-to-minimal, since a packed full-inclusion library of mooTools runs you about 32kb. That's 32kb the user won't need to download again and again, but it's just 32kb.
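In practice you can even guard against that automatically. A common fallback pattern, sketched here with jQuery and a hypothetical local path (/js/jquery.min.js stands in for wherever you host your own copy):

<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script>
<script type="text/javascript">
  // If the Google copy failed to load, fall back to a locally hosted one.
  if (typeof jQuery === "undefined") {
    document.write('<script src="/js/jquery.min.js"><\/script>');
  }
</script>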
Even if the code is compressed and obfuscated, this "wrapped functionality" would become *glaringly* obvious to any javascript programmer using Firebug. The news would most likely spread like wildfire. Especially on /.
The only thing they could track without really being monitored is which IP addresses are requesting the mooTools.js (for example) file. They could measure its popularity, but not *necessarily* what sites they are visiting. Granted, I haven't looked at the API yet, but if it's just a .js file hosted on a Google server, there isn't really too much they can monitor that they don't already. Analytics provides them with *tons* more information. To be honest, this just seems like a professional courtesy.
There is a key point here I hope was visible - JavaScript is totally Load on Delivery, Executed Client Side. They can't really sneak much past you, and any efforts to do so would still remain visible in some way (you would see NET traffic).
Interesting, but I haven't seen Google hacked yet - not saying it can't happen, but I've not seen it. There is more of a chance of someone intercepting the expected .js file and then passing a different one; however, keep in mind that if you are doing *anything* that requires any level of security whatsoever in JS, well, you have other, deeper fundamental problems.
It is a dependency, but it's not that scary. Open source projects have always been, for better or worse, more or less reliant on outside momentum to keep them going. Joomla has conferences (that you have to pay to see), MySQL is powered by Sun, Ubuntu has pay-for-support. The fact that Google is willing to pay for kids to work on open source projects of their choosing (and, granted, Google's selection) is not necessarily a form of control above and beyond the influence of capital. If I had millions of dollars, I would probably donate to open source projects myself - and they may be ones of my choosing - but I probably couldn't consider myself controlling them as much as helping them grow.
This is really nothing more than a professional courtesy offered by Google. They are right - it's dumb for everyone to download the same files over and over again.
Furthermore, someone made a post earlier talking about JS programmers needing to learn how to use "modularity". We
Re: (Score:3, Insightful)
If Google suddenly decides to stop hosting these, or touches them in some fashion, it's going to get discovered and fixed in well under 24 hours. Google hosting a file like this means that there are going to be a *lot* of eyes on the file.
Google is, as it currently stands, far from "dangerous to the health of the web". Outside of using their webmail fairly heavily, I could avoid using google quite easily- as could any other
I won't adopt (Score:5, Insightful)
Comment removed (Score:3, Insightful)
Re: (Score:3, Informative)
You mean like YUI does? (Score:3, Interesting)
I didn't see no slashdot article when yahoo put up hosted YUI packages [yahoo.com] served off their CDN.
I guess it's because google is hosting non-google libraries?
Yahoo does this already... (Score:3, Interesting)
Host it yourself, add meta-tag (Score:3, Interesting)
Eg:
<script type="text/javascript" src="prototype.js" origin="http://www.prototype.com/version/1.6/" md5="..............."></script>
When another site wants to use the same lib, it can use the same origin, and the browser will not download it again from the new site. It's crucial to use the md5 (or another method), which the browser must calculate the first time it downloads the file. Otherwise it would be easy to create a bogus file and get it run on another site.
Of course this approach is only as secure as the hash.
The web needs content addressable links! (Score:2)
The web really needs some sort of link to a SHA-256 hash or something. If that kind of link were allowed ubiquitously it could solve the Slashdot effect and also make caching work really well for pictures, Ajax libraries and a whole number of other things that don't change that often.
Re: (Score:2)
I wish I could go back and edit my post...
It would also solve stupid things like Netscape ceasing to host the DTDs for RSS.
Probably a pretty cool idea (Score:2)
Google, of course, gets even more information about everyone.
Win-win, except for us privacy people. I guess we have to trust "do no evil," huh?
what a piece of nonsense (Score:2)
Thanks, but I prefer that my site works even if some other site I have nothing to do with is unreachable today. Granted, Google being unreachable is unlikely, but think about offline copies, internal applications, and all the other perfectly normal things that this approach suddenly turns into special cases.
Re: (Score:2)
Umm, no (Score:4, Interesting)
Second, I've always had this complaint with the whole external javascript files. When you're already downloading a 50K html page, another 10K of javascript code in the same file inline downloads at full-speed. The external file requires yet another hit to the server, and everything involved therein. It almost never makes any sense. Even as a locally cached file, on a broadband connection, downloading the extra 10K is typically faster than opening and reading the locally cached file!
But still, hosting a part of your corporate web-site with google simply breaches most of your confidentiality and non-disclosure agreements that you have with your clients and suppliers. It's that simple. Find the line that reads "shall not in any way disclose Confidential Information to any third party at any time, including consultants and contractors, copy and/or merge the Confidential Information/business relationship with any other technology, software or materials, except contractors with a specific need to know . .
Simply put, if your Confidential client conversations go over gmail, you're in breach. If google tracks/monitors/sells/organizes/eases your business with your clients or suppliers, you're in breach -- i.e. it's illegal, and your own clients/suppliers can easily sue you for giving google their trade secrets.
Obviously it's easier to out-source everything and do nothing. But there's a reason that google and other such companies offer these services for free -- it's free as in beer, at the definite cost of every other free; and it's often illegal for businesses.
Re: (Score:2)
Second, I've always had this complaint with the whole external javascript files. When you're already downloading a 50K html page, another 10K of javascript code in the same file inline downloads at full-speed. The external file requires yet another hit to the server, and everything involved therein. It almost never makes any sense. Even as a locally cached file, on a broadband connection, downloading the extra 10K is typically faster than opening and reading the locally cached file!
That's not the reason that I generally use external JavaScript files. The reason is code reuse, pure and simple. Generally speaking it's far easier to just link to the file (especially for static HTML pages) than it is to try and inline it. That way when you fix a bug that affects Internet Explorer 6.0.5201 on Windows XP SP1.5 or whatever, you don't have to copy it to all your static files, as the code is in a single location.
Sure, you could use server-side includes, but then you need to make sure that yo
Re:Umm, no (Score:5, Informative)
Your grasp of the web sucks. Here's what happens on the second page you load on that site:
I use maybe 20KB of JavaScript in parts of my site. Why tack an extra 20KB onto each and every pageload, meaning that each takes about another 4 seconds for someone on dialup? To satisfy the screwed-up sense of purity for some premature optimization fan who doesn't really understand the issues involved? No thanks. My site is optimized for real conditions.
Re:Umm, no (Score:5, Informative)
Nope. You're flat-out, demonstrably wrong. Try watching an Apache log sometime. You see a visitor load a page, all its JavaScript, and all of its images. Then you see them load another page, and this time they only fetch the new HTML. There are no other GETs or HEADs - just the HTML.
Of course not. The issue is whether it's a good idea (it's not), not whether it's easy (it is).
Re: (Score:3, Informative)
However, I do have this piece of software called a "web browser" and another called a "packet sniffer".
You have another one called "spyware", or perhaps "rootkit". Your experiment, conducted here on Ubuntu 8.04 with Wireshark 1.0.0, Firefox 3.0b5, and Konqueror 3.5.9, shows exactly the results I described and nothing resembling the results you invented to "prove" your point.
Oh, hey, look, it DOES make a request for external JavaScript files EVERY SINGLE PAGE LOAD, just like I said it does! Sure, it gets a 304 response, but you've still got extra overhead and latency for NOTHING.
304s would show up in my Apache logs, but they don't. Of course not: my browsers aren't making the conditional requests that would produce them.
And as someone else pointed out, it's faster to just pull it from the page than to load it from the cache in the first place.
As they incorrectly pointed out. Let's add some more facts to the discussion.
First, this (almost never incurred) overhead is
Re: (Score:3, Insightful)
Something that may be affecting the differing results you two are seeing is that the call to check if a file has been modified is browser and user settings dependent.
In fairness to AC, he may also be connecting to some ancient or broken server that doesn't support the "Cache-Control: max-age" or "Expires:" headers. If that's the case, or if he's running a noncompliant browser that improperly handles those, then it's possible that he's making a lot more requests than necessary.
Either way, it's still a problem between his browser and that server, and not a problem with HTTP in general.
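If the server in question is Apache, the fix is usually a couple of mod_expires lines (assuming the module is enabled; the one-year figure mirrors what Google sets on its hosted copies):

# Let browsers cache static JavaScript for a year without revalidating
ExpiresActive On
ExpiresByType application/x-javascript "access plus 1 year"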
Re: (Score:2)
The external file requires yet another hit to the server, and everything involved therein. It almost never makes any sense.
To the end user, you are right. However, from a web developer/programming standpoint, it actually does make sense. It is all about modular use of code -- when you write C/C++ programs, do you incorporate all of the code in-line, or do you reference standard libraries to handle some of the common functions? You use standard libraries of course, because it makes your code easier to maintain. If you need to update a function, you change it in one place (the external library) and voila! All of your code
Re: (Score:2)
Re: (Score:3, Interesting)
It almost always makes sense. The external file only requires another hit to the server the first time you see it. From that point on, every page hit is smaller in size b
What is confidential about HTTP GET? (Score:2)
But still, hosting a part of your corporate web-site with google simply breaches most of your confidentiality and non-disclosure agreements that you have with your clients and suppliers. It's that simple. Find the line that reads "shall not in any way disclose Confidential Information to any third party at any time, including consultants and contractors, copy and/or merge the Confidential Information/business relationship with any other technology, software or materials, except contractors with a specific need to know . . ."
There is a legal definition of "confidential information" to be satisfied if one were to actually pursue a breach of contract. I do not see how HTTP GET requests could possibly satisfy that. Google would see only a requesting IP and what file was served (a standard UI library). This is not substantially different from what any intermediary on the public Internet would see as the packets passed through.
If you and I have an NDA, and I place a call to you from my cell phone, the mere existence of that call do
I gotta ask.. (Score:2)
google or not-- emails pass through third parties all the time.
Cross-Site Scripting by Definition (Score:4, Insightful)
For many common sites that aren't processing sensitive information however, sharing this code is probably a very good idea. Even better would be if google provided a signed version of the code so that you could see if it has been changed.
Re: (Score:3, Informative)
Google would only be able to execute Javascript on your user's page if they modified the source code of the library you were loading from them. Which would be a BIG no-no.
(Google does have a "loader" function available, but also just allows you to include the libraries via a traditional script tag to a static URL.)
Otherwise, cookies are NOT cross-domain and wouldn't be passed with the HTTP request, unless you were silly enough to CNAME your "js.mys
Re: (Score:2)
Re:Cross-Site Scripting by Definition (Score:4, Insightful)
The library developers would have their hide if they attempted such a thing!
And I'm NOT wrong about cookies. Your site's cookies are not sent in the HTTP request; they would only be accessible via JavaScript -- and again, without Google modifying the source code of these libraries to be malicious, they wouldn't be privy to those cookies.
Not that cookies store anything useful these days... almost everyone with serious server-side code uses server-side persisted sessions, so the only cookie is the session ID.
Re: (Score:2)
The difference between this and urchin, adsense, etc, is that the specific scripts you use are defined ahead of time. If they serve anything other than jQuery or whatever, then they are almost certainly in breach of many laws across the world, e.g. the Computer Misuse Act in the UK. When you reference jQuery on their systems, y
This is a great idea... (Score:2)
http://www.tallent.us/blog/?p=7
This will enable web developers to support richer, cross-browser apps without the full "hit" of additional HTTP connections and bandwidth.
Users gain the benefit of faster rendering on every site that uses these libraries--both due to proper caching, and because their browser can open more simultaneous HTTP connections.
If Google goes down, change your header/footer scripts. BFD.
In an age where Flash/Silverlight/etc. are supposed to be t
Dependency hell? (Score:4, Insightful)
As a commenter there noted, it's a much better idea to use version-specific URIs, allowing users to choose the versions they wish to use -- otherwise version mismatches will occur between user apps and the Google-hosted libs, creating bugs and the classic 'dependency hell' that would be familiar to anyone who remembers the days of 'DLL hell'.
Re: (Score:3, Informative)
Summary has tone of innovation (Score:2)
Re:Couldn't be... (Score:5, Insightful)
Yes, you've gotta be careful with those incompetent sysadmins that Google are hiring.
After all, they're constantly getting the servers hacked.
Re: (Score:3, Insightful)
You are not thinking this through!
Re: (Score:2, Insightful)
Re: (Score:2, Interesting)
http://www.radaronline.com/from-the-magazine/2007/09/google_fiction_evil_dangerous_surveillance_control_1.php
Re: (Score:3, Funny)
Re: (Score:2)
Re: (Score:3, Interesting)
And, frankly, they could do a lot more dangerous (and easy pay-off) things than just redirect requests for a Javascript library--such as redirecting ebay, paypal, etc. to a phishing site.