Yahoo's YSlow Plug-in Tells You Why Your Site is Slow 103
Stoyan writes "Steve Souders, performance architect at Yahoo, announced today the public release of YSlow — a Firefox extension that adds a new panel to Firebug and reports a page's performance score, in addition to other performance-related features. Here is a review, plus helpful tips on how to make the scoring system match your needs."
/. gets a D (Score:5, Funny)
Re:/. gets a D (Score:5, Funny)
Re:/. gets a D (Score:5, Interesting)
Re: (Score:2)
That seems silly. Isn't one of the advantages of having a separate CSS file that you reduced redundancy across multiple pages? Sure, it's an additional file to load - the first time.
Re:/. gets a D (Score:4, Interesting)
Something I would love to see is some of the headers condensed by the browser and server. For instance, on the first request the browser sends the full headers. In the reply headers, the server would set an X-SLIM-REQUEST header with a unique ID that represents that browser configuration's set of optional headers (Accept, Accept-Language, Accept-Encoding, Accept-Charset, User-Agent, and other static headers). Further requests from that browser would then simply send the X-SLIM-REQUEST header with the unique ID, and the server would handle unpacking it -- if the headers are even needed. Servers that don't supply the header would continue to receive full requests, preserving full backward and forward compatibility.
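A minimal sketch of how a server might implement the proposed scheme (the X-SLIM-REQUEST header name comes from the suggestion above; the hashing, in-memory storage, and function names are illustrative):

    # Sketch of the proposed X-SLIM-REQUEST scheme. The header name is from
    # the comment above; everything else here is illustrative.
    import hashlib

    CONDENSABLE = ("Accept", "Accept-Language", "Accept-Encoding",
                   "Accept-Charset", "User-Agent")
    _known_sets = {}  # unique ID -> the static headers it stands for

    def condense(request_headers):
        """First request: remember the static headers, hand back an ID."""
        static = {h: request_headers[h] for h in CONDENSABLE
                  if h in request_headers}
        uid = hashlib.sha1(
            repr(sorted(static.items())).encode()).hexdigest()[:12]
        _known_sets[uid] = static
        return uid  # server sends this back as X-SLIM-REQUEST

    def expand(request_headers):
        """Later requests: unpack the ID back into full headers, if known."""
        uid = request_headers.get("X-SLIM-REQUEST")
        if uid in _known_sets:
            merged = dict(_known_sets[uid])
            merged.update(request_headers)
            return merged
        # Unknown ID or absent header: fall back to the full request.
        return request_headers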
There are a few things you can do to reduce request and response sizes for web applications. mod_asis is one of the best. We use it as one of the last steps of our deployment process: all images are read in via script, compressed if they are over a certain threshold, and minimal headers are added. Apache then delivers them as-is -- reducing load on Apache as well as the network (the only things Apache adds are the Server: and Date: lines). ETags and Last-Modified dates are calculated in advance. Also, for certain responses, such as simple HTTP redirects (Location:), gzip isn't used -- gzipping actually *adds* to the size because the documents are so small.
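A rough sketch of that kind of build step, assuming mod_asis semantics (the served file carries its own headers, and Apache appends only Server: and Date:); the threshold, file names, and content type here are illustrative:

    # Hypothetical build step in the spirit of the deployment process
    # described above: wrap each image in minimal precomputed headers so
    # Apache's mod_asis can serve the file verbatim.
    import gzip, hashlib, os
    from email.utils import formatdate

    GZIP_THRESHOLD = 1024  # only compress files bigger than this (bytes)

    def make_asis(src_path, dst_path, content_type="image/png"):
        body = open(src_path, "rb").read()
        headers = [f"Content-Type: {content_type}"]
        if len(body) > GZIP_THRESHOLD:
            body = gzip.compress(body, 9)
            headers.append("Content-Encoding: gzip")
        # Precompute the validators instead of making Apache do it per hit.
        headers.append(f'ETag: "{hashlib.md5(body).hexdigest()}"')
        headers.append("Last-Modified: " +
                       formatdate(os.path.getmtime(src_path), usegmt=True))
        headers.append(f"Content-Length: {len(body)}")
        with open(dst_path, "wb") as out:
            out.write(("\r\n".join(headers) + "\r\n\r\n").encode("ascii"))
            out.write(body)

    make_asis("logo.png", "logo.png.asis")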
Re: (Score:2)
Re: (Score:2)
*Your* site got a 'D' and *therefore* that seems to be the standard grade?
I think I see a flaw in your logic there, batman.
Re: (Score:2)
Re:/. gets a D (Score:5, Interesting)
I skipped the actual link and score on sites that are pretty much just representative of the sites around them. I wanted to include them by name, though, to show where they fall. I've stuck mostly to main index pages, and I've noted where I've gone deeper.
A: Google [google.com] (99%), Altavista main page [altavista.com] (98%), Altavista Babelfish [altavista.com] (90%) (including upon doing a translation from English to French), Craigslist [craiglist.org] (96%), Pricewatch [pricewatch.com] (93%), Slackware Linux [slackware.com], OpenBSD [openbsd.org], Led Zeppelin site at Atlantic [ledzeppelin.com] (100%), supremecommander.com, w3m web browser site [w3m.org] (96%)
B: Apache.org [apache.org] (87%), the lighttpd web server [lighttpd.net] (84%), Google Maps, which also got a C once [google.com] (84% in most cases), Perlmonks [perlmonks.org] (84%), Dragonfly BSD [dragonflybsd.org] (85%), Butthole Surfers band page [buttholesurfers.com] (81%), 37 Signals [37signals.com]
C: One Laptop Per Child, [olpc.com], ESR's homepage [catb.org], the Open Source Initiative [opensource.org] (78%), Google News [google.com] (73%), Lucid CMS [lucidcms.net] (74%), Perl.org [perl.org] (75%), lucasfilm.com, Charred Dirt game [charreddirt.com]
D: gnu.org, The Register [theregister.co.uk], A9 [a9.com] (66%), kernel.org [kernel.org], Akamai [akamai.com] (64%), kuro5hin.org, freshmeat.net, linuxcd.org, Movable Type [movabletype.org] (61%), Postnuke [postnuke.com], blogster.com, Joel on Software [joelonsoftware.com] (67%), Fog Creek Software [fogcreek.com], metallica.com, gaspowered.com, Scorched 3D [scorched3d.co.uk] (68%), id software [idsoftware.com] (64%), ISBN.nu book search [isbn.nu]
F: MS IIS [microsoft.com] (49%), microsoft.com, msn.com, linux.com, fsf.org, discovery.com, newegg.com, rackspace.com, the Simtel archive [simtel.net] (26%), CNet Download [download.com] (29%), Adobe [adobe.com] (58%), savvis.com, mtv.com, sun.com, pclinuxos.com, freebsd.org, phpnuke.org, use.perl.org, ruby-lang.org, python.org, java.com, Rolling Stones band page [rollingstones.com] (56%), powellsbooks.com, amazon.com, barnesandnoble.com, getfirefox.com
My site for my company gets an A (96%) -- no, I'm not going to get it slashdotted -- and it's pretty simple, but it has a pic and some Javascript on it. Several sites I have done, or have helped design with someone else, get C or D ratings.
Re: (Score:1)
Re: (Score:2, Informative)
My site gets a D too (Score:2)
Re: (Score:3, Insightful)
Re:My site gets a D too (Score:4, Informative)
I doubt moving them above title makes any noticeable difference in the real world though.
Expires header (Score:2)
Not too impressed
Re: (Score:1)
Right, right, "for accounting purposes". Shut up, you "anti-advertising" frauds.
Re: (Score:1)
Especially for "indie" sites with small audiences, responsiveness can be a big selling point because you don't have that brand "power" to draw people in, but a snappy site will be noticed.
Sure but (Score:4, Funny)
Another tool (Score:3, Informative)
Re: (Score:3, Insightful)
Re: (Score:3, Informative)
Re: (Score:2)
Re:Another tool (Score:5, Funny)
Re: (Score:1, Interesting)
Re: (Score:2)
wondeful. except that's not why it's slow (Score:3, Insightful)
Re: (Score:2)
Why not? Couldn't a browser-side plugin simply measure the wallclock seconds it takes for the HTTP request to complete? It could figure out what's being dynamically generated and what's being served statically by comparing all the requests to the same host and comparing the transfer rates.
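A crude version of that measurement, as a sketch (the URL is just an example): time-to-first-byte approximates server-side generation time, and the remainder is transfer:

    # Rough version of what such a plugin could measure: wallclock time
    # for an HTTP request, split into time-to-first-byte (mostly server-
    # side work) and transfer time.
    import time
    import urllib.request

    def time_request(url):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            first_byte = resp.read(1)          # headers + first byte in
            ttfb = time.perf_counter() - start
            body = first_byte + resp.read()    # rest of the body
            total = time.perf_counter() - start
        print(f"{url}: TTFB {ttfb*1000:.0f} ms, "
              f"total {total*1000:.0f} ms, {len(body)} bytes")

    time_request("http://example.com/")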
Re: (Score:2)
Re: (Score:2)
Unless you're dynamically generating ALL of the content, some things will load faster than others. Odds are most of your images are static, so when your 10 KB HTML page takes longer to transfer than your 30 KB images, you blame the server-side scripting. If you design a site in ColdFusion, it won't send any page data until the script finishes running. In scenarios like that, the delay before receiving data is an indication that somet
Re: (Score:1)
Dynamic server-side performance is very rarely the main cause of speed problems -- HTTP latency from too many objects and poor placement of scripts and CSS are usually the problem.
Even if it takes two whole seconds for the server to generate the page, that's still a small fraction of the fifteen seconds it takes to completely download and render some more complicated sites.
nice plugin (Score:1)
Firebug not Firefox (Score:2)
Re:Firebug not Firefox (Score:4, Informative)
Re: (Score:2)
Re: (Score:2)
I cannot install YSlow as a browser extension unless I also have the Firebug extension enabled.
And since Firebug for some reason causes my browser to climb to 100% CPU and become unresponsive if I leave it enabled too long, I guess I won't be giving YSlow a try.
Web site optimization for dummies (Score:1, Insightful)
Since 9/10 web developers can't even be bothered using a validator, I predict great success for this tool.
Why is this a troll? (Score:5, Insightful)
The Anonymous Coward here is spot on. This thing gives awful, awful advice. Some of these in particular I really hated as a dialup user.
This is only a win if your images are tiny. Why are you optimizing for this? Tiny images do not take long to download, even on dialup, because they are tiny. Frankly I would prefer to have all the site's little icons progressively appear as they become available than have to wait while a single image thirty times the size of any one of them loads. Or, perhaps, fails to load, so that I have to download the whole thing again instead of just the parts I have.
This is hands down the stupidest idea I have ever heard. Ignoring for the moment that it won't even work for the 70% of your visitors using IE, sending the same image multiple times as base64-encoded text will completely swamp any overhead that would have been introduced by the HTTP headers.
Less egregious than suggesting CSS Sprites, but it still suffers from the same problems. These are not large files, and if they are large files, the headers are not large by comparison.
What, seriously? Are you really optimizing for your visitors who load one and only one page before their cache is cleared? Even though you "measured... and found the number of page views with a primed cache is 75-85%"?
And if you ever change something but forget to change the file name, your visitors will have to reload everything on the damn page to get the current version of the one thing you changed. Assuming, of course, they even realize there should be a newer version than the one they're seeing. And assuming that they actually know how to do that.
Um. Duh? link elements are not valid in the body. style elements are
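For what it's worth, the base64 point above is easy to check; a small sketch (the 3 KB figure is arbitrary) showing the roughly one-third inflation from base64-encoding binary data:

    # Quick check of the base64 claim: inlining a binary image as base64
    # text inflates it by roughly a third, before any header savings.
    import base64, os

    raw = os.urandom(3000)            # stand-in for a ~3 KB image
    inlined = base64.b64encode(raw)
    print(len(raw), "->", len(inlined))                 # 3000 -> 4000
    print(f"overhead: {len(inlined)/len(raw)-1:.0%}")   # ~33%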
Re: (Score:2)
Inline Images - agreed. Dead on. Quite stupid.
Combined Files - I've flip-flopped a great deal about this one myself. While a single file can greatly reduce data transfer overhead by eliminating headers and ensuring packets are their f
Re: (Score:2)
And because they are tiny and numerous, the overhead from the HTTP headers is huge. Headers can easily be a few hundred bytes. Looking at the default 'icons' that come with Apache, the majority are little GIFs under 400 bytes. So if you go and download them with individual HTTP requests, you're throwing away 30-50% of your bandwidth just in HTTP overh
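A back-of-the-envelope sketch of where that overhead figure comes from; the combined 300-byte header cost per request/response pair is an assumption:

    # Share of transferred bytes eaten by per-request HTTP overhead for
    # small files. The 300-byte combined header cost is assumed.
    def overhead_share(body_bytes, header_bytes=300):
        return header_bytes / (body_bytes + header_bytes)

    for size in (200, 400, 1000, 10000):
        print(f"{size:>6} B body: "
              f"{overhead_share(size):.0%} of bytes are headers")
    # Sub-400 B icons land in the 40-60% range; a 10 KB file is ~3%.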
Re: (Score:1)
Re: (Score:2)
Perhaps the problem of lots of little images vs. a single 'sprite' is more psychological. Perhaps it just appears faster when you see lots of individual images load.
True. You'd really only want to use 'sprites' on site graphics that don't change very often.
Re: (Score:1)
I would agree with this. As I said, loading a big image tended to make the content itself take longer. If I'm reading while the images load, I'll not notice or honestly even care if the page as a whole is 100% larger. Conversely, if you've done something to cut the load time in half, but I have to wait for the entire thing before I can actually use an
Re: (Score:1)
Googling isn't
Both numbers can be true... (Score:2)
As described in Tenni Theurer's blog Browser Cache Usage - Exposed!, 40-60% of daily visitors to your site come in with an empty cache. Making your page fast for these first time visitors is key to a better user experience.
What, seriously? Are you really optimizing for your visitors who load one and only one page before their cache is cleared? Even though you "measured... and found the number of page views with a primed cache is 75-85%"?
Daily Visitors != Page Views
Making up random numbers, and assuming a perfect caching system for convenience:
10 people hit your site on a given day.
3 have never been there before, have an empty cache, say, "Damn, this shit's slow," and leave.
2 have never been there before, have an empty cache but endure, surfing 5 pages each.
The other five are regular users and have files cached. They surf the same 5 pages.
Total: 3x1 + 2x(1+4) + 5x5 = 38 total pages.
(5 out of 10) 50% of daily visitors had an empty cache.
(33 out of 38) ~87% of page views were made with a primed cache.
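The same toy numbers, computed out (a sketch; the variable names are just for readability):

    # Both statistics come out "true" at once with the numbers above.
    bounce, endure, regular, pages = 3, 2, 5, 5

    visitors_empty = bounce + endure                 # 5 of 10 visitors
    views_total = bounce * 1 + endure * pages + regular * pages   # 38
    views_empty = bounce + endure                    # only first hits are cold
    primed_share = (views_total - views_empty) / views_total
    print(f"{visitors_empty}/10 visitors empty-cached, "
          f"{primed_share:.0%} of {views_total} page views primed")  # 87%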
Re: (Score:1)
Sorry, I didn't mean to suggest that their numbers didn't add up, just that small optimizations that service half your visitors don't make sense when they are something that only has any impact on the first request. The disadvantages of aggregating files together in the manner they are suggesting just outweigh that small benefit.
OT: A right device for everything (Score:2)
You can't read (much) from /dev/null, and your numbers don't look like they come from /dev/zero either — those would be rather repetitive.
I think, you meant /dev/random...
Re: (Score:1)
Re: (Score:1)
Well, let me put it to you this way. Let's say your little icons are 400B apiece. And say your headers are another 400B each way, bringing the effective size of the file up to just shy of 1.2KB. It's an ideal application for a CSS sprite, because you'll no question get huge wins in file size. On lousy 28.8 dialup I can still download three of them every second. Alternately, if the page is still loading, the bandwidth can be divided between loading the images and the page without much perceptible impact on e
Re: (Score:2)
The point is to reduce the number of HTTP connections, and thus avoid pointless latency. A TCP connection takes time to set up because there's a back-and-forth, and if the client is far from the server this can introduce a significant delay in loading static resources. Not to mention that the browser may have to reflow the page as the new
Re: (Score:1)
It's significant in r
Re: (Score:1)
Connection: Keep-Alive mitigates that somewhat.
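A small sketch of that effect: with HTTP/1.1 keep-alive, several requests reuse one TCP connection and pay the setup latency only once (host and paths are examples):

    # Several requests over one persistent connection. Python's
    # http.client speaks HTTP/1.1, so keep-alive is the default.
    import http.client

    conn = http.client.HTTPConnection("example.com")
    for path in ("/", "/style.css", "/logo.gif"):
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()  # must drain the body before reusing the connection
        print(path, resp.status, resp.getheader("Connection"))
    conn.close()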
Re: (Score:1)
For one, having the Exp
Re: (Score:1)
Why sites are slow (Score:2, Interesting)
If your site has 10 different affiliate links/sponsors, all hosted on different providers, your site will be slow.
Similarly, if your site has 100 different Java/Javascript crapplets and widgets, your site will be even slower.
Here is a simple guide for site creators:
1. Don't overload on ads, I'm not going to view them anyway
2. Put some actual content I'm interested in on your site
3. Don't overload me with java/javascript crap, I don't care what my mouse
Re: (Score:2)
When building a photo gallery (sig), I thought it would be pretty much a photo in the center, and then two buttons to view the previous/next one. Yet, when I f
Re: (Score:2)
You have to. (Score:1)
You have to build up your resume somehow in order to keep your job or to get a better one. What better way than to develop shit the project really doesn't need but that will sure look great on a resume!
And it's not just techies. Back in the mid nineties, it seemed that every CIO was moving his system from mainframe to distributed architecture.
Re: (Score:2)
Re: (Score:2)
He means having banners or other content that is actually retrieved from the affiliate/sponsor's site, thereby ensuring your page will load at the response rate of the *slowest* of those ten sites.
Chris Mattern
Re: (Score:2)
Re: (Score:1)
1) Throw out the baby with the bathwater and pretend it's still 1996 . . . so that you can increase the number of impossible-to-please-anyways slashdot ACs that visit your site.
Yeah - that sounds like a real good plan.
Re: (Score:1)
F: You are co-located at 365 Main. (Score:5, Funny)
hmmm... (Score:5, Insightful)
For example "use CDN" (aka Akamai, etc.) - yeah, right. For Yahoo.com that's an idea. For my private website, that's bullshit. If they really use this internally to rate sites, their rating sucks by definition.
So in summary there are a couple of good points there, and a couple that are not really appropriate. Expires: headers are a nice idea for static webpages. But YSlow still gives me an F for not using one on a PHP page that really does change every time you load it.
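One way to satisfy both the tool and reality: far-future Expires for static assets, explicit no-cache for genuinely dynamic pages. A framework-agnostic sketch (the helper name is made up):

    # Cache headers split by asset type: aggressive for static files,
    # explicit no-cache for pages that change on every load.
    from datetime import datetime, timedelta, timezone
    from email.utils import format_datetime

    def cache_headers(static: bool):
        if static:
            expires = datetime.now(timezone.utc) + timedelta(days=365)
            return {"Expires": format_datetime(expires, usegmt=True),
                    "Cache-Control": "public, max-age=31536000"}
        # A dynamic page that really does change every load: say so.
        return {"Cache-Control": "no-cache, must-revalidate"}

    print(cache_headers(static=True))
    print(cache_headers(static=False))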
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Yeah, many of these are stupid.
Not only do they recommend CDNs, which is absurd for any page that gets less than a million hits a day, they also complain about ETags, despite all the stuff I want cached actually having ETags. They whine that 'different servers can produce different ETags' or something, like my site is randomly distributed over a dozen servers where images and CSS randomly get sent from different ones. Um, nope, just one server, as you apparently figured out when complaining about not using C
Re: (Score:2)
Well, from the YSlow web page itself...
YSlow analyzes web pages and tells you why they're slow based on the rules for high performance web sites.
These criteria can be subjective (as to what a high-performance web site is). I would certainly expect Yahoo's tool to grade sites as though they received hits of the same magnitude Yahoo's own sites do. I don't think slashdot.org would even qualify in that category.
Their tips definitely do make sense if you have a site in the "millions of
Web optimization made clear (Score:2)
Finally, someone says what web developers have known for years [yahoo.com]: optimizing the site is not a matter of splitting your content into as many images as possible over an enterprise app, but of good clean design and code.
For years, as a web designer, every time I got ready to deploy I encountered some nitwit who would say, "You're going to break up that giant image, aren't you? We can put it on nine servers!" -- creating organizational havoc, a completely unmanageable asset mess of a project, and driving everyone
Re: (Score:2)
And yes... (Score:1)
Maybe Yahoo should use it themselves... (Score:2)
Re: (Score:3, Insightful)
and that damn Flash is about the worst there is now.
The Firefox plugin Flashblock [mozilla.org] is quite wonderful. Flash items are replaced with a clickable surface. You get the option to click on the very few Flash items that you do want to view.
To me, that is the sign of real professional web developers.
More like a professional organization. If it were up to us developers, pages would be much better than they are.
Friendlier Reporting (Score:3, Funny)
Re: (Score:2)
Just the start of their new plugin scanners (Score:4, Funny)
YMe - translates your site into emo-speak.
Re: (Score:1)
Source code of the YSlow tool (Score:2)
load order affects perceived slowness (Score:2, Insightful)
Let's say you visit, oh, dilbert.com (just to pick on a geeky site) to get your daily dose of Dilbert. If the first thing rendered on your screen is the actual comic, you don't really care that it takes another 10-20 seconds to display the buttons, menus, sidebars, topbars, bottombars, animations, ads and ads for ads. I
Re: (Score:1)
Re: (Score:2)
Nice utility (Score:2)
YSpy? (Score:2)
So when Yahoo trundles along offering me neat tracking software, umm, no thanks. There's no telling where you might end up reading about it. Now sure, in the U.S. you don't get locked up for criticizing the government, but things do ge
... Re: YSpy? (Score:2)
Re: (Score:2)
If the Nazi Party bought out Nazi-brand Milk(TM), even if it's perfectly good milk, nahh... Same with Yahoo and privacy. The brand is tainted.
Yahoo giving advice on web pages? (Score:1)
Re: (Score:1, Funny)