Is RSS Doomed by Popularity?
Ketchup_blade writes "As RSS becomes better known to mainstream users and the press, the feed-related bandwidth problems reported by many sites (eWeek, CNet, InternetNews) are becoming a reality. Stats from sites like Boing Boing show real concern about feed bandwidth usage. Possible solutions to this problem are slowly emerging, like RSScache (a feed caching proxy) and KnowNow (event-driven syndication). RSScache seems to offer a realistic solution to the problem, but can this be enough to help RSS as it reaches an even bigger user base in the upcoming year?"
Push (Score:5, Insightful)
How much bandwidth does Slashdot's RSS feed use?
It looks like the RSS feed on my home page has a small handful of subscribers. Neat.
Re:Push (Score:5, Insightful)
Most of the problem comes from a few older RSS readers that don't support Conditional GET, gzip, etc. With modern readers, there's essentially no problem (I've measured it on a few sites I run). Yes, they poll every hour or two, but the bandwidth is a tiny, tiny fraction of what we get from, say, putting up a small QuickTime movie.
There seem to be lots of people who freak out way too quickly about a few bytes. RSS does send some unnecessary data, but if you've configured things correctly, it's much smaller than lots of other things we do on our networks...
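A minimal sketch of what such a well-behaved poll looks like, using only the Python standard library; the feed URL is a placeholder, and the assumption is a server that honors conditional GET (any properly configured one does):

import gzip
import urllib.error
import urllib.request

FEED_URL = "http://example.com/index.rss"   # placeholder
last_modified = None   # Last-Modified header from the previous poll
etag = None            # ETag header from the previous poll

def poll():
    global last_modified, etag
    req = urllib.request.Request(FEED_URL)
    req.add_header("Accept-Encoding", "gzip")
    if last_modified:
        req.add_header("If-Modified-Since", last_modified)
    if etag:
        req.add_header("If-None-Match", etag)
    try:
        resp = urllib.request.urlopen(req)
    except urllib.error.HTTPError as e:
        if e.code == 304:   # Not Modified: headers only, no feed body sent
            return None
        raise
    last_modified = resp.headers.get("Last-Modified")
    etag = resp.headers.get("ETag")
    body = resp.read()
    if resp.headers.get("Content-Encoding") == "gzip":
        body = gzip.decompress(body)   # feeds are text and compress well
    return body

Every poll after the first costs a couple hundred bytes unless the feed actually changed, which is the whole point of conditional GET.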
Actually, this is a more general xml problem (Score:3, Interesting)
See Roedy Green's (one-time comp.lang.java FAQ maintainer) excellent essay [mindprod.com] on why XML causes these problems.
Re:Push (Score:3, Insightful)
Re:Push (Score:2)
And, did I forget to mention that IPv6 should be implemented ASAP?
There are sometimes reasons besides DRM and user control for new protocols, standards and formats
Re:Push (Score:3, Interesting)
Re:Push (Score:2)
For instance,
They could use a dynamic DNS entry. The client would poll the IP of some domain until the IP changed. After a change, the client would go get the new article from some other IP. This wouldn't be very good for short intervals between articles since, looking at my no-ip domain, it takes about 5 mins between updates.
Or, the RSS client gets the current "waiting" ip at the first poll. Then, it tries to co
Re:Push (Score:2, Interesting)
There is almost always a DNS cache at the ISP, so the polling interval can be completely controlled by the TTL of the record. That leverages the existing distributed caching of DNS, versus the large percentage of users who are not behind HTTP caches.
I see two potential problems with this idea:
1. A lot of people are stuck behind HTTP proxies with limited or no DNS. This isn't too bad, as they could fall back to the
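A rough sketch of the DNS-as-signal idea, using the third-party dnspython package (2.x API); the record name and the TXT-record scheme are invented for illustration:

import dns.resolver   # pip install dnspython

SIGNAL_NAME = "feed-serial.example.com"   # publisher changes this on update
last_serial = None

def feed_changed():
    global last_serial
    answer = dns.resolver.resolve(SIGNAL_NAME, "TXT")
    serial = answer[0].to_text()
    # Intermediate resolvers cache this for the record's TTL, so the
    # publisher controls the effective polling rate by setting the TTL.
    if serial != last_serial:
        last_serial = serial
        return True    # now do one normal HTTP fetch of the feed
    return False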
Re:Push (Score:3, Interesting)
On the other hand it seems like everyone and their dog can do P2P.
A P2P-ish RSS system that:
* Attempts to make each client capable (but not always used) of functioning as a caching server for the feed
* Has a top-level owner of a feed who has sole rights to update the feed. Perhaps passing public/private keys with the feed to ensure no tampering. Anyone who wanted to subscribe to the fee
Re:Push (Score:2, Interesting)
They just need to follow ./'s lead (Score:5, Insightful)
Re:They just need to follow ./'s lead (Score:3, Insightful)
I don't know much about RSS, but it seems kind of silly to have the user refresh. Doesn't that defeat the purpose? Why not just have the server send out new news as it gets it?
Re:They just need to follow ./'s lead (Score:4, Informative)
The server would also need to have a list of clients to send the refresh to, which means you'd need to "sign up" so the server puts you on the list.
Never mind the difficulties that dynamic IP addresses would cause. It's generally easier if the user initiates things.
Re:They just need to follow ./'s lead (Score:2)
Re:They just need to follow ./'s lead (Score:5, Informative)
Re:They just need to follow ./'s lead (Score:3, Informative)
But the gist of it is that push media and multicast are either a thankfully dead fad, or a technology whose time has yet to come. Push media, in particular, was salivated over quite a bit in the late '90s (e.g., see Wired's 1997 cover article on it [wired.com]), so it's not as if it's a new idea. Despite this, push and multicast haven't gained wide success yet. Lots of people have various reasons why, and some of them are actu
Re:They just need to follow ./'s lead (Score:2)
Re:They just need to follow ./'s lead (Score:5, Informative)
Re:They just need to follow /.'s lead (Score:5, Funny)
Slashdot's RSS blocking policy (Score:5, Informative)
Every complaint about this that I've investigated has turned out to be either a broken RSS reader or an IP that's proxying a ton of traffic (which we usually do make an exception for).
Oh, and if you want to read sectional stories in RSS, then:
Slashdot's RSS traffic, like Boing Boing's, is huge, and blocking broken readers has saved us a ton of bandwidth, which of course means money. We were one of the first sites to do this but (as this story suggests) you'll see a lot more sites doing it in the future. I think our policy is fair.
Re:Slashdot's RSS blocking policy (Score:2)
OK, that's completely cool. Kudos to whoever implemented that. Now I don't have to bitch about it on this thread.
But when I follow your link, I get
http://developers.slashdot.org/index.rss
and if I go to the normal homepage I still have
http://slashdot.org/index.rss
I'd expect there to be a ?user= or something. How does the RSS generator know it's me? My R
Re:Slashdot's RSS blocking policy (Score:3, Informative)
Re:Slashdot's RSS blocking policy (Score:3, Interesting)
That's OK, I'm a subscriber... still don't see how the custom RSS works. From my RSS reader how does Slashdot know I'm a subscriber? Special URL?
Slashdot's RSS blocking policy-$$$$ Kaching. (Score:4, Insightful)
So's using correct HTML and CSS.
Re:Slashdot's RSS blocking policy (Score:2)
So, to me, it looks like there is no need for an RSS proxy. RSS readers just need to learn to use regular web proxies, and users need to be convinced that using such proxy servers is to their benefit. Good luck given the low number of users tha
Re:Slashdot's RSS blocking policy (Score:4, Insightful)
Not really. Our cache hit rate would be about zero. We update the homepage about once a minute, and the same goes for any page that any reader would be likely to reload within a reasonable time.
Re:Slashdot's RSS blocking policy (Score:2)
I had a Slashdot RSS feed live bookmark in Firefox (supposedly gets checked once an hour, or when the browser is started up), and that got me temporarily banned (perhaps I had restarted the browser several times in an hour for some reason, but it certainly wasn't 50 times!).
Like I said, hopefully you have upped the ba
Welcome to the internet (Score:2, Informative)
Whee!
And the funny thing here is, if RSS had-- at its conception-- included caching and push-based update notification and all the other smart features that would have prevented this sort of thing from becoming a problem now, *it would never have been adopted*, because the only reason RSS succeeded where the competing standards to do the sa
Re:Welcome to the internet (Score:2, Insightful)
you're interpreting it from the client perspective, which is not where the name came from.
You're talking application-level (Score:5, Interesting)
If the web started with HTTP 1.1, it would never have gone anywhere because it's too complicated. There are parts of 1.0 that probably aren't implemented very well.
If you want to improve things, adopt an RSS reader project and add those features.
RSS readers don't cache! (Score:5, Insightful)
Re:RSS readers don't cache! (Score:2)
I am interested in which aggregators you think are the worst offenders. The only true solution to the bandwidth problem, even with well-behaved aggregators, is moving away from a polling framework. Syndication should be pubsub/event-based. Q.E.D.
----
Dynamic DNS [thatip.com] from ThatIP.
Mod parent up (Score:2)
Following along the same line of reasoning, why not have the RSS reader send one request, and then changes are pushed to the reader after that? The reader can cache the change so if the user hits reload they get the most recent cache rather than hitting the server again.
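One way to approximate that over plain HTTP is long polling: the client parks a single request and the server answers only when something changes. A hedged sketch, with an invented URL scheme and the assumption that the server blocks until a newer item exists or returns 204 on its own timeout:

import urllib.request

def wait_for_update(last_id):
    # Hypothetical endpoint: blocks server-side until an item newer than
    # last_id appears, or returns 204 No Content after a server timeout.
    url = "http://example.com/feed/longpoll?since=%s" % last_id
    resp = urllib.request.urlopen(url, timeout=300)
    if resp.status == 204:
        return None      # nothing new; loop around and park another request
    return resp.read()   # the new items, delivered the moment they exist

This keeps the "user initiates" property that the firewall and dynamic-IP objections elsewhere in the thread demand, while behaving like push on the wire.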
Re:Mod parent up (Score:2)
Well, they tried this way back when. I think they called it webcasting. RSS is really just a lo-fi form of webcasting. You don't need to have any open ports on your machine, no special service running on the web server, just a flat file in the RSS format.
Webcasting may replace RSS, but then we would probably have the opposite problem. "Why is slashdot slashdotting me!!"
Re:Mod parent up (Score:2)
The problem I see with that is network users who are behind firewalls. You can't very well push RSS data to them now, can you?
Re:RSS readers don't cache! (Score:5, Insightful)
The problem is, of course, server-side. For instance, the GPL blog software WordPress doesn't do ANY caching. Its RSS is a PHP script. So if you get 10,000 requests for that RSS, you're running a script 10,000 times. That's ridiculous and poor planning. Other RSS generators are guilty of the same crime.
Yes, there is a plug-in (which doesn't work at nerdfilter nor at the other WordPress site I run), and a savvy person could just make a cron job and redirect RSS requests to a static file, but that's all beside the point. This should all be done "out of the box." This is a software problem that should be addressed server-side first, client-side later.
Not to mention, a lot of these RSS readers are big sites like Bloglines, NewsGator, etc., who should be respecting bandwidth limits but really have no incentive to do so. RSS really doesn't scale too well for big sites. What they should be doing is denying connections for IPs that hit it too often, or changing the RSS format to give server instructions like "Don't request this more than x times a day" in the header for the clients to obey. x would be a low number for a site not updated often and high for a site updated very often.
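For what it's worth, RSS 2.0 already defines a channel-level hint along these lines: the optional <ttl> element, the number of minutes a client may cache the feed before re-fetching. Whether aggregators honor it is another matter, which is exactly the incentive problem above. An illustrative fragment:

<channel>
  <title>Example Feed</title>
  <link>http://example.com/</link>
  <!-- clients may cache this feed for up to 240 minutes -->
  <ttl>240</ttl>
  ...
</channel>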
Re:RSS readers don't cache! (Score:5, Informative)
Technically true but misleading. WordPress allows user agents to cache the RSS/Atom feeds, and will only serve a newer copy if a post has been made to the blog since the time the user agent says it last downloaded the feed. Otherwise it sends a 304. This is in 1.3-alpha5. I dunno what 1.2.1 does.
Not coincidentally, these are the most egregious offenders I mentioned. Bloglines grabs my RSS2 and Atom feeds hourly, and doesn't cache or even pretend to. Firefox Live Bookmarks appears to cache feeds, but your aggregator plugins might not. I can't (yet) tell the difference from the server logs between Firefox and the various aggregator plugins.
The best ones are the syndication sites that only grab my feeds after being pinged. Too bad I can't ping everybody. That could solve the problem if there were some way to do that.
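The ping in question is typically the weblogUpdates.ping XML-RPC call that weblogs.com popularized. A sketch using the standard library; the endpoint and site values are illustrative:

import xmlrpc.client

def ping(endpoint="http://rpc.example.com/RPC2"):
    server = xmlrpc.client.ServerProxy(endpoint)
    # Standard signature: weblogUpdates.ping(site name, site URL)
    result = server.weblogUpdates.ping("Example Blog", "http://example.com/")
    return result   # typically {'flerror': False, 'message': 'Thanks for the ping.'}

A well-behaved aggregator then fetches the feed exactly once, instead of polling it on a timer.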
Re:RSS readers don't cache! (Score:2)
One subscriber!
Re:RSS readers don't cache! (Score:2)
Anyway, you're right, it's not a bandwidth issue; for the most part it's a software issue. I'm tracking some weblogs for research and crawl the RSS feeds once a day. Most sites only update their fe
Re:RSS readers don't cache! (Score:5, Informative)
When you request the feed, the server first sends the normal HTTP headers. If properly configured, it will return a 304 if you already have the most recent version -- however, as many feeds are generated in PHP[1], this behavior is off by default, and you'll end up with the standard 200 ("go ahead") code. This single-handedly wastes a metric tonne of bandwidth.
Even if you're hammering a feed, you'd only waste a few hundred bytes at most every half hour, rather than the whole 50K or whatnot the feed weighs.
See here [pastiche.org] for a more detailed explanation.
[1] This is not a PHP specific issue; a lot of dynamic content, and even static content, fails to do this properly. But this is what it's there for, after all.
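The server side of this is small. A minimal WSGI sketch (standard library only) that answers 304 with no body when the client already has the current feed; the exact-string date comparison is a simplification of what real servers do, and the file path is illustrative:

import os
from wsgiref.handlers import format_date_time
from wsgiref.simple_server import make_server

FEED_PATH = "index.rss"   # pregenerated static feed

def app(environ, start_response):
    stamp = format_date_time(os.path.getmtime(FEED_PATH))  # RFC 1123 date
    if environ.get("HTTP_IF_MODIFIED_SINCE") == stamp:
        # Client's copy is current: send headers only, no body.
        start_response("304 Not Modified", [("Last-Modified", stamp)])
        return [b""]
    body = open(FEED_PATH, "rb").read()
    start_response("200 OK", [("Content-Type", "application/rss+xml"),
                              ("Last-Modified", stamp)])
    return [body]

make_server("", 8000, app).serve_forever()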
Here's how (Score:2)
I imagine even more bandwidth could be saved if the next version of the RSS or Atom standards mandated rsync support.
rsstorrent will solve it all (Score:4, Interesting)
Re:rsstorrent will solve it all (Score:2)
Doomed? It's barely got off the ground... (Score:5, Insightful)
Take the BBC News website, for example. On September 11th, 2001, its traffic was way beyond anything it had experienced to that point. Within a year or so, it was comfortably serving more requests and seeing more traffic every day. Proof, if it were needed, that capacity isn't the issue when it comes to Internet growth, and won't be for the foreseeable future.
RSS is in its infancy. Just because people didn't anticipate it being adopted as fast as it has been doesn't make it "doomed". By that rationale, the Internet itself, DVDs, digital photography, etc. are all "doomed" too.
Re:Doomed? It's barely got off the ground... (Score:2)
And *BSD... you forgot to mention that *BSD is dying too!
Limit download to new content (Score:5, Interesting)
Re:Limit download to new content (Score:2)
Everyone admits (now) that RSS was a really stupid protocol even as protocols go.
Oh, and the timestamp thing adds far less processing overhead than the reduction in packets would save.
Re:Limit download to new content (Score:2)
Re:Limit download to new content (Score:2)
If you previously had a copy of the feed with items A through X, and now A has dropped off and Y has been added, you'll pull the entire thing, B through Y, when all you really need is:
remove A
add Y, data follows
Think more along the lines of "diff since $last_timestamp".
About time for asynchronous (Score:3, Informative)
Duh... Simple solution (Score:2)
Keep copies of the RSS on the server for 30 days.
http://www.mysite.com/requestfeed?myversion=200
Diff the new version against the old version. Send what's changed.
There we go. You now have version control.
How fucking hard is that, people?
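A sketch of that scheme; the ?myversion= parameter is from the comment above, while the storage layout and delta format are invented for illustration:

import difflib

def feed_delta(old_version, feed_store):
    # feed_store maps version number -> list of feed lines
    latest = max(feed_store)
    if old_version not in feed_store:
        # Client's copy is older than 30 days: send the whole feed.
        return latest, feed_store[latest]
    delta = list(difflib.unified_diff(feed_store[old_version],
                                      feed_store[latest], lineterm=""))
    return latest, delta   # client applies the patch to its cached copy

# e.g. GET /requestfeed?myversion=200 -> feed_delta(200, store)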
Re:Duh... Simple solution (Score:2, Interesting)
Not a problem with RSS.. just humans. (Score:5, Interesting)
RSS feeds are meant as a way to strip all the nonsense from a site and offer easy syndication, right? Basically, present the relevant news from a full-fledged webpage in a smaller file size? If such is the case, this isn't an RSS issue, really. I see it more as a bandwidth issue. I mean, people are going to get their news one way or the other.. either with a bunch of images and lots of markup via HTML, or with just the bare minimum of text and markup via RSS. I would prefer RSS over HTML any day of the week! But perhaps RSS makes syndication TOO simple. Thus everyone does it, and that eats additional bandwidth that normally would be reserved for those browsing the HTML a site offers.
And you could implement bans on people who request the RSS feed more than X times per hour as someone suggested (Doesn't /. do this?), but I don't think that gets around the bandwidth issue. I mean, those who want the news will either go with RSS or simply hit the site. Again, RSS is the preferred alternative to HTML.
So here's my suggestion.. go to nothing but RSS and no HTML!
Re:Not a problem with RSS.. just humans. (Score:2)
And excuse me, but lose HTML? The whole web as RSS feeds? You must be kidding. There's way too much content out there that simply can't be put into an RSS feed. It's static, it represents downloadable files, or documentation, or useless marketing hype, or whatever.
Re:Not a problem with RSS.. just humans. (Score:2)
Re:Not a problem with RSS.. just humans. (Score:3, Insightful)
I have to point out how much I love "Sage", the Mozilla Firefox plugin for RSS - you can even right-click on that XML thing that tries to tell you to save the page and bookmark it under "Sage Feeds", and then Alt-S and you have your RSS.
I started using Sage for
Re:Not a problem with RSS.. just humans. (Score:2)
Except it'll be an hour before someone implements images in RSS feeds, and then it's 1990 all over again.
Pop Fly (Score:5, Funny)
"Is Instant Messaging Doomed by Popularity?"
"Is E-Mail Doomed by Popularity?"
"Is Usenet Doomed by Popularity?"
"Is The Internet Doomed by Popularity?"
"Is Linux Doomed by Popularity?"
"Is Apple Doomed by Popularity?"
"Is Netcraft Doomed by Popularity?"
"Is Sex with Geeks Doomed by Popularity?"
Solutions (Score:5, Informative)
OT: Re:Solutions (Score:2)
By far the best RSS experience I've had has been with the Konfabulator RSS widget, which pops up when it finds a new entry and hides away when there's nothing new. It's elegant and simple. Blog
Re:OT: Re:Solutions (Score:2)
What I'd like is to have one, centralized server-based clearinghouse (of my links; not talking about the clearinghouse from your post) that would keep track of the feeds I'm interested in, and, based on some sort of flag, fire off SMS, IM, or some other proprietary alert to me wherever I am. In-browser is where it gets tricky for me as a user: I don't want my aggregator to take up a whole window. I'd much rather have a sidebar or floating window interface than one that fills the sc
A simple fix (Score:3, Informative)
New subscribers would receive the initial copy of the feed via traditional unicast TCP, because that would be the least CPU-intensive way of handling a few requests at a time.
A caching system won't work for the same reason web caches have never caught on in the US - people are terrified of being sued to smithereens for potential copyright infringement. Even if any case would be thrown out of court instantly (by no means certain in the US), the costs would be prohibitive, and malicious plaintiffs rarely ever get asked to pay costs.
The main problem with the multicast solution is that although multicasting is enabled across the backbone, most ISPs disable it - for reasons known only to them, because it costs nothing to switch it on. Persuading ISPs to behave intelligently is unlikely, to say the least.
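For the record, the multicast announcement itself is only a few lines; the hard part is purely that ISPs don't forward it. A sketch with an arbitrary group and port, where the announcement carries the feed URL and subscribers then do one unicast fetch:

import socket
import struct

GROUP, PORT = "239.1.2.3", 5007   # arbitrary administratively-scoped group

def announce(feed_url):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)
    s.sendto(feed_url.encode(), (GROUP, PORT))

def listen():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, _ = s.recvfrom(1024)
        print("feed changed:", data.decode())   # now fetch it once, unicast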
Web caching hasn't caught on? News to me... (Score:2)
A caching system won't work for the same reason web caches have never caught on in the US - people are terrified of being sued to smithereens for potential copyright infringement. Even if any case would be thrown out of court instantly (by no means certain in the US), the costs would be prohibitive, and malicious plaintiffs rarely ever get asked to pay costs.
Maybe the caching is just so damn good that you don't notice it. UIUC has a transparent web cache that I doubt 90% (of ~8,000) of the dorm students rea
Aggregator writers are to blame (Score:2)
During our product's [mesadynamics.com] development, our debugging refresh interval was 5 minutes and hardcoded to Slashdot. As you can imagine, it didn't take us long to discover Slashdot's unique banning mechanism -- it woke us up to
Re:Aggregator writers are to blame (Score:2)
Solved, move on (Score:4, Informative)
As another poster has pointed out, banning users who check too frequently is an excellent fallback. A tiny site won't know to install the software, but it won't be an issue for a tiny site.
RSS + Bittorrent -- works for Podcasts... (Score:3, Interesting)
The Podcasters need it too. I'm subscribed to a couple dozen feeds and have well over 4GB of files in my cache right now.
The biggest problem with BitTorrent and podcasts is that the RSS aggregators need to be BitTorrent-aware. Unfortunately, few are.
Re:RSS + Bittorrent -- works for Podcasts... (Score:3, Insightful)
Bittorrent (Score:3, Insightful)
Doesn't sound that useful (Score:2)
By the time you've told a client to "go and ask these other clients," you may as well have just sent it the RSS file.
Coral and planning for feed growth (Score:2)
Ex: Slashdot RSS via Coral [nyud.net]
what's wrong with the old subscription model? (Score:3, Interesting)
An alternate approach would be to do the same thing with a news server. Why keep refreshing a feed for updates instead of letting it notify you when it has updates?
This issue was previously discussed elsewhere (Score:5, Insightful)
Slashdot user GaryM [slashdot.org] posted a related question elsewhere [advogato.org] about 20 months ago. At that time, in that forum, commenters dismissed his proposed solution, the use of NNTP, on the grounds that NNTP is deficient, but others continue to see NNTP as a possible solution [methodize.org] nevertheless.
Solution! (Score:3, Funny)
Now if only they'd bring back the $$$ from the mid 90s too.... :)
Re:Solution! (Score:2)
Re:Solution! (Score:2)
Solution: RSS over Usenet news (Score:5, Interesting)
Create a new top-level Usenet hierarchy (like alt, comp, talk, etc.) named "rss", and use an extra header to identify the originating RSS feed URL. The latter header could be used by the RSS/NNTP reader to select which article bodies to download and to verify each RSS entry to identify fake posts.
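A sketch of the posting side with Python's standard nntplib; the server, group, and header name are all illustrative:

import nntplib

ARTICLE = b"""From: feedbot@example.com
Newsgroups: rss.news.example
Subject: New item: Example headline
X-RSS-Feed-URL: http://example.com/index.rss

Body of the RSS entry goes here. Readers filter on, and verify against,
the X-RSS-Feed-URL header.
"""

with nntplib.NNTP("news.example.com") as server:
    server.post(ARTICLE.splitlines(True))   # post() accepts byte lines

The feed then rides Usenet's existing flood-fill distribution instead of hammering the origin server.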
just zip it (Score:2)
Swarming (Like BitTorrent) is the answer (Score:5, Interesting)
A content creator (say, Slashdot) has webpages and an RSS feed. They create a torrent for each page. They sign the RSS file and each torrent (and its content) with a private key. They post their public key on their homepage.
Now you can cache the RSS file on other sites that support you, yet users can still be confident that it really came from you. Inside the RSS file, users can try to get the webpage (and all its images, etc.) through the torrent first. When the page loads locally in your browser, it could still go out and get ads if you are an ad-sponsored site.
If you are a popular site and have a "fan base", you should have no problem implementing something along these lines. If you are a site that has these problems, you are probably popular and have a fan base. Given the right software and the buy-in from users, the problem solves itself.
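A sketch of the signing half, using the third-party "cryptography" package with Ed25519 keys; key distribution and on-disk formats are glossed over:

from cryptography.hazmat.primitives.asymmetric import ed25519

# Publisher, once: generate a keypair, post the public key on the homepage.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

feed_bytes = open("index.rss", "rb").read()
signature = private_key.sign(feed_bytes)   # ships alongside the feed

# Subscriber, after fetching feed + signature from ANY mirror or peer:
try:
    public_key.verify(signature, feed_bytes)   # raises InvalidSignature
    print("feed is authentic")
except Exception:
    print("reject: feed was modified by the mirror")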
Re:Swarming (Like BitTorrent) is the answer (Score:4, Informative)
No, it "can't". Or at least, it can't serve it with any benefit. Tracker overhead swamps any gains you might make. BitTorrent is unsuitable for use with small files, unless the protocol has radically changed since I last looked at it. In the limiting case, like 1K per file, it can even be worse or much worse than just serving the file over HTTP.
Inside the RSS file, users can try to get the webpage (and all its images, etc.) through the torrent first.
Oh, here's the problem, you don't know what you're talking about or how these technologies work. When an RSS file has been retrieved, there is nothing remotely like "get the webpage" that takes place in the rendering. The images are retrieved but those are typically too small to be usefully torrented too.
Regrettably, solving the bandwidth problem involves more than invoking some buzzwords; when you're talking about a tech scaling to potentially millions of users, you really have to design carefully. Frankly, the best proof of my point is that if it were as easy as you say, it'd be done by now. But it's not, it's hard, and will probably require a custom solution... which is what the article talks about, coincidentally.
ban abusive clients (Score:2)
Smaller Feeds (Score:2)
Why does everybody seem to feel the need to have their last 20-25 posts in their feed? It's just going to mean wasted bandwidth, especially for websites that update infrequently. I'd say the last five posts would be sufficient for most weblogs and 10 for news sites like Slashdot and The Register.
Feed readers are the other issue. Many set their default refresh to an hour. I use SharpReader which has an adequate 4 hour default. I adjust that on a per feed basis. Some update once per day, and that's all I nee
How is this any different from HTML? (Score:2)
If-Modified-Since, User-Agent (Score:4, Insightful)
Perhaps particularly offending User-Agents should be denied access to feeds. If I saw particular User-Agents consistently sending requests without If-Modified-Since, I'd ban them.
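Spotting those User-Agents in an Apache "combined" access log is straightforward. A sketch; the feed path and log location are illustrative and the regex is simplified:

import re
from collections import Counter

LINE = re.compile(r'"GET /index\.rss[^"]*" (\d{3}) \S+ "[^"]*" "([^"]*)"')

full, total = Counter(), Counter()
for line in open("access.log"):
    m = LINE.search(line)
    if not m:
        continue
    status, agent = m.groups()
    total[agent] += 1
    if status == "200":
        full[agent] += 1   # a full fetch; polite clients mostly log 304s

for agent in sorted(total, key=lambda a: -full[a]):
    print("%5d full / %5d total  %s" % (full[agent], total[agent], agent))

Any agent whose hits are all 200s is either never sending If-Modified-Since or hitting a misconfigured feed.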
HTTP? (Score:2)
Wouldn't a client include an If-Modified-Since HTTP header in the GET request?
We're talking 200 bytes for a not-modified query.
Is it these 200-some-odd-byte requests that people are complaining about?
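For scale, the entire not-modified exchange looks roughly like this on the wire (illustrative values), about 200 bytes round trip:

GET /index.rss HTTP/1.1
Host: example.com
If-Modified-Since: Tue, 28 Dec 2004 10:00:00 GMT

HTTP/1.1 304 Not Modified
Date: Tue, 28 Dec 2004 11:00:00 GMT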
Atom's bandwidth usage? (Score:2)
Bandwidth concerns? (Score:2)
corporate caching (Score:3, Insightful)
Chip H.
Simple solution: pregenerate RSS feed (Score:2)
This gets a lot of caching behavior automatically.
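A sketch of the pregeneration approach: a cron job renders the expensive dynamic feed to a static file, and the web server serves (and conditional-GETs) the static copy. Source URL and paths are invented; the cron line runs it once a minute:

# crontab: * * * * * /usr/bin/python3 /opt/feeds/pregen.py
import os
import urllib.request

SOURCE = "http://localhost/wp/?feed=rss2"   # the expensive dynamic script
TARGET = "/var/www/html/index.rss"          # the cheap static file

body = urllib.request.urlopen(SOURCE).read()
tmp = TARGET + ".tmp"
with open(tmp, "wb") as f:
    f.write(body)
os.replace(tmp, TARGET)   # atomic swap; readers never see a partial feed

Because the result is a plain file, the web server handles Last-Modified, ETag, and 304s for free.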
one thing that would help (Score:2)
Compression (Score:3, Insightful)
51894b boingboing.rss.xml
17842b boingboing.rss.xml.gz
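Those numbers are easy to reproduce. A quick sketch against a local copy of a feed (filename illustrative):

import gzip

raw = open("boingboing.rss.xml", "rb").read()
packed = gzip.compress(raw, 9)
print("%db raw, %db gzipped, %.0f%% saved"
      % (len(raw), len(packed), 100 * (1 - len(packed) / len(raw))))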
Comment removed (Score:3, Insightful)
Re:Usenet? (Score:2)
Re:Shouldn't it be just the opposite?? (Score:2)
The only thing I can think of is perhaps RSS makes syndication TOO simple. Perhaps fewer people would be requesting updates on the news (or whatever) from a site if they didn't offer the feeds. I mean, without RSS, many users would rather go without the updates than access the site and get the images and everything.
That is the only argument I can come up with, and I think it's fallacious.
Seems somewhat reasonable. (Score:2)
This seems like a fair method of reducing the amount of throughput... only permitting a certain number of requests per hour per user, or whatever time period one wishes.
I'm pretty sure there are other ways of going about it, though.
1. Send a header which specifies when the feed was last downloaded from this location. If I downloaded the feed an hour ago, I don't need the feed to contain articles which occurred half a day ago.
2. Include fewer articles in the RSS.
3. Push the RSS updates to users, using XM
That's one possibility, yes. (Score:2)
If I load a page through my Firefox, all the advertisements get blocked. So they surely aren't getting any revenue from my downloading of the entire HTML.
Meanwhile, if I load the story page via the RSS reader in Thunderbird, I can't block the ads. :-)
So clearly it can work both ways.
Re:Should all sites... (Score:2)
Re:RSS rules. (Score:2)
I agree with you. (Score:3, Informative)
HTTP compression will work even better here than it does for regular pages - RSS is basically all text so every response is going to be compressible. Looking at a handful of my feeds, some
Re:New idea? (Score:2)