
Napster and Gnutella Measurements

belswick writes "The University of Washington has posted a paper titled "Measuring and Analyzing the Characteristics of Napster and Gnutella Hosts" in PDF form. Interesting reading for those who implement P2P software, with actual measurements, tools, and topologies. You 3l33t H4x0rz are ACM members, R1gh4?" You can get a cached copy of the PDF and view it online as well.

  • ...here's the HTML Version [216.239.53.104] courtesy of Google [google.com].
    • My company's intranet has all of our insurance claim forms on the web. They had recently redone the site, and I needed a claim form. I tried to do it from work, but seeing how I have an Ultra 5 and a client with a very restrictive software policy, I couldn't view the Word document posted. I figured it was the form itself, so I waited until I got home. When I did, I opened up the document, and was astonished to find that the document contained one line: a link to the insurer's website.

      I would like to ge
    • ...here's the HTML Version courtesy of Google.

      And here's the text summary from the researcher:

      Stop using P2P clients you fscking pirates, you're wasting all my pr0n bandwidth at the university.

    • The editors changed the story and didn't mark it as an update OR give credit! Thanks a lump, guys.
      • Hmm... What's even funnier is that if I'd had mod points and seen that one, you'd have gotten a "redundant" from me :)
  • by MattRog ( 527508 ) on Monday November 10, 2003 @11:16AM (#7434465)
    The numbers are all from early 2001. Napster has since been killed and reborn with an entirely different model, and Gnutella (KaZaA et al.) has exploded. What's the point of this report given the ancient data?

    Given those changes wouldn't it be more valuable to see if their hypotheses and conclusions hold up with the new data?
    • academia. (Score:5, Informative)

      by rebelcool ( 247749 ) on Monday November 10, 2003 @11:25AM (#7434536)
      Paper publishing takes a long time. Gathering and analyzing data and making sure you're coming to proper conclusions takes lots and lots of research and double-checking. And then there's peer review, which takes months as the paper gets submitted to academic peers who read, analyze, and comment on it - when they've got the time.

      In any case, the data points themselves aren't as relevant as the topology and structure of growth. It doesn't matter that the data is from 2001; there's plenty to be learned from it.
    • by molarmass192 ( 608071 ) on Monday November 10, 2003 @11:27AM (#7434548) Homepage Journal
      Not to nitpick but Kazaa isn't based on Gnutella, it's based on FastTrack. They're both P2P but FastTrack is a closed system while Gnutella is an open one.
  • by boogy nightmare ( 207669 ) on Monday November 10, 2003 @11:17AM (#7434472) Homepage
    Do you alter the phrase from 'peer to peer' to 'idiots to me' in the privacy of your own head :)

  • mutella (Score:3, Insightful)

    by jargoone ( 166102 ) on Monday November 10, 2003 @11:17AM (#7434476)
    Anyone else a big mutella fan? I always run it in a screen session, with the web interface enabled. I love that I can use the same session, from a terminal at home, ssh'ed in, over the web interface on my LAN, or through an https tunnel. Great piece of software, highly recommended.
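
    For the curious, the "same session from anywhere" trick boils down to tunnelling the client's web interface over SSH. Here is a minimal Python sketch of the idea; the host name and port are made-up placeholders, not mutella defaults:

      import subprocess, time, urllib.request

      # Hypothetical host and port: substitute your own box and whatever port
      # your client's web interface is configured to listen on.
      HOME = "user@home.example.org"
      PORT = 8080

      # Open an SSH local forward: localhost:PORT -> the home box's localhost:PORT.
      tunnel = subprocess.Popen(["ssh", "-N", "-L", f"{PORT}:localhost:{PORT}", HOME])
      try:
          time.sleep(3)  # crude wait for the tunnel to come up
          # The client's web interface is now reachable as a plain local URL.
          page = urllib.request.urlopen(f"http://localhost:{PORT}/", timeout=10).read()
          print(page[:200])
      finally:
          tunnel.terminate()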
    • Yeah, I was one. Not anymore: since I learned of the existence of giFT [giftproject.org], I'm not using it any more.

      giFT works as a small server to which clients can connect. So I can control it graphically at home (using giFToxic [sourceforge.net]) or remotely (using ssh and giFTcurs [nongnu.org]).

      Also, giFT turns all that research into garbage, since it can connect to several servers of several different types. It currently comes with OpenFT (giFT's original protocol) and Gnutella by default, but you can also find a FastTrack network plugin for it. There i
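
      As a rough picture of that daemon-plus-front-ends layout, here is a toy local daemon and a thin client in Python. This is NOT giFT's real interface protocol; it only shows the shape of the architecture (one long-running daemon, any number of clients on a local port):

        import socket, threading

        HOST, PORT = "127.0.0.1", 51213   # arbitrary port chosen for the example
        ready = threading.Event()

        def daemon():
            with socket.socket() as srv:
                srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                srv.bind((HOST, PORT))
                srv.listen()
                ready.set()
                conn, _ = srv.accept()
                with conn:
                    cmd = conn.recv(1024).decode().strip()
                    # A real daemon would dispatch searches and downloads here.
                    conn.sendall(f"ok: received {cmd!r}\n".encode())

        threading.Thread(target=daemon, daemon=True).start()
        ready.wait()

        # Any front-end, graphical or text-mode, talks to the same local daemon.
        with socket.create_connection((HOST, PORT)) as cli:
            cli.sendall(b"search something\n")
            print(cli.recv(1024).decode())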
  • by Sanity ( 1431 ) * on Monday November 10, 2003 @11:21AM (#7434499) Homepage Journal
    Freenet's next-gen routing [freenetproject.org] algorithm does detailed analysis of node performance and incorporates this into its routing decisions. In effect, Freenet already implements their proposal, neatly integrating it into the Freenet routing algorithm.
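
    For readers curious what "incorporates node performance into routing decisions" can look like, here is a toy illustration in Python. It is not Freenet's actual NGR code; it just keeps a running estimate of each neighbor's response time and routes to the one expected to answer fastest:

      import random

      class Node:
          def __init__(self, neighbors):
              # Exponentially weighted average of observed response times (seconds).
              self.est = {n: 1.0 for n in neighbors}

          def pick_neighbor(self, eps=0.1):
              if random.random() < eps:
                  return random.choice(list(self.est))   # occasionally explore
              return min(self.est, key=self.est.get)     # otherwise exploit the best

          def record(self, neighbor, seconds, alpha=0.2):
              # Fold the newly observed response time into the running estimate.
              self.est[neighbor] = (1 - alpha) * self.est[neighbor] + alpha * seconds

      node = Node(["A", "B", "C"])
      for _ in range(200):
          n = node.pick_neighbor()
          observed = random.uniform(0.1, 2.0)   # stand-in for a real network request
          node.record(n, observed)
      print(node.est)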
    • I have tried Freenet numerous times over the years, but every time it has proven to be dog slow. If they have implemented said algorithms, why is the performance still so bad?

      • from http://freenetproject.org/index.php?page=download

        "Download Freenet
        Important note for first time users
        When you first start Freenet your node will know very little about the network - it could take up to several minutes or longer to open a website. Keep trying, because the more you use Freenet, the faster it will get. "
      • If they have implemented said algorithms, why is the performance still so bad?

        It's written in Java. As much as Java zealots deny it, the fact remains that Java apps are all really bloated and slow.
    • The Freenet network has been HORRIBLE lately; whereas you used to be able to download videos at quite a good speed, now it's nearly impossible to fetch 5k text documents.
    • by Anonymous Coward
      Unfortunately, even with NGR, Freenet suffers from the fact that you can never be sure what is actually there and up-to-date.

      You can start downloading a splitfile, it'll successfully start...and then half-way through (or even 90% through), decide that it can't find the rest of the blocks required. Retrying may help, or it may not. All the blocks might not even exist on the network any longer. Then again, Freenet's purpose is quite different compared to other P2P systems.

      If Freenet sites were kept up-to
    • Have you tried integrating a virgin node recently?

      I have, and it's not gone well.
  • Temporary Mirror (Score:5, Informative)

    by arvindn ( 542080 ) on Monday November 10, 2003 @11:24AM (#7434529) Homepage Journal
    http://theory.cs.iitm.ernet.in/msj.pdf
    http://theory.cs.iitm.ernet.in/msj.txt
  • outdated (Score:5, Insightful)

    by milamber.net ( 188526 ) on Monday November 10, 2003 @11:25AM (#7434537)
    This is a very old (in internet terms) report. Its results are taken from when Gnutella and Napster were the two "popular" P2P architectures (the report refers to data gathered in May of 2001). Since then Napster has died and been reborn in a new form, and Gnutella has adopted a two-tier topology like Kazaa's. Companies such as Clip2, which are long dead and buried, are referenced, and bandwidth usage is stated as of 2001, making the results useless and the references impossible to find.

    I assume it's an old report resubmitted by somebody who doesn't know better; otherwise, research like this is worse than useless because it provides completely inaccurate results.
    • It's not totally useless.

      In fact, it's a pretty useful comparison of the _techniques_ used (it wouldn't matter if it were from year 1023 or year 4034).

      You just shouldn't treat it as your usual "hottest gfx/cpu/mem/hd/usb-device" comparison.

  • by AKAImBatman ( 238306 ) <akaimbatman@gmaiBLUEl.com minus berry> on Monday November 10, 2003 @11:27AM (#7434545) Homepage Journal
    27 comments and not one actually on topic. Does anyone care about these statistics? Or know what to do with them? For that matter, has anyone successfully read the paper without their eyes glazing over? I'm sure it's a fascinating paper to someone, but I can't get past the first two pages without losing all concentration. And I'm weird enough that I usually like this stuff!

  • by Adam Fisk ( 536262 ) on Monday November 10, 2003 @11:27AM (#7434553)
    This study is based on extremely old data and is not particularly relevant for today's Gnutella. The Gnutella crawl data is from 2001, a time when Gnutella was a vastly different network with a completely different searching architecture. Gnutella at the time was a very young protocol. Since then, the search architecture has moved beyond the flooding model, now using a combination of distributed indexing and "dynamic querying." These techniques are specified in detail here [yahoo.com].

    The data on average number of shared files and uptime is interesting, but there's really not a lot in here that is actually useful for peer to peer development. There's a lot of active, very useful research being done elsewhere. The folks at Stanford have done a great deal of work in this area, much of it very applicable. Their work is here [stanford.edu].
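
    To make the "dynamic querying" point above concrete, here is a toy Python sketch of the idea: probe a few connections at a time and stop once enough results are in, instead of flooding every neighbor at once. It is an illustration of the concept, not the actual Gnutella specification:

      import random

      def dynamic_query(neighbors, probe, wanted=150, batch=3):
          results = []
          for i in range(0, len(neighbors), batch):
              for n in neighbors[i:i + batch]:
                  results.extend(probe(n))      # query this small batch
              if len(results) >= wanted:        # enough hits: stop early and spare
                  break                         # the rest of the network
          return results[:wanted]

      # Fake probe function standing in for a network round trip.
      fake_probe = lambda n: [f"{n}-hit{i}" for i in range(random.randint(0, 80))]
      hits = dynamic_query([f"conn{i}" for i in range(20)], fake_probe)
      print(len(hits), "results")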
  • Encrypted P2P ... (Score:4, Informative)

    by bigwavejas ( 678602 ) on Monday November 10, 2003 @11:29AM (#7434563) Journal
    Slashdot ran an article a while back regarding "a secure, distributed mesh-like networking protocol and platform called Waste." See the article at:
    http://slashdot.org/articles/03/05/29/0140241.shtml?tid=126&tid=93

    I've been using the software to send files securely to trusted friends, and I wonder if this isn't the direction sharing mp3s will go in the future, in order to avoid the RIAA.

    In any case... Nullsoft has since banned the software, but it's still available under the GPL at sites like:
    http://grazzy.mjoelkbar.net/waste/mirror/

    Snarf on!
    F the RIAA

    • Well, at least it was available until you slashdotted it, you insensitive clod!
    • It works fine and is up on SourceForge. The problem is that it's for small nets only, i.e. no more than 50 people, and the performance is not the best.

      What it is really good for is as a mini-groupware application. You can be in a hostile environment (the internet) and your shared files and messages are relatively secure.

  • by smd4985 ( 203677 ) on Monday November 10, 2003 @11:32AM (#7434585) Homepage
    can be found here [limewire.com].
  • The name Napster didn't seem like a catchy name to begin with. It also got associated with "song theft" or "Geek/Techie rights fighters."

    The Napster of today isn't NEARLY as fast, due to the fact that it's no longer a peer-to-peer system. It can't even begin to compete (bandwidth-wise) with Gnutella. On a 56k modem I can get a song off of LimeWire (which partially uses the Gnutella network) in about 3 to 3.5 minutes. On Napster, it takes about 8 minutes. Neither is a problem with broadband.

    That said, I could also
    • Grammar corrections:

      The name Napster didn't seem like a catchy name to begin with. It also got associated with "song theft" or "Geek/Techie rights fighters."

      The Napster of today isn't NEARLY as fast, due to the fact that it's no longer a peer-to-peer system. It can't even begin to compete (bandwidth-wise) with Gnutella. On a 56k modem I can get a song off of LimeWire (which partially uses the Gnutella network) in about 3 to 3.5 minutes. On Napster, it takes about 8 minutes. Neither is a problem with broadband.

  • BaH! (Score:5, Informative)

    by vDave420 ( 649776 ) on Monday November 10, 2003 @11:38AM (#7434635)

    As a major developer of one of the world's leading Gnutella clients [bearshare.com], I can say this data is old, untimely, and really not "new news" to anyone involved in Gnutella.

    Much of this data is based upon estimates & reported crawler (ha!) data.

    Want some real, hardcore data about Gnutella (or at least the BearShare portion of it)?

    I invented a revolutionary distributed stats system that is in place in the latest versions of BearShare. No more guessing about p2p network information, like transfer bandwidth, etc. Try checking out [bearshare.com] some of my results. [bearshare.com]

    This data is collected from the network, in a brand new, distributed, 'polled-not-crawled' scheme with remarkably fast turnaround times on data (new data points every 5 mins, on average).
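
    As a rough picture of what a polled (rather than crawled) stats pipeline can look like, here is a toy aggregator in Python. It is purely illustrative and not BearShare's actual system: each node reports a small record on its own schedule, and the collector buckets reports into 5-minute windows.

      from collections import defaultdict

      BUCKET = 5 * 60   # seconds

      def aggregate(reports):
          """reports: iterable of (timestamp, node_id, shared_files, shared_bytes)."""
          buckets = defaultdict(lambda: {"nodes": set(), "files": 0, "bytes": 0})
          for ts, node, files, nbytes in reports:
              b = buckets[ts - ts % BUCKET]
              b["nodes"].add(node)
              b["files"] += files
              b["bytes"] += nbytes
          return {t: {"nodes": len(b["nodes"]), "files": b["files"], "bytes": b["bytes"]}
                  for t, b in sorted(buckets.items())}

      sample = [(0, "a", 120, 700_000_000), (90, "b", 40, 90_000_000),
                (400, "a", 121, 705_000_000)]
      print(aggregate(sample))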

    Much, if not all, of the information in the above report is actively being summarized for Gnutella (again, the BearShare portion at least), and some early (non-automated graphing) results can be found in the above links.
    Expect (some of) this data (like node count, shared files/bytes, etc.) to be available on our website (in real time) soon.

    Kinda interesting...
    In any case, the story's data is not novel any more, and certainly not timely. =)

    I like my data collections much better.

    -dave-

    • Re:BaH! (Score:2, Insightful)

      by GigsVT ( 208848 )
      You admit to being responsible for installing spyware on thousands of people's computers?

      I hope they catch you some day. You are no better than any other virus writer.
  • by EvilTwinSkippy ( 112490 ) <yoda@nOSpAM.etoyoc.com> on Monday November 10, 2003 @11:42AM (#7434669) Homepage Journal
    I have a friend who is a history major. He always says that History isn't history until everyone who was there has died.

    I'm starting to get the sense that science really should stick to timeless subjects. Two years is a blink of an eye when preparing a paper on particle physics or mathematics. Two years is at least 8 lifetimes on the internet. By the time you write about it, it's obsolete.

  • When you see leetspeak in the summary, you know to keep your distance from the actual article.
  • I once heard of a weather prediction system, based on a set of mainframes working in parallel, that once it had sufficient data, could predict the weather for the next day to 99.9% accuracy.

    The problem was that the process ran for 5 days, so if it started on Sunday, you could know what Monday's weather would be by the following Friday.

    This study (and I do understand it takes time to pull together this kind of comprehensive usage data in an organized format) falls along the same lines. It would have bee

  • Conclusions (Score:4, Informative)

    by TuringTest ( 533084 ) on Monday November 10, 2003 @12:37PM (#7435160) Journal
    Thanks to the structure of scientific papers, you don't have to actually RTFA in order to know what it's all about:

    5 Conclusions

    In this paper, we presented a measurement study performed over the population of peers that choose to participate in the Gnutella and Napster peer-to-peer file sharing systems. Our measurements captured the bottleneck bandwidth, latency, availability, and file sharing patterns of these peers.

    Several lessons emerged from the results of our measurements. First, there is a significant amount of heterogeneity in both Gnutella and Napster; bandwidth, latency, availability, and the degree of sharing vary between three and five orders of magnitude across the peers in the system. This implies that any similar peer-to-peer system must be very deliberate and careful about delegating responsibilities across peers. Second, even though these systems were designed with a symmetry of responsibilities in mind, there is clear evidence of client-like or server-like behavior in a significant fraction of systems' populations. Third, peers tend to deliberately misreport information if there is an incentive to do so. Because effective delegation of responsibility depends on accurate information, this implies that future systems must either have built-in incentives for peers to tell the truth or systems must be able to directly measure and verify reported information.
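
    To make that last point concrete, here is a toy Python sketch (field names and thresholds invented for the example) of delegating a server-like role only to peers whose measured behavior backs up what they report:

      def eligible_for_server_role(peer):
          reported = peer["reported_kbps"]
          measured = peer["measured_kbps"]
          uptime = peer["measured_uptime_hours"]
          honest = measured >= 0.5 * reported       # distrust wildly inflated reports
          capable = measured >= 1000 and uptime >= 12
          return honest and capable

      peers = [
          {"reported_kbps": 10000, "measured_kbps": 40,   "measured_uptime_hours": 2},
          {"reported_kbps": 1500,  "measured_kbps": 1400, "measured_uptime_hours": 30},
      ]
      print([eligible_for_server_role(p) for p in peers])   # [False, True]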
  • by Anonymous Coward
    Looks like these guys also have some data about FastTrack-based systems that is much more recent:

    Measurement, Modeling, and Analysis of a Peer-to-Peer File-Sharing Workload at http://www.cs.washington.edu/homes/gribble/papers/p118-gummadi.pdf [washington.edu]

    and

    An Analysis of Internet Content Delivery Systems at http://www.cs.washington.edu/homes/gribble/papers/p2p_osdi.pdf [washington.edu]

  • by Jerf ( 17166 ) on Monday November 10, 2003 @12:58PM (#7435353) Journal
    Y'all are missing the point, thanks once again to the many-headed beast that is the word "P2P".

    In this case, the academics are strictly concerned with P2P as a network organization, with little regard to what apps are built on top of it. This has nothing to do with "Napster" or "Gnutella" as "file sharing systems". Instead, Napster and Gnutella are being studied by the academics because they are the only things you can get hard numbers for, because few-to-none of the academic P2P systems have been implemented on such a wide scale. They do not perfectly implement what the academics are studying but they are close enough to provide some data about how other systems might behave in the real world.

    Academic P2P systems tend to concentrate on "pure" P2P, where there are no servers and ideally no "supernodes" (though they'll settle for dynamic organization that emerges from the protocol itself with no human intervention). This is a much different and much harder problem than "Let's share music!".

    The closest thing to a wide-scale academic P2P system that has actually been deployed, that I know of, is Freenet; for ideological reasons (pure P2P, no servers) it shoots for the same goals that the academics pursue for other reasons (mostly that pure P2P systems are hard enough to be interesting, whereas Napster's organization could be created by one teenager without much difficulty; no disrespect to Fanning, but it's basically another variant of client-server). Note how much trouble it has had scaling up, just as Gnutella has; "pure P2P" is friggen' hard in the real world.

    This is "old news" as a couple others have noted because of the peer review process, but to the academics this is valuable to have such peer-reviewed hard data, because you can model and simulate your network to your heart's content, but until you see it in the real world on a large scale you can't be sure it works. Without this kind of hard data they're adrift in a sea of pure theory.

    This paper isn't for "you", so the fact that "you" don't understand what it's for or that "you" think this is useless is rather uninteresting. This paper is for academic P2P practitioners; if you don't know about academic P2P theory, you can ignore this safely. Academic P2P and what "you" think of as P2P are quite different.

    (The "you" here is the "average Slashdot poster to this article. Apply it to yourself (or not) as appropriate.)

    Note that in this paper "academic" is not used as a pejorative; it's just that, as I said, there's such a huge disconnect between academic and non-academic P2P goals that they hardly deserve to be lumped under the same name.
  • by voodoo1man ( 594237 ) on Monday November 10, 2003 @01:55PM (#7435814)
    You 3l33t H4x0rz are ACM members, R1gh4?
    WTF is this supposed to mean? Are you endorsing the ACM? You do realize the ACM is basically one big scam [nhplace.com] of an organization (and, with the web and open document publishing standards, totally unnecessary, like many other commercial "scientific" journals)?
  • In this paper, we presented a measurement study performed over the population of peers that choose to participate in the Gnutella and Napster peer-to-peer file sharing systems. Our measurements captured the bottleneck bandwidth, latency, availability, and file sharing patterns of these peers. Several lessons emerged from the results of our measurements. First, there is a significant amount of heterogeneity in both Gnutella and Napster; bandwidth, latency, availability, and the degree of sharing vary between t
  • by Anonymous Coward
    This thing is plainly very outdated for Gnutella. The recommendations in its conclusion include things like having various levels of responsibility for nodes.

    If it were current, they'd at least have mentioned ULTRAPEERS or LEAF nodes! Gnutella currently DOES have nodes which 'volunteer' to carry more load.
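
    For readers unfamiliar with the ultrapeer/leaf split, here is a toy Python sketch of how a node might decide to "volunteer" to carry more load. The thresholds are invented for illustration and are not the actual Gnutella ultrapeer criteria:

      def should_become_ultrapeer(uptime_min, down_kbps, up_kbps, firewalled, slots):
          return (not firewalled
                  and uptime_min >= 60     # has been stable for a while
                  and down_kbps >= 1000    # enough bandwidth in both directions
                  and up_kbps >= 256
                  and slots >= 30)         # can actually serve a set of leaf nodes

      print(should_become_ultrapeer(120, 3000, 512, False, 40))   # True
      print(should_become_ultrapeer(10, 3000, 512, False, 40))    # False: too young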

    In conclusion: it's really not worth reading anymore, because the designs they studied are dead and have already been replaced.

    -Terr
  • I first read that headline as "Napster and Genitalia Measurements". I guess that doesn't make much sense at all.
