Programming IT Technology

Finally Real P2P With Brains 237

dfelznic writes: "The mp3 archives of CodeCon are now available, which is news in itself. But what makes this really interesting is that they are being distributed by BitTorrent. BitTorrent allows users to download a file from multiple different people. Instead of everyone nailing one server, users get the file from other users. Furthurnet uses a similar technology to distribute legal bootlegs of concerts. The archive is available at the BitTorrent demo downloads page. As soon as I started downloading (cable modem) at around 300k I got a request for the file and began uploading at 40k. This could be the answer to the slashdot effect;) Now, who is going to be the first to complain about the use of mp3s instead of oggs?"
This discussion has been archived. No new comments can be posted.

  • Nice. (Score:2, Interesting)

    by Renraku ( 518261 )
    Nice idea, I have to say, but my biggest problem with file-sharing utilities is the fact that the file you're looking for isn't going to be the same with everyone. NudeCheerleader(part1).mpeg isn't going to be the same as NudeCheerleader(part1).mpeg on someone else's comp. There's not a way I know of besides implementing CRC to prevent people from just renaming files into other things. Maybe NudeCheerleader(part1).mpeg is really GoatseLiveVideo.mpeg, just renamed.
    • Re:Nice. (Score:5, Insightful)

      by shankark ( 324928 ) on Tuesday March 19, 2002 @10:19PM (#3191928)
      That's true. But I think a workaround for this would be to have md5 signatures computed for each of these parts and verify them before they are downloaded. I'm not sure if this isn't being done by others already.
      • Re:Nice. (Score:2, Informative)

        by stype ( 179072 )
        Furthur does use MD5s to tell concerts apart. I've seen a lot of incomplete files on other programs with the same name that get mixed up with the complete ones, but Furthur makes it really hard to access files before the download is done. It doesn't really get the files in order, necessarily. It's possibly the greatest piece of code made by hippies since Unix.
    • The size of the file is often used to determine if two files are the same. Even if your two files are similar in size, they will not be the exact same number of bytes; therefore, size is actually a very accurate way to tell if two files are the same. I believe that Morpheus used this method.
    • Re:Nice. (Score:2, Interesting)

      by ironfroggy ( 262096 )
      I think they compare other things, such as the extra info (title, author, etc.) as well as dates and file sizes. I've seen (on Morpheus, Gnutella, etc.) many times when the same filename comes up as separate results.
    • Re:Nice. (Score:2, Interesting)

      by jerryasher ( 151512 )
      I believe this is what Bitzi [bitzi.com] provides (or was supposed to?) -- a way to register files and look up various pieces of information:
      With Bitzi:

      * You can look up descriptions, comments, and ratings about your files - or contribute such info yourself
      * Our precise digital fingerprints match info to exact files, so you can distinguish between similar files and search for the very best versions
      * Future file-sharing tools can assure you of a file's contents before you begin downloading
      * Infected or mislabeled files can be flagged, and so discovered or ignored before doing any harm

      The Bitzi catalog is an open resource built by a community of fans, developers, and creators. To get started:
    • Gnutella/Limewire already does this... (I said nice butt... hehehehe) ~Dr. Weird
      • Re:Nice BUT.... (Score:2, Interesting)

        by Anonymous Coward
        The gnutella spec specifies the use of SHA, *NOT* CRC32 or MD5, as some others have recommended. Both of the latter two can be exploited to pass garbage by a check (with CRC32, you have some control over the content, even).

        MD5 is *not* suitable for ensuring that two files are identical when a malicious user is involved. It *is* suitable for ensuring that a malicious user can't hand you anything that passes the check except pure garbage (given what we know about MD5 today).

        CRC32 is totally unsuitable for any environments that could involve malicious users.

        SHA is the only common hash appropriate for this sort of problem.
        • Re:Nice BUT.... (Score:3, Insightful)

          by treat ( 84622 )
          MD5 is *not* suitable for ensuring that two files are identical when a malicious user is involved. It *is* suitable for ensuring that a malicious user can't hand you anything that passes the check except pure garbage (given what we know about MD5 today).

          I challenge you to find me any two sets of data with the same md5.

    • NudeCheerleader(part1).mpeg isn't going to be the same as NudeCheerleader(part1).mpeg

      This is where a good hashing algorithm would be great, e.g. MD5 hashes to determine whether different users have the same item. This seems mandatory; they'd be silly not to have something like this in place.
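
      As a rough sketch of the per-chunk hashing idea discussed in this thread (the chunk size and the choice of SHA-1 here are assumptions for illustration, not any particular client's actual format):

        import hashlib

        CHUNK_SIZE = 256 * 1024  # assumed chunk size, purely illustrative

        def chunk_hashes(path):
            """Return one SHA-1 digest per fixed-size chunk of the file."""
            hashes = []
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(CHUNK_SIZE)
                    if not chunk:
                        break
                    hashes.append(hashlib.sha1(chunk).hexdigest())
            return hashes

        def verify_chunk(expected_hash, data):
            """Check a downloaded chunk against the hash published by the content provider."""
            return hashlib.sha1(data).hexdigest() == expected_hash

      With per-chunk digests like these, a renamed or corrupted file fails verification no matter what it is called.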

    • eDonkey hashes files (Score:2, Interesting)

      by bonch ( 38532 )
      You download file chunks from multiple people, and files can even have completely different filenames. All files are given a hash value to compare against.

      Speaking of good things about eDonkey, there are also forced uploads, meaning no losers cutting off your downloads on you.
    • NudeCheerleader(part1).mpeg isn't going to be the same as NudeCheerleader(part1).mpeg on someone else's comp.

      Then download both! You will have to visually inspect the contents of both to really tell if they are same or not. It's called "research".
  • edonkey2k (Score:2, Informative)

    by Anonymous Coward
    edonkey has been doing this for ages..
  • Comment removed (Score:5, Informative)

    by account_deleted ( 4530225 ) on Tuesday March 19, 2002 @10:19PM (#3191922)
    Comment removed based on user account deletion
  • Doesn't Getright do this already?

    And with filesharing, Kazaa/Grokster?
    • GetRight does do multiple downloads, but they are generally from the same source, which helps speed things up, but not that much. GetRight can do multiple sources, but it doesn't find the other sources automatically; you have to find them yourself. (Yeah, it has a built-in Lycos FTP search for files, but that never returns any results.)
    • Last time I checked, Kazaa does this, but not well. It doesn't seem to be very intelligent about choosing which connections are good (and therefore should be given more of the file to download after they finish a currently-assigned chunk) and which connections are bad (and therefore should be dropped, not left to continue at 0.01K/sec!).

      [ On a side note, GetRight allowed for more control over where to download from (and did allow multiple sources, last time I used it -- about a year+ ago). In fact, I used GetRight to download linux ISOs from multiple sources at once :). ]

      Anyway, does this system offer *better* multi-connection filesharing (ie, more intelligent?), or does it keep slow connections, and fail to recognize that a fast connection just finished and should be given more of the file to download?

  • by mo ( 2873 ) on Tuesday March 19, 2002 @10:20PM (#3191935)
    I hate to rain on the parade, but Morpheus et al., as well as the latest version of BearShare, do this, and have for some time.
    When you say p2p with brains, to me it means somebody has come up with an elegant balance between centralization and search speeds.
    • Ok.. I knew both of those clients would do parallel downloads, but I didn't realize they were capable of sharing 'chunks' of a file. That is, sharing a file that has not yet completely been downloaded.

      Next time I run into something EDonkey doesn't have, I'll have to try out Bearshare.
    • When you say p2p with brains, to me it means somebody has come up with an elegant balance between centralization and search speeds.

      Ditto, Holmes. The real question is the scalability issue [darkridge.com], and I'm not convinced that the traffic cop features implemented by Gnutella front-ends have really sorted this out.

      When that's the case, that will be some p2p with brains. Right now, the networks only seem to be hanging on because the critical mass of crash-inducing traffic hasn't hit the super-peers yet [com.com]; at least not on a permanent basis.

      What would really make my evening interesting is if someone would be kind enough to contradict me.
    • The next version of eDonkey is supposed to implement a new method of decentralization that requires no servers for search requests. I sure can't wait, since the current eDonkey network is beginning to succumb to the strain of its popularity.
  • And this is new? (Score:5, Informative)

    by Edgewize ( 262271 ) on Tuesday March 19, 2002 @10:21PM (#3191940)
    Could someone explain how this is different from FastTrack [fasttrack.nu] (Kazaa), eDonkey [edonkey2000.com], or the more reputable Swarmcast [opencola.com]?

    Peer broadcasting is hardly something to write /. about, I'd say.

    • Re:And this is new? (Score:3, Interesting)

      by mo ( 2873 )
      They're marketing BitTorrent as a solution to web providers with bandwidth limitations. The client registers a mime-type so when you click on a BitTorrent download link it hands it to the p2p client which then downloads it from the network.
      The technology is nothing spectacular, but it's nice to see a simple install method that integrates nicely into the browser.

      One interesting side-effect of this implementation is that there is no searching. You only download stuff from BitTorrent if you find a link on a web page for it. However, without the requirement for searching, Freenet would be a great replacement for this role of browser-download accelerator. All you really need to do to implement this would be to provide a nice installation .exe of Freenet that could parse a meta-file pointing to the Freenet key of the object you wanted to download.
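
      To make the handoff concrete, here is a hypothetical sketch of the flow: the web server publishes a tiny pointer file with a registered MIME type, the browser hands it to the helper application, and the helper contacts the broker named inside it. The JSON format and every field name below are invented for illustration; this is not the actual BitTorrent metafile format.

        # Hypothetical pointer-file handling; format and field names are invented.
        import json

        def parse_pointer_file(path):
            """Read the tiny file the web server handed to the browser."""
            with open(path) as f:
                meta = json.load(f)
            return (meta["broker_url"], meta["file_name"],
                    meta["file_length"], meta["chunk_hashes"])

        # Example pointer file the web site might publish:
        # { "broker_url": "http://example.org/broker",
        #   "file_name": "codecon-talk.mp3",
        #   "file_length": 31457280,
        #   "chunk_hashes": ["...", "..."] }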
      • The client registers a mime-type so when you click on a BitTorrent download link it hands it to the p2p client ...

        Again, edonkey already does this. Has for a while. I imagine the other mentioned programs can as well.

        What makes BitTorrent new is they are actually trying to get it used for a legitimate application instead of just arguing that "people could use it for something other than piracy".

        Now, I don't know how long BitTorrent has been around, but it appears not to be new (too many "I can download foo real fast with it" comments to think it's just out). This is possibly not their original goal and really just something to point at if a lawyer comes calling (or to try and get VCs to come calling).

        • Now, I don't know how long BitTorrent has been around

          From the author's site [bitconjurer.org]:

          BitTorrent has been created by me, Bram Cohen as a full-time project over the last eight months.
      • Freenet can be accessed through freenet: URI links (like <a href="freenet:SSK@npfV5XQijFkF6sXZvuO0o~kG4wEPAgM/ homepage//">) if you have a URI-handling plugin, which to my knowledge has been written for at least IE and Mozilla. Unfortunately, they are not going to be allowed into the default Freenet install for the foreseeable future for political reasons (i.e., Ian doesn't like them).

        You can still use fproxy to access freenet through a web browser, so I suppose you could use a http link to the fproxy presumably running on your user's localhost, but that's somewhat broken and unlikely to catch on.

        --
        Benjamin Coates
    • Re:And this is new? (Score:5, Informative)

      by PureFiction ( 10256 ) on Tuesday March 19, 2002 @10:31PM (#3191987)
      Sure. FastTrack is a decentralized file-sharing network where there is no guarantee that what you ask for is what you get. They may be CodeCon mp3s, they may be nasty midget porn incognito.

      eDonkey likewise is more of a file-sharing (aka, keyword search, then download hits) method.

      Swarmcast is the closest relative to BitTorrent, but BitTorrent avoids the FEC encoding and cryptographically secure block verification in favor of a more centrally controlled broker that uses multi source downloading at various offsets to accomplish the same task.

      In short, BitTorrent is a distribution system where a central server provides content, and peers requesting that content join a multisource downloading group where they also share offsets of data with each other (preferably) and download from the central server when necessary.

      This isn't file sharing (really); this is content distribution in a fast and effective manner using peer networking concepts.
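
      A toy sketch of the "multi-source downloading at various offsets" idea described above (the peer objects and their fetch_range() method are placeholders; real clients negotiate this over their own wire protocols):

        # Toy sketch: split a file into ranges, fetch different ranges from
        # different sources, and fall back to the central server for any
        # range a peer cannot supply. fetch_range() is a placeholder call.
        def plan_ranges(file_length, piece_size):
            """Yield (offset, length) pairs covering the whole file."""
            offset = 0
            while offset < file_length:
                yield offset, min(piece_size, file_length - offset)
                offset += piece_size

        def download(file_length, piece_size, peers, central_server):
            parts = {}
            for i, (offset, length) in enumerate(plan_ranges(file_length, piece_size)):
                source = peers[i % len(peers)] if peers else central_server
                data = source.fetch_range(offset, length)       # placeholder call
                if data is None:                                 # peer didn't have it
                    data = central_server.fetch_range(offset, length)
                parts[offset] = data
            return b"".join(parts[o] for o in sorted(parts))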
      • Re:And this is new? (Score:2, Informative)

        by Edgewize ( 262271 )
        OK, so it is similar to eDonkey but without the problematic public servers. And it is basically identical to Swarmcast, but less robust and potentially slower to complete a file. Both of those also support linking, though each in their own way - I believe that Swarmcast is through Java applet parameters, while eDonkey intercepts ed2k:... style links.

        Just wanted to know where BitTorrent stood in the grand scheme of things.
        • You're missing the point. This is content distribution, where those "problematic public servers" are actually useful, e.g., indie bands sharing music (etree). And they serve as quality control; you know you're getting the files from a central source.

          Regarding eDonkey, they provide no detailed information on how they implement multisource downloads, so it may be that they are actually very similar. Who knows.

          Lastly, Swarmcast is designed for larger decentralized peer networks where no central broker is present. In that scenario FEC and cryptographically secure block transfer make sense, but it's a lot of overhead for simply transferring a file. I don't see how you can say BitTorrent is potentially slower. BitTorrent will be faster, as it avoids that overhead entirely, and has a centralized broker/server to fall back on.

  • Why would these people use a closed format with possible impending royalty fees when they could use an uber-1337 open source format instead?

    :-)
  • You have a great product, many customers, and are delivering your product to hordes of happy
    customers online....

    The key to cheap file distribution is to tap the unutilized upload capacity of your customers. It's free.


    Emphasis theirs.

    Free to who? You? Maybe not them. Nothing is free. I like the idea, but I really don't like the way they are selling it.
  • by xfs ( 473411 )
    and kazaa and morpheus (the old one) have been doing the multiple-user download for quite some time
  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Tuesday March 19, 2002 @10:26PM (#3191958)
    Comment removed based on user account deletion
    • by billstewart ( 78916 ) on Wednesday March 20, 2002 @12:25AM (#3192400) Journal
      If you're shipping around small files, like MP3s, there are lots of transfer systems that can do the job. But the Lossless Compression movement for music means that a concert tape is typically a few hundred megabytes large, maybe 1/3 the size of the uncompressed original, so it takes much longer to download, just as ISOs for Linux distributions are large. In that environment, you can't always depend on connections being up for a long enough time, so you need to be able to download parts of files, and swarming systems like BitTorrent help a lot.
  • BitTorrent allows users to download a file from multiple different people. Instead of everyone nailing one server, users get the file from other users.

    eDonkey [edonkey2000.com] does one better. Even if you only have parts of the file downloaded, you can immediately send the parts you do have to other users. And eDonkey has had a pretty good track record. I thought everyone and their mother knew about this, so why was this a Slashdot headline, especially when it's pretentious and untruthful?

  • by Com2Kid ( 142006 ) <com2kidSPAMLESS@gmail.com> on Tuesday March 19, 2002 @10:32PM (#3191992) Homepage Journal
    It is a browser plugin (IE) that creates mini distributed networks based around a website.

    So say you start downloading the latest Counterstrike patch from some server. Well you know how servers giving out the CS patch get filled up quickly.

    Well, if the users were running this program (it plugs into IE, no restart necessary; look if there is a {browser here} version yourself!), then when they started downloading, somebody ELSE could start downloading FROM them.

    No file-sync issues (same file, same source): the server just redirects future downloaders to current downloaders and has the original downloaders forward the files along.
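
    A minimal sketch of the server-side bookkeeping that comment describes: the server keeps a registry of who is currently fetching each file and points new arrivals at them. The class and method names here are assumptions for illustration, not the actual implementation.

      # Minimal sketch of a per-file registry of active downloaders.
      # Illustrative bookkeeping only, not the real BitTorrent tracker code.
      from collections import defaultdict

      class DownloadBroker:
          def __init__(self):
              self.active = defaultdict(set)    # file name -> set of (host, port)

          def announce(self, file_name, peer):
              """A client announces it has started downloading file_name."""
              others = set(self.active[file_name])
              self.active[file_name].add(peer)
              return others                     # the newcomer fetches chunks from these

          def depart(self, file_name, peer):
              """A client announces it has finished or gone away."""
              self.active[file_name].discard(peer)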
  • Loudcloud was working on something like this for a little while... something called "bitcasting"?
  • Ok people, I know we all have this dream of distributed web serving, but as a web developer I feel I must explain why this will not work now:

    1) Response Time

    To make this work you need more than a fancy P2P network. Remember, sites like Slashdot are database-backed and update very quickly. Sure, Slashdot caches pages, but many things like user preferences and comments are updated way too quickly for a P2P network to distribute them.

    2) Security

    Yes, you can encrypt, but who other than a hobbyist is going to put the content that represents them on several machines at once and expose themselves to someone breaking it? If someone were successful, they could do things like change the Slashdot homepage for those they are distributing to. You cannot be a credible source and distribute yourself like that.

    3) Slashcode (yeah I know, slashdot specific)

    Have any of you actually read Slashcode? I'll tell you what, it is damn complicated. There is no way a simple patch is going to make a site like this distributable. The entire thing would need to be redesigned, which is no small job. I'd say that this would be the case for any database-backed site as well.

    4) Databases

    Since I mentioned databases a few times already, I think I'll point out the flaw here. Name one database system that can handle an organic network of servers (i.e., constantly going up and down), keep all the data available, keep all the data available on a reasonable connection (not behind 56k lines), give the response time you need, doesn't take up huge amounts of system resources, and can easily be set up on one of the P2P nodes by even a reasonably competent user. Oh, that's right: none. And you have to have that in order to have a dynamic site on a P2P network, which is a huge portion of the web at this point.

    Well, that's all I can think of right now on this, but I'm sure there are plenty of other reasons why this isn't feasible in the near future.

    Cheers
    • You make some valid points. Distributing dynamic web content is immensely difficult. There are some projects that attempt to do it, and usually, synchronization servers come into the mix at some point (data loss can be avoided by tracking server availability and assigning new servers as needed, you quickly get mathematical safety). The problems are not unsolvable, but I don't see any large corporation pushing the idea, so it will probably still take a while. Realize that this is about high load on static large files, though, where P2P makes very immediate sense.
    • by Com2Kid ( 142006 ) <com2kidSPAMLESS@gmail.com> on Tuesday March 19, 2002 @10:49PM (#3192054) Homepage Journal
      Uh.. . .

      The idea was to help sites that GET linked to BY Slashdot. /. itself is not, AFAIK, having bandwidth problems.

      You know, those small user pages with some cool casemod on it?

      This network would allow viewers of the site to download the images from EACH OTHER instead of from the main server.
      • The idea was to help sites that GET linked to BY Slashdot. [...] This network would allow viewers of the site to download the images from EACH OTHER instead of from the main server.

        I may be incorrect, but couldn't this scenario be remedied by FreeNet? Doesn't FreeNet distribute and cache popular content all over the place?

        If the Slashdotters won't support FreeNet, nobody will.
      • While it can help out those slashdotted sites with cool casemods, one of the cool things about BitTorrent is that it doesn't just work on whole files at a time, like Freenet and the Napster followons - it's made to handle individual chunks of files (e.g. separate megabytes of CD-sized files), so once you've downloaded a chunk that other people want, you can start uploading that to other people while you're downloading the next chunk. This means that you're able to do useful work before you've finished downloading the whole file, and instead of everybody downloading Meg 1 from the server, then Meg 2, different people get different chunks to download, and can share them with each other.

        Also, because you're typically downloading a few tens to hundreds of megabyte chunks, you're a useful server for 90-99% of the time you're downloading, rather than the Freenet model where you're only useful *after* you've finished downloading the stuff you want. So instead of a long-term persistent set of users who always want stuff, BitTorrent is designed for temporary communities of people who want stuff Right Now, and it doesn't depend on them hanging around being useful after they've got what they want. (So you can download the latest release of a Debian ISO and then go install it without feeling like you're depriving the community by taking your machine offline.)

        BitTorrent might be able to manage larger numbers of smaller files, e.g. a Slashdot event, but I haven't looked lately, and it's more interesting for the bigger things. (Of course, some slashdotting problems aren't file retrievals, but server interactions, like that one-IC web server powered by a potato battery, and it doesn't have anything to offer for that :-)

        • The main site says that it is designed as a drop-in replacement for HTTP and even has the potential to match HTTPS services if somebody ever gets around to paying him $$$ to develop that feature. :)

          I do not know how well it scales (hell I have gotten 10 karma points today just for spending 2 f*cking minutes at the site, if even that long. The site has all of 5 or so small pages to it. . . . LOL! ), or how the connections are handled.

          It does NOT appear to help with keeping users anonymous, though; naturally not, I would think, since it shares IPs like mad, LOL!

          "Dumb" file resuming methods (like the one Direct Connect) uses does not need for the person serving the file to have the entire thing, often times I will download the first 300 megs or so of a file from one user and then wait a few weeks until somebody comes along who has the complete file.

          By coincidence I have about 15GB of 1/2 done *COUGH* files *COUGH* on my HD. . . . (no, not MP3s, bleh! :P :)

          But it does work a bit better than those stupid systems that keep a record of where files left off and whatnot.

          Of course, one big disadvantage of not having any sort of file integrity checking is that when a SINGLE mistake is made in a file transfer and the rest of the transfer is completed, that mistake can spread throughout the network exponentially.

          There is actually a corrupted copy of the Yu Yu Hakusho movie that was going around p2p programs for 2 years, and very well MAY STILL BE going around on the networks, that I started passing out on VNN2000 (any other VNN2000 fans out there? I know there are, I first got linked to it from a person's /. sig, LOL! Hia, this is Shiloh, need help with your VNN2k client? :) ) after a transfer of that file from a different user got corrupted.

          On VNN2000 I was one of the few (at times the only) users with that file, so. . . . by the time I realized my mistake I:

          A: Decided screw it I may get around to fixing it some day

          B: Technically it is just a few bytes, so anybody who REALLY wanted to get the data could boot up a hex editor and fix things. . . . if they had insanely large amounts of hex editing skills. :) )

          The error was actually caused by a "known but WTF is causing it????" bug in VNN2000 that resulted in minor file corruptions at times when resumed files were appended to each other. (I forget the exact details, the bug popped up a number of times; once I do believe it added some of the network code to the end of the file, oops. No, 'close connection'[1] isn't a valid part of an MPEG4-encoded AVI frame, just ask WMP. :) )

          {caveman ugh footnotes}
          [1] Me no bothering to look up WTF this would have really been, you worry, you get bin code, {/end caveman ugh}
      • I realize that we are talking about slashdotted sites, but I've heard so many things already about "distributed slashdot" that I felt the need to debunk that. Besides, many of the sites Slashdot links to wouldn't be able to be distributed anyway. Only a static site could be served in this manner, and even then how do you get the browser to pull content from those nodes? And even if we could hack our way through it, we'd just be adding another level of complexity to a system (the web) that is broken to begin with.
    • how many times has a slashdot story linked to a pdf on a server that got /.ed? How many dynamic pdfs do you see floating around? I've got bad news for you: most of the web is not dynamic...
    • Remember, sites like Slashdot are database-backed and update very quickly. Sure, Slashdot caches pages, but many things like user preferences and comments are updated way too quickly for a P2P network to distribute them.

      Do you even know what the slashdot effect is?

      This has nothing to do with changing slashdot itself, it's about the possibility of using software to help distribute the load that slashdot dumps on third party web sites when their home page becomes the subject of a hot story. Slashdot readers could become temporary mirrors for the links.

  • BitTorrent (Score:5, Informative)

    by Eloquence ( 144160 ) on Tuesday March 19, 2002 @10:38PM (#3192013)
    I'm sure Bram will notice his server being slashdotted soon enough, but let me say a few words in defense of BT anyway. What makes BT different from Morpheus and BearShare is that files are sent by users to each other, while they are still downloading. This way, the downloaders themselves act as backup. It's not simple multi-source downloading, but targeted towards content-providers who want to reduce load on a central server. In its advantages and disadvantages it's similar to Multicast. Good for high load for specific files at specific times. Kernel.org should use it.

    eDonkey has the same feature (with some differences in the publishing process), but is really an application of its own, very file sharing oriented, closed-source and banner-supported. Not exactly what a content provider would want users to download before they can access his files. Still, ed2k has the advantage of a large user base, and also supports ed2k:// URIs that can be used on webpages.

    SwarmCast is interesting, but the company behind it mostly died, and now it is somewhat in limbo. Its Java base has made it problematic as a desktop application. The only real alternative to BT is Mojo Nation, which is currently being reworked as "MNet".

    If you want to know what CodeCon is all about, check the Feature box on infoAnarchy, we had some detailed coverage.

    • Kernel.org should use it.

      all the reports i see during a /. effect on kernel.org show the site is handling the traffic very well. i don't believe i've ever seen the FTP servers at 100%, 90's maybe, but not 100. IIRC, they have some _serious_ bandwidth on those servers. if there were an issue, i think the kernel.org folks would probably ask the /. folks to _always_ post a link to the mirror page instead of to the tar.gz file itself. maybe kernel.org actually welcomes the /. effect. you know, something like:

      sysadmin: "don't tell me, we've got a new stable kernel release being posted in an hour?"

      Marcelo: "yep"

      sysadmin: "/. effect coming 3 seconds after its posted?"

      Marcelo: "most likely"

      sysadmin: "bring it on!"

      • Quite possible. But then I think about all the poor starving developers who could put food on their families if kernel.org gave their bandwidth budget to them and used BitTorrent instead ..
        • i see the big "operated by transmeta corporation" right on their front page. this doesn't exactly scream that they're paying the bandwidth bill, but then again it lends itself to being construed as such.

          as a wild guess, say their fat pipe is costing 5k per month. how many starving kernel developers is that going to help out? are there really people who are large contributors to the kernel that also have trouble finding a day job?
          • as a wild guess, say their fat pipe is costing 5k per month. how many starving kernel developers is that going to help out?

            Heh. You don't really want to know the answer to that question :-)

          • Last i heard, the bandwidth for kernel.org was being provided by ISC, and they figured the donation was worth about $8500 a month. that was from a post on LKM
  • What about Gnunet? (Score:5, Interesting)

    by Anonymous Coward on Tuesday March 19, 2002 @10:43PM (#3192036)
    I'm very surprised at how little attention GNUnet [purdue.edu] has gotten in the P2P arena. GNUnet is anonymous, distributed, encrypted, reputation-based, has accounting, allows for distributed queries, and uses dynamic routing. While GNUnet is still beta software, I think it's a great anti-censorship tool. What all this means in non-buzzword speak is that you have a tool that combines a lot of the great qualities from other similar networks (FreeNet, Mojo Nation, etc.) and doesn't have all of the shortcomings. Give it a shot.
  • by goofy183 ( 451746 ) on Tuesday March 19, 2002 @10:46PM (#3192044)
    There seem to be a lot of people who really haven't read the site or don't understand how the technology works. Yes, all those P2P filesharing utilities allow you to download the same file from multiple people at once; it's not all that impressive, and many of the problems such as validating matching files have been worked out.

    This solution is different in a few very large respects. It allows a company to keep track of who is currently downloading a file from their webserver. This information is then sent to the clients, who can start the P2P portion of the process and download segments of the file from other users, relieving the load on the company's server. In contrast, those other P2P FILE SHARING programs share all your files, not just the ones you are currently downloading. A system like this makes the file server not only the original source for that file but also the P2P server for finding other people to download that ONE file from.

    I can see where people may not want their upload bandwidth being used by others. For this reason any site implementing this feature would probably end up having to provide the file for normal download. The selling point would be a possibly faster download for users of the technology.

    I would personally love to see huge sites like FilePlanet put this to use. Granted, it would only be truly useful for sites that have a constant stream of concurrent downloads for a file at any point in time, but it would be much better than having to wait 2 hours in line to download a file :-P
  • Several people commented that this thing allows you to redistribute files before you finish downloading them. But this is not a big deal simply because most of the time a file is not being downloaded, it just sits on the HDD 99.9999% of its life. The gain from the early upload would be next to nothing.
    • it just sits on the HDD 99.9999% of its life

      However its popularity decreases rapidly throughout its life.

      The demand for a file may be incredibly high during the first hour of its life, stay high for a day and then start decreasing rapidly as it becomes 'old news' and more widely available.

      For example, a trailer for the new Star Wars film may take an hour to download and be in huge demand. The next day, the demand is less concentrated since it has been on television, all the hardcore fans downloaded it the second it was available, etc.
      Being able to start the upload before it has downloaded takes enormous pressure off of the sites that have the complete file.

      It may be an unreasonable requirement, but I'm sure it is not an unwanted feature.
    • by billstewart ( 78916 ) on Wednesday March 20, 2002 @12:20AM (#3192380) Journal
    Some environments, like the Gnutella/Napster/Freenet things, have communities that hang around connected for a long time even if they're not downloading anything. But others, like distributing a new release of RedHat/Debian/Mandrake CDs, or even just Mozilla, have a lot of users who want to show up, download stuff, and leave. This feature makes it possible for them to be a temporary community providing services to each other without requiring long-term commitment. If you download a CD using BitTorrent, you're useful for 95-99% of the time you're online, rather than being consumer-only for the first 100% of the download time and having to hang around for another 100% of the time to be any use to anybody, so the community scales much more cleanly even if the first thing you want to do after downloading the latest Linux release is install it. (Software's a much different usage pattern than music here.)


      Additionally, it makes it very efficient for the first set of people who are downloading the file. Instead of having to download the whole thing from one source, which is probably overloaded, you're able to download pieces from lots of different people. The server takes advantage of this - instead of giving Alice chunks 1, 2, 3, ..N in order, and giving Bob the same things, it spreads around the load, so Alice is downloading chunk 1 while Bob downloads chunk 2, and when they're done, Alice starts downloading Chunk 3 from the server and Chunk 2 from Bob, and other chunks from Dave, Eve, and Freddie if they've gotten them.

      This also reduces the latency required for later people in the process to get their material - instead of waiting for the entire 600MB CD to be copied N times in a row, the downloading gets pipelined.
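
      As a deliberately simplified illustration of the scheduling described above: hand each downloader the chunk that currently has the fewest copies among the active peers, so different people end up holding different parts and can trade with each other. This is a sketch of the general idea only, not the project's actual algorithm.

        # Simplified chunk scheduler: pick the chunk held by the fewest active
        # downloaders, so copies spread out instead of everyone fetching
        # chunk 1, then chunk 2, in lockstep. Illustrative only.
        def next_chunk(needed_chunks, holdings):
            """
            needed_chunks: set of chunk indices this peer still lacks
            holdings: dict mapping peer -> set of chunk indices that peer has
            Returns the rarest chunk this peer still needs, or None.
            """
            def copies(chunk):
                return sum(1 for have in holdings.values() if chunk in have)
            candidates = sorted(needed_chunks, key=copies)
            return candidates[0] if candidates else None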

  • Red Swoosh (Score:2, Interesting)

    by metalhed77 ( 250273 )
    Red Swoosh is a cool technology specifically aimed at distributing the load for things such as images on a website. The client download for IE just involves clicking install and DLing a client that's a few hundred KB. After that you mirror a portion of the site. www.deviantart.com uses this, and to good effect. I'm not sure if you can mirror large files on it. It is of course centralized.
  • The latest BearShare and LimeWire both allow you to "swarm" gnutella downloads.
  • From the BitTorrent to-do list....

    better scaling/performance
    BitTorrent currently scales well to a dozen or so simultaneous downloads. With further modifications, it can be made to scale to thousands.


    A dozen or so simultaneous downloads? Don't think that is going to help prevent the slashdot effect. Though I guess that is getting tested right now!
    • A dozen or so simultaneous downloads? Don't think that is going to help prevent the slashdot effect.

      Slashdot linked sites that use BitTorrent technology will respond an order of magnitude faster, and that's never a bad thing. However, it might not exactly ease slashdottings until they release a Netscape plug-in for all major operating systems, as a larger than average proportion of Slashdot readers use Mozilla and non-Windows desktop environments.

  • by Dr. Awktagon ( 233360 ) on Tuesday March 19, 2002 @11:06PM (#3192121) Homepage

    BitTorrent allows users to download a file from multiple different people.

    Or if you're downloading the latest boy band single: multiple identical people.

  • "As soon as I started downloading (cable modem) at around 300k I got a request for the file and began uploading at 40k."

    I have a 1184/160 kbps asymmetric (DSL) connection. This seems like a common ratio with many ISPs these days. A full-speed download consumes at least a fifth of my upstream bandwidth. Presumably that's due to things like TCP ACKs. Any kind of serious upstream activity squeezes things and can quickly reduce a download to half speed. I can't find the concept described very useful, especially if I'm in a rush to get something. Is there a way to throttle upstream bandwidth consumption?
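
    The usual remedy for ACK starvation is to cap upload speed a little below the line rate. A minimal token-bucket sketch of such a cap, for illustration only (nothing here is part of BitTorrent or any particular client):

      # Minimal token-bucket upload limiter: before sending a block, wait
      # until enough byte "budget" has accumulated. Assumes each block is
      # no larger than one second's worth of budget. Illustrative only.
      import time

      class UploadThrottle:
          def __init__(self, max_bytes_per_sec):
              self.rate = max_bytes_per_sec
              self.allowance = max_bytes_per_sec
              self.last = time.monotonic()

          def wait_to_send(self, nbytes):
              while True:
                  now = time.monotonic()
                  self.allowance = min(self.rate,
                                       self.allowance + (now - self.last) * self.rate)
                  self.last = now
                  if self.allowance >= nbytes:
                      self.allowance -= nbytes
                      return
                  time.sleep((nbytes - self.allowance) / self.rate)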
  • nothing new, edonkey2000 has been doing this for months now.

    linky linky [edonkey2000.com]
  • The weighty download time of acquiring 2 CDs' worth of executables has literally been the only deterrent keeping a lot of Morpheus users from pirating the entire contents of their hard drive.

    This represents a key step in ushering in the new era of "freeware."
  • Mojo Radio [mojoradio.com], a Toronto area radio station ('talk radio for guys') uses something similar to do streaming audio. They use technology from ChainCast Networks [chaincast.com] to distribute the streaming of Windows Media broadcasts. It installs a little app in your Windows machine and runs whenever you listen to the stream.
  • by Anonymous Coward on Tuesday March 19, 2002 @11:35PM (#3192216)
    I'm Bram Cohen, the author of BitTorrent. This little slashdotting seems to be going well so far from my end (over 40 downloads currently, and still going smoothly) but I'd like to hear about people's experiences doing the download. Here are some questions -
    • Are you getting pauses where no download is happening? If you are, please be patient, it should kick up again (or start in the first place) after a while.
    • Are you behind NAT? People behind NAT may be getting worse performance, it's a complicated issue.
    • How's your upload/download ratio? There are enough people now that you may see the phenomenon of getting about the same download rate as your upload rate - Cutting off your uploads wouldn't help with this, your peers would just get pissed off at you and stop uploading (I'm not kidding, it has tit-for-tat leech resistance.)
    • Did you run into any technical glitches? It's still fairly young software, so there may be a few little things to iron out.


    So far, this looks like it's going pretty well. Any and all feedback is much appreciated, and will hopefully help make BitTorrent an even better product. Please mail me [mailto] about your experiences.
    • 1. Sometimes I get massive speed drops (to around 5 k/s), but no freezes so far.
      2. I'm behind a NAT. Would it be possible to make the port for incoming connections configurable?
      3. Very variable. At the moment it's 30 k/s down (max 90 k/s) and 7 k/s up (max 14 k/s).
      4. No problems! Plugged into IE perfectly.
      Very good work so far. I'll try to set up some files later.
      X
    • Heh. Get this, I have a box running as a NAT-router only (ie, no firewall) with zonealarm on my desktop.
      (the reason I'm doing this is mostly because all I need the NAT box for is to share a single IP, and having a real firewall on that got to be too much of a hassle with things like Starcraft and Quake) The NAT box is a P100 running FreeBSD 4.3-Release with natd and ipfw. More interesting is that my NAT box is currently behind *another* NAT box that acts as the gateway router for my ADSL service, also running FreeBSD. (I work for my ISP, which is why I know this :)

      When my download started from the site, it was at ~150Kbps. (pretty much the max for my 1.5M/640K ADSL) It slowed down a little as the upstream bandwidth went up, but that was fine, as it consistently stayed at over 100Kbps.

      I have a question though. How the hell is it that my upload is working at all? I'm on a network so private that it's scary.
      • How the hell is it that my upload is working at all? I'm on a network so private that it's scary.
        BitTorrent connections are bilateral, so you're able to both upload and download on all connections you have to your peers, regardless of which side initiated them. Your uber-NATing keeps anyone else from connecting in, but once you establish a connection out it can send in either direction.
    • How will tit-for-tat leech resistance work if someone has an asymmetric DSL connection? If my download bandwidth is 768 kbps but my upload bandwidth is technically limited to 128 kbps (as is common with many DSL offers for private home users), will the leech resistance feature think I'm guilty?
      • Yes, there is a general tendency for people to get about the same download rate as the upload rate they provide, due to the tit-for-tat algorithms. That's just a general tendency, though; practical download and upload rates depend on many factors.

        As to whether being on ADSL makes you 'guilty' I don't know, it's very non-judgemental software :-)
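
        For readers wondering what "tit-for-tat leech resistance" might look like, here is a toy sketch of one way to pick upload recipients based on how fast they have recently been uploading to you. This is a guess at the general shape of such a policy, not Bram's actual algorithm.

          # Toy tit-for-tat sketch: prefer uploading to the peers that have
          # recently uploaded the most to us, plus one random "optimistic"
          # slot so newcomers get a chance. Not the real BitTorrent choker.
          import random

          def choose_upload_slots(recent_download_rates, slots=4):
              """
              recent_download_rates: dict mapping peer -> bytes/sec received
              Returns the set of peers we will upload to this round.
              """
              ranked = sorted(recent_download_rates,
                              key=recent_download_rates.get, reverse=True)
              chosen = set(ranked[:slots - 1])
              leftovers = [p for p in ranked if p not in chosen]
              if leftovers:
                  chosen.add(random.choice(leftovers))   # optimistic slot
              return chosen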

        • Well, that's somewhat less than optimal for those of us with lousy slow upload.

          Additionally, maxing out my upload kills downloads entirely, all the way to timeouts (cable connection) - turns out that if I cap uploads at about 5/6 max upload speed, I get normal looking download speed. But another 2k upload and downloads die completely. Looking at the comments further up this page, I can see that other people have had this problem and some have found solutions, so I'll take a look at some of those. But perhaps it wouldn't be an entirely bad idea to consider allowing people to cap uploads at something less than the absolute maximum speed, since otherwise, at least in my case, this software is about as much use as a DOS attack.

          Cheers.

          • maxing out my upload kills downloads entirely

            Same problem here.

            I've got RR cable (2Mbps down / 384kbps up) - and as I approach my ~50K/s upload cap, my dl rate suffers a lot, since the ACK's are bottlenecked...

            bearshare, edonkey, and other apps are aware of this problem, and allow you to 'cap your cap' by restricting the ul rate; BitTorrent should do the same.

            --

            • Thought I would provide the average rate I got during the codecon dl demo: 80K down / 48K up.

              (If I could cap that 50K to 30K, my dl speed would jump to 240K - allowing others to grab a greater selection of file chunks from me (ala edonkey2k))

              --

  • by Wakko Warner ( 324 ) on Tuesday March 19, 2002 @11:40PM (#3192230) Homepage Journal
    Please, call them "legal live concert recordings", not "bootlegs". That's like saying "legal pirated MP3s".

    - A.P.
      just to clarify.

      bootleg:

      1. To make, sell, or transport (alcoholic liquor) for sale illegally.

      2. To produce, distribute, or sell without permission or illegally: a clandestine outfit that bootlegs compact discs and tapes.

      it was very hard to find anyone using the term bootleg to mean anything more specific than a live recording, though. lots of people call even a band's released live album a bootleg. here's another definition i found, sorry no link, i could only get it on google cache:

      "When someone tapes a show, that is called a live recording. When a company releases an unauthorized copy of that show, that is called a "bootleg". Bootlegs are usually found in compact disc form. However, a CD can only hold approximately 78 m inutes of recording time, forcing the bootlegger to cut songs out of long shows. In essence, a live recording will maintain the original, unadulterated full show while a bootleg version will have songs missing. In fact, they may even be out of order.

      When individuals trade live recordings, no money is transferred or involved. However, when someone buys a bootleg, someone is making money--and usually a lot of money--off music that someone else wrote and performed."

  • Being that I am tired and needing sleep, I am not going to kill myself trying to get this to work, but just FYI, it immediately crashed IE on my Win2K machine.

    I'm not saying this thing is busted, just that it certainly seems that this guy's request for more money to work on it is obviously necessary. Oh well, I was hoping I could get one program to work tonight, even if it wasn't one I have been slaving over for the past week. I guess my computer karma is kinda low right now.
  • by rubybroom ( 553599 ) on Wednesday March 20, 2002 @12:21AM (#3192387)
    I'm not completely versed in morpheus/kazaa/bearshare/whatever, but I understand they allow you to download a file from more than one other person simultaneously, known as "swarming" the download (btw, this is called "anteloping" on furthurnet). It is my further understanding that you can only do this from people who have the *complete* file.

    What bitTorrent (I think) and furthurnet (I know) are doing is different than this. If 5 people are downloading a file from the one person who is sharing it, those 5 people can be the beginning of 5 chains of people, relaying each packet down the chain as they get it, regardless of whether or not anyone has the complete file.

    Furthurnet uses a protocol called PCP (Packet Chain Protocol) to do this, and it automatically arranges the chains so that those with faster upload speeds are toward the top, with the dialup users toward the bottom.

    If the main host goes offline, even if no one on the chain has the entire file, everyone on the chain can still continue downloading everything that the topmost person on the chain has already saved.

    A good example: say a dialup user has a large file that is in high demand. A T1 user comes along and spends a long time downloading it off of the dialup user's horrible upload speed, and gets about 80% of it before anyone else comes to download. Then you show up with your cable connection and, instead of being at the mercy of the upload speed of the dialup guy, you have access to 80% of the file from the plentiful upload speed of the T1 guy. And of course Furthur knows to hook you up to the fastest open slot available when you come along.

    The result of this is that the underlying host and network shape becomes transparent: you just see a list of shows to download, you start downloading one, and all this stuff happens in the background. The longer everyone stays connected to the network, the more efficient it becomes, because it has more time to structure things with the faster folks in the "middle" and the slower ones on the "outside".

    Over at furthurnet, the current record is having 71 people on a downloading chain. Combine PCP with the Anteloping and you can have some serious improvement over "dumb" p2p.

    I won't even go into the benefits of the MD5 checking Furthur does...
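
    A toy sketch of the chain-building idea described above: order peers so the fastest uploaders sit nearest the source and relay what they have to the slower peers below them. The structure and numbers are illustrative; this is not Furthur's actual PCP implementation.

      # Toy sketch of ordering a relay chain by upload capacity, fastest
      # first, so dialup users end up at the bottom. Not Furthur's PCP code.
      def build_chain(source, peers):
          """
          source: identifier for the original host of the file
          peers: dict mapping peer -> upload capacity in KB/s
          Returns the chain as a list: source first, then peers fastest-first.
          """
          ordered = sorted(peers, key=peers.get, reverse=True)
          return [source] + ordered

      # Example:
      # build_chain("dialup-host", {"t1-user": 150, "cable-user": 40, "modem-user": 4})
      # -> ["dialup-host", "t1-user", "cable-user", "modem-user"]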

    • Xolox [zeropaid.com], a gnutella client for Windows, has been out for a long time now, and it allows a client to serve parts of an *incompletely* downloaded file. Never underestimate the abilities of GNUtella.
    • BitTorrent does in fact behave generally the way you describe although it's a bit more sophisticated than 'anteloping'. The way furthurnet works, distribution always follows a tree, and the leaf nodes don't do any uploading. BitTorrent is much more mesh-like, with pieces pretty much flowing every which way across the network.

      BitTorrent also makes extensive use of checksums, in what I'm guessing is the same way furthurnet does.

      It's actually not too surprising that BitTorrent and furthurnet have a lot of similar features - they were both designed with etree in mind as a primary customer.

  • From the CodeCon website: Welcome, Slashdot visitors! The CodeCon site seems to be holding up just fine, though we've removed our graphics as a precaution. The CodeCon mp3s are also holding up well due to BitTorrent. Please report any client-side problems you encounter.

    I just love this, especially on a site that's about how to handle bandwidth ;-)
  • If there emerged a distributed downloading standard that was generally agreeable and became better and better the more servers that participated, I would think it would be great to see a plug-in for popular web serving software to support it.

    Think about something like this: if you were running a site under Apache and had the option of installing a plug-in that would participate in the file sharing network as a server node. The plug-in would let you allocate a defined amount of disk storage and a defined amount of bandwidth. Then sysadmins who felt this was a good thing could just turn on their participation.

    Sure it wouldn't be much at first, but you might get a very large base of servers with good connectivity all playing a role in the system. I think it would help it scale.

    Just a thought. I wonder if anyone has considered a scheme like this.
  • I've read the site and I was at CodeCon for the presentation.

    Having said that, most of these comments are ignorant tripe. Before you post, you might want to take a look at the site and read about what actually goes on in BitTorrent. This will help you avoid looking like an ignoranus.
  • Sounds like this is similar to what MojoNation [mojonation.net] is/was trying to do. Their site doesn't seem to be responding right now, but here's the Google Cached [google.com] version of the technical docs.
