The Internet

P2P Content Delivery for Open Source 134

Orasis writes "The Open Content Network is a collaborative effort to help deliver open source, public domain, and Creative Commons-licensed content using peer-to-peer technology. The network is essentially a huge 'virtual web server' that links together thousands of computers for the purpose of helping out over-burdened/slashdotted web sites. Any existing mirror or web site can easily join the OCN by tweaking the HTML on their site."
  • Never mind that (Score:2, Insightful)

    by TerryAtWork ( 598364 )
    Can you hide your IP?

    That's the only thing that matters!

    • No Fun (Score:5, Funny)

      by EvilTwinSkippy ( 112490 ) <yoda@nOSpAM.etoyoc.com> on Wednesday January 29, 2003 @03:29PM (#5183959) Homepage Journal
      This is like decaf coffee and nicotine-less smokes.

      What good are these CPU hogging, network lagging programs if they aren't delivering pirated software and p0rn? I won't stand for this abomination!

      Well, at least until the next time I need to download the newest slackware...

      • Re:No Fun (Score:4, Interesting)

        by Anonym0us Cow Herd ( 231084 ) on Wednesday January 29, 2003 @03:48PM (#5184095)
        This is like decaf coffee and nicotine-less smokes.

        At least the nicotine-less smokes still cause cancer.

        You also forgot to mention all the potential dates here on slashdot. At least, you can still find someone to kiss on slashdot... oh, wait... nevermind.

        What good are these CPU hogging, network lagging programs if they aren't delivering pirated software and p0rn?

        Fulfilling their purpose, just like the above items you mentioned.

        Just as in the above cases, part of Corporate America's master plan to remove all joy from the universe. (+1 Insightful)
    • The website is slashdotted.
    • Re:Never mind that (Score:4, Interesting)

      by Orasis ( 23315 ) on Wednesday January 29, 2003 @05:23PM (#5184930)
      While the OCN does not make it easy to discover what other users are downloading, it is not anonymous like Freenet [freenetproject.org].

      However, the problem with systems like Freenet is that the latency is incredibly high, and they have no way to leverage the existing mirrors that host much of the open source content today.

      Since the OCN is built around HTTP, it can start its downloads immediately from the network of mirrors and as peers are discovered, it moves more of the burden onto the peers.

      The end result is that the OCN has about the same latency as a normal web browser, but can provide very fast parallel downloads when leveraging existing mirrors.

      That said, if high latency isn't a problem, then we highly recommend using Freenet [freenetproject.org]. We think it's a great project and look forward to seeing more and more applications built on top of it.
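      (As a rough illustration of the swarming approach described above -- not the actual OCN client -- the sketch below fetches different byte ranges of one file from several HTTP mirrors in parallel and stitches them together. The mirror URLs and chunk size are invented for the example.)

        # Rough sketch of range-based parallel fetching from HTTP mirrors.
        # Illustrative only; mirror URLs and chunk size are made up.
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        MIRRORS = ["http://mirror1.example.org/pub/file.iso",
                   "http://mirror2.example.org/pub/file.iso"]
        CHUNK = 1 << 20  # 1 MB per range request

        def fetch_range(url, start, end):
            req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
            with urllib.request.urlopen(req) as resp:
                return start, resp.read()

        def total_size(url):
            head = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(head) as resp:
                return int(resp.headers["Content-Length"])

        def swarm_download(path):
            size = total_size(MIRRORS[0])
            ranges = [(i, min(i + CHUNK, size) - 1) for i in range(0, size, CHUNK)]
            with ThreadPoolExecutor(max_workers=4) as pool:
                futures = [pool.submit(fetch_range, MIRRORS[n % len(MIRRORS)], s, e)
                           for n, (s, e) in enumerate(ranges)]
                parts = sorted(f.result() for f in futures)
            with open(path, "wb") as out:
                for _, data in parts:
                    out.write(data)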
  • Although it won't help when another worm creates massive amounts of traffic on the internet...

    But at least when the net gets back to a usable level, slashdot readers will be able to read about it without slashdotting a site.
  • Call me dumb... (Score:4, Insightful)

    by EvilTwinSkippy ( 112490 ) <yoda@nOSpAM.etoyoc.com> on Wednesday January 29, 2003 @03:21PM (#5183894) Homepage Journal
    I always thought the problem was not enough developers. Shit, web space is cheap, and servers are easy to scrape up.

    What we need is some gateway product to get young kids hooked on programming.

    • I think it's more of an issue with bandwidth than server space. /. is effectively a denial of service attack waiting to happen
    • Not everywhere (Score:1, Insightful)

      by Anonymous Coward
      Broadband ISPs in Europe impose a maximum traffic limit per month. How long before versions come out that only download and don't upload? What open source needs is a way to make users pay with bandwidth. Instead of using PayPal, allow a user to pay for his download with X megs of upload capacity. This could lower costs for distro upgrades considerably.
      Also, how is this different from bittorrent?
    • I agree. It would be nice to have better tools to work on OSS via P2P networking. I think there was a project to link several ArgoUML instances via P2P, but I haven't heard much about it recently.
  • Bittorrent (Score:5, Informative)

    by pacc ( 163090 ) on Wednesday January 29, 2003 @03:23PM (#5183907) Homepage
    It's open source peer-to-peer and handles exactly the problem of distributed serving.
    • Re:Bittorrent (Score:4, Interesting)

      by benjiboo ( 640195 ) on Wednesday January 29, 2003 @03:34PM (#5183994)
      And isn't it very popular as a file trading tool? It seems that once a technology has gone down that road, it's very hard to come back to legitimacy credibly. IMO, this will be one of the biggest challenges for any p2p systems of this kind.....
      • Re:Bittorrent (Score:4, Informative)

        by billstewart ( 78916 ) on Wednesday January 29, 2003 @04:01PM (#5184198) Journal
        Are you saying that "file trading" makes it not a legitimate or credible system, or am I misreading you? BitTorrent, unlike some systems, doesn't hide where the main file is located - that's somebody else's problem. One of the main uses has been distributing concert recordings of jam bands, which is legitimate and much easier than mailing tapes around. Bram's done a lot of good work making it scale and exploring the technical problems.
      • In my opinion, the problem with popularity is that websites that use bittorrent don't promote it properly, and bittorrent is too much of a hassle.

        What if bittorrent came with your web browser and was already part of its built-in download manager? Or what if bittorrent could be installed with the click of a button, much like Flash, etc.?

        Rather than telling people "please leave your bittorrent window open for as long as possible after downloading..." and making a big fuss out of leechers, etc. (this kind of stuff scares people away), just force everyone to use bittorrent. Or cap the number of open slots for non-bittorrent users downloading the file.

        Bittorrent already saves bandwidth even if people don't leave the window open. If you ONLY offered files via bittorrent, then people would use bittorrent, and there would be enough people downloading the file that you wouldn't need people to leave the window open.

        Re-reading, the above looks more like a ramble. But with a few tweaks, usability features added, and the annoying parts taken out, bittorrent could be as seamless as just "clicking" on the link to download.

        Instead, they make you sit there and install an application, configure an application, etc etc etc... jump through hoops... just to download a file. Joe Schmoe doesn't want to do that, and neither do I.

        (PS, I do use bittorrent when a website offers files for it)
    • Re:Bittorrent (Score:5, Informative)

      by c_ollier ( 35683 ) on Wednesday January 29, 2003 @03:37PM (#5184012) Journal
      It's open source peer-to-peer and handles exactly the problem of distributed serving.

      Yes, BT is exactly this. A good site about BT: http://smiler.no-ip.org/BT/info.php [no-ip.org]

      I don't see much difference between BT and OCN, by the way. Or am I missing something?
      • It sounds like they're playing in a similar space, but there are differences in who's driving which functions, and in how the different sites interact with each other. I haven't looked at OCN enough to be sure, but it looks like OCN has a structure that's more like Napster, where a bunch of systems hang out for a long time and distribute a wide variety of files (using a BitTorrent-like mechanism), while BitTorrent is designed to distribute a specific file, using the resources of the ad-hoc group of machines that are interested in obtaining that file right now (or have been interested in it recently). There may also be differences in the size of files they're designed for - BitTorrent mainly hands out files in megabyte chunks, so it's more useful for distributing larger files, such as 10MB or more, while OCN's documentation doesn't make it immediately obvious but looks like it can handle small files as well (and I can't tell whether it splits up big files automagically or just shares the load by downloading multiple smaller files from different places - it sounds much more like the latter, but maybe there's something I missed when I read the OCN documentation.)

        Also, of course, neither one really fixes the Slashdot problem, because you need to set them up _before_ you're slashdotted :-) On the other hand, once you know you're in trouble, you can update your site to use them all by yourself, as opposed to bullying Slashdot into making a cache or hoping Google has one.

    • The primary purpose of the OCN is to promote public domain, open source, and Creative Commons-licensed content.

      It has been intentionally made very easy to incorporate new clients into the OCN. All you need to do is make an XML-RPC call advertising the content you have available and you're in.

      If Bittorrent joined the OCN, it could leverage the resources of other clients in the network and give more choice to users about which software they want to use.

      The eventual goal is to get this type of technology integrated directly into Mozilla, Squid, and Apache. But that will probably take some time...
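      (As a rough idea of what that advertisement could look like from a client's side, here is a hypothetical sketch using Python's xmlrpc.client; the gateway URL, method name and argument layout are invented for illustration and are not taken from the OCN specs.)

        # Hypothetical sketch of advertising locally hosted content to an
        # OCN gateway over XML-RPC.  Endpoint, method and fields are invented.
        import xmlrpc.client

        gateway = xmlrpc.client.ServerProxy("http://gateway.example.org/RPC2")
        receipt = gateway.advertise({
            "url": "http://www.example.org/pub/myproject-1.0.tar.gz",
            "urn": "urn:sha1:" + "X" * 32,   # base32 SHA-1 of the file goes here
            "length": 10485760,
        })
        print(receipt)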
    • One of the problems OCN faces is the seemingly obvious problem that data needs to be encoded in order to be shared. This is the problem we faced with MojoNation (the original swarm downloading system) and while throwing around ideas in a brainstorming session Bram came up with the idea of just swarming without encoding the data. This was not suitable to our needs at the time (it only works for popular, massively replicated files) but Bram stuck with the idea and developed it into BitTorrent.

      The key insight here is that when data is encoded to increase its reliability within the p2p network, it becomes useless to the person who is holding the data. This is not a problem for some applications, but when you are trying to solve the slashdot effect or serve popular content it can become a limiting factor. The advantages that BT has over this system are that it does not require the data to be encoded in a special manner by the publisher, and that data stored on the edge nodes is still useful to those nodes. A design like BT can peer data out of your browser cache and share a larger range of data from each particular peer. This is going to be a significant advantage in the long run.
      • This is untrue (Score:4, Insightful)

        by Orasis ( 23315 ) on Wednesday January 29, 2003 @06:22PM (#5185406)
        The OCN does not require data to be encoded and uses standard HTTP for all of its data transfer.

        This means that it can download content from a regular Apache web server w/o any modification whatsoever.

        This also means that the peers are simply embedded web servers, can stream content (video) straight to the browser, and can use SSL out of the box.
      • Jim has incorrectly assumed that OCN uses the same sort of content-encoding as Justin Chapweske's previous project, SwarmCast. It doesn't: OCN strictly uses standard HTTP requests, overlaid with a series of extra headers and URI conventions that help participants find and use mirrors of the desired content.

        Because the OCN's techniques use minimal deltas atop HTTP, well-documented in open specifications, it has the potential to become a standard mix-in for almost all web servers and clients... transparently bringing the power of ad-hoc replication to every web transaction.
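        (A simplified sketch of that header-based approach, assuming a CAW-style X-URI-RES response header as in the spec discussion further down; the parsing here is deliberately loose and only meant to show the flavor.)

          # Issue an ordinary HTTP GET and collect CAW-style mirror hints from
          # the response headers.  Loose, illustrative parsing only.
          import urllib.request

          def get_with_mirror_hints(url):
              with urllib.request.urlopen(url) as resp:
                  hints = resp.headers.get_all("X-URI-RES") or []
                  body = resp.read()
              mirrors = [h.split(";")[0].strip() for h in hints]
              return body, mirrors

          body, mirrors = get_with_mirror_hints("http://www.example.org/pub/file.iso")
          print("alternate sources:", mirrors)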
  • P2p web content? (Score:4, Interesting)

    by British ( 51765 ) <british1500@gmail.com> on Wednesday January 29, 2003 @03:23PM (#5183911) Homepage Journal
    Why not have a system that can deliver web pages (all the content YOUR hard drive can handle) via a P2P system?

    I'd love to see unreliable, poorly maintained, pop-up happy free websites like Angelfire and GeoCities go away and be replaced by a vastly superior P2P system instead. Someone wants to connect to your special web page? Have them connect to you via the P2P client. No need to fuss with slow FTPing into servers to upload/update web content. It's already on your system.

    • Re:P2p web content? (Score:3, Informative)

      by br0ck ( 237309 )
      To me, it looks like this system can sort of do that. Point people to your little web server and add a few lines of JavaScript to the top of your main web page, and all the files reached from there, with extensions that you specify, will be served via the P2P cache. See the instructions [onionnetworks.com]. Also on that page is a link to a large PDF which gets served P2P via an OnionNetworks Java applet. The PDF loaded for me in about 20 seconds. A disadvantage for the user is that currently PDFs don't stream, so you don't instantly see the first page.

      Perhaps Slashdot could use this only in certain cases like when linking to a pdf or linking directly to the newest iso.
    • You suggest they connect to you specifically with the P2P client to access your own page?

      That sounds like you're just saying the P2P client should integrate a normal web server. Why not just run apache on your computer? Either way equally violates your TOS with your ISP.

      Do you understand that the internet is fundamentally p2p?

      Jason
      ProfQuotes [profquotes.com]
      • Do you understand that the internet is fundamentally p2p?

        uh... no, it isn't. The Internet is fundamentally client/server.

        the benefit of this is precisely NOT what you said. it's not the same as having a web server on your 56K line, that's why it's useful. If you've got useful content, once someone downloads it once, they make it available too. The more useful it is, the more it gets downloaded from you, and the other guy (and they tell two friends and they tell two friends, and so on and so on...).

        After a while it's distributed all over the place, so the next user can download the file(s) in 30 discrete pieces from 30 different places in 1/30th the time and at 1/30th the bandwidth on an individual connection to the network (from the content provider's perspective).

        Also, many P2P networks tend to align themselves around super-nodes, high bandwidth machines, that are capable of serving a lot more content, so the 56K lines of the world aren't slagged...

        see? that's the benefit. not so bad after all...

        • The only difference between a "client" and "server" in a TCP/IP connection is that by convention we say the "client" is the machine that made the connection and the "server" was waiting. Once established, the connection is identical from both ends. The Internet is a peer to peer network, not a client-server network. Read Network Operating Systems by Tanenbaum.

          If you're creating a website for some friends, your 56k modem will easily be able to serve it. If it gets too many hits, pay a real hosting service. It's not that expensive.

          ProfQuotes will easily withstand a slashdotting; it's on a 100 Mbps pipe. It would be impossible to build a 'real' site like that on top of a distribution service like you want, because there would be no way to synchronize user submissions. A P2P file distribution network is good for large files that don't change often, not for a 'live' website.

          Jason
          ProfQuotes [profquotes.com]
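          (The "identical from both ends" point is easy to see in a few lines of socket code: only the setup calls differ, and afterwards both ends hold an ordinary connected socket with exactly the same API. A minimal sketch:)

            # One end listens, the other connects; once established, both ends
            # use the connection identically.
            import socket, threading

            srv = socket.socket()
            srv.bind(("127.0.0.1", 9999))
            srv.listen(1)                      # the "server" merely waited

            def accept_side():
                conn, _ = srv.accept()
                conn.sendall(b"hi from the end that listened")
                print(conn.recv(1024))
                conn.close()

            t = threading.Thread(target=accept_side)
            t.start()

            cli = socket.socket()
            cli.connect(("127.0.0.1", 9999))   # the "client" merely initiated
            cli.sendall(b"hi from the end that connected")
            print(cli.recv(1024))
            cli.close(); t.join(); srv.close()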
          • agreed that from a *purely technical* standpoint the Internet is P2P... but from a practical standpoint, and from a conceptual standpoint when you're talking about delivery networks, it's (relatively) few servers and many clients that connect to them. It's the few-to-many model that causes bandwidth clogs. Thanks for the book suggestion anyway ;)

            I also agree that synch is a big problem to over come, and real-time or near real-time content distribution would not be an optimal use for something like this... but in reality most web content isn't real time, it's mostly static -- and therefore a candidate for this type of delivery.

            it's nice that your site has a Big Fat Pipe coming into it, but not everyone with worthwhile content to share has one. this is a method of distributing bandwidth, and ostensibly, making a much more efficient use of our networks, collectively...

            • If you're really interested in the book suggestion, I got the name wrong, it's Distributed Operating Systems (he also wrote Computer Networks, which is an excellent book too).

              Besides the synch problem, P2P distribution has reliability problems: people shutting their computers off halfway through a download, or having corrupt copies to begin with. Many others were already mentioned in this discussion.

              Another problem is that for something as small as a web page, the overhead in co-ordinating where to get the data from is greater than the size of the data itself.

              My point with my site is that this is the best way to deal with possible surges in use. I'm sharing the pipe at least 1000 ways, but 99.9% of the time I'm not even using my share. However, when I need the bandwidth it's there for me; I have peaks where I hit 1-2 Mbps for a few minutes. Since it's shared so many ways by other people in the same situation, the overall price is very cheap (a small fraction of what I'm paying for my broadband connection).

              Jason
              ProfQuotes [profquotes.com]
    • It sounds like something Hotline's been doing for years.
  • by Levine ( 22596 ) <(xc.estaog) (ta) (enivel)> on Wednesday January 29, 2003 @03:23PM (#5183915) Homepage
    ...have been doing this for awhile now. I've seen download sites giving you the option of grabbing game demos from their site or through some small P2P client that they offer, which snags parts of the file from other users and combines them all on your end.

    This technology's out there, but it's nice to see it gaining some fairly widespread adoption.

    levine
    • I haven't seen this feature on FilePlanet, but I don't doubt it exists. In fact, I think this is an excellent use of P2P technology, for game demos and patches that easily run to 20-100-300 megs. Ever since sites like FilePlanet started using their queuing methods and/or pay-for-personal-dl-servers, getting such patches and updates can be tedious and annoying at times. And now that there aren't many FTP mirrors of sites (I remember cdrom.com's planetquake mirrors from way back), it's even worse. P2P clients with game patches would do the trick, though I suspect with most of them behind 128k or lower upload caps, you'd need a LOT of p2p clients to effectively get reasonable download rates for the larger files from a network.

      Of course, there's the issue that some patches or demos are 'exclusive', or their EULA prevents you from redistributing them unless you're a big-name site, which would make an ad hoc P2P network unable to offer the file.

    • Happy Puppy (Score:3, Informative)

      by Orasis ( 23315 )
      The newly relaunched Happy Puppy [happypuppy.com] uses the Tornado Cache Plug-in to provide some pretty fast downloads. If you're interested in seeing this stuff in action I recommend checking out the site.

      That said, the OCN is open to any and all applications out there, so I'd encourage them to join the OCN.
    • places LIKE fileplanet, yes. Fileplanet, no.

      Fileplanet doesn't use this kind of technology simply because it defeats their business model.

      (not really relevant, of course)
      • Yeah, I think the parent poster was confusing FilePlanet with FileFront [filefront.com], which DOES do p2p file distribution using the RedSwoosh client.

        FilePlanet doesn't want to lessen people's dependency on paying for their central servers, so I don't think they'll be going this route. :)

  • impressive (Score:4, Interesting)

    by Boromir son of Faram ( 645464 ) on Wednesday January 29, 2003 @03:25PM (#5183929) Homepage
    This is a great idea that's been a long time coming. It sounds to me like it takes the ideas first put forth in Freenet (which spawned later P2P networks like Napster and Kazaa) and finally makes them accessible to everyday content producers and consumers.

    I'm wondering if maybe this is the future of blogs like Slashdot, with design, features, and content distributed the same way moderation and commenting are today. Creative Commons licensing would be a further boon.

    This sort of next generation P2P network might be the weapon we need against the forces of evil, if only we are brave enough to use it.
    • way to summon up karma with your +3 staff of whoring, "Boromir".
    • Boromir, son of Faramir, King of Gondor and Minas Tirith

      Uh... Boromir is the son of Denethor, and brother of Faramir.

      In the future, you may wish to take the following advice: identity theft is only effective when you just manage to get the fucking facts straight!

  • by jake_the_blue_spruce ( 64738 ) on Wednesday January 29, 2003 @03:27PM (#5183945) Homepage Journal
    I see a slight problem, depending on how CAW is implemented.

    Scenario #1:
    Assuming the Originator Apache responds with HTTP headers such as those in CAW to advertise site-wide mirrors like this:
    X-URI-RES: http://urnresolver.com/uri-res/N2L?urn:sha1:; N2L

    When the originator Apache site updates any documents, the URN resolver (or mirror) will silently fail without realizing which document has been updated. It would need to rescan the entire website, even when only one document has changed.

    Scenario #2:
    The opposite problem occurs with the Originator Apache responding with HTTP headers such as this:
    X-URI-RES: http://untrustedmirror.com/pub/file.zip; N2R

    The mirror will respond successfully, but will give an out-of-date version of the file without the client or the mirror realizing it. The mirror would then have to manually scan the website on a regular basis (even when nothing has changed) to prevent anything getting too out of date.

    Scenario #3 (Solution):
    However, if the Originator Apache responds with HTTP headers such as this:
    X-URI-RES: http://untrustedmirror.com/pub/file-mirrors.list; N2Ls; urn:sha1

    When the URN resolver or Mirror sees the SHA-1 hash mismatch, it knows which document needs to be updated, and can respond by doing so for just that document.

    I realize that CAW is mainly designed with static files in mind (images, PDFs, ISOs) where updates occur rarely (or never). And no, I don't see Apache calculating the SHA-1 for dynamic pages like Slashdot anytime soon. However, updates do occur to images, PDFs, ISOs, etc. on occasion. I do think CAW(#3) could be used (and useful) for large, heavily subscribed RSS feeds without too much trouble. Maybe elsewhere in dynamic content.
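    (A rough sketch of the scenario #3 behaviour from the mirror's side: re-hash the originator's current copy of each advertised path and re-mirror only the files whose SHA-1 no longer matches. This is not taken from the CAW spec; it just illustrates the idea.)

      # Mirror-side staleness check: compare advertised SHA-1s against the
      # originator's current content and refresh only what changed.
      import hashlib, urllib.request

      def sha1_of_url(url):
          h = hashlib.sha1()
          with urllib.request.urlopen(url) as resp:
              for chunk in iter(lambda: resp.read(65536), b""):
                  h.update(chunk)
          return h.hexdigest()

      def refresh_stale(origin_base, advertised):   # advertised: {path: sha1_hex}
          for path, expected in advertised.items():
              if sha1_of_url(origin_base + path) != expected:
                  print("stale:", path, "-- re-mirror just this document")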
    • by Orasis ( 23315 ) on Wednesday January 29, 2003 @03:51PM (#5184120)
      While these are insightful points, they are not a problem because the OCN obeys HTTP caching semantics. Thus, just like your browser cache, it can deliver stale content within a certain amount of time before the OCN refreshes its cached copy.

      To get a better picture of how HTTP caching semantics work, I recommend trying out the Cacheability Engine [ircache.net] and entering a couple of sites to see how cacheable their content is.

      Also, the OCN uses SHA-1 hashes for all content addressing as soon as it translates the URL to a SHA-1 URN. Thereafter, the content is only referred to by its SHA-1 URN, so there is no concern about version conflicts between mirrors/peers, because a single SHA-1 URN can only ever point to a single version.

      Your point re: dynamic data is a good one. The OCN really isn't designed for delivering dynamic content because it changes too frequently and the cached copies on the peers would quickly become stale. However, we are doing some work with caching RSS feeds, which provides a nice trade-off between dynamic and static content.
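      (To make the "a single SHA-1 URN can only ever point to a single version" point concrete, here is a toy sketch of translating a URL into a urn:sha1 and using the URN as the cache key. It is a simplification, not the OCN implementation.)

        # Translate a URL's current bytes into a urn:sha1 and cache by URN.
        # Two versions of "the same URL" simply become two different URNs.
        import base64, hashlib, urllib.request

        def url_to_urn(url):
            data = urllib.request.urlopen(url).read()
            urn = "urn:sha1:" + base64.b32encode(hashlib.sha1(data).digest()).decode()
            return urn, data

        cache = {}                    # urn -> bytes; an entry can never go stale
        urn, data = url_to_urn("http://www.example.org/pub/file.zip")
        cache.setdefault(urn, data)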
      • I'm not sure how much that helps. The "Slashdot" effect typically hits recent content, which has probably been updated more recently than the hosting website's content expiry settings allow for. Many websites do not have any explicit freshness information set. Is that content then mirrored forever, never to be re-requested?

        The CAW spec explains how originator servers work, but not how mirrors should operate.

        By the way, the OCN-dev list does not appear to be accepting new members when you reply to the confirmation e-mail. Don't know if there's a human in the loop or what.
  • by WPIDalamar ( 122110 ) on Wednesday January 29, 2003 @03:30PM (#5183961) Homepage
    I don't see this helping the slashdot effect. How many people are actually going to download the browser plugins required to make all this work?

    I mean, I might get the plugins if I'm dealing a lot with sites that use this technology, but how many people will be dealing with a lot of these?

    And the sites that are using this are probably the ones that are used to high volumes of traffic, so they prepare for it. The average site that can't handle a slashdotting can't handle it because it generally doesn't need to.
    • If the site is already Slashdotted and subscribes to this technology, I'd imagine quite a few readers of this site would install the plugin to get the content.
    • This is an irrelevant claim. As with practically any technology, especially P2P file-sharing concepts, it will only be useful when it's widely used.

      My worry would be about the sites serving up the plugin getting /.ed.

      BTW, what's with /. posts about /.? Is this some sort of meta/. phenomenon? And why has /. been so slow of late? Has /. /.ed itself?

      If a /.er on /. /.ed /., how many /s and .s can be fit into one sentence?
  • How would it work? (Score:2, Insightful)

    by benjiboo ( 640195 )
    How would something like this work? Presumably slashdot would have to link to a single site, which then farmed out the requests to participants? If this is the case, there is still a single point of failure, right? And presumably browsers need to know that they are being redirected for any subsequent requests. So how is this more powerful than an (albeit intelligent) javascript forwarder...? If it's just a simple load balancing system then I don't see what's so groundbreaking.
    • no, the way these things typically work, and I'll admit I don't know the specifics of this particular implementation, is that each file is assigned an identifier (sorta like a URL) which is then included in different machines' indices. When you're looking for a site (e.g. a homepage -- a file), you send a query out into the network and get back a list of hosts that you can then make file requests from. So it is fully distributed in that way.
  • Trojans? (Score:1, Insightful)

    by gmuslera ( 3436 )
    If the network doesn't have any kind of moderation (and as a p2p network, I'm not sure how exactly that can be done right), what will stop people from posting trojanized versions of programs, or ensure that you aren't downloading one? Or it could become a new way to distribute viruses, like the ones that already exist for Kazaa and similar networks.
  • Magnificent (Score:3, Funny)

    by mao che minh ( 611166 ) on Wednesday January 29, 2003 @03:31PM (#5183976) Journal
    I for one can not wait until the golden age of computing, an age in which my computer will be inextricably linked to that of every script kiddie from Taiwan to San Jose. This will be a stunning monument to the ingenuity of man. There will be a great many personal homepages replaced with vivid images of genitalia and beer bottles. What a wondrous age it will be.
    • by billstewart ( 78916 ) on Wednesday January 29, 2003 @04:40PM (#5184524) Journal
      The checksums don't prevent you from being tricked into downloading the wrong stuff, but at least let you know to reject it once you've got it. BitTorrent does a fairly thorough job of this - each chunk of data is checksummed, so a given source can only hand you a megabyte or so before you detect that there's a problem. (I think both of these systems use cryptographic checksums like SHA1 or MD5, rather than CRCs which are forgeable.)
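      (Per-chunk verification is simple enough to sketch: hash each received chunk against a trusted list of expected hashes, so a bad source can only waste one chunk's worth of bandwidth before being caught. A minimal sketch, not BitTorrent's actual code:)

        # Reject any chunk whose cryptographic hash doesn't match the
        # expected value published for that chunk index.
        import hashlib

        def verify_chunk(index, data, expected_hashes):
            ok = hashlib.sha1(data).hexdigest() == expected_hashes[index]
            if not ok:
                print(f"chunk {index} failed its checksum -- discard and re-request")
            return ok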


      By the way, your home page is ugly.... [heineken.com]

  • Since when? (Score:5, Insightful)

    by Some Bitch ( 645438 ) on Wednesday January 29, 2003 @03:31PM (#5183978)
    individuals will be able to help distribute free content by donating their spare bandwidth and disk space to the network.


    Sarcasm aside, while I can see where they're going with this, I can't see it ever seriously taking off. Most of the world is still on 56k (or less), and I know I regularly hunt for things to delete so I can squeeze something else onto my hard drive.
  • by Sanity ( 1431 ) on Wednesday January 29, 2003 @03:34PM (#5183995) Homepage Journal
    It is worth mentioning that Freenet now incorporates a "redundant splitfile" mechanism (using FEC [everything2.org]) that allows the reliable downloading of massive files. Some guy has made a bunch of 150MB ogg video files of Star Trek Enterprise episodes available, and they are all reliably downloadable (at about 40k/sec across a broadband connection) from Freenet.

    To recap:

    • Reliable
    • Anonymous
    • Totally decentralized
    • More popular files are more widely distributed thus avoiding any /. effect
    Install [freenetproject.org] a recent snapshot of Freenet, then visit this freesite [127.0.0.1] to check it out.
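    (Freenet's FEC splitfiles are far more capable than this, but the underlying redundancy idea can be shown with a toy single-parity split: any one missing block can be rebuilt from the others plus the parity block.)

      # Toy redundancy sketch: split into fixed-size blocks plus one XOR
      # parity block, so any single missing block is recoverable.
      from functools import reduce

      def xor_blocks(blocks):
          return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

      def split_with_parity(data, block_size):
          blocks = [data[i:i + block_size].ljust(block_size, b"\0")
                    for i in range(0, len(data), block_size)]
          return blocks, xor_blocks(blocks)

      def recover(blocks, parity, missing):
          return xor_blocks([b for i, b in enumerate(blocks) if i != missing] + [parity])

      blocks, parity = split_with_parity(b"a 150MB ogg would go here...", 8)
      assert recover(blocks, parity, 2) == blocks[2]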
  • Here's an idea... (Score:5, Interesting)

    by Mysticalfruit ( 533341 ) on Wednesday January 29, 2003 @03:36PM (#5184004) Homepage Journal
    You'd run a client on your machine that would act as a local DNS server. Then you'd point your machine to this DNS server. So when you go to a site (say, off of slashdot), the DNS server would interact with the P2P network and give the IP of the least loaded machine in the P2P network. Yeah, you'd have to run a daemon on your machine, but oh well...
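    (The DNS plumbing is the hard part, but the selection step itself is tiny: given peers reporting some load figure, answer with the address of the least-loaded one. The numbers below are invented, and a real version would have to sit behind an actual DNS daemon and get honest load reports somehow.)

      # Pick the least-loaded peer from self-reported load figures.
      def pick_peer(peer_loads):        # {"10.0.0.1": 0.72, ...}
          return min(peer_loads, key=peer_loads.get)

      print(pick_peer({"10.0.0.1": 0.72, "10.0.0.2": 0.31, "10.0.0.3": 0.55}))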
  • Browser caches? (Score:4, Interesting)

    by mOdQuArK! ( 87332 ) on Wednesday January 29, 2003 @03:37PM (#5184011)
    I always wondered if it were possible to share people's browser cache contents via P2P technology (with exceptions for the secure documents, of course).

    I guess the big problem is still with the indexing.
    • Re:Browser caches? (Score:1, Interesting)

      by Anonymous Coward
      What you said reminds me of Squid's [squid-proxy.org] sibling system.

      Good ol' fashioned caching is almost as good as p2p.

    • Re:Browser caches? (Score:3, Interesting)

      by gmuslera ( 3436 )
      It's almost the same thing (on a smaller scale) that a proxy like squid does. It can even be part of a cache hierarchy, to distribute load and things like that. Of course, doing this at internet level would imply a lot of open proxies, something not very nice.

      Sharing browser cache contents directly, without much care, would also share webmail messages, personal data, slashdot preferences, whatever you don't want to share.
  • by Anonymous Coward
    ...Google's cache!

    Any existing mirror or web site can easily join the OCN by tweaking the HTML on their site

    Sounds a lot like it to me - especially the bit about it being like a virtual server. I suppose if it stored images and stuff then it'd be a bit better, but will it ever match Google's speed and breadth of content?
  • by John_Renne ( 176151 ) <zooi@gniffeMENCK ... net minus author> on Wednesday January 29, 2003 @03:50PM (#5184110) Homepage
    I surely see some potential in the idea, but what about standards? Although it's GPLed, the standards aren't adopted by any organisation like the IETF, OASIS or the W3C. If one of these organisations saw the potential in this, it would make things a lot easier. You would get rid of plug-ins, extensions etc., and all browsers as well as servers would support it.

    In fact I'm pretty astonished none of these organisations has ever picked up P2P.
  • by makoffee ( 145275 ) on Wednesday January 29, 2003 @03:54PM (#5184141) Homepage Journal
    It seems like a lot of you just read the front page and seem to think this is some type of Kazaa clone. I agree the browser plug-in is kinda gay.

    What this "could" mean is, say, if your favorite distro has just been updated - we all know how hard it is to download 3 ISOs while they are in high demand. The thing about OCN and OnionNetworks-type software is that the high demand and download rate will help the availability. Plus everything is authenticated and logged, so worms/trojans/fakes really aren't a problem.

    As far as OCN goes, it's not for warez and divx. I think it's intended for geeks' free software distribution. So love it, and try to innovate on it.

    • > the browser plug-in is kinda gay

      That's not a butt-plug, stupid!

      k2r
    • If you look at how it works, there is no other way to pull this off besides using a browser plugin. Of course, this could be integrated into the browser, but that hasn't happened yet.

      If you're being sarcastic with the distro comment: you may note that the article says nothing of getting the latest distros, but of making redundant mirrors on a P2P network for sites that get overloaded or /.'d.

      The other thing is I don't understand how they can prevent people from using this for illegal files. Anyone can enable their website [onionnetworks.com] to support this type of downloading, and there is no central moderation. People were screaming "READ THE ARTICLE" at one guy who posed the question of illegal files, but as you can see, it's really quite simple to slap that JavaScript code on any page regardless of whether it has an OSI-approved license. OCN isn't meant for divx or warez, but it wouldn't be hard to make it that way; I see no way of preventing this.

      [sarcasm]Oh, and by "gay" you mean this plug-in is happy? Because using that as a derogatory term really makes your case stronger with another dumb reply.[/sarcasm]
  • by billstewart ( 78916 ) on Wednesday January 29, 2003 @03:55PM (#5184152) Journal
    Codecon - www.codecon.info [codecon.info] will be February 22-24 in San Francisco. It's a conference about writing code for applications like peer-to-peer and crypto (and crypto peer-to-peer, etc.), oriented towards authors presenting actual working demos. The program [codecon.info] page has abstracts of the talks/demos. Many of these applications overlap some of the same space. One of the organizers is Bram Cohen, author of the BitTorrent P2P file distribution system (and one of the organizers of last year's conference), and the other is Len Sassaman, who does cryptographic remailers.
  • by TrevorB ( 57780 ) on Wednesday January 29, 2003 @03:56PM (#5184160) Homepage
    How far away is this from a

    p2p://www.cnn.com/

    style link for Explorer/Mozilla/Opera/Konqueror?

    Turn everyone's browser cache into p2p.

    CNN's probably a bad example, as the content would have to be updated more frequently... And you'd need some way of having a "revision model", so that sites could be updated. I guess it would be up to the clients to ditch old versions of pages.

    Might also need some sort of (eep!) central authority to verify pages were who they claimed to be (so I couldn't take over CNN, for example). Maybe just signed keys for each content provider would be good enough?
    • Might also need some sort of (eep!) central authority to verify pages were who they claimed to be (so I couldn't take over CNN, for example).

      I have recently performed a cursory exploration of using connected users' caches as a source for a p2p web mesh. Yes, (some) people would scream bloody murder at having their "privacy" invaded, but then again they probably aren't the target market anyway.

      • I'm not worried about privacy, but security. Fortunately, most user-sensitive items could be filtered out by ignoring those which have a stored Cookie or Authorization header. Otherwise people might be able to view other people's private, per-user data.
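        (That filtering heuristic is straightforward to sketch: only offer cache entries whose original requests carried no Cookie or Authorization header. The cache-entry structure below is invented for illustration.)

          # Share only cache entries whose requests had no credentials attached.
          def shareable(entry):
              private = {"cookie", "authorization"}
              return not (private & {h.lower() for h in entry["request_headers"]})

          entries = [
              {"url": "http://news.example.org/logo.png", "request_headers": ["Host", "User-Agent"]},
              {"url": "http://mail.example.org/inbox", "request_headers": ["Host", "Cookie"]},
          ]
          print([e["url"] for e in entries if shareable(e)])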
  • by Loki_1929 ( 550940 ) on Wednesday January 29, 2003 @04:00PM (#5184192) Journal
    Indeed. It's a shame no one thought of it sooner [freenetproject.org].

  • by Anonymous Coward on Wednesday January 29, 2003 @04:10PM (#5184258)
    From the website:

    "Multimedia files are eligible to be distributed via the OCN if they are either released into the public domain or are available under a Creative Commons license that allows the content to be freely copied."

    and

    "Software is eligible to be distributed via the OCN if it is released into the public domain or is licensed under an Open Source license. The license must be an OSI Approved License that adheres to the Open Source Definition."

    Honestly, those constraints seem to seriously restrict the real usefulness of the network. It means if I want to put up a webpage and publish the contents with OCN, I need to go through all the rigamarole to make sure that everything's copacetic with whatever the "approved" licenses are, instead of just saying "ok, stick this out there." Which I may not want to do because if I've just created some magnificent thing (music file, video file, whatever) I may not necessarily want the license to allow anyone to download it and modify it any way they want and then essentially claim it as their own.

    Software is one thing, but online content is something else. Honestly, how many large "media files" have you seen that are licensed under an "Open Content" license?

    Sure, it's nice to have something like this that caters explicitly to the OpenSource crowd, but with those constraints, I can't see it as used for very much other than putting new versions of GPL'd software packages online.
    • Software is one thing, but online content is something else. Honestly, how many large "media files" have you seen that are licensed under an "Open Content" license?

      They don't have to be "Open" as in freely modifiable. They could be licensed under a Creative Commons 'redistribution without modification' licence. That would make perfect sense for movie trailers, game videos etc. It would end the stupidity of movie studios parking their trailers on apple.com, and of games sites requiring paid "premium" accounts to download what is basically advertising.

      Also, people putting out videos of their autonomous Lego stegosaurus or whatever could use this.

  • by KjetilK ( 186133 ) <kjetil AT kjernsmo DOT net> on Wednesday January 29, 2003 @04:14PM (#5184285) Homepage Journal
    Yeah, this is really cool, and I would certainly like to participate. But rather than insert yet more code in my pages, I think it would be much better if they just looked for the RDF metadata [creativecommons.org] provided by the CC project.

    The CC License Engine framework really does all that is needed to identify a license that would allow the P2P network to redistribute the material, so why not use it?

    Or am I missing something now?

  • by NanoGator ( 522640 ) on Wednesday January 29, 2003 @04:20PM (#5184332) Homepage Journal
    When I wanted to download a Linux distro, P2P was the first place I looked. I didn't want to cost the providers a gig of traffic when they're not making any money on it. Pity I didn't really find what I was looking for. This was a while back, though.

    I'm the type of guy who doesn't like sharing my bandwidth, but I'd be willing to make an exception for Open Source stuff, just on the grounds that it helps alleviate the costs of hosting free stuff.
  • by stienman ( 51024 ) <.adavis. .at. .ubasics.com.> on Wednesday January 29, 2003 @04:33PM (#5184459) Homepage Journal
    I looked over the website and the site for the current client, and found only faint, unspecific references to what loading such a client does to your machine and internet connection.

    This is terrible.

    We complain when Gator is loaded as an 'add-on' to our system, yet we don't mind if we are not allowed to download some content without loading some P2P app which then uses our disk space and internet connection to serve others?

    They need to put up a specific message that says, in effect, "This download client will significantly speed up the process of obtaining this file. Once downloaded your computer will allow other people to download this same file, or portions of it, from your computer so they can gain the same speed benefit you will get. There is no security risk, and you can stop the client from letting others download this file by moving or deleting the file, or ending the client by doing x, y and z. If you wish to simply download the file normally without installing this client, click here - otherwise click 'OK'"

    Yes, we all understand what P2P means - we are donating part of our computer and network to the P2P network for as long as we are connected to the internet. But this is not common terminology - ask a non-computer expert who has spent hours downloading music from their favorite P2P app what the P2P app does, and all they know is that they can get "free" music with this cool program. They often have no idea that others are downloading music from their computer, etc.

    This may slow down adoption, but the reality is that the backlash that may come out against it is not worth the extra adoption it may gain without full and well-explained disclosure - as well as a method to download the file normally.

    -Adam
  • I think that the CC project is a great idea, and some kind of P2P distribution is great too. I do have some minor concerns though; one is over their Share Alike license.

    On the FSF web site there is a short list of Licenses For Works Besides Software and Documentation [fsf.org] and the Creative Commons Share Alike license is not mentioned, so I sent the FSF an email about this and the opinion seems to be that the Share Alike license would not be considered GPL-compatible, which I think is a shame. The email about this can be found on the cc-licenses list archive [ibiblio.org]

  • Innovative!!! (Score:1, Redundant)

    by 1stflight ( 48795 )
    Imagine a content delivery system based on p2p instead of http - just try to Slashdot it! The possibilities are amazing! Now, just secure it against worms, trojan horses and altered data...
  • I looked around the web site and couldn't find any mention of the relationship between this project and the original OpenContent project [opencontent.org] that maintains the Open Publication Licence. What's the story?
  • Very cool, but what about bitrot? If I download a FreeBSD version from it, will it be 2.2.5? Or 5.0? I hope there is a good way to check.

    I hope this works. It'd be a nice alternative to porn and mp3's on P2P networks. (Even though it is its own P2P network). Maybe distributed CVS should be next? That way we don't overload a server, but we can still download just the updates we need?

  • I always thought P2P had fantastic potential for things like free software, educational materials, unpopular political speech...

    What we need is a P2P network that outright will not allow certain types of content. That's the only way we'll ever keep the SNR up and avoid giving universities and ISPs an excuse to restrict usage.
  • I have an UltraSPARC that can handle the I/O, and I am actually working on just such a site that could work with that type of system.

    Nice to see that yet again I got a reason to fire up that compiler.

  • Hey, I've just been through the 'pain' of installing gentoo - why the hell can't they use P2P style downloads spanned across the mirrors?
    That 40MB file downloads at a painful 2k because the mirror you picked is a bit busy at the moment.

    And why doesn't it continue to download while compiling? P2P networks let you play a file and download at the same time.

    Many more OT gripes about emerge... but I'll save them for a patch.
  • Regarding:

    This document specifies HTTP extensions that bridge the current location-based Web with the Content-Addressable Web. -- HTTP Extensions for a Content-Addressable Web [open-content.net]

    The World Wide Web is "the universe of network-accessible information [w3.org]", i.e. anything with a URI, including URIs that are not tied to a particular hostname.

    The Web already includes non-location-based URIs like mid: [ietf.org] (for referring to message-ids), and urn:sha1: for referring to a specific set of bits by their checksum.

    This proposal seems like a decent way of bridging HTTP-space with URN-space, but please remember that the Web is more than just HTTP. (see also: URIs, URLs, and URNs [w3.org])

    Anyway, it seems to me that sites that tend to suffer from slashdotting are:

    1. those that use dynamically-generated pages for what is basically static content: this problem can be fixed by sites making sure their content is cacheable [mnot.net], and by further deployment of HTTP caches. (I'm not convinced a p2p-style approach is the solution here.)

    2. those with large bandwidth needs (kernel images, linux distribution .iso's, multimedia): as p2p software becomes more mature and widely deployed, everyone will have a urn:sha1: resolver on their desktop (pointing to their p2p software of choice), then whenever a new kernel is announced, the announcement can say:

      Linux kernel version 2.4.20 has been released. It is available from:

      Patch: ftp://ftp.kernel.org/pub/linux/kernel/v2.4/patch-2.4.20.gz
      a.k.a. urn:sha1:OWXEOVAK2YJW3G6XSULXDWFCNWTX7B2K
      Full source: ftp://ftp.kernel.org/pub/linux/kernel/v2.4/linux-2.4.20.tar.gz
      a.k.a. urn:sha1:PPWXYMA32YNDNO35UD3IQTCWBVBYK5DC

      and people can just fetch the files using urn:sha1 URIs instead of everyone hitting the same set of mirrors. (gtk-gnutella already supports searching on urn:sha1: URIs [sourceforge.net])
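      (For what it's worth, the urn:sha1 form above is just the base32-encoded SHA-1 of the file's bytes; a quick sketch, with the filename as an example:)

        # Compute the urn:sha1 URN (base32 of the SHA-1 digest) for a file.
        import base64, hashlib

        def sha1_urn(path):
            h = hashlib.sha1()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            return "urn:sha1:" + base64.b32encode(h.digest()).decode("ascii")

        print(sha1_urn("linux-2.4.20.tar.gz"))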

  • I seem to remember an article [linuxjournal.com] in a LinuxJournal a while back talking about uploading the entire Debian archive onto freenet [freenetproject.org] and then using apt-get to download it. If that's the case, then you just have to point it to the correct freesite and let apt-get do the rest for you. I'm sure that a similar program, if not apt, could be created to automatically download and update new open source programs. So maybe instead of trying to reinvent the wheel, they could just do that. Plus, maybe then, there'd be a decent number of people with semi-permanent nodes on freenet.

    Just my $0.02,
    La Camiseta