The Internet | Programming | IT | Technology | Your Rights Online

Enhancement To P2P Cuts Network Costs 190

psycho12345 sends in a News.com article on a study, sponsored by Verizon and Yale, finding that if P2P software is written more 'intelligently' (by localizing requests), the effect of bandwidth hogging is vastly reduced. According to the study, reworking P2P into what they call P4P can reduce the number of 'hops' by an average of 400%. With localized P4P, less of the sharing occurs over large distances; requests go instead to (geographically) nearby clients. The NYTimes covers the development from the practical standpoint of Verizon's agreement with P2P company Pando Networks, which will be involved in distributing NBC television shows next month. So the network efficiencies will accrue to legal P2P content, not to downloads from The Pirate Bay.
  • 400%? (Score:5, Insightful)

    by Sam H ( 3979 ) <sam@zoy.org> on Friday March 14, 2008 @10:23AM (#22750764) Homepage
    How do you reduce the number of 'hops' by an average of 400%? Negative number of hops? Also, FP.
    • Re:400%? (Score:4, Informative)

      by IndustrialComplex ( 975015 ) on Friday March 14, 2008 @10:38AM (#22750880)
      They probably discussed the number so many times that they lost track of how it was referenced. Let's say they cut it down to 25 from 100. Going from their method back to the old method would be a 400% increase in the hop count.

      Sloppy, but we can understand what they were trying to say.
      • Re: (Score:2, Interesting)

        by LandKurt ( 901298 )
        Well, technically going from 25 to 100 is a 300% increase, since the increase is 75. But I realize that whenever the ratio between numbers is four to one it's going to be commonly referenced as 400%, regardless of whether it should actually be a 300% increase or 75% decrease. The mind fixates on the factor of four and wants to use 400 as the percentage. The correct numbers just feel wrong.

        Interestingly this mistake doesn't happen with small changes like 10 or 20 percent. But as soon as something doubles i
        • Erm, it's not actually a mistake. It depends on whether you parse "increase" as additive or multiplicative. In everyday English it can be either, so what you're describing is ambiguity rather than error.
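
          A quick Python check of the three readings being argued over here, using the parent's 25-and-100 example:

              def pct_increase(old, new):
                  # Additive reading: growth relative to the starting value.
                  return (new - old) / old * 100

              def pct_decrease(old, new):
                  return (old - new) / old * 100

              print(pct_increase(25, 100))  # 300.0 -> "a 300% increase"
              print(pct_decrease(100, 25))  # 75.0  -> "a 75% decrease"
              print(100 / 25 * 100)         # 400.0 -> "400%" as a bare ratio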
    • Re:400%? (Score:4, Informative)

      by MightyYar ( 622222 ) on Friday March 14, 2008 @10:42AM (#22750920)
      The number 400% appears nowhere in the article.
    • Re:400%? (Score:5, Insightful)

      by SatanicPuppy ( 611928 ) * <Satanicpuppy&gmail,com> on Friday March 14, 2008 @10:42AM (#22750924) Journal
      Just typical market speak. 400% sounds sexier than "a factor of four".

      The problem that leaps to my mind is that either you're going to have to collect a huge chunk of routing information so your client can figure out which peers are "close" to you, or a third party is going to have to manage the peering...Neither one of those thrills me, especially since an ISP is pushing the technology, which would make them the obvious third party.
      • Maybe use the data from the DNS records to correlate blocks of IPs that all belong to the same organization? Apply a weight first to IPs coming from the same organization you're a member of, and then a second weight to those that are geographically close (using one of the many services out there that correlate IP to physical location [poor granularity though]). Might even be able to apply some logic that says something like "if getting high latency from IP in block X, weight other IPs from block X lower". M
        • Or maybe use the hop count so that you're measuring network distance rather than geographical distance. You know, the way they describe IN THE SUMMARY.
          • Whoops my bad. It's not in the summary at all, it's in one of the articles. No one could be expected to read that far. Carry on, nothing to see here...
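
            For what it's worth, a rough sketch of how a client could estimate that network distance itself, with no ISP cooperation, by reading the TTL remaining in a ping reply. It assumes the peer's stack started from one of the common initial TTLs (64, 128, or 255), and scapy needs raw-socket privileges, so treat it as an illustration rather than a drop-in:

                from scapy.all import IP, ICMP, sr1  # pip install scapy; run as root

                def estimated_hops(dst, timeout=2):
                    # Infer hop count from how far the reply's TTL has been
                    # decremented below the nearest common initial value.
                    reply = sr1(IP(dst=dst) / ICMP(), timeout=timeout, verbose=0)
                    if reply is None:
                        return None  # filtered or unreachable
                    initial = min(t for t in (64, 128, 255) if t >= reply.ttl)
                    return initial - reply.ttl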
    • Re:400%? (Score:5, Insightful)

      by ThreeGigs ( 239452 ) on Friday March 14, 2008 @10:58AM (#22751074)
      It gets worse. From RTFA:

      "Using the P4P protocol, those same files took an average of 0.89 hops"

      How do you possibly get an average of LESS than one hop, unless you're getting the file from yourself?
      • Re:400%? (Score:5, Funny)

        by mcrbids ( 148650 ) on Friday March 14, 2008 @11:10AM (#22751206) Journal
        How do you possibly get an average of LESS than one hop, unless you're getting the file from yourself?

        Easy! They ran it in simulation, using VMware. Have you ever used VMware? It's an amazing tool that makes an excellent platform for simulations and prototypes, especially when you need to know exactly how applications will perform in the real world.

        Game developers, for example, routinely use VMware sessions. Especially the hard-core, 3D FPS developers.

        No, really!
        • by Nullav ( 1053766 )
          With those results, I'm going to assume VMWare isn't made for testing transfer protocols. You know, unless anti-routers exist.
      • 0.89 hops (Score:3, Insightful)

        by chiasmus1 ( 654565 )
        How do you possibly get an average of LESS than one hop, unless you're getting the file from yourself?

        Usually when people are talking about hops, they are referring to routers. The only way you would not go through a router is if you and the source were on the same LAN. If you get an IP address from your ISP and it is one of the private ones (i.e. 192.168.0.0/16) then you will likely have to go through a NAT machine before you will be able to see anyone. If your IP address is a publicly routable address
      • If a hop is defined as traversing a router or bridge then you could have 0 hops by trading with someone in the same subnet.
      • by ivan256 ( 17499 )
        Maybe if you're measuring from the ISP's backbone router, out, and not counting in-network hops? Maybe they meant 0.89 fewer hops? Sounds like they need to proofread.
      • Re:400%? (Score:5, Informative)

        by laird ( 2705 ) <lairdp@nOSPAM.gmail.com> on Friday March 14, 2008 @04:53PM (#22754790) Journal
        Speaking as the guy that ran the test, I should explain the "hop count" decrease observed in the test in more detail than the article. First, I should clarify that the 'hop' is a long-distance link between metro areas, because that is the resource that is scarce - we ignored router hops, because they aren't meaningful, and generally aren't visible inside ISP infrastructures for security reasons. This means that data that moves within a metro area is zero hops, data pulled from a directly connected area is one 'hop', and so on.

        So in the field test we saw data transmission distance drop from an average of 5.5 'hops' to 0.89 'hops'. This happens because P4P provides network mapping information, allowing the p2p network to encourage localized data transfers. Generic p2p moved only 6.27% of data within a metro area, while p4p intelligence resulted in 57.98% same-metro-area data transfer. Thus deliveries are both faster and cheaper.
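
        As a sanity check on how an average can fall below one hop: it's just a weighted average dominated by zero-hop (same-metro) transfers. The test report quoted above doesn't give the full distribution, so the remote-transfer average below is an assumption chosen to make the arithmetic land on 0.89:

            same_metro = 0.5798      # fraction of data moved at 0 metro-hops (from the test)
            remote_avg_hops = 2.12   # assumed average for the other 42% (illustrative only)

            avg_hops = same_metro * 0 + (1 - same_metro) * remote_avg_hops
            print(round(avg_hops, 2))  # ~0.89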
    • Re: (Score:2, Funny)

      by kaizokuace ( 1082079 )
      You can maximize the bitterness of the hops by adding more of them at the beginning of the boil. A small amount of hops at the end of the boil... oh, what are we talking about again?...
    • I suggested enhancements such as this in 2006. http://www.aigarius.com/blog/2006/08/12/bit-horizon/ [aigarius.com]
  • by TubeSteak ( 669689 ) on Friday March 14, 2008 @10:25AM (#22750776) Journal

    For other ISPs to reap the benefits Verizon did in the test, they too would have to share information about their networks with file-sharing companies, information they normally keep close to their chests.
    Excuse my ignorance, but what about their network is secret, other than the prices they're paying?
    Network topology isn't & can't be a secret...
    • Re: (Score:2, Informative)

      by Anonymous Coward
      The answer is "a lot"

      How much capacity a device has, how many links it has, how much it might cost a carrier to use those links. How much capacity the switching devices in that network have, what firewall/filtering might be in place. Where the devices are physically located.

      There's a lot more to a network than just IP addresses.
    • Re: (Score:3, Informative)

      by truthsearch ( 249536 )
      My guess is the geographic location of IPs, since they're not just talking hops, but distance. If the hops are all geographically local, the data likely transfers through fewer ISPs and backbones. I don't know much about the details, so this is just my interpretation of the claims.

      But wouldn't a protocol that learns and adjusts to the number of hops be nearly as efficient? If preferential treatment were given to connections with fewer hops and the same subnet I bet they'd see similar improvements.
      • by brunes69 ( 86786 )
        Geographic location of IPs is not secret.

        www.maxmind.com

        If your project is open source their database is free.
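
        A minimal sketch of that lookup using MaxMind's current Python client and a GeoLite2 database file. (In 2008 the free offering was the legacy GeoLite format; the API names here are today's, so treat this as illustrative.)

            import math
            import geoip2.database  # pip install geoip2; download GeoLite2-City.mmdb

            reader = geoip2.database.Reader("GeoLite2-City.mmdb")

            def coords(ip):
                loc = reader.city(ip).location
                return loc.latitude, loc.longitude

            def km_between(ip_a, ip_b):
                # Great-circle (haversine) distance between two IPs' locations.
                (la1, lo1), (la2, lo2) = coords(ip_a), coords(ip_b)
                la1, lo1, la2, lo2 = map(math.radians, (la1, lo1, la2, lo2))
                h = (math.sin((la2 - la1) / 2) ** 2
                     + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
                return 2 * 6371 * math.asin(math.sqrt(h))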
    • by mr_mischief ( 456295 ) on Friday March 14, 2008 @11:01AM (#22751112) Journal
      You seem so certain.

      Your traceroute program doesn't tell you when your traffic is being routed four hops through a tunnel to cut down on visible hops and to save space in the ISP's main routing table. Without the routing tables at hand, you don't know the odds of being routed through your usual preferred route versus a backup route kept in case of congestion. Nothing from the customer end shows where companies like Level 3 and Internap have three or four layers of physical switches with VLANs piled on top between any two routers. Nothing tells you when you're in a star build-out of ten mid-sized cities that all go to the same NOC versus when you're being mesh routed over lowest-latency-weight round robin; you might guess by statistical analysis, but mesh routing of commercial ISP traffic outside the main NAPs is getting more and more rare.

      There's a lot you can easily deduce, especially if your ISP uses honest and informative PTR records. There's still much that an ISP can do that you'll never, ever know about.

      I worked for one ISP where we had 5 Internet connections in four cities to three carriers, but we served 25 cities with them. We had point-to-point lines from our dial-in equipment back to our public-facing NOCs. We had a further 18 or so cities served by having the lines back-hauled from those towns to our dial-in equipment. We had about 12k dialup customers and a few hundred DS1, fractional DS1, frame relay, and DSL customers. Everyone's traffic went through one of two main NOCs on a good day, and their mail, DNS, AAA, and the company's web site traffic never touched the public Internet unless we were routing around trouble. In a couple of places we even put RADIUS slaves and DNS caching servers right in the POP.

      I worked for another that served over 40k dial-up and wireless customers by the time they sold. We had what we called "island POPs". Each local calling area we served had dial-in equipment and a public-facing 'Net connection. Authentication, Authorization, and Accounting, DNS, Mail, and the ISP's website traffic all flowed over the public Internet except in the two towns we had actual NOCs. There were tunnels set up between routers that made traffic from the remote sites to the NOCs look like local traffic on traceroute, but that was mainly for our ease of routing and to be able to redirect people to the internal notification site when they needed to pay their late bills. We (I, actually) also set up L2TP so that we could use dial-up pools from companies like CISP who would encapsulate a dial-in session over IP, authenticate it against our RADIUS, and then allow the user to surf from their network. We paid per average used port per month to let someone else handle the customer's net connection while we handled marketing, billing, and support.

      The first ISP I worked for had lines to four different carriers in four different NAPs in four different states, lots of point-to-point lines for POPs, and a high-speed wireless (4-7 MBps, depending on weather, flocks of birds, and such) link across a major river to tie together two NOCs in two states. Either NOC could route all of the traffic for all the dozens of small towns in both states as long as one of our four main connections and that wireless stayed up (and all the point-to-point ones did, too). If the wireless went down, the two halves of the network could still talk, but over the public Internet. That one got to about 10k customers before it was sold.

      At any of those ISPs, I couldn't tell you exactly who was going to be able to get online or where they were going to be able to get to without my status monitoring systems. On one, all the customers could get online even without the ISP having access to the Internet, but they could only see resources hosted at the ISP. Yet that one might drop five towns from a single cable break. Another one might keep 10k people offline due to a routing issue at a tier-1 NAP, but everyone else was okay. However, if that one's NOC went offline, anyone surfing in other
      • by Xelios ( 822510 )

        We had about 12k dialup customers and a few hundred DS1, fractional DS1, frame relay, and DSL customers. Everyone's traffic went through one of two main NOCs on a good day, and their mail, DNS, AAA, and the company's web site traffic never touched the public Internet unless we were routing around trouble. In a couple of places we even put RADIUS slaves and DNS caching servers right in the POP.
        My HED hurts...
    • Re: (Score:3, Interesting)

      by leuk_he ( 194174 )
      ISPs are always very reluctant to reveal that they have no redundancy in their outside links to the rest of the internet. That information just is not available. And how peering agreements work is mostly hidden.

      They simply do not tell, and there is no established protocol for getting that information reliably. This p4p would give this information in a way usable to p2p applications.

      One disadvantage of p4p is that not everyone will be equal according to p4p. It might reason that all Americans can b
    • And in fact it would be in the network operators' interests to provide this sort of information to P2P developers. After all, the operator derives most of the benefit of the reduced loading from this "P4P" approach.

      Thus all peer-to-peer software, regardless of type or legality, could be done more efficiently.
  • So suddenly the BitTorrent protocol is illegal now?
  • by Dr.Merkwurdigeliebe ( 1055918 ) on Friday March 14, 2008 @10:27AM (#22750796) Homepage
    ... or is it encouraging to see network providers taking a stance other than p2p is bad? This looks good - kind of like "p2p isn't going away, so as long as we have to live with it, let's try to make the best of it"
    • by Tridus ( 79566 )
      That's how I hope they take it. If it works as well as they claim, though, this isn't good just for ISPs. It's good for the people using it to download stuff too. I mean, getting data from the other side of the city usually has lower latency than getting it across a trans-Atlantic cable.
    • by stiggle ( 649614 )
      They're using it to distribute their own content - they can still be draconian against other P2P content coming into their network infrastructure. Plus they've said they're not looking at putting the technology back into the community for other P2P clients to use. So basically they've done this to save themselves money.

      • But it does show that there is apparently a lot of room for improvement over what is in the wild now. It demonstrates that the bandwidth a lot of companies declared wasted by torrent traffic was indeed waste, and not an insurmountable obstacle whose only solution was to throw more bandwidth at the problem.

    • How I take this is that Verizon and NBC are going to use this to fight against Net Neutrality saying "See? If we prioritize the packets to local nodes within our own networks, we get a 400% improvement in data throughput! This means the internet will be 400% better without net neutrality!"
  • So the network efficiencies will accrue to legal P2P content, not to downloads from The Pirate Bay
    ...and they're going to differ between the two how, exactly? (Excuse my ignorance if I'm missing something.)
    • by guruevi ( 827432 )
      The network will force you to

      a) view it WITH commercial breaks every 5 minutes (or worse, since it's now on the interwebs, it might also contain a lot of Cialis and Viagra ads)
      b) use it only on the computer you downloaded it to
      c) be unable to fast forward (or backward) without restarting a commercial

      This will of course add to the revenue and, on the other hand, turn people off the format, so they'll go back to getting it from TPB.
    • by Firehed ( 942385 )
      Just a hunch, but the (lack of) pinging of tracker.thepiratebay.org might give it away.
    • Re: (Score:2, Insightful)

      Protocol. Pirate Bay will be a torrent; their "P4P" client will use a different protocol. Now, I don't see why someone couldn't write a bittorrent client that would do the same thing (seek relatively local IPs from a tracker). It is public knowledge (or at least readily available) what ISP an IP belongs to, and what country it is in. In some cases it can be readily localized even further. (Large ISPs typically have local identifiers in the hostnames of their routers. For example they may use something
  • New math (Score:5, Interesting)

    by ZorbaTHut ( 126196 ) on Friday March 14, 2008 @10:34AM (#22750844) Homepage
    Reducing hops by 400%, eh? That's a nice trick. Can we reduce bandwidth usage by the same amount? I wouldn't mind some free bandwidth.

    I honestly can't figure out where "reduce by 400%" came from. They say the average hops were reduced from 5.5 hops to 0.89 hops, which is either 84% if you're not an idiot or 616% if you are. So I'm really quite confused here. Go figure.
    • Re: (Score:3, Funny)

      by L4t3r4lu5 ( 1216702 )
      Isn't a mean (assumed from "average") of 0.89 hops the same as saying that the median value is less than 1?

      Is it possible that pixies and angel farts are carrying packets between peers in your model?
    • Re:New math (Score:5, Informative)

      by MightyYar ( 622222 ) on Friday March 14, 2008 @10:54AM (#22751034)
      I think I figured out their math, and you aren't going to like it:

      5.5 * 0.89 - 0.89 = 4.005, or 400%

      As opposed to:

      ( 5.5 - 0.89 ) / 5.5 = 84%
    • Re: (Score:2, Funny)

      by unbug ( 1188963 )

      I honestly can't figure out where "reduce by 400%" came from. They say the average hops were reduced from 5.5 hops to 0.89 hops, which is either 84% if you're not an idiot or 616% if you are.
      That's easy. It came from the 4 in P4P. The more accurate P6P had been vetoed by marketing as too nasty.
  • by n3tcat ( 664243 ) on Friday March 14, 2008 @10:37AM (#22750872)
    While I understand what they're saying here, and I understand the surface intent of the message, I get this feeling that there is some sort of devious underlying motive here. Or it could just be that I have my Slashd^H^H^H^Htinfoil hat on a bit too tight.
    • by evanbd ( 210358 )

      Your computer is broadcasting an IP address!

      Seriously, if your tinfoil hat is on that tight, I have some "security" software to sell you. P2P isn't anonymous, not the way it's normally implemented. If you actually want anonymous P2P, you need to go to something like Freenet [freenetproject.org].

    • Localizing would also mean normally higher speed. I get much much higher speeds domestically than across the Atlantic, for example.

      So it would be a double edged sword...
  • innumeracy (Score:4, Informative)

    by MyNymWasTaken ( 879908 ) on Friday March 14, 2008 @10:37AM (#22750874)

    reduce the number of 'hops' by an average of 400%
    This glaring example of innumeracy is from the submitter, as it is nowhere in the article.

    On average, Pasko said that regular P2P traffic makes 5.5 hops to get its destination. Using the P4P protocol, those same files took an average of 0.89 hops.
    That works out to an average 84% reduction.
    • by pushing-robot ( 1037830 ) on Friday March 14, 2008 @10:49AM (#22750978)

      On average, Pasko said that regular P2P traffic makes 5.5 hops to get its destination. Using the P4P protocol, those same files took an average of 0.89 hops.

      Less than one hop on average? Wow, they must use patented "You downloaded that three months ago, you wanker! Look on your damn file server!" technology.
  • The NYTimes covers the development from the practical standpoint of Verizon's agreement with P2P company Pando Networks, which will be involved in distributing NBC television shows next month. So the network efficiencies will accrue to NBC's content, not to non-sanctioned P2P such as distributing open source software, free software, music, videos, and art in the public domain and licensed under creative commons, or to help distribute software updates for packages such as Azureus [sourceforge.net].

    There, fixed that for you
  • Honestly I think it's kind of a cool idea, but the sad part is I don't really see how this could be done on a software level... I think that's why they're citing legal content only... it will take some modifications to routing equipment, won't it?
    • by GreyyGuy ( 91753 )
      I haven't read the article and I'm far from a P2P or IP routing expert, but wouldn't it be possible to make a best guess at proximity by pinging the available peers, counting the hops and timing each one to estimate which are closest, and then focusing on sharing with those?
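
        The timing half of that needs nothing beyond the standard library. A rough sketch that times a TCP handshake as a latency estimate and keeps the quickest peers (6881 is the classic BitTorrent port, used here only as a placeholder default):

            import socket
            import time

            def rtt_ms(ip, port=6881, timeout=2.0):
                # Rough round-trip estimate: time a TCP handshake to the peer.
                start = time.monotonic()
                try:
                    socket.create_connection((ip, port), timeout=timeout).close()
                except OSError:
                    return None  # unreachable or firewalled
                return (time.monotonic() - start) * 1000

            def closest_peers(peers, n=10):
                timed = [(rtt_ms(ip, port), ip, port) for ip, port in peers]
                timed = [t for t in timed if t[0] is not None]
                return [(ip, port) for _, ip, port in sorted(timed)[:n]]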
    • by vrmlguy ( 120854 )

      Honestly I think its kind of a cool idea, but the sad part is I don't really see how this could be done on a software level...
      Why not just do a 'traceroute' to all of the seeds as you discover them, and penalize the ones that are more hops away?
      • Re: (Score:3, Interesting)

        by laird ( 2705 )
        "Why not just do a 'traceroute' to all of the seeds as you discover them, and penalize the ones that are more hops away?"

        That would help peers pick between known peers to exchange data with. The problem is that if you're in a large swarm, you'll only know about a small subset of the swarm, and thus almost certainly miss the best peers to connect to. For example, if you're in a swarm with 10,000 peers, and you know about a random 50 peers, you are 99.5% likely not to find out about the closest peer on the fi
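
          The 99.5% figure checks out: with a random 50 contacts drawn from the 9,999 other swarm members, the chance that one specific peer (say, the closest) is among them is only 50/9999. A two-line check:

              from fractions import Fraction

              swarm, known = 10_000, 50
              p_miss = 1 - Fraction(known, swarm - 1)  # closest peer not in your sample
              print(float(p_miss))  # ~0.995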
    • by brunes69 ( 86786 )
      Like I posted above, you can use www.maxmind.com's downloadable database to find the geographic location of any IP with quite high granularity. The database is free to use for open source projects as well.
      • The biggest issue here, though, is that not all switches and COs have accurate location data (which is where maxmind's database comes from), and some have no data at all (to my knowledge anyways). This would help for the most part, but it won't work perfectly. I'm also curious how they define 'localized.' Like, local to a single CO? Local to a single switch? Local to a town.. city... state... province... country?
  • by FredFredrickson ( 1177871 ) * on Friday March 14, 2008 @10:42AM (#22750922) Homepage Journal
    For this reason, Verizon doesn't suck for broadband use. In my area, I have Verizon DSL (they haven't given us Fios yet, but they ran the fiber cables a few years back) and I don't have any port blocking (that's right folks, I can send email to ANY server), and they don't limit P2P or BitTorrent (my downloads are fast and fresh). And they haven't turned records over to the government (or at least not reportedly, yet). So far, in the category of big ISPs, Comcast vs Verizon, Verizon is the underdog. Which is funny, because start arguing cell phone policies and prices, and watch the argument change completely.
    • For residential FIOS, Verizon blocks incoming port 80 and 25. But, I haven't found any outgoing port blocks.

      Even a residential subscriber can get business FIOS, for about double the monthly fee. It has a static IP, and multiple IPs are available. However, for some obscure reason business FIOS doesn't play well with FIOS TV (which uses the 'Net connection to download video-on-demand and program guide info).

  • They keep touting this P2P protocol, but never actually say what it is. I'll assume it's BitTorrent, unless by 'protocol' they really mean 'network'. I'm guessing they just like the buzzwords.
  • by Opportunist ( 166417 ) on Friday March 14, 2008 @10:47AM (#22750964)
    And let's face it, people, the next protocol will have to have a few features to be accepted, and having "local peers" isn't on the top of the list.

    What the list includes? Easy:

    1. Encryption
    2. Onion routing

    For very obvious reasons. And neither of them decreases bandwidth used. Quite the opposite.
    • by Sloppy ( 14984 )

      Encryption: useful. Onion routing: not useful, probably even negatively useful (it's got to slow things down).

      Increasing efficiency is useful too, because one way or another the user will end up paying for the bandwidth they use. The sad thing is that right now you are paying for it very indirectly. When the ISPs make it more direct (i.e. tiered pricing) you'll actually feel the market force that makes you care about efficiency. Intelligent people feel it right now (they are able to perceiv

      • People here pay for the bandwidth they use and still don't get it. Actually, it makes matters worse. It seems they're thinking "I paid for those 10 Gigs of traffic, now I somehow have to use them".
  • by ThreeGigs ( 239452 ) on Friday March 14, 2008 @10:47AM (#22750966)
    less of the sharing occurs over large distances, instead making requests of nearby clients (geographically).

    How about a BitTorrent client that gives preference to peers on the *same ISP*?

    Yeah, fewer hops and all is great, but if an ISP can keep from having to hand off packets to a backbone, they'll save money, and perhaps all the hue and cry over P2P will die down some. I'm sure Comcast would rather contract with UUnet for half as much traffic destined for other ISPs as they do now.

    Sort of a 'be nice to the ISPs and they'll be nicer to the users' scenario.
    • I was also thinking that if it's all above board and coordinated via the ISPs, there should be some good data available regarding bandwidth utilization... as in, they could positively shape the traffic to point to peers who are not currently uploading and use their available bandwidth, rather than to someone who is already uploading (a different file): a sort of P2P load-balancing routine.

    • Re: (Score:3, Interesting)

      by darthflo ( 1095225 )
      ISPs could easily achieve this without changing a single bit in most bittorrent implementations: jack up the bandwidth within their backbone to whatever's possible. Instead of limiting that ADSL2+ line to 5 Mbps, run it at 25 and throttle traffic to/from it to 5 Mbps at the edge of the network. Connections within the ISP's network would tend to max out those 25 Mbps; given some fiber connectivity and recent hardware, users could seed at gigabit throughputs within the provider's network.
      Going back to
    • by mcrbids ( 148650 )
      The problem of distributing large amounts of content *efficiently* was solved in 1985. No, I'm not kidding. [wikipedia.org]

      Newsgroup servers routinely distribute and cache content locally to minimize overall network traffic. They can distribute only the headers of the news feed, and then cache the content after it's been requested and downloaded.

      This is a *very* efficient content distribution system, and ironically, it's a system more resistant to takedown notices and the like than BT. (It's virtually impossible to entirel
      • Usenet was also the first thing that came to my mind when I read the summary. Calling it a solved problem is quite a stretch, though, especially given the kludgy way binary content has to pretend it is text to make it through NNTP, and all the extra overhead that entails.
    • Geographic locality isn't what's needed -- correct. All you really need to do is prefer to exchange data with whatever peers have low latency. "Latency" is a decent proxy for all kinds of things, and it is easy to compute.
  • by br00tus ( 528477 ) on Friday March 14, 2008 @10:56AM (#22751052)
    It's called Mbone [wikipedia.org]. It was created 15 years ago by a bunch of people including Van Jacobson, who had already helped create TCP/IP and wrote traceroute, tcpdump, and so forth.


    It would have made Internet broadcasting much more efficient, but it never took off. Why? Because providers never wanted to turn it on, fearing their tubes would get filled with video. So what happened? People broadcast videos anyhow, they just don't use the more efficient Mbone multicasting method.

    Furthermore, when I download a video via Bittorrent, there are usually only a few people, whether they have a complete seed or not, who are sending out data. So how local they are doesn't matter. If there are more people connected, usually most people are sending data out at less than 10K, while there is one (or maybe 2) people sending data out at anywhere from 10K to 200K. So usually I wanted to be hooked to them, no matter where they are - I am getting data from them at many multiples of the average person.

    I care about speed, not locality. The whole point of the Internet and World Wide Web is that locality doesn't matter. Speed is what matters to me. For Verizon, however, they would prefer most traffic goes over their own network - that way they don't have to worry about exchanging traffic with other providers and so forth. Another thing is - there is tons of fiber crisscrossing the country and world, and we have plenty of inter-LATA bandwidth; the whole problem is bandwidth from the home to the local Central Office. In a lot of countries, natural monopolies are controlled by the government - I always hear about how inefficient and backwards that would be, but here we have the "last mile" controlled by monopolies and they have been giving us decades-old technology for decades. In fact, the little attacks by the government have been rolled back: in a reversal of the Bell breakup, AT&T now owns a lot of the last mile in this country. Hey, it's a safe monopoly that the capitalists, I mean, shareholders, I mean, investors can get nice fat dividends from instead of re-investing in bleeding edge capital equipment, so why give people a fast connection to their homes? Better to spend money on lawyers fighting public wifi and the like, or commissars and think tanks to brag about how efficient capitalism is in the US of A in 2008.

    • http://finance.yahoo.com/q/bc?s=T&t=1y [yahoo.com]

      The investors in AT&T have lost about 10% of their value in the last year.
      They never recovered from 2001 and are still at about 60% of their value then.

      This is true for many large corporations today.

      The executive class is looting and pillaging corporations at the expense of
      a) the workers (1 executive pay == 6000 $40k workers)
      b) the investors (see stock performance above-- think about adding $155 mill in profits that went to one man who took Home depot into the toile
    • Multicasting to clients in your own LAN/WAN infrastructure is not a big idea, it's common sense when you can expect 15% or more of them to want the same streams. There are reasons that multicast is not used: the providers will not have complete control of subscribers to the multicast. Even if they build the set-top box that receives the multicast stream and reports back, interception anywhere in the middle is possible. Multicast streaming of current cable system content means 'giving' it away... unless all the data is encr
  • What about if a torrent has no seeds or leeches in any remotely local area?

    This is why any "massive improvement" on this aspect makes me skeptical. We all know the reason they want to tie it to local peers is to save bandwidth costs by basically using only their own uploaders, which would slow speeds down astronomically. Overseas hosts that can do 300KB/s or more on an upload vs a local capped at 40KB/s. You decide.
  • I conjectured a couple years ago that this could be done simply by matching up IP addresses to autonomous system numbers and picking peers that are in the same AS number in preference to other peers.
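
    That conjecture is easy to prototype against Team Cymru's public DNS-based IP-to-ASN service. The record format in the comment below is theirs; dnspython is just one way to issue the query, so treat this as a sketch:

        import dns.resolver  # pip install dnspython

        def asn_for_ip(ip):
            # Reverse the octets and query the Cymru origin zone; the TXT answer
            # looks like: "15169 | 8.8.8.0/24 | US | arin | 1992-12-01"
            rev = ".".join(reversed(ip.split(".")))
            answer = dns.resolver.resolve(rev + ".origin.asn.cymru.com", "TXT")
            return answer[0].to_text().strip('"').split("|")[0].strip()

        def same_as(ip_a, ip_b):
            return asn_for_ip(ip_a) == asn_for_ip(ip_b)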
  • by kbonin ( 58917 ) on Friday March 14, 2008 @11:17AM (#22751296)
    Some of us working in the bleeding edge of p2p have been playing with these ideas for years to improve performance (I'm building open VR/MMO over P2P), here's the basics...

    Most true p2p systems use something called a Distributed Hash Table (DHT) [wikipedia.org] to store and search for metadata such as file location and file metadata. Examples are Pastry [wikipedia.org], Chord [wikipedia.org], and (my favorite) Kademlia [wikipedia.org]. These systems index data by ids which are generally a hash (MD5 or SHA1) of the data.

    Without going into the details of the algorithms, the search process exploits the topology of the DHT, which becomes something called an "overlay network" [wikipedia.org]. This lets you efficiently search millions of nodes for the IDs you're interested in in seconds, but it doesn't guarantee the nodes you find will be anywhere near you in physical or network topology space.

    The trick some of us are playing with is including topology data in our DHT structure and/or search, to weigh the search to nodes which happen to be close in network topology space.

    What they are likely doing is something along these lines, since they have the real topology instead of what we can map using tools like tracert.

    If they really want to help p2p, then they would expose this topology information to us p2p developers, and let us use it to make all our applications better. What they're likely planning is pushing their own p2p, which will be faster and less stressful on their internal network (by avoiding peering point traversal at all costs, which is when bandwidth actually costs THEM). The problem is their p2p will likely include other less desired features, like RIAA/MPAA friendly logging and DRM, and then they'll have a plausible reason to start degrading other p2p systems which aren't as friendly by their metrics, such as distributing content they don't control or can't monetize... Then again, maybe I'm just a cynic...
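
    To make the DHT half of that concrete, here is a toy sketch of Kademlia-style XOR distance plus the kind of topology weighting described above. The blending scheme (the alpha weight and the normalized net_cost map) is invented for illustration and is not any particular deployed system:

        import hashlib

        def node_id(data: bytes) -> int:
            # Kademlia-style 160-bit identifier from a SHA-1 hash.
            return int.from_bytes(hashlib.sha1(data).digest(), "big")

        def xor_distance(a: int, b: int) -> int:
            return a ^ b

        def pick_peer(target, candidates, net_cost, alpha=0.25):
            # candidates: {peer_id: peer}; net_cost: {peer: cost in [0, 1]}.
            # Score blends overlay distance (log scale) with network distance.
            def score(pid, peer):
                overlay = xor_distance(pid, target).bit_length()
                return (1 - alpha) * overlay + alpha * 160 * net_cost[peer]
            return min(candidates.items(), key=lambda kv: score(*kv))[1]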
    • Re: (Score:3, Informative)

      by laird ( 2705 )
      "If they really want to help p2p, then they would expose this topology information to us p2p developers, and let us use it to make all our applications better. What they're likely planning is pushing their own p2p..."

      P4P isn't a p2p network. P4P is an open standard that can be implemented by any ISP and any p2p network, and which has been tested so far on BitTorrent (protocol, not company) and Pando software, and the Verizon and Telefonica networks. Participants include all of the major P2P companies and ma
  • by Animats ( 122034 ) on Friday March 14, 2008 @12:13PM (#22751960) Homepage

    This has been my main criticism of "p2p" user-level networking for years. The selection of "peers" has no clue about network structure. The routing performance is just awful. Finally, someone is doing something about it.

    One problem is that, from an endpoint perspective, it's tough to extract network topology and bandwidth. Hop count is only moderately useful. But there are a few tricks one can use.

    There are several basic numbers of interest: bandwidth, delay ("lag"), hops, "bottleneck points", and commercial boundary crossings. Each of these can be measured.

    Delay, or lag, is the easiest to measure. A few pings and you've got it.

    With bittorrent, you're not committed to staying with a peer for an entire download. So you can observe the bandwidth of the peers you're talking to and preferentially use the higher bandwidth ones. You really have to transmit for a while to get a solid bandwidth number, especially since Comcast introduced "Boost" quality of service, which increases bandwidth allocation for a few seconds on demand, then reduces it.

    If you do a traceroute, you'll usually observe that many hops show low lag (those are usually hops within a single data center) while others show higher lag. The number of high-lag hops is the number of "bottleneck points" in the path.

    Commercial boundary crossings occur when packets cross from one ISP to another at a peering point. Users don't notice this much, but carriers are very interested in minimizing that traffic. Converting IP addresses to autonomous system numbers, as someone mentioned, can tell you when you're crossing a boundary.

    So it's possible to collect enough data to do intelligent routing without much help from the network provider. What to do with that data is a separate question, but a solvable one.
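
    The bandwidth-observation idea above amounts to keeping a smoothed per-peer rate so a brief "Boost" burst doesn't fool the client. One common way is an exponentially weighted moving average; the class name and the alpha value here are arbitrary illustrative choices:

        class PeerThroughput:
            """Smoothed estimate of a peer's observed download rate."""

            def __init__(self, alpha=0.1):
                self.alpha = alpha  # weight given to each new sample
                self.rate = None    # bytes per second

            def observe(self, nbytes, seconds):
                sample = nbytes / seconds
                if self.rate is None:
                    self.rate = sample
                else:
                    self.rate = self.alpha * sample + (1 - self.alpha) * self.rate

        # Prefer the peers with the highest smoothed rate when picking whom to keep.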

  • obvious (Score:3, Informative)

    by debatem1 ( 1087307 ) on Friday March 14, 2008 @02:29PM (#22753404)
    This is really freaking obvious. I wrote a p2p application that cached based on search requests and then fetched based on router hops years ago, and presumed it was nothing new then. I strongly doubt this will be an unencumbered technology if it ever sees the light of day.
