
Replacing TCP? 444

olau writes "TCP, the transfer protocol that most of the Internet is using, is getting old. These guys have invented an alternative that combines UDP with rateless erasure codes, which means that packets do not have to be resent. Cool stuff! It also has applications for peer-to-peer networks (e.g. for something like BitTorrent). They are even preparing RFCs! The guy who started it, Petar Maymounkov, is of Kademlia fame."
This discussion has been archived. No new comments can be posted.

  • by Power Everywhere ( 778645 ) on Thursday October 21, 2004 @11:38AM (#10587580) Homepage
    Now that Digital is little more than IP spread across a few different companies, maybe the holder of Decnet's patents could release the protocol under an Open Source license. If I recall correctly it was quite the networking layer.
    • Not all of Digital's network tech was sane...

      DEC MMJ

      Digital Equipment Co.'s proprietary Modified Modular Jack. It is identical to a standard USOC 6-position jack, except that the locking tab has been offset to the right to prevent an MMP (modified modular plug) from fitting into a USOC jack. AMP5-555237-2 Plug
      • I found one of these in a warehouse back when I was in highschool. I always wondered what such a useless looking cable could be for. It wasn't like that tab would have prevented me from putting the cable in a plug wrong. I ended up grinding off the offensive bit and using it anyway, it was 80ft of otherwise good cable encased in the stiffest plastic I've ever seen used. Now it runs through dirt.
    • by JohnGrahamCumming ( 684871 ) * <> on Thursday October 21, 2004 @11:55AM (#10587872) Homepage Journal
      It was quite a complicated set of protocols and IIRC the final versions were using TCP/IP as their transport layer and below. The versions before that were using OSI.

  • nonsense (Score:4, Interesting)

    by N3wsByt3 ( 758224 ) on Thursday October 21, 2004 @11:38AM (#10587581) Journal
    TCP may be old, but it can go on for another 50 years without any problem.

    • Re:nonsense (Score:4, Interesting)

      by savagedome ( 742194 ) on Thursday October 21, 2004 @11:43AM (#10587683)
      TCP may be old, but it can go on for another 50 years without any problem.

      Somehow, the title of your post is appropriate for the subject of your message.
      Did you pull that 50 years out of your..... you know.

      Change is not going to happen overnight. They are suggesting a better protocol which is still in the labs for the most part.

      Remember how long IPv6 has been out? And now look at what we are still using. It's all about changing it gradually.
      • Re:nonsense (Score:4, Insightful)

        by Aragorn992 ( 740050 ) on Thursday October 21, 2004 @03:25PM (#10590985)
        They are suggesting a better protocol which is still in the labs for the most part.

        And what makes this a better protocol? Its vast history of being a solid, reliable protocol with its massive amount of industrial testing? Oh wait, that's TCP.

        Frankly, new stacks that look better than old ones are a dime a dozen. Until you test them in the real world you will never know, and why bother changing TCP when it does a damn fine job right now?

        IPv6 was introduced primarily to address the shortage of IP addresses.
    • flamebait? (Score:4, Insightful)

      by N3wsByt3 ( 758224 ) on Thursday October 21, 2004 @11:52AM (#10587832) Journal
      I fail to see how it is flamebait to say that TCP can go on for another 50 years without problems.

      Exactly the same kind of post a bit below gets 'insightful'.

      It is simply true. Yes, there are some small drawbacks to TCP, but in the whole article they do not give a compelling reason to switch, let alone why one would *have* to. I mean, RTFA: TCP runs at 1-3% loss and the highest throughput would come at 3-5% loss... but so what? It's not optimal, but does it anywhere claim TCP is doomed because it's not optimal in certain areas?

      There are myriad things on the Net that aren't optimal; they have been here for years and will be for years to come. That doesn't make switching a necessity if the only complaint is that something isn't optimally suited.
    • Re:nonsense (Score:3, Insightful)

      by ebuck ( 585470 )
      Who flamebaited this? It's not derogatory, and it's a valid opinion. TCP's shortcomings are going to constantly be mitigated by better router and switch hardware.

      The article is little more than an advertisement for their socket management software, so it's no more news than what SCO produces, but from what I can gather...

      There's no way to obtain data that was truly lost, so they are using an error correcting encoding of the data (and wasting some of the bandwidth in the process). They're not really doing
  • Evil Bit? (Score:5, Funny)

    by Sexy Commando ( 612371 ) on Thursday October 21, 2004 @11:38AM (#10587588) Journal
    Does it have Evil Bit implemented?
  • Old!=bad (Score:5, Insightful)

    by gspr ( 602968 ) on Thursday October 21, 2004 @11:40AM (#10587627)
    The submitter says that TCP is getting old, but does that really tell us anything about how well it does its job?
    • Re:Old!=bad (Score:5, Interesting)

      by jfengel ( 409917 ) on Thursday October 21, 2004 @11:47AM (#10587769) Homepage Journal
      From the first paragraph of TFA:

      This problem stems from the fact that TCP calculates the congestion of the channel based on its round-trip time. The round-trip time, however, reflects not only the congestion level, but also the physical length of the connection. This is precisely why TCP is inherently unable to reach optimal speeds on long high-bandwidth connections.

      A secondary, but just as impairing, property of TCP is its inability to tolerate even small amounts (1% - 3%) of packet loss.

      I find it kind of interesting that these are two competing problems: one having to do with high bandwidth (and presumably high-reliability) connections, the other with low-reliability connections. My home DSL, however, often fits into the latter category: 3% packet loss is not uncommon on bad days. So maybe the two aren't so incompatible after all.
      • Re:Old!=bad (Score:3, Informative)

        by BridgeBum ( 11413 )
        TCP is a good protocol, but it is far from perfect. It does have a weakness in its windowing mechanism, which can vastly reduce throughput over long distances/mild latency with long durations. (Think FTP.) This is probably the problem they are referring to with #1.

        For #2, this could simply be retransmission problems. TCP has a strict sequential ordering to packets, so a packet lost in the stream could cause other packets to be discarded, forcing more retransmissions than were technically required. T
        • Re:Old!=bad (Score:5, Informative)

          by amorsen ( 7485 ) <> on Thursday October 21, 2004 @12:24PM (#10588264)
          TCP has a strict sequential ordering to packets, so a packet lost in the stream could cause other packets to be discarded

          That's what selective ack is for. It's pretty old hat at this point, everyone has it implemented.

        • Re:Old!=bad (Score:5, Informative)

          by ebuck ( 585470 ) on Thursday October 21, 2004 @12:42PM (#10588632)
          Most modern TCP/IP implementations use a sliding window algorithm internally, which caches the out-of-order packets while it's trying to recover the missing packet.

          So you're really only concerning yourself with ordered delivery when the delay in the natural order is greater than that of the sliding window (the cache of received packets that you're not going to tell the software about just yet).

          This takes care of trivial things where packets arrive "just barely" out of order, but in a true packet loss situation, can block the channel for quite some time.

          So the strict sequence you refer to is the sequence the packets are presented to the next layer (usually the application in Ethernet), not the sequence the packets are received by the machine.
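          The receive-side behavior described above can be sketched in a few lines. This is an illustrative toy, not any real stack's code: out-of-order packets are cached until the missing sequence number arrives, then the contiguous run is released to the next layer.

          ```python
          class ReorderBuffer:
              """Toy receive-side sliding-window reordering buffer."""

              def __init__(self):
                  self.next_seq = 0   # next sequence number the application expects
                  self.cache = {}     # out-of-order packets, keyed by sequence number

              def receive(self, seq, data):
                  """Accept one packet; return payloads now deliverable in order."""
                  self.cache[seq] = data
                  delivered = []
                  while self.next_seq in self.cache:
                      delivered.append(self.cache.pop(self.next_seq))
                      self.next_seq += 1
                  return delivered

          buf = ReorderBuffer()
          print(buf.receive(0, "a"))  # ['a']
          print(buf.receive(2, "c"))  # [] -- packet 1 still missing, 'c' is cached
          print(buf.receive(1, "b"))  # ['b', 'c'] -- gap filled, cache drains
          ```

          Note how a single lost packet (seq 1) holds back every later packet from the application, which is exactly the head-of-line blocking the comment describes.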
        • Re:Old!=bad (Score:4, Interesting)

          by InfiniteWisdom ( 530090 ) on Thursday October 21, 2004 @04:43PM (#10591925) Homepage
          No, the problem is with the way TCP's congestion window works. When there is no loss, the congestion window is increased linearly, typically at 1 packet a roundtrip. On the other hand, whenever a packet is lost the window is cut in half. On a high latency, high bandwidth link (i.e. one with a large delay-bandwidth product) the window required to keep the link saturated could be tens of thousands of packets. You could tweak the parameters, but ultimately you still have additive-increase multiplicative decrease (AIMD).

          The key problem with TCP is that it assumes that most losses happen because of packets being dropped due to congestion, not because of data corruption. Treating every loss as a congestion event worked well in the early days of the Internet, but is counterproductive today where the core of the Internet has plenty of spare capacity and congestion usually happens at the edges.

          If ECN were universal, one could ignore losses (for the purpose of congestion control). There are lots and lots of protocols that have been designed by the computer science research community... just look through the proceedings of, say, ACM SIGCOMM []. You'll find no shortage of new protocols.

          Designing one that has big enough advantages to justify the cost of switching is another matter. TCP works fine for today's cable modems and such. It's when you start talking about trans-continental multi-gigabit networks that its limitations become a problem.
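          The additive-increase/multiplicative-decrease dynamic described above can be sketched as a toy simulation (illustrative only, not real kernel code): the window grows by one packet per round trip and is halved on loss.

          ```python
          def aimd(rounds, loss_rounds, start=1):
              """Return the congestion window (in packets) after each round trip."""
              w, history = start, []
              for r in range(rounds):
                  if r in loss_rounds:
                      w = max(1, w // 2)   # multiplicative decrease on a loss event
                  else:
                      w += 1               # additive increase: +1 packet per RTT
                  history.append(w)
              return history

          # One loss at round 5: the window is cut in half, then climbs back linearly.
          print(aimd(10, loss_rounds={5}))  # [2, 3, 4, 5, 6, 3, 4, 5, 6, 7]
          ```

          The pain on a large delay-bandwidth-product link falls out of the arithmetic: if saturating the pipe needs a window of ~10,000 packets, recovering from a single halving takes ~5,000 round trips of linear growth.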
    • Re:Old!=bad (Score:5, Insightful)

      by RAMMS+EIN ( 578166 ) on Thursday October 21, 2004 @12:05PM (#10587984) Homepage Journal
      TCP does have its shortcomings.

      As they mention on their site, TCP's bandwidth usage is dependent on the latency of the link. This is due to the fact that sliding windows (the number of packets that are allowed to be out on the network) have a limited size. TCP sends a windowful of packets, then waits until one is acknowledged before sending another one. On high-latency links, this can cause a lot of bandwidth to go unused. There is an extension to TCP that allows for larger windows, addressing this problem.

      Another problem with TCP is slow start and rapid back-off. IIRC, a TCP connection linearly increases its bandwidth, but will halve it immediately when a packet is lost. The former makes for a very slow startup, whereas the latter causes a connection to slow down dramatically, even though a lost packet can be due to many factors other than congestion. Slow start has been addressed by allowing connections to speed up quicker, about rapid back-off I'm not sure.

      The solution this company provides seems to play nice with TCP by varying the transmission speed in waves. Apparently, this improves speed over TCP's back-off mechanism, but it obviously doesn't provide optimal bandwidth utilization.
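      The window/latency relationship in the comment above reduces to one bound: a window-limited connection cannot exceed window size divided by round-trip time. A quick illustration with made-up but typical numbers:

      ```python
      def max_throughput(window_bytes, rtt_s):
          """Upper bound on throughput (bytes/s) for a window-limited connection."""
          return window_bytes / rtt_s

      # Classic 64 KiB TCP window over a 100 ms long-haul link:
      print(max_throughput(64 * 1024, 0.100))        # 655360.0 B/s, ~5 Mbit/s
      # With a larger window (as the window-scale extension permits):
      print(max_throughput(4 * 1024 * 1024, 0.100))  # ~42 MB/s
      ```

      So on a high-latency link the pipe can sit mostly idle no matter how fat it is, which is the "bandwidth unused" problem the comment describes.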
      • Re:Old!=bad (Score:5, Informative)

        by Gilk180 ( 513755 ) on Thursday October 21, 2004 @01:04PM (#10588968)
        Actually, TCP increases exponentially until the first packet is dropped. It backs off to half, then increases linearly, until another packet is dropped, backs off to half ...

        This means that a TCP connection uses ~75% of the bandwidth available to it (after all this stabilizes). So if there is only a single tcp connection over a given length, it will be 75% full at best. However, the whole reason for doing a lot of this is to allow many connections to coexist. If you transmit as fast as possible, you will get the highest throughput possible, but you will end up with a lot of dropped packets and won't play nice with others.
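        The ~75% figure follows from the sawtooth shape: in steady state the window oscillates between W/2 and W, and the average of a linear ramp between those two values is (W/2 + W)/2 = 3W/4. A small numerical check (illustrative model, not a packet-level simulation):

        ```python
        def sawtooth_average(w_max, cycles=100):
            """Average window of an AIMD sawtooth oscillating between W/2 and W."""
            samples = []
            for _ in range(cycles):
                w = w_max // 2
                while w <= w_max:      # linear growth from W/2 up to W
                    samples.append(w)
                    w += 1
                # loss event: window halves, next cycle begins
            return sum(samples) / len(samples)

        print(sawtooth_average(100))  # 75.0 -- i.e. ~75% of the loss-point window
        ```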
        • Re:Old!=bad (Score:3, Interesting)

          by Harik ( 4023 )
          I call bullshit on your logic. I've got a number of point-to-point T1s for specific uses, and I can quiesce them and do large file transfers over them for backups. I get ~170k transfer speeds, which is the theoretical link maximum (packet/frame overhead). According to you, I should saturate out at 144k.

          Now, what they're doing isn't quite the same. You wouldn't serve webpages or email over their protocol. You would do something like bittorrent, though, since you can snag the missing frame later or from

      • Re:Old!=bad (Score:3, Informative)

        by j1m+5n0w ( 749199 )

        TCP's bandwidth usage is dependent on the latency of the link. This is due to the fact that sliding windows (the number of packets that are allowed to be out on the network) have a limited size.

        TCP's window size is not a fixed size. There is a maximum, but most "normal" connections are well below it. (Links with very high bandwidth and high delay may reach this limit, which can be increased somewhat with the right TCP extensions.) TCP's window size, in fact, regulates its bandwidth consumption.


        • Re:Old!=bad (Score:3, Interesting)

          by dgatwood ( 11270 )
          The back-off design works relatively well if you assume that the network is relatively reliable. As soon as non-bandwidth-related packet loss is introduced at any real frequency, it becomes the most horrible nightmare in existence. Add a filter to your firewall sometime that drops 1% of packets randomly. Try to use the connection.

          This is compounded when you start doing layered networking (PPPoSSH or similar) on top of such a system. Then, the packet dropping problem is replaced with multi-second late

  • by FooAtWFU ( 699187 ) on Thursday October 21, 2004 @11:40AM (#10587628) Homepage
    If it's not an open protocol (if they charge for use) it may find niche applications. If it is, it may proliferate. I wasn't able to find details about this on the site.
    • by RealAlaskan ( 576404 ) on Thursday October 21, 2004 @11:51AM (#10587814) Homepage Journal
      If it's not an open protocol (if they charge for use) ...

      It doesn't look good. Their webpage [] says:``We are planning to release Rateless Codes for free, non-commercial use, in open source under the GNU Public License.'' They are seriously confused about the terms of their own license. The GPL doesn't lend itself to ``free, non-commercial use'': it lets the licensee use and distribute freely, at any price, for any use.

      Either they'll lose that ``non-commercial use'' crap, or this will go nowhere.

      • They are confused (Score:4, Interesting)

        by ebuck ( 585470 ) on Thursday October 21, 2004 @01:04PM (#10588973)
        They tell you how they solved the problem, but fail to tell you how much impact the solution has overall.

        They're basically stating that now they can flood the connection with packets.

        But they've also told you that the packets contain your data in an error correcting encoding. What they don't mention is this:

        How much overhead is required by the error correcting encoding?

        How many errors can the error correcting encoding handle? (drops 1 packet = ok, drops 400 packets = bad)

        How much cpu computation is required to encode and decode the payload?

        How is the cpu overhead managed? (how much performance will be lost by context switching, etc.)

        So they're just playing the game of distracting people with the best part of their performance measurement, without bothering to mention the performance impact of all of the other trade-offs they admitted to making.

      • by Fnkmaster ( 89084 ) * on Thursday October 21, 2004 @01:47PM (#10589679)
        They apparently already have some problems with GPL compliance []. They are distributing binaries of a tool called Rateless Copy which they state uses the GPLed libAsync library under their own license [] while promising to deliver source code under the GPL "soon".

        Don't get me wrong, I think they mean well, but they are trying to prohibit commercial use of GPL-linked works. Nobody said the GPL was always friendly to your business plan, but you can either take it or leave it, not have it both ways. I know one of the founders of this company, Chris Coyne - he used to regularly come to my parties in college. Nice guys, and I am sure they mean well. They could use some serious guidance though on licensing and IP issues, not to mention trying to make a viable business out of network software, which is a tough proposition in itself.

        Feel free to contact me if you need help guys. :)

      • by NoMercy ( 105420 ) on Thursday October 21, 2004 @01:52PM (#10589788)
        It's quite bloody obvious what would happen: if the protocol takes off, the BSD folk will implement a BSD version, the Linux guys will then dump their version and use the BSD version because the BSD guys tend to write better code and it would likely be more compatible; Windows would then use the BSD version, and Mac OS X would have the BSD version by default.

        No point really in not releasing it under a BSD licence in the first place, save letting the BSD guys write a far better version for the world to use.

        The RFC is more important than the code anyway, and they're fools for not writing the RFC first.
    • The front page says "not patented". They could claim it as a trade secret, but not if they plan to introduce an RFC. So even if they don't wish to open their sources, there will be open implementations very quickly.

      Assuming there aren't any underlying IP issues. I'm not aware of any patents on the forward error correcting codes they're using, but that doesn't mean they don't exist. And assuming some jackass doesn't say, "Here's a thing and it's not patented; I'll patent it and then license it back to t
      • Actually, there is likely to be a mountain of patent issues. Not sure how legitimate these will turn out to be.

        Petar's paper was originally presented at the same conference where Digital Fountain also presented approximately the same thing (building on the LT codes that they had been shipping for years).

        I'm quite sure that DF has a stack of patents on their version.

        This may get interesting.

        (Disclaimer: DF laid me off in 2003)
  • by Anonymous Coward on Thursday October 21, 2004 @11:40AM (#10587636)
    Some inefficiencies are one thing, but you're going to need a compelling reason to get everyone to switch.
  • by RAMMS+EIN ( 578166 ) on Thursday October 21, 2004 @11:41AM (#10587651) Homepage Journal
    TCP is old, but that doesn't mean it's bad or that replacement is due. Some shortcomings have surfaced and been addressed. For the most part, TCP does a good job at what it was designed to do.
    • by billstewart ( 78916 ) on Thursday October 21, 2004 @12:46PM (#10588694) Journal
      As far as I can tell from the documentation, this isn't really a TCP replacement. It's basically a set of efficient forward error correction codes, so if you've got a relatively wide open pipe you can transmit on without much packet loss, it can blast away and recover from occasional losses, but it doesn't do any actual congestion control.
      • They have a "TCP-friendliness" option that varies the transmission rate in a way that TCP windowing can probably cooperate with, so you can set the rate knobs to something less than full blast,
      • but nothing they've documented appears to address the problem of multiple users of this application trying to use a transmission path at the same time, and
      • they also don't document anything that does path rate discovery - so it may work fine if you've got a couple of small pipes feeding a fat network, but if you've got a fat pipe on the sending end and a skinny pipe on the receiving end, they don't document anything that figures out what rate is safe to transmit at.
      They also don't document when you would want to use this and when you would want to use TCP and when you would want to use this on top of TCP.
  • by drunkennewfiemidget ( 712572 ) on Thursday October 21, 2004 @11:42AM (#10587663) Homepage
    Because then you're going to have the suits trying to push it down, no matter how great/useful it is in an effort to kill the possibility of coming out with something that could make pirating any easier or more efficient. That's the only way they're going to see it.

    It's good to see innovation though, nonetheless.
  • by syates21 ( 78378 ) on Thursday October 21, 2004 @11:42AM (#10587664)
    This is hardly an innovative idea, and usually by the time you end up considering all the issues you wind up with something that looks a lot more like TCP than you had originally intended.

    Plus there are already protocol stacks that work around most of their gripes about TCP (slow performance over long pipes, etc).
    • by jrumney ( 197329 ) on Thursday October 21, 2004 @11:49AM (#10587794)
      It appears that they get better performance than TCP by considering (all - 1) the issues. Basically, their protocol works and performs better than TCP because the pipes have spare capacity. If the pipes were at capacity, their protocol would break down. TCP has been designed to be robust in all conditions. Protocols like this that rely on "in most cases we can get away with allowing more errors than TCP does" are not going to replace TCP.
  • by Andreas(R) ( 448328 ) on Thursday October 21, 2004 @11:42AM (#10587666) Homepage
    The website of these so-called "experts" is down; it's slashdotted! (Ironic?)

    Here is a summary of their technology copied from their website:

    Rateless Internet The Problem

    Rateless Internet is an Internet transport protocol implemented over UDP, meant as a better replacement for TCP. TCP's legacy design carries a number of inefficiencies, the most prominent of which is its inability to utilize most modern links' bandwidth. This problem stems from the fact that TCP calculates the congestion of the channel based on its round-trip time. The round-trip time, however, reflects not only the congestion level, but also the physical length of the connection. This is precisely why TCP is inherently unable to reach optimal speeds on long high-bandwidth connections.

    A secondary, but just as impairing, property of TCP is its inability to tolerate even small amounts (1% - 3%) of packet loss. This additionally forces TCP to work at safe and relatively low transmission speeds with under 1% loss rates. Nevertheless, our extended real-life measurements show that highest throughput is generally achieved at speeds with anywhere between 3% and 5% loss.

    The Solution

    By using our core coding technology we were able to design a reliable Internet transmission protocol which can circumvent both of the fore-mentioned deficiencies of TCP, while still remaining TCP-friendly. By using encoded, rather than plain, transmission we are able to send at speeds with any packet loss level. Rateless coding is used in conjunction with our Universal Congestion Control algorithm, which allows Rateless Internet to remain friendly to TCP and other congestion-aware protocols.

    Universal Congestion Control is an algorithm for transmission speed control. It is based on a simple and clean idea. Speed is varied in a wave-like fashion. The top of the wave achieves near-optimal throughput, while the bottom is low enough to let coexisting protocols like TCP smoothly receive a fair share of bandwidth. The time lengths of the peaks and troughs can be adjusted parametrically to achieve customized levels of fairness between Rateless Internet and TCP.

    The Rateless Internet transport is now available through our Rateless Socket product in the form of a C/C++ socket library. Rateless Internet is ideal for Internet-based applications, running on the network edges, that require high bandwidth in a heterogenous environment. It was specifically built with peer-to-peer and live multimedia content delivery applications in mind.

    • Also, from the site [] we know that

      Rateless Socket will be made available to select companies and institutions on a per-case basis. To find out more, please contact us.
    • What? (Score:5, Informative)

      by markov_chain ( 202465 ) on Thursday October 21, 2004 @12:51PM (#10588778) Homepage
      I don't know what their reasoning is, but both their claims about TCP seem incorrect.

      1. TCP does not use round trip time to calculate any "congestion levels." It increases the connection rate until packets get dropped, presumably because some router in the middle got overloaded.

      2. Packet loss is used as a signal to TCP to slow down because it tried to send too fast. The lost packets are subsequently retransmitted, so TCP can indeed not only tolerate but recover from packet loss. The only real case they have is packet loss due to reasons other than TCP's own aggressive sending rate, such as UDP traffic, wireless links, etc.

      Given these concerns, I can't help but think that they are inventing a protocol that works well only if used on a small scale. TCP is designed to back down if it thinks it's sending too fast, and is not really optimal. One can always hack a pair of TCP nodes to not play by the rules and get more than the fair share, but the problem is that that solution wouldn't work if it were adopted network-wide.

      • Re:What? (Score:5, Insightful)

        by panZ ( 67763 ) <> on Thursday October 21, 2004 @02:13PM (#10590116)
        You are absolutely correct. I was looking for someone to post this argument before doing it myself. Thanks! Mod this guy up!

        TCP doesn't use round trip time to calculate a link speed. In fact, it does just the opposite. It uses a sliding window method so it can send many packets before any one has been ACKed. This is done to soften the blow that round trip times would have on your send rates!

        TCP regulates its send rates by slowly sending faster and faster, then when a packet is dropped, it drops its rate fast. Slow increases and exponential backoffs make for VERY efficient link utilization on reliable networks with many active nodes whether it be a fast office LAN or world wide network.

        Their method appears to just spray data without paying attention to what other nodes are doing. It sounds like it is much better suited for point-to-point communications on unreliable networks, e.g. cellular data networks, where packets get dropped much more frequently because of interference rather than congestion. TCP might back off too quickly in this condition because it is optimized to deal with congestion. Their protocol might be great for that first/last hop between your cell phone and the cell tower, but otherwise it undermines the great balance that TCP achieves on our amazing little internet.

      • Re:What? (Score:3, Informative)

        TCP does not use round trip time to calculate any "congestion levels."

        That's true, but TCP does implicitly rely upon the RTT in its ordinary operation; it does not increase the size of its congestion window until it sees an ACK, which implies a delay equal to the RTT. So, TCP doesn't use RTT in an explicit calculation but RTT does affect how quickly you're able to ramp up your utilization of the link after a packet loss.

        TCP can indeed not only tolerate but recover from packet loss

        Yes, but the questio

  • Brilliant! (Score:4, Insightful)

    by morgdx ( 688154 ) on Thursday October 21, 2004 @11:42AM (#10587671) Homepage

    A slightly faster equivalent to TCP that I have to pay for and no-one else uses.

    Sign me up for that sucker right now.

  • Not F/OSS (Score:3, Interesting)

    by TwistedSquare ( 650445 ) on Thursday October 21, 2004 @11:42AM (#10587672) Homepage
    Considering this is slashdot and all, I was surprised that their implementation does not appear to be open source (or indeed, freely available at all), though presumably such an implementation will be possible following the RFCs. It seems to work nicely alongside TCP using UDP, quite a cool idea. The question is whether it can break TCP's de facto stranglehold on reliable Internet communication. I'd love to play with it if I could.
  • Whew! (Score:5, Funny)

    by PhotoGuy ( 189467 ) on Thursday October 21, 2004 @11:43AM (#10587688) Homepage
    "TCP, the transfer protocol that most of the Internet is using

    Often stories are posted that refer to products or code names, with no description, which is quite annoying.

    I'm glad to see this post doesn't run that risk.

    Thanks for clearing that up for me.


  • by bigberk ( 547360 ) <> on Thursday October 21, 2004 @11:44AM (#10587707)
    There's no doubt that an alternative to TCP might have technical merits. But as far as communication protocols go, TCP itself is pretty amazing. Modern TCP implementations have been tweaked over decades and have impressive performance and reliability. And modern TCP/IP stacks have rather unspoofable connection establishment, another excellent feature for security.

    If you want to replace TCP, you have to do more than just develop a new protocol that is faster. It would have to outperform TCP in speed, reliability, and substantially so in order to outweigh the costs of ditching a well-established and trusted protocol.
  • by IronChefMorimoto ( 691038 ) on Thursday October 21, 2004 @11:45AM (#10587727)
    While this sounds very interesting (have to re-take all those networking certification exams again, I guess), when I read this...

    The guy who started it, Petar Maymounkov, is of Kademlia fame." ...my eyes told my brain this...

    The guy who started it, Petar Maymounkov, is of Chlamydia fame."

    I was about to wonder what sort of "fame" you could get from that. Need coffee. Need sleep.

  • Respect! (Score:5, Funny)

    by WormholeFiend ( 674934 ) on Thursday October 21, 2004 @11:47AM (#10587759)
    Find out what it means to me
    Take care, TCP

    Oh socket to me, socket to me,
    socket to me, socket to me...
  • by D3 ( 31029 ) <> on Thursday October 21, 2004 @11:47AM (#10587760) Journal
    "By using encoded, rather than plain, transmission we are able to send at speeds with any packet loss level."
    I want to know how they get data transmission at 100% loss!
  • by RAMMS+EIN ( 578166 ) on Thursday October 21, 2004 @11:47AM (#10587762) Homepage Journal
    Can anybody explain to me how this technology works? It reads like marketing speak to me, despite the fact that it will be/is released as open source. How does the technology actually achieve reliability without retransmits? Does it actually achieve it?
  • by JohnGrahamCumming ( 684871 ) * <> on Thursday October 21, 2004 @11:47AM (#10587766) Homepage Journal
    1. This is coming from a company who are surely going to want to make money out of it somehow. Part of the reason TCP succeeded is that there was no one to pay.

    2. They don't seem to understand the GPL:

    "We are planning to release Rateless Codes for free, non-commercial use, in open source under the GNU Public License."

    The GPL doesn't restrict commercial use, and hence the only way that they can do this is either they try to add some conditions to the GPL, or they use another mechanism to restrict commercial use: e.g. patents.

    No matter how good this technology is, it's not going to get wide adoption as an alternative to TCP unless it's unencumbered.

  • by kakos ( 610660 ) on Thursday October 21, 2004 @11:49AM (#10587796)
    I did read their website and it looks like their revolutionary new replacement for TCP is UDP with their proprietary ECC built on top of it. However, there is a good reason why TCP never used ECC (ECCs did exist back then).

    1) The major problem a TCP packet will face is getting dropped. They mention this problem. They claim their encoding will solve this problem. It won't. No ECC algorithm will allow you to recover a dropped packet.

    2) Most packets that are corrupted are corrupted well beyond the repair of most ECCs.

    3) ECCs will cause packet size to increase. Not a huge problem, but why do it when ECCs don't help too much to begin with?
    • by Another MacHack ( 32639 ) on Thursday October 21, 2004 @11:59AM (#10587926)
      You'd be right if they were talking about an error correcting code designed to repair damage to a packet in transit.

      They're actually talking about erasure correction, where each symbol is a packet as a whole. In a very simple form, you send a packet sequence like this:

      A B (A^B) C D (C^D)

      So if you lose packet A, you reconstruct it from (B ^ (A^B)) = A. This simple scheme increases the number of packets sent by 50%, but allows you to tolerate a 33% loss, presuming you don't lose bursts of packets. There are more sophisticated schemes, of course, and there are various tradeoffs of overhead versus robustness.
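      In rough Python, that toy parity scheme looks like this (my own sketch for illustration only, not the actual Rateless codes, which are far more sophisticated):

      ```python
      def xor_bytes(x: bytes, y: bytes) -> bytes:
          return bytes(a ^ b for a, b in zip(x, y))

      def make_group(a: bytes, b: bytes) -> list[bytes]:
          # send A, B, and a parity packet A^B
          return [a, b, xor_bytes(a, b)]

      def recover(received: dict[int, bytes]) -> tuple[bytes, bytes]:
          # received maps slot (0=A, 1=B, 2=A^B) -> payload; any 2 of 3 suffice
          if 0 in received and 1 in received:
              return received[0], received[1]
          if 1 in received:                          # A was lost: A = B ^ (A^B)
              return xor_bytes(received[1], received[2]), received[1]
          return received[0], xor_bytes(received[0], received[2])  # B was lost

      a, b = b"packet-A", b"packet-B"
      grp = make_group(a, b)
      assert recover({1: grp[1], 2: grp[2]}) == (a, b)   # survives losing A
      ```

      Three packets on the wire for two packets of data (the 50% overhead mentioned above), and any single loss within the group is recoverable without a retransmit.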
      • You are correct: you can overcome a burst error of any length with a suitable ECC.

        However, what do you do when you encounter a burst error longer than your ECC design limit? A more complex code may not be the answer for reasons of computational complexity.

        The solution (employed with great success in CD media) is interleaving: instead of sending A1A2A3 B1B2B3 C1C2C3 you send A1B1C1 A2B2C2 A3B3C3. Let's assume the middle of that, A2B2C2, got lost, and you can only recover an error of one third that length. After de-interleaving, each codeword A1A2A3, B1B2B3, C1C2C3 is missing only one symbol, so each is back within the design limit of the code.
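        The interleaving trick is easy to see in code (a toy sketch; assume each codeword A1A2A3 can repair any one lost symbol):

        ```python
        def interleave(codewords):
            # A1A2A3, B1B2B3, C1C2C3  ->  A1B1C1, A2B2C2, A3B3C3
            return [list(col) for col in zip(*codewords)]

        def deinterleave(on_wire):
            return [list(row) for row in zip(*on_wire)]

        codewords = [["A1", "A2", "A3"], ["B1", "B2", "B3"], ["C1", "C2", "C3"]]
        on_wire = interleave(codewords)
        on_wire[1] = [None, None, None]      # a burst wipes out A2 B2 C2
        back = deinterleave(on_wire)
        # the burst is now spread out: each codeword lost exactly ONE symbol,
        # which is within the one-erasure design limit we assumed
        assert all(cw.count(None) == 1 for cw in back)
        ```

        A burst three symbols long becomes three single-symbol erasures, one per codeword, each individually repairable.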

    • by netwiz ( 33291 ) on Thursday October 21, 2004 @12:02PM (#10587954) Homepage
      well, assuming that the dropped frames aren't sequential in large number, some kind of ECC (think RAID5 for IP) could alleviate this issue. Granted, you'd be sending three packets for every two packets worth of data, but you could lose any one of them and still be okay.

      However, I don't think most people would necessarily enjoy 50% larger payloads required to make this work. It could be tuned back, but for every decrease in overhead, the effect of losing a frame gets worse. In the end (and this is purely speculative, as I've no real data or math to back this up) it may be that TCP remains more effective with better throughput.

      I'll be honest, I don't see/experience the kinds of lag and retransmission problems that are described in the article, and any large streaming transfers to my home or desk regularly consume 100% of my available bandwidth. So for me, TCP works just fine.
    • Sorry, this is not correct. Rateless erasure codes are ECC applied per complete file, not per packet. Please read the paper [].
    • by Raphael ( 18701 ) on Thursday October 21, 2004 @12:17PM (#10588134) Homepage Journal

      You are almost correct, except that doing the error correction on the whole file instead of a single (or a couple of) packets allows the file to be transmitted even if one or several packets are dropped.

      However, this kind of error correction is only good if you are exchanging rather large files. They cannot claim to replace TCP if their protocol cannot be used for any kind of interactive work. Take for example SSH or HTTP (the protocol that we are using for reading Slashdot): they are both implemented on top of TCP and they require TCP to work equally well for exchanging small chunks of data (keypress in an SSH session, or HTTP GET request) and exchanging larger chunks of data (HTTP response).

      While their new protocol would probably work well for the larger chunks of data, it is very likely to degrade the link performance for the smaller bits exchanged during an interactive session. So they are probably good for file sharing. But TCP has many other uses...

      Also, they mention that they can be TCP-friendly by transmitting in "waves". I doubt that these "waves" can be very TCP-friendly if their time between peaks is too short. One thing that TCP does not do well is to adapt its transmission rate to fast-changing network conditions. So if they allow the user or the application designer to set the frequency of these waves, they could end up with a very TCP-unfriendly protocol.

  • sounds good (Score:3, Interesting)

    by spacerodent ( 790183 ) on Thursday October 21, 2004 @11:53AM (#10587842)
    the real issue with any new technology like this is whether it will be accepted as a standard. For that to happen it needs to be reliable, easy to get (free), and hard to exploit. This looks to have all of the above, so in 5 to 10 years maybe we'll see this implemented everywhere.
  • by l3v1 ( 787564 ) on Thursday October 21, 2004 @11:54AM (#10587856)
    It doesn't seem reason enough to build a new protocol to replace TCP just because it is "old". Protocols are not living beings, you know; aging doesn't show on their cells. Of course troubles have always been popping up over the years, but nothing unsolvable (and nothing on the scale of IP's problems).

    All in all, it's good to have new alternative solutions and new technologies at hand, but to state such things as replacing TCP because of its age (maturity, that is :D ) is a bit far-fetched to say the least.

  • They say that this new protocol would tolerate higher rates of packet loss. What happens to TCP connections when a lot of connections use this new protocol? Would it create worse congestion on networks, which this protocol can tolerate, but which would greatly slow down TCP connections?
  • by Ars-Fartsica ( 166957 ) on Thursday October 21, 2004 @11:55AM (#10587859)
    Just look at the adoption rates on IPv6. No one is going to touch a new protocol at this stage. It's not even clear that this is needed. Point me at a specific TCP pain point that is specifically and obviously reducing internet adoption... any takers?
  • Yawn (Score:5, Insightful)

    by Rosco P. Coltrane ( 209368 ) on Thursday October 21, 2004 @11:55AM (#10587864)
    YAWN Protocol --> Yet Another Wonderful New Protocol

    ASCII is still around, despite its numerous shortcomings. There's this small thing called "backward compatibility" that people/consumers seem to love, for some reason. Well, same thing for TCP/IP. Even IPv6 has trouble taking off in the general public, despite being essentially just a small change in the format, so never mind the YAWN Protocol this article is about...
  • by Anonymous Coward on Thursday October 21, 2004 @11:58AM (#10587911)
    Tried their "Rateless Copy" utility, transferring a 5.8 MB binary file from my web server in Texas to my local connection in Toronto.

    With Rateless Copy: time between 31-41 seconds, average of 200k/s, and the resulting file is corrupted. Tried it again to be sure; same result.

    Without Rateless Copy (plain HTTP file download): 8 seconds, average of 490k/s, and the resulting file works fine as expected.

    Sorry, but I don't think it's all that great.
  • SCTP (Score:5, Interesting)

    by noselasd ( 594905 ) on Thursday October 21, 2004 @12:01PM (#10587939)
    Why not SCTP ? See RFC 2960. Already in the Linux kernel, Kame, (solaris ?) and probably others.
    Intro here []

    - SCTP can be used in many "modes"
    * Provides reliable messaging (like UDP, but reliable)
    * Can be used as a stream protocol (like TCP).
    * One connection/association can hold multiple streams.
    * One-to-many relation for messaging.
    * Better at dealing with syn flooding than TCP.

    Then again, I guess reinventing the wheel is more "fun" :-/
    • SCTP is indeed interesting. I've tangled with it when playing with SIGTRAN, i.e. SS7 over IP. The nightmares ceased a while ago. :-)

      One of the more interesting special-purpose protocols I've ever messed with was the PACSAT Broadcast protocol [], used for downloading files from satellites.

      It makes the assumption that downloaded files are of general interest, so it broadcasts them. Which files it broadcasts is in response to requests from ground stations. Ground stations can also request fills, pieces of fi

  • by 89cents ( 589228 ) on Thursday October 21, 2004 @12:03PM (#10587962)
    The air that I breathe, the water that I drink, and the land that I walk upon is getting really old. Someone needs to replace it all.

    Really, is TCP flawed?

  • by shic ( 309152 ) on Thursday October 21, 2004 @12:08PM (#10588015)
    When considering protocols for information transport, it is very important to be absolutely sure what assumptions you are making. There are a number of non-independent factors which influence the suitability (and hence efficiency) of network protocols to application demands. Bandwidth, for example, is related to but doesn't define the statistical distribution of latencies; maximum packet rate and their relationship to packet size. The channel error rate (and statistical distributions of packet failures) are again linked to fragmentation and concatenation of transmitted datagrams - and this in turn affects latencies when considering "reliable" transport protocols. Routing policy and symmetry of physical links introduces yet more tradeoffs which need to be considered - not to mention the potential problems evaluating if the burden of protocol computations outweighs the advantage of an improved strategy for a given physical link. (And I'm not even going to mention security!)

    When considering protocols the most important thing to consider is the model they assume of the communications infrastructure on which they are to be deployed. TCP is likely the optimal solution given the assumptions TCP makes... if you change those assumptions to more closely fit a particular network infrastructure you will likely get better performance only on that infrastructure, but far worse performance where your new assumptions do not hold.

    I used to be interested in the idea of dynamically synthesizing protocols to best suit the actual physical links in a heterogeneous network... however my ideas were met with extreme disinterest; I felt my critics demanded I present a protocol which beats TCP under TCP's assumptions - and no amount of hand-waving and explanation would convince them this was a silly request. I still think the idea has merit - but having wasted 3 years of my life trying to push it uphill, I've found other interesting (and far more productive) avenues to pursue.
  • by AaronW ( 33736 ) on Thursday October 21, 2004 @12:13PM (#10588072) Homepage
    While there are a number of issues with TCP, I think it would be much better in the long run to work on fixing TCP rather than replace it. That way all the existing apps can take advantage of the fixes.

    One thing that bothers me is I see ISPs applying policing to their subscriber's bandwidth. Policing is quite unfriendly to TCP, unlike, say, shaping. With policing, a router decides either to pass, drop, or mark a packet based on if it exceeds certain bandwidth constraints. Shaping, on the other hand, will buffer packets and introduce additional latency, thus helping TCP find the sweet spot. Of course shaping will also drop, since nobody provides infinite buffer space.
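    The difference is easy to sketch with a toy token bucket (hypothetical parameters, my own illustration; real routers are far more elaborate):

    ```python
    import collections

    class Policer:
        """Drops anything over the configured rate: TCP sees sudden loss."""
        def __init__(self, tokens):
            self.tokens = tokens
        def offer(self, pkt):
            if self.tokens > 0:
                self.tokens -= 1
                return "pass"
            return "drop"                 # TCP interprets this as congestion

    class Shaper:
        """Queues the excess instead: TCP just sees rising latency."""
        def __init__(self, tokens):
            self.tokens, self.queue = tokens, collections.deque()
        def offer(self, pkt):
            if self.tokens > 0:
                self.tokens -= 1
                return "pass"
            self.queue.append(pkt)        # buffered, sent when tokens refill
            return "queued"

    pol, shp = Policer(2), Shaper(2)
    assert [pol.offer(i) for i in range(4)] == ["pass", "pass", "drop", "drop"]
    assert [shp.offer(i) for i in range(4)] == ["pass", "pass", "queued", "queued"]
    ```

    Same bandwidth cap either way, but the policer hands TCP a burst of losses while the shaper gives it the gradual latency signal it can actually adapt to.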

    TCP is relatively easy to extend. There are still some free flag bits and additional fields can be added to the TCP header if needed.

  • by Bluefirebird ( 649667 ) on Thursday October 21, 2004 @12:18PM (#10588154)
    (NOT) Everyone knows that TCP has problems and for many years people have been developing transport protocols that enhance or replace TCP.
    These guys haven't invented anything new. There are many flavours of TCP with different congestion mechanisms and there is a special kind of transport protocol that solves most problems...
    I'm talking about SCPS-TP, supported by NASA; it performs very well over high bit-error links (like satellites) and it also copes with high delay. The good thing about SCPS-TP is that it's compatible with TCP, because it is basically an extension of TCP.
    There is another problem with using UDP based transport protocols... they usually have low priority in routers (probably because you can use UDP for VoIP...)
  • by buck68 ( 40037 ) on Thursday October 21, 2004 @12:24PM (#10588253) Homepage

    This work seems to be about two things (which I am not sure I see a strong connection between): lowering transport latency, and using available bandwidth better. The latter has been the subject of many papers in the last few years. There are now several serious proposals of how to fix TCP with respect to long fat pipes. They don't seem to support the idea that retransmissions are harmful. So I'm going to talk about the first issue, transport latency.

    The idea of using error-correcting codes (ECC) to eliminate the need for retransmissions is an interesting one. The main benefit is to reduce transport latency (the total time it takes to send data from application A to B). Here is another paper [] that proposes a similar idea, applied at a different level of the network architecture.

    The root problem here is that network loss leads to increases in the transport latency experienced by applications. In TCP, the latency increases because TCP will resend data that is lost. That means at least one extra round-trip-time per retransmission. This "Rateless TCP" approach uses ECC so that the lost data can be recovered from other packets that were not dropped. In this way, the time to retransmit packets may not be needed. I say may, because there will be a loss rate threshold which will exceed the capability of the ECC, and retransmission will become necessary to ensure reliability. But, as long as the loss rate is below the threshold, then retransmissions will not be necessary. Note that the more "resilient" you make the ECC (meaning supporting a higher loss threshold), the more work will be needed at the ends. So you are not eliminating latency due to packet loss, you are simply moving it away from packet retransmission into the process of ECC. However, if you've got good ECC, the total latency will go down.

    The ECC approach may be a nice middle ground. But the ultimate solution for minimizing latency is probably a combination of active queue management (AQM) and explicit congestion notification (ECN). Unlike ECC, this approach really would aim to eliminate packet loss in the network due to congestion, and therefore completely eliminate the associated latency. Either ECC or regular TCP would benefit. In a controlled testbed using AQM and ECN, I've completely saturated a network with gigabits of traffic, consisting of thousands of flows, and had virtually no packet loss.

    It should also be noted that retransmission is NOT the dominant source of transport latency in TCP. I am a co-author on a paper [] that shows another way (other than eliminating retransmission) to greatly reduce the transport latency of TCP. The basic idea is that the send-side socket buffer turns out to be the dominant source of latency (data sits in the kernel socket buffer waiting for transmission). In the above paper, we show how a dynamic socket buffer (one that tracks the congestion window) can dramatically reduce the transport latency of TCP. We allow applications to select this behaviour through a TCP_MINBUF socket option.

    -- Buck

  • by jsebrech ( 525647 ) on Thursday October 21, 2004 @12:24PM (#10588270)
    This doesn't provide anything like what TCP provides, namely a connection between two network nodes that allows transfer of arbitrary data with guaranteed reliability, with automated congestion control for optimized use of available network resources.

    As far as I can tell (their website could use some more straightforward actual content), this is more like bittorrent, where a file is cut up into blocks, the blocks get distributed across the network, and anyone interested in the file then reconstructs it from available data from all sources, not necessarily having to get the entire file correctly from a single source. Only it does it more efficiently than bittorrent.

    The two protocols target very different uses. TCP excels in interactive use, where the data is sent as it is generated, and no error is tolerable in the single sender-to-receiver link. Bittorrent (and other distributed network protocols) target batch jobs, where throughput is more important than reliability (because reliability can be reconstructed on the client through clever hashing schemes), and where responsiveness is entirely irrelevant.

    So, this could not possibly replace TCP, since it does not do what TCP is most useful for. At the same time, the criticisms aimed at TCP by the rateless designers are valid, but well known, since TCP is indeed poorly suited for high-volume high-throughput high-delay transmissions of prepackaged data.

    Still, good job to them for trying to come up with better protocols for niche or not-so-niche markets. I wish them all the best.
  • by Anonymous Coward on Thursday October 21, 2004 @12:25PM (#10588287)
    I've designed a new protocol that can send data from A to B losslessly at infinite speed without an actual connection between A and B.

    How does it work? Well, it's layered over Rateless Internet, in which (as we all know) packets do not have to be resent. So it carefully loses all packets and relies on Rateless Internet to make sure they arrive safely at the other side and do not have to be resent. Because no packets need to make it from A to B, you don't need any network hardware, and data can be sent just as fast as your machine can drop packets.

    Guess I'd better apply for a patent...

  • Their key error (Score:5, Insightful)

    by RealProgrammer ( 723725 ) on Thursday October 21, 2004 @12:27PM (#10588332) Homepage Journal
    Nevertheless, our extended real-life measurements show that highest throughput is generally achieved at speeds with anywhere between 3% and 5% loss.

    That's just for them. What if all hosts on the entire Internet were, by design, stuffing packets at a 3-5% error rate? Meltdown, that's what. Their "real-life" measurements do not scale, suffering from the usual assumed linearity of new designs for complex systems.

    Sometimes people fall in love with their new ideas, thinking that the rest of the world missed something obvious.
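    For a rough sense of why deliberately running at 3-5% loss is a meltdown recipe for everyone else sharing the pipe, there's the standard back-of-the-envelope TCP throughput model (the Mathis et al. approximation, BW ≈ (MSS/RTT)·(C/√p); the numbers below are illustrative, not from the article):

    ```python
    import math

    def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
        # Mathis et al. steady-state approximation for a single TCP flow
        return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate))

    # 1460-byte segments over a 70 ms RTT path
    for p in (0.001, 0.03, 0.05):
        bw = tcp_throughput_bps(1460, 0.07, p) / 1e6
        print(f"loss {p:.1%}: ~{bw:.2f} Mbit/s")
    ```

    Competing TCP flows see their throughput collapse roughly with 1/√p, so a protocol that happily drives loss to 5% starves them while barely noticing itself.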

    • Re:Their key error (Score:4, Interesting)

      by hburch ( 98908 ) on Thursday October 21, 2004 @01:18PM (#10589215)
      TCP assumes all packet losses are due to congestion. This is not always true. For example, a wireless connection can have loss due to interference instead of congestion. Although I have not looked at this in quite a while, it was considered a known issue that decreased TCP performance over wireless. This is exactly the sort of setting where an ECC would be able to greatly increase local performance without adversely affecting global performance.

      Avoiding congestion while maintaining performance is a hard problem. Fortunately, you degrade your own performance if you create congestion and congestion often occurs on the edge. We would really like to avoid the tragedy of the commons with congestion and the Internet. If we cannot, the Internet may truly collapse by 2006.

      • Re:Their key error (Score:4, Interesting)

        by RealProgrammer ( 723725 ) on Thursday October 21, 2004 @01:54PM (#10589816) Homepage Journal
        "TCP assumes all packet losses are due to congestion."

        It may be a quibble, but isn't it more accurate to say that TCP reacts to packet losses the same no matter what the cause? The packet, or group of packets, is just lost. I haven't looked into the status of RFC 3168 (ECN, router congestion flag) lately, so maybe I'm wrong.

        Since you mention it, the tragedy of the commons would be accentuated by pushing the network past saturation by design. By grabbing bandwidth, the 'haves' can effectively lock out the 'have nots' and their slower hardware, which would eventually result in no one having a usable network.
  • Well... (Score:5, Informative)

    by jd ( 1658 ) <{moc.oohay} {ta} {kapimi}> on Thursday October 21, 2004 @12:33PM (#10588458) Homepage Journal
    You could always use the existing "Reliable Multicast" protocols out there. Not only do those work over UDP, but you can target packets to multiple machines. IBM, Lucent, Sun, the US Navy and (yeek!) even Microsoft have support for Reliable Multicast, so it's already got much better brand-name support than this other TCP alternative.

    So others can have fun slashdotting other technologies, here are some websites. There are probably others, but this should keep those who do really want to move away from TCP happy.

  • by John Sokol ( 109591 ) on Thursday October 21, 2004 @12:40PM (#10588597) Homepage Journal [] I call it Error Correcting IP, and used it to stream live video from Sri Lanka in 1997 with Arthur C. Clarke Hal's Birthday []
    it was a 64K shared line with 90% packet loss, I received 60Kbps for the video stream. ( I have the video to prove it )

    We even filed preliminary patents on this back in 1996, but they were never followed through with.

    Luigi Rizzo (now head of the FreeBSD project) also did some excellent work on this. []
    He calls them erasure codes.

    Which is more accurate, since UDP doesn't have errors: a packet either comes across 99.999% perfect or not at all. So there is more information than in an error situation, where every bit is questionable.

    What this means is that only about half the Hamming distance is needed in the code to correct an erasure versus an error.
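    A 3x repetition code makes that gap concrete (a toy example of my own; in general a code with minimum distance d corrects floor((d-1)/2) unknown errors but d-1 known erasures):

    ```python
    def decode_errors(symbols):
        # unknown error positions: majority vote, survives only 1 bad copy of 3
        return max(set(symbols), key=symbols.count)

    def decode_erasures(symbols):
        # known loss positions (None): any single surviving copy is enough
        survivors = [s for s in symbols if s is not None]
        return survivors[0] if survivors else None

    assert decode_errors([7, 7, 9]) == 7           # 1 error: recoverable
    assert decode_erasures([None, None, 7]) == 7   # 2 erasures: still recoverable
    ```

    Knowing *where* the damage is (a missing UDP packet) is worth roughly twice the correcting power of the same code fighting unknown bit errors.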

    Turns out the error/erasure correcting scheme is critical and not obvious. I spent almost 5 years working on this part time before making some real breakthroughs.

    My original system was designed for 25% packet loss (not uncommon in 1996).
    In the initial idea we added 1 XORed packet for every three data packets, but at 25% packet loss, it turns out that it didn't increase reliability at all! Working this out with probabilities was a major eye opener!
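    For a sense of the arithmetic (my own sketch, assuming independent losses at p = 0.25; on paper the parity packet helps, so the fact that it didn't in practice presumably reflects the bursty, correlated losses of real links):

    ```python
    p, q = 0.25, 0.75

    # chance a 3-packet group arrives intact with no coding at all
    plain = q ** 3                       # ~0.42

    # with one XOR parity packet: group decodes if at most 1 of 4 is lost
    coded = q ** 4 + 4 * p * q ** 3      # ~0.74

    print(f"no parity : {plain:.3f}")
    print(f"XOR parity: {coded:.3f}")
    ```

    Once losses come in bursts that take out two or more packets of the same group, the single parity packet buys nothing, and you're back to retransmissions.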

    Even when you work the problem out, you realize you will still need some retransmissions to make up for lost packets; there is no possible solution without them.

    I have been trying to find people to help open-source this, since I have been working far too hard just to survive since 2000 to even consider taking on another task.

    Anyone interested in my research and carrying it forward, please see my site and contact me.

    John L. Sokol
  • by mbone ( 558574 ) on Thursday October 21, 2004 @12:51PM (#10588775)
    TCP uses packet loss (NOT round trip times - where did that come from?) to signal congestion, and thus to implement congestion control. This does not work well in a typical wireless (802.11X, 802.16, etc) environment, where packet losses are to be expected. TCP also does not co-exist well with lots of UDP streaming.

    So, there IS a need for a TCP replacement. However, the one being developed in the IETF [] is DCCP []. Basically, the idea is separating congestion control from loss recovery.

    Maymounkov and company say that they are preparing RFCs (which implies that they intend to submit this to the IETF), but have not yet done so. So, maybe if they do, and if they offer an unencumbered license, their technology could be used, but it is way too soon to tell.
  • HSLink! (Score:4, Interesting)

    by CODiNE ( 27417 ) on Thursday October 21, 2004 @01:16PM (#10589184) Homepage
    This may be just a wee bit offtopic, but it may be my only chance to ask...

    Who remembers HSLink?? If I recall correctly it was an add-on transfer method for Procomm Plus. It allowed two people connected over a modem to simultaneously send files to each other at basically double the normal speed. I remember thinking it had to be a scam, but me and my friends tested it and were able to send double the usual info in whatever time we were used to. (I forget, 10 minutes a meg I think)

    How did this work? Were we fooled or was it for reals? Could something like that be applied to dial-up internet connections?

  • It's hooey... (Score:5, Insightful)

    by Frennzy ( 730093 ) on Thursday October 21, 2004 @01:16PM (#10589190) Homepage
    TCP doesn't use RTT to 'calculate congestion'.

    This is a load of fluff, trying to capitalize on the 'p2p craze'. There are plenty of TCP replacements out there that actually make sense. As far as TCP not being able to utilize 'today's bandwidth', again... hooey. Gigabit ethernet (when backed by adequate hardware, and taking advantage of jumbo frames) moves a HELL of a lot more data (two orders of magnitude) than your typical home broadband connection... using TCP.

I've finally learned what "upward compatible" means. It means we get to keep all our old mistakes. -- Dennie van Tassel