Replacing TCP?

olau writes "TCP, the transport protocol that most of the Internet uses, is getting old. These guys have invented an alternative that combines UDP with rateless erasure codes, which means that lost packets do not have to be resent. Cool stuff! It also has applications for peer-to-peer networks (e.g. something like BitTorrent). They are even preparing RFCs! The guy who started it, Petar Maymounkov, is of Kademlia fame."
  • by Power Everywhere ( 778645 ) on Thursday October 21, 2004 @11:38AM (#10587580) Homepage
    Now that Digital is little more than IP spread across a few different companies, maybe the holder of DECnet's patents could release the protocol under an Open Source license. If I recall correctly it was quite the networking layer.
  • nonsense (Score:4, Interesting)

    by N3wsByt3 ( 758224 ) on Thursday October 21, 2004 @11:38AM (#10587581) Journal
    TCP may be old, but it can go on for another 50 years without any problem.

  • Not F/OSS (Score:3, Interesting)

    by TwistedSquare ( 650445 ) on Thursday October 21, 2004 @11:42AM (#10587672) Homepage
    Considering this is Slashdot and all, I was surprised that their implementation does not appear to be open source (or indeed, freely available at all), though presumably such an implementation will be possible once the RFCs are out. It seems to work nicely alongside TCP by running over UDP, which is quite a cool idea. The question is whether it can break TCP's de facto stranglehold on reliable Internet communication. I'd love to play with it if I could.
  • Re:nonsense (Score:4, Interesting)

    by savagedome ( 742194 ) on Thursday October 21, 2004 @11:43AM (#10587683)
    TCP may be old, but it can go on for another 50 years without any problem.

    Somehow, the title of your post is appropriate for the subject of your message.
    Did you pull that 50 years out of your..... you know.

    Change is not going to happen overnight. They are suggesting a better protocol which is still in the labs for the most part.

    Remember how long IPv6 has been out? Now look at what we're still using. It's all about changing things gradually.
  • Re:Old!=bad (Score:5, Interesting)

    by jfengel ( 409917 ) on Thursday October 21, 2004 @11:47AM (#10587769) Homepage Journal
    From the first paragraph of TFA:

    This problem stems from the fact that TCP calculates the congestion of the channel based on its round-trip time. The round-trip time, however, reflects not only the congestion level, but also the physical length of the connection. This is precisely why TCP is inherently unable to reach optimal speeds on long high-bandwidth connections.

    A secondary, but just as impairing, property of TCP is its inability to tolerate even small amounts (1% - 3%) of packet loss.

    I find it kind of interesting that these are two competing problems: one having to do with high bandwidth (and presumably high-reliability) connections, the other with low-reliability connections. My home DSL, however, often fits into the latter category: 3% packet loss is not uncommon on bad days. So maybe the two aren't so incompatible after all.
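
    A back-of-the-envelope way to see both complaints at once is the well-known Mathis et al. approximation, throughput ~ MSS / (RTT * sqrt(loss rate)). The sketch below just plugs in illustrative numbers; they are assumptions, not figures from the article.

        # Approximate steady-state TCP throughput: MSS / (RTT * sqrt(loss rate)).
        from math import sqrt

        def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
            return (mss_bytes * 8) / (rtt_s * sqrt(loss_rate))

        # Long high-bandwidth path: 1460-byte segments, 200 ms RTT, 1% loss.
        print(tcp_throughput_bps(1460, 0.200, 0.01) / 1e6)   # ~0.58 Mbit/s
        # Same loss rate on a short path: 20 ms RTT.
        print(tcp_throughput_bps(1460, 0.020, 0.01) / 1e6)   # ~5.8 Mbit/s
        # A bad DSL day: 3% loss, 50 ms RTT.
        print(tcp_throughput_bps(1460, 0.050, 0.03) / 1e6)   # ~1.3 Mbit/s

    Either a long round trip or a few percent of loss is enough to cap throughput far below the link rate, which is why the two problems end up in the same article.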
  • sounds good (Score:3, Interesting)

    by spacerodent ( 790183 ) on Thursday October 21, 2004 @11:53AM (#10587842)
    The real issue with any new technology like this is whether it will be accepted as a standard. For that to happen it needs to be reliable, easy to get (free), and hard to exploit. This looks to have all of the above, so in 5 to 10 years maybe we'll see it implemented everywhere.
  • SCTP (Score:5, Interesting)

    by noselasd ( 594905 ) on Thursday October 21, 2004 @12:01PM (#10587939)
    Why not SCTP? See RFC 2960. It's already in the Linux kernel, KAME (Solaris?), and probably others.
    Intro here [uni-essen.de]

    SCTP can be used in many "modes":
    * Provides reliable messaging (like UDP, but reliable)
    * Can be used as a stream protocol (like TCP)
    * One connection/association can hold multiple streams
    * One-to-many relation for messaging
    * Better at dealing with SYN flooding than TCP

    Then again, I guess reinventing the wheel is more "fun" :-/
  • by daserver ( 524964 ) on Thursday October 21, 2004 @12:03PM (#10587957) Homepage
    Sorry, this is not correct. Rateless erasure codes are ECC over the complete file, not per packet. Please read the paper [nyu.edu].
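
    A toy illustration of what coding over the whole file means: a random linear fountain over GF(2), where every transmitted packet is the XOR of a random subset of the file's blocks and the receiver rebuilds the file from any sufficiently large set of packets, in any order. This is a sketch only; the actual codes in the paper are far more efficient than Gaussian elimination.

        import random

        def encode(blocks):
            """One encoded packet: (random GF(2) coefficient vector, XOR of those blocks)."""
            k = len(blocks)
            coeffs = [random.randint(0, 1) for _ in range(k)]
            if not any(coeffs):
                coeffs[random.randrange(k)] = 1          # avoid the useless all-zero packet
            payload = bytes(len(blocks[0]))
            for c, b in zip(coeffs, blocks):
                if c:
                    payload = bytes(x ^ y for x, y in zip(payload, b))
            return coeffs, payload

        def decode(packets, k):
            """Gauss-Jordan elimination over GF(2); returns the k source blocks, or None."""
            coeffs = [list(c) for c, _ in packets]
            data = [bytearray(p) for _, p in packets]
            pivot_of = {}                                 # column -> index of the row that solves it
            for col in range(k):
                piv = next((i for i in range(len(coeffs))
                            if coeffs[i][col] and i not in pivot_of.values()), None)
                if piv is None:
                    return None                           # not enough independent packets yet
                pivot_of[col] = piv
                for i in range(len(coeffs)):
                    if i != piv and coeffs[i][col]:
                        coeffs[i] = [a ^ b for a, b in zip(coeffs[i], coeffs[piv])]
                        data[i] = bytearray(a ^ b for a, b in zip(data[i], data[piv]))
            return [bytes(data[pivot_of[col]]) for col in range(k)]

        blocks = [bytes([i]) * 32 for i in range(8)]                  # a tiny 8-block "file"
        sent = [encode(blocks) for _ in range(40)]                    # sender can keep generating more
        received = [pkt for pkt in sent if random.random() > 0.25]    # simulate 25% packet loss
        print(decode(received, k=8) == blocks)                        # True with overwhelming probability

    Because any encoded packet is as good as any other, nothing has to be retransmitted by name; the sender just keeps emitting packets until the receiver has enough, which is also what makes the idea attractive for BitTorrent-style swarms.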
  • Re:Old!=bad (Score:2, Interesting)

    by labradort ( 220776 ) on Thursday October 21, 2004 @12:12PM (#10588065)
    Now I understand why my satellite-based ISP from years ago would start transfers off at a trickle and slowly speed up. It interpreted the 700ms ping into space and back as congestion.

    This could have a benefit for parts of the world where the Internet is available only by satellite.
    People typically think of Antarctica and mountain regions when I say that, but really, any place without phone or cable, or where you need something better than text messaging over a BlackBerry, fits the bill for a satellite ISP.

    I only need to travel a mile from my university, with its terrific bandwidth, to be in a place where there are no phone lines or cable available.
  • by spaceyhackerlady ( 462530 ) on Thursday October 21, 2004 @12:18PM (#10588153)

    SCTP is indeed interesting. I've tangled with it when playing with SIGTRAN, i.e. SS7 over IP. The nightmares ceased a while ago. :-)

    One of the more interesting special-purpose protocols I've ever messed with was the PACSAT Broadcast protocol [amsat.org], used for downloading files from satellites.

    It makes the assumption that downloaded files are of general interest, so it broadcasts them. Which files it broadcasts is in response to requests from ground stations. Ground stations can also request fills, pieces of files they haven't received yet. The protocols have hooks to allow files to be downloaded in pieces, over several orbits, since individual passes rarely exceed 20 minutes, and the downlink bit rate isn't all that high. This all runs over a protocol analogous to UDP.

    You can get files by just listening, if you want to.

    ...laura

  • by John Sokol ( 109591 ) on Thursday October 21, 2004 @12:40PM (#10588597) Homepage Journal
    ecip.com [ecip.com]: I call it Error Correcting IP, and used it to stream live video from Sri Lanka in 1997 with Arthur C. Clarke for Hal's Birthday [halbday.com].
    It was a 64K shared line with 90% packet loss, and I received 60Kbps for the video stream. (I have the video to prove it.)

    We even filed preliminary patents on this back in 1996, but they were never followed through on.

    Luigi Rizzo (now head of the FreeBSD project) also did some excellent work on this. http://info.iet.unipi.it/~luigi/fec.html [unipi.it]
    He calls them erasure codes.

    Which is more accurate, since UDP doesn't have errors: a packet either comes across 99.999% perfect or not at all.
    So there is more information than in an error situation, where every bit is questionable.

    What this means is that only about half the Hamming distance is needed in the code to correct an erasure versus an error.

    Turns out the error/erasure-correcting scheme is critical and not obvious. I spent almost 5 years working on this part time before making some real breakthroughs.

    My original system was designed for 25% packet loss (not uncommon in 1996).
    In the initial idea we added one XORed parity packet for every three data packets, but at 25% packet loss, it turns out that it didn't increase reliability at all! Working this out with probabilities was a major eye opener! (Rough numbers are sketched below.)

    Even when you work the problem out, you realize you will still need some retransmissions to make up for lost packets; there is no possible solution without this.

    I have been trying to find people to help open-source this, since I have been working far too hard just to survive since 2000 to even consider taking on another task.

    Anyone interested in my research and in carrying this forward, please see my site and contact me.

    John L. Sokol
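
    One way to run those 3-data-plus-1-parity numbers, assuming independent packet losses (real loss is often bursty, which makes the parity help even less); the figures below are illustrative and not from the comment or the article.

        from math import comb

        p_loss = 0.25

        # No parity: a group of 3 data packets gets through only if all 3 arrive.
        p_plain = (1 - p_loss) ** 3

        # One XOR parity per 3 data packets: the group of 4 is recoverable
        # only if at most one of the 4 packets is lost.
        p_fec = sum(comb(4, k) * p_loss**k * (1 - p_loss)**(4 - k) for k in range(2))

        print(f"group OK, no parity:  {p_plain:.3f}")                  # ~0.422
        print(f"group OK, 3+1 parity: {p_fec:.3f}")                    # ~0.738
        print(f"groups still needing retransmission: {1 - p_fec:.3f}")  # ~0.262

    At 25% loss the expected number of drops per 4-packet group is exactly one, so roughly a quarter of groups still lose two or more packets and cannot be repaired, which is why retransmissions remain unavoidable, as the comment says.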
  • They are confused (Score:4, Interesting)

    by ebuck ( 585470 ) on Thursday October 21, 2004 @01:04PM (#10588973)
    They tell you how they solved the problem, but they fail to tell you how much impact this solution has overall.

    They're basically stating that now they can flood the connection with packets.

    But they've also told you that the packets contain your data in an error correcting encoding. What they don't mention is this:

    How much overhead is required by the error correcting encoding?

    How many errors can the error correcting encoding handle? (drops 1 packet = ok, drops 400 packets = bad)

    How much cpu computation is required to encode and decode the payload?

    How is the cpu overhead managed? (how much performance will be lost by context switching, etc.)

    So they're just playing the game of distracting people with the best part of their performance measurement, without bothering to mention the performance impact of all the other trade-offs they admitted to making.

  • HSLink! (Score:4, Interesting)

    by CODiNE ( 27417 ) on Thursday October 21, 2004 @01:16PM (#10589184) Homepage
    This may be just a wee bit offtopic, but it may be my only chance to ask...

    Who remembers HSLink?? If I recall correctly it was an add-on transfer method for Procomm Plus. It allowed two people connected over a modem to simultaneously send files to each other at basically double the normal speed. I remember thinking it had to be a scam, but me and my friends tested it and were able to send double the usual info in whatever time we were used to. (I forget, 10 minutes a meg I think)

    How did this work? Were we fooled or was it for reals? Could something like that be applied to dial-up internet connections?

    -Don.
  • Re:Their key error (Score:4, Interesting)

    by hburch ( 98908 ) on Thursday October 21, 2004 @01:18PM (#10589215)
    TCP assumes all packet losses are due to congestion. This is not always true. For example, a wireless connection can have loss due to interference instead of congestion. Although I have not looked at this in quite a while, it was considered a known issue that decreased TCP performance over wireless. This is exactly the sort of setting where ECC would be able to greatly increase local performance without adversely affecting global performance.

    Avoiding congestion while maintaining performance is a hard problem. Fortunately, you degrade your own performance if you create congestion and congestion often occurs on the edge. We would really like to avoid the tragedy of the commons with congestion and the Internet. If we cannot, the Internet may truly collapse by 2006.

  • by TTK Ciar ( 698795 ) on Thursday October 21, 2004 @01:31PM (#10589439) Homepage Journal

    I'd still like to see a good protocol that doesn't require in-order packet receipt in addition to the changes that they mentioned; when transferring large volumes of data, why not?

    This is exactly why FTP uses UDP for its data transfer. So use FTP. In the last decade, though, improvements to TCP stacks have mostly mooted this difference. Once upon a time, when one end of a TCP transfer NACK'd a packet, it meant that packet and every packet after it would be re-sent (even packets that had already arrived safely). So if you send packets 100, 101, 102 and then get a NACK for 100, you'd have to send 100, 101, 102, 103, etc. But with modern implementations, the receiver keeps packets 101 and 102, so the sender only needs to re-send packet 100 and can then proceed to packet 103 (a toy comparison of the two behaviours is sketched below). This has made TCP much more efficient over lossy networks. Deferred NACKs also deal well with out-of-order packet arrival, when tardy packets show up before the deferred NACK is transmitted.

    I'm very dubious that these folks' protocol can realistically replace TCP; it simply doesn't add any must-have qualities. The only place where I see an advantage is transmitting audio and video, where you don't necessarily want to retransmit lost data, and are willing to put up with minute gaps in the data stream.

    A transport protocol that really deserves more attention is T/TCP (TCP for Transactions). It abbreviates the TCP connection handshake, which makes it much more efficient / better-performing for very short transactions (like typical HTTP usage).

    -- TTK
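
    A toy comparison of the two retransmission styles described above, counting packets put on the wire for one window with a given set of first-transmission losses. This is an illustrative model only: it assumes every retransmission succeeds and ignores timing.

        def sent_go_back_n(window, lost):
            """Old behaviour: rewind to the earliest lost packet and resend everything from there."""
            sent, start = 0, 0
            while True:
                sent += window - start                    # (re)send packets start .. window-1
                still_lost = {s for s in lost if s >= start}
                if not still_lost:
                    return sent
                start = min(still_lost)
                lost = set()                              # assume the retransmissions all arrive

        def sent_selective(window, lost):
            """SACK-style: the receiver keeps what arrived; only the holes are resent."""
            return window + len(lost)

        window, losses = 100, {10, 40, 90}
        print(sent_go_back_n(window, losses))             # 190 packets on the wire
        print(sent_selective(window, losses))             # 103 packets on the wire

    That gap is the efficiency modern stacks recover, and it grows with the window size and the loss rate.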

  • Re:Old!=bad (Score:3, Interesting)

    by Harik ( 4023 ) <Harik@chaos.ao.net> on Thursday October 21, 2004 @01:38PM (#10589532)
    I call bullshit on your logic. I've got a number of point-to-point T1s for specific uses, and I can quiesce them and do large file transfers over them for backups. I get ~170k transfer speeds, which is the theoretical link maximum (after packet/frame overhead). According to you, I should saturate out at 144k.

    Now, what they're doing isn't quite the same. You wouldn't serve webpages or email over their protocol. You would do something like BitTorrent, though, since you can snag the missing frame later or from another peer. It'd be really good for streaming media, since mere milliseconds after a frame is due it's already useless. More so with videoconferencing and VoIP, since those can't afford more than a tiny buffer.

  • by Fnkmaster ( 89084 ) * on Thursday October 21, 2004 @01:47PM (#10589679)
    They apparently already have some problems with GPL compliance [nyud.net]. They are distributing binaries of a tool called Rateless Copy which they state uses the GPLed libAsync library under their own license [nyud.net] while promising to deliver source code under the GPL "soon".


    Don't get me wrong, I think they mean well, but they are trying to prohibit commercial use of GPL-linked works. Nobody said the GPL was always friendly to your business plan, but you can either take it or leave it, not have it both ways. I know one of the founders of this company, Chris Coyne - he used to regularly come to my parties in college. Nice guys, and I am sure they mean well. They could use some serious guidance though on licensing and IP issues, not to mention trying to make a viable business out of network software, which is a tough proposition in itself.


    Feel free to contact me if you need help guys. :)

  • Re:Their key error (Score:4, Interesting)

    by RealProgrammer ( 723725 ) on Thursday October 21, 2004 @01:54PM (#10589816) Homepage Journal
    "TCP assumes all packet losses are due to congestion."

    It may be a quibble, but isn't it more accurate to say that TCP reacts to packet losses the same no matter what the cause? The packet, or group of packets, is just lost. I haven't looked into the status of RFC 3168 (ECN, router congestion flag) lately, so maybe I'm wrong.

    Since you mention it, the tragedy of the commons would be accentuated by pushing the network past saturation by design. By grabbing bandwidth, the 'haves' can effectively lock out the 'have nots' and their slower hardware, which would eventually result in no one having a usable network.
  • Re:nonsense (Score:3, Interesting)

    by Alioth ( 221270 ) <no@spam> on Thursday October 21, 2004 @02:21PM (#10590236) Journal
    Also, why layer it on top of UDP? Why not just make it a new transport protocol directly on top of IP instead?
  • Re:SCTP (Score:2, Interesting)

    by Anonymous Coward on Thursday October 21, 2004 @03:00PM (#10590707)
    SCTP is designed for a slightly different purpose than high-speed reliable streams over slightly lossy paths. SCTP works well for grouping the requests for HTTP images and related files into one stream so that the startup bandwidth can be increased. My understanding of SCTP on high-bandwidth, slightly lossy paths is that it will perform similarly to TCP, with the slight advantage that some substreams may have reduced latency.

    There are other existing solutions to the problem of increasing bandwidth utilization on high-bandwidth, moderately lossy links. HighSpeed TCP and various variants that modify the TCP congestion control algorithm at high data rates have been reasonably well studied, but still involve fundamental trade-offs. IMHO, you need link or path information that separates dropped packets into those dropped due to congestion and those dropped due to natural error. Otherwise, in the schemes that I've seen, you can end up competing for bandwidth unfairly or even starving the normal TCP connections. Also IMHO, effectively using the SACK information would go a long way toward increasing data rates over TCP.

    As for the scheme proposed in the article, I think it is highly flawed in several respects. First, although forward error correction (FEC) can be used at a packet level, it is highly undesirable to do so, for multiple reasons: it significantly increases the bandwidth used for normal transmission, it increases the complexity of stream reassembly, and in most cases it's not needed. Second, I really don't think you buy very much by using "wave-based" transmission timing. It seems like that has a lot of potential to interfere badly both with other "wave-based" transmissions and with normal TCP.

    Finally, although I realize that there can be things to gain by reworking TCP, especially for slightly lossy links, it is important to ensure that any improvement performs nearly as well as TCP on low-corruption-loss links. For low-corruption-loss links with many connections, TCP does really well.
  • Re:Old!=bad (Score:4, Interesting)

    by InfiniteWisdom ( 530090 ) on Thursday October 21, 2004 @04:43PM (#10591925) Homepage
    No, the problem is with the way TCP's congestion window works. When there is no loss, the congestion window is increased linearly, typically by one packet per round trip. On the other hand, whenever a packet is lost the window is cut in half. On a high-latency, high-bandwidth link (i.e. one with a large delay-bandwidth product) the window required to keep the link saturated can be tens of thousands of packets. You could tweak the parameters, but ultimately you still have additive-increase/multiplicative-decrease (AIMD).

    The key problem with TCP is that it assumes most losses happen because of packets being dropped due to congestion, not because of data corruption. Treating every loss as a congestion event worked well in the early days of the Internet, but it is counterproductive today, where the core of the Internet has plenty of spare capacity and congestion usually happens at the edges.

    If ECN were universal, one could ignore losses (for the purpose of congestion control). There are lots and lots of protocols that have been designed by the computer science research community; just look through the proceedings of, say, ACM SIGCOMM [slashdot.org]. You'll find no shortage of new protocols.

    Designing one that has big enough advantages to justify the cost of switching is another matter. TCP works fine for today's cable modems and such. It's when you start talking about trans-continental multi-gigabit networks that its limitations become a problem (rough numbers below).
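
    Putting illustrative numbers on the AIMD problem (the path parameters below are assumptions, not figures from the post):

        link_bps    = 1e9            # assumed 1 Gbit/s path
        rtt_s       = 0.100          # assumed 100 ms round trip
        packet_bits = 1500 * 8

        # Congestion window needed to keep the pipe full (the delay-bandwidth product):
        window_pkts = link_bps * rtt_s / packet_bits
        print(f"window to saturate the link: {window_pkts:.0f} packets")            # ~8333

        # One loss halves the window; growing back at ~1 packet per RTT takes:
        recovery_s = (window_pkts / 2) * rtt_s
        print(f"time to climb back after one loss: {recovery_s / 60:.1f} minutes")  # ~6.9

    At 10 Gbit/s the required window really is tens of thousands of packets and a single loss costs over an hour of ramp-up, which is why AIMD alone cannot fill trans-continental multi-gigabit pipes.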
  • Re:Old!=bad (Score:3, Interesting)

    by dgatwood ( 11270 ) on Thursday October 21, 2004 @05:08PM (#10592274) Homepage Journal
    The back-off design works relatively well if you assume that the network is relatively reliable. As soon as non-bandwidth-related packet loss is introduced at any real frequency, it becomes the most horrible nightmare in existence. Add a filter to your firewall sometime that drops 1% of packets randomly. Then try to use the connection.

    This is compounded when you start doing layered networking (PPPoSSH or similar) on top of such a system. Then, the packet dropping problem is replaced with multi-second latency all of a sudden, and performance goes to hell in a handbasket. The backoff design works well in the lab, but doesn't always work well in a real-world environment....

    What we need to do, IMHO, is to stop doing backoff after a connection has been established for a few thousand packets. Combine this with preemptible bandwidth reservation, and you have a much more robust connection in terms of maximizing data rate in a real-world network environment.

    Basically the idea is this: on connect, you do a bandwidth estimation based on the sliding window results for the first few thousand packets.

    You tell the router upstream "Okay, it looks like I'm getting 128Mbps to server A. Please reserve that much bandwidth and notify me if you are no longer able to provide it because something more important or more realtime needs the bandwidth."

    If higher priority traffic preempts you, you would get an "ICMP bandwidth reduced" packet or something. You then switch back to the sliding window to establish an estimate of your current bandwidth (or possibly the packet contains that info).

    If the app is consistently trying to push more data, the stack might periodically request a higher bandwidth allocation. If it succeeds (ICMP bandwidth increased?), the stack could increase the bandwidth allocated to that particular connection.

    Thoughts?
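
    A rough sketch of the sender-side logic being proposed. Everything here is hypothetical: the reservation API and the "bandwidth reduced" notification do not exist in any real stack, and the names below are made up purely to restate the idea.

        class FakeLink:
            """Stand-in for the hypothetical upstream-router API; it grants whatever is asked."""
            def request_reservation(self, bps):
                return bps

        class ReservingSender:
            PROBE_PACKETS = 2000                  # estimate with the normal sliding window first

            def __init__(self, link):
                self.link = link
                self.acked = 0
                self.reserved_bps = None          # None = still doing ordinary window probing

            def on_ack(self, measured_bps):
                self.acked += 1
                if self.reserved_bps is None and self.acked >= self.PROBE_PACKETS:
                    # "Looks like I'm getting this much to server A -- please reserve it."
                    self.reserved_bps = self.link.request_reservation(measured_bps)

            def on_bandwidth_reduced(self, new_bps=None):
                # Hypothetical "ICMP bandwidth reduced": fall back to window-based
                # estimation unless the notification carries the new rate.
                self.reserved_bps = new_bps

            def on_backlog(self):
                # The app keeps pushing more than the reservation allows:
                # periodically ask for a bigger allocation.
                if self.reserved_bps is not None:
                    self.reserved_bps = self.link.request_reservation(self.reserved_bps * 1.25)

        sender = ReservingSender(FakeLink())
        for _ in range(2500):
            sender.on_ack(measured_bps=128e6)
        print(sender.reserved_bps)                # 128000000.0 once the probe phase ends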

  • I found one of these in a warehouse back when I was in high school. I always wondered what such a useless-looking cable could be for. It wasn't as if that tab would have prevented me from putting the cable into a plug the wrong way. I ended up grinding off the offending bit and using it anyway; it was 80ft of otherwise good cable encased in the stiffest plastic I've ever seen used. Now it runs through dirt.
  • FAST TCP (Score:1, Interesting)

    by physman ( 460332 ) on Thursday October 21, 2004 @07:06PM (#10593364) Homepage
    There was something about streamlining the TCP protocol which could speed up the Internet and other networks. See http://www.newscientist.com/news/news.jsp?id=ns99993799 [newscientist.com] and http://en.wikipedia.org/wiki/FAST_TCP [wikipedia.org]
