Replacing TCP?
olau writes "TCP, the transfer protocol that most of the Internet is using, is getting old. These guys have invented an alternative that combines UDP with rateless erasure codes, which means that packets do not have to be resent. Cool stuff! It also has applications for peer-to-peer networks (e.g. for something like BitTorrent). They are even preparing RFCs! The guy who started it, Petar Maymounkov, is of Kademlia fame."
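The idea behind rateless erasure codes (the story links to Maymounkov's online codes) can be illustrated with a toy LT-style fountain code: the sender XORs random subsets of source blocks and streams the results, and the receiver can decode once it has collected slightly more symbols than there were blocks, regardless of *which* symbols got lost. A minimal sketch, with a made-up degree distribution rather than the actual online-codes construction:

```python
import random

def encode(blocks, num_symbols, seed=1):
    """Produce encoded symbols, each the XOR of a random subset of blocks."""
    rng = random.Random(seed)
    k = len(blocks)
    symbols = []
    for _ in range(num_symbols):
        # Toy degree distribution: some degree-1 symbols to seed the peeling
        # decoder, the rest small random degrees (real codes use a soliton-like
        # distribution tuned for decoding with minimal overhead).
        d = 1 if rng.random() < 0.3 else rng.randint(2, min(4, k))
        idxs = rng.sample(range(k), d)
        val = 0
        for i in idxs:
            val ^= blocks[i]
        symbols.append((set(idxs), val))
    return symbols

def decode(symbols, k):
    """Peeling decoder: repeatedly find a symbol with exactly one unknown block."""
    decoded = {}
    progress = True
    while progress and len(decoded) < k:
        progress = False
        for idxs, val in symbols:
            unknown = idxs - decoded.keys()
            if len(unknown) == 1:
                i = next(iter(unknown))
                v = val
                for j in idxs:
                    if j != i:
                        v ^= decoded[j]  # strip out already-known blocks
                decoded[i] = v
                progress = True
    return decoded  # may be partial if too few symbols survived
```

The point of the construction is that any sufficiently large subset of symbols decodes the data, so a lost packet is simply ignored: the sender keeps generating fresh symbols instead of retransmitting specific ones.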
Has anyone considered Decnet? (Score:4, Interesting)
nonsense (Score:4, Interesting)
Not F/OSS (Score:3, Interesting)
Re:nonsense (Score:4, Interesting)
Somehow, the title of your post is appropriate for the subject of your message.
Did you pull that 50 years out of your..... you know.
Change is not going to happen overnight. They are suggesting a better protocol which is still in the labs for the most part.
Remember how long IPv6 has been out? And now check what we're still using. It's all about changing things gradually.
Re:Old!=bad (Score:5, Interesting)
This problem stems from the fact that TCP calculates the congestion of the channel based on its round-trip time. The round-trip time, however, reflects not only the congestion level, but also the physical length of the connection. This is precisely why TCP is inherently unable to reach optimal speeds on long high-bandwidth connections.
A secondary, but just as impairing, property of TCP is its inability to tolerate even small amounts (1% - 3%) of packet loss.
I find it kind of interesting that these are two competing problems: one having to do with high bandwidth (and presumably high-reliability) connections, the other with low-reliability connections. My home DSL, however, often fits into the latter category: 3% packet loss is not uncommon on bad days. So maybe the two aren't so incompatible after all.
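Both complaints can be made concrete with the well-known Mathis approximation for steady-state TCP throughput, roughly a constant times MSS / (RTT · sqrt(p)): throughput falls linearly with RTT and with the square root of the loss rate. A quick sketch (the constant and the example numbers are illustrative):

```python
from math import sqrt

def mathis_throughput(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Approximate steady-state TCP throughput in bytes/sec (Mathis et al. model)."""
    return c * mss_bytes / (rtt_s * sqrt(loss_rate))

# Same 1% loss rate, but a long path (150 ms RTT) vs a short one (10 ms RTT):
short = mathis_throughput(1460, 0.010, 0.01)
long_ = mathis_throughput(1460, 0.150, 0.01)
print(short / long_)  # the short path gets ~15x the throughput of the long one
```

This is why a long fat pipe and a 3% lossy DSL line run into the same wall: in both cases the RTT·sqrt(p) denominator, not the link capacity, sets the ceiling.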
sounds good (Score:3, Interesting)
SCTP (Score:5, Interesting)
Intro here [uni-essen.de]
- SCTP can be used in many "modes"
* Provides reliable messaging (like UDP, but reliable)
* Can be used as a stream protocol (like TCP).
* One connection/association can hold multiple streams.
* One-to-many relation for messaging.
* Better at dealing with syn flooding than TCP.
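The multiple-streams point is the interesting one: with a single ordered byte stream (TCP), one lost segment stalls delivery of everything behind it, while with several independent streams in one association, only the stream that lost a message stalls. A toy simulation of the in-order delivery logic (not real SCTP, just the ordering rule):

```python
def deliverable(received_seqs):
    """Messages an in-order stream may hand to the application: the longest
    gap-free prefix 0, 1, 2, ... of the received sequence numbers."""
    out = []
    n = 0
    while n in received_seqs:
        out.append(n)
        n += 1
    return out

# One TCP-like stream: message 2 was lost, so 3 and 4 are stuck behind it.
print(deliverable({0, 1, 3, 4}))   # [0, 1]

# Two SCTP-like streams: the loss only stalls stream A.
stream_a = deliverable({0, 1, 3})  # msg 2 lost, delivery stops at [0, 1]
stream_b = deliverable({0, 1, 2})  # unaffected, delivers everything
```

This head-of-line-blocking avoidance is exactly why protocols carrying many independent transactions (SS7 signaling, for instance) went with SCTP rather than TCP.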
Then again, I guess reinventing the wheel is more "fun"
Re:Encoded Packets doesn't Solve Problems (Score:3, Interesting)
Re:Old!=bad (Score:2, Interesting)
This could have a benefit for parts of the world where the Internet is available only by satellite.
People typically think of Antarctica and mountain regions when I say that, but really, any place without phone or cable, or where you need something better than text messaging over a BlackBerry, is a candidate for a satellite ISP.
I only need to travel a mile from my university, with its terrific bandwidth, to be in a place where there are no phone lines or cable available.
Re:SCTP (and other neat protocols) (Score:3, Interesting)
SCTP is indeed interesting. I've tangled with it when playing with SIGTRAN, i.e. SS7 over IP. The nightmares ceased a while ago. :-)
One of the more interesting special-purpose protocols I've ever messed with was the PACSAT Broadcast protocol [amsat.org], used for downloading files from satellites.
It makes the assumption that downloaded files are of general interest, so it broadcasts them. Which files it broadcasts is in response to requests from ground stations. Ground stations can also request fills, pieces of files they haven't received yet. The protocols have hooks to allow files to be downloaded in pieces, over several orbits, since individual passes rarely exceed 20 minutes, and the downlink bit rate isn't all that high. This all runs over a protocol analogous to UDP.
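The "fill" mechanism boils down to hole tracking: each ground station records which pieces of a file it has and requests the complement. A sketch of the hole computation (block-based for simplicity; the actual PACSAT protocol works on byte offsets):

```python
def missing_ranges(have, total_blocks):
    """Given the set of block indices already received, return the
    (start, end_inclusive) runs still needed -- the 'fill' requests."""
    holes = []
    start = None
    for i in range(total_blocks):
        if i not in have:
            if start is None:
                start = i            # a new hole begins here
        elif start is not None:
            holes.append((start, i - 1))  # hole ended just before i
            start = None
    if start is not None:
        holes.append((start, total_blocks - 1))
    return holes

# Received blocks 0-3 and 7-9 of a 12-block file over two passes:
print(missing_ranges({0, 1, 2, 3, 7, 8, 9}, 12))  # [(4, 6), (10, 11)]
```

Because the requests are expressed as ranges rather than acknowledgments of a stream position, a station can accumulate a file across several short passes in any order.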
You can get files by just listening, if you want to.
...laura
EC over IP I have been doing this for years. (Score:5, Interesting)
It was a 64K shared line with 90% packet loss, and I still received 60Kbps for the video stream. (I have the video to prove it.)
We even filed preliminary patents on this back in 1996, but they were never followed through on.
Luigi Rizzo (now head of the FreeBSD project) also did some excellent work on this. http://info.iet.unipi.it/~luigi/fec.html [unipi.it]
He calls them erasure codes, which is more accurate, since UDP doesn't have errors: a packet either comes across 99.999% perfect or not at all.
So there is more information than in an error situation, where every bit is questionable.
What this means is that only about half the Hamming distance is needed in a code to correct an erasure versus an error.
Turns out the error/erasure correcting scheme is critical and not obvious. I spent almost 5 years working on this part time before I started making some real breakthroughs.
My original system was designed for 25% packet loss (not uncommon in 1996).
In the initial idea we added 1 XORed parity packet for every three data packets, but at 25% packet loss, it turns out that it didn't increase reliability at all! Working this out with probabilities was a major eye opener!
Even when you work the problem out, you realize you will still need some retransmissions to make up for lost packets; there is no possible solution without them.
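That eye-opener can be reproduced in a few lines, assuming independent packet losses (real losses are often bursty, which makes a single XOR parity even less effective, since two losses in one group are unrecoverable). At 25% loss, the chance that a 3-data-plus-1-parity group still suffers an unrecoverable loss comes out almost exactly the same as the chance a single bare packet is lost, so retransmissions are needed just as often:

```python
def p_group_needs_retransmit(p_loss, n=4):
    """P(two or more of the n packets in a group are lost): a single XOR
    parity packet repairs only one erasure, so >= 2 losses force a resend."""
    p_at_most_one = (1 - p_loss) ** n + n * p_loss * (1 - p_loss) ** (n - 1)
    return 1 - p_at_most_one

p = 0.25
print(p)                            # bare packet: 25% chance of needing a resend
print(p_group_needs_retransmit(p))  # coded group: ~26.2% -- no better, plus 33% overhead
```

At low loss rates the same formula shows the parity packet paying off handsomely; it is specifically the high-loss regime where the simple 3+1 scheme buys nothing.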
I have been trying to find people to help open-source this, since I have been working far too hard just to survive since 2000 to even consider taking on another task.
Anyone interested in my research and carrying it forward, please see my site and contact me.
John L. Sokol
They are confused (Score:4, Interesting)
They're basically stating that now they can flood the connection with packets.
But they've also told you that the packets contain your data in an error correcting encoding. What they don't mention is this:
How much overhead is required by the error correcting encoding?
How many errors can the error correcting encoding handle? (drops 1 packet = ok, drops 400 packets = bad)
How much cpu computation is required to encode and decode the payload?
How is the cpu overhead managed? (how much performance will be lost by context switching, etc.)
So they're just playing the game of distracting people with the best part of their performance measurement, without bothering to mention the performance impact of all the other trade-offs they admitted to making.
HSLink! (Score:4, Interesting)
Who remembers HSLink?? If I recall correctly it was an add-on transfer method for Procomm Plus. It allowed two people connected over a modem to simultaneously send files to each other at basically double the normal speed. I remember thinking it had to be a scam, but me and my friends tested it and were able to send double the usual info in whatever time we were used to. (I forget, 10 minutes a meg I think)
How did this work? Were we fooled or was it for reals? Could something like that be applied to dial-up internet connections?
-Don.
Re:Their key error (Score:4, Interesting)
Avoiding congestion while maintaining performance is a hard problem. Fortunately, you degrade your own performance if you create congestion, and congestion often occurs at the edge. We would really like to avoid a tragedy of the commons with congestion on the Internet. If we cannot, the Internet may truly collapse by 2006.
It sounds like a crock (Score:3, Interesting)
I'd still like to see a good protocol that doesn't require in-order packet receipt in addition to the changes that they mentioned; when transferring large volumes of data, why not?
This is exactly why TFTP uses UDP for its data transfer. In the last decade, though, improvements to TCP stacks have mostly mooted this difference. Once upon a time, when a TCP receiver signaled a missing segment, the sender had to resend that segment and every one after it, even those which had already been sent (go-back-N). So if you send packets 100, 101, 102 and then learn that 100 was lost, you'd have to resend 100, 101, 102, then 103, etc. But with modern SACK implementations, the receiver keeps packets 101 and 102, so the sender only needs to resend packet 100, and then proceed to packet 103. This has made TCP much more efficient over lossy networks. Waiting for several duplicate ACKs before retransmitting also deals well with out-of-order packet arrival, since tardy packets usually show up before a retransmission is triggered.
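The difference the parent describes is go-back-N versus selective repeat, and the retransmission cost is easy to sketch (toy model with one loss, assuming the receiver buffers out-of-order segments in the selective case):

```python
def retransmissions_go_back_n(sent, lost):
    """Go-back-N: everything from the first lost packet onward is resent."""
    first = min(lost)
    return [s for s in sent if s >= first]

def retransmissions_selective(sent, lost):
    """Selective repeat (SACK-style): only the actually-lost packets are resent."""
    return sorted(lost)

sent = [100, 101, 102]
lost = {100}
print(retransmissions_go_back_n(sent, lost))  # [100, 101, 102]
print(retransmissions_selective(sent, lost))  # [100]
```

With a large in-flight window the gap widens: one early loss under go-back-N can force re-sending the entire window, which is why SACK matters most on long fat pipes.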
I'm very dubious that these folks' protocol can realistically replace TCP; it simply doesn't add any must-have qualities. The only place where I see an advantage is transmitting audio and video, where you don't necessarily want to retransmit lost data, and are willing to put up with minute gaps in the data stream.
A transport protocol that really deserves more attention is T/TCP (TCP for Transactions). It abbreviates the TCP connection handshake, which makes it much more efficient for very short transactions (like typical HTTP usage).
-- TTK
Re:Old!=bad (Score:3, Interesting)
Now, what they're doing isn't quite the same. You wouldn't serve web pages or email over their protocol. You would do something like BitTorrent, though, since you can snag the missing frame later or from another peer. It'd be really good for streaming media, since mere milliseconds after a frame is due it's already useless. More so with videoconferencing and VoIP, since those can't tolerate even a tiny buffer.
Re:Is it an open protocol? (Score:5, Interesting)
Don't get me wrong, I think they mean well, but they are trying to prohibit commercial use of GPL-linked works. Nobody said the GPL was always friendly to your business plan, but you can either take it or leave it, not have it both ways. I know one of the founders of this company, Chris Coyne - he used to regularly come to my parties in college. Nice guys, and I am sure they mean well. They could use some serious guidance though on licensing and IP issues, not to mention trying to make a viable business out of network software, which is a tough proposition in itself.
Feel free to contact me if you need help guys.
Re:Their key error (Score:4, Interesting)
It may be a quibble, but isn't it more accurate to say that TCP reacts to packet losses the same no matter what the cause? The packet, or group of packets, is just lost. I haven't looked into the status of RFC 3168 (ECN, router congestion flag) lately, so maybe I'm wrong.
Since you mention it, the tragedy of the commons would be accentuated by pushing the network past saturation by design. By grabbing bandwidth, the 'haves' can effectively lock out the 'have nots' and their slower hardware, which would eventually result in no one having a usable network.
Re:nonsense (Score:3, Interesting)
Re:SCTP (Score:2, Interesting)
There are other existing solutions to the problem of increasing bandwidth utilization on high-bandwidth, moderately lossy links. HighSpeed TCP and the various variants that modify the TCP congestion control algorithm at high data rates have been reasonably well studied, but still involve fundamental trade-offs. IMHO, you need link or path information that separates dropped packets into those dropped due to congestion and those dropped due to natural error. Otherwise, in the schemes I've seen, you can end up competing for bandwidth unfairly or even starving the normal TCP connections. Also IMHO, effectively using the SACK information would go a long way toward increasing data rates over TCP.
As for the scheme that is proposed in the article, I think it is highly flawed in several respects. First, although forward error correction (FEC) can be used at a packet level, it is highly undesirable to do so for multiple reasons: it significantly increases the bandwidth used for normal transmission, it increases the complexity of stream reassembly, and in most cases, it's not needed. Second, I really don't think you buy very much by using "wave-based" transmission timing. It seems like that has a lot of potential to interfere badly both with other "wave-based" transmission and with normal TCP.
Finally, although I realize there can be things to gain by reworking TCP, especially for slightly lossy links, it is important to ensure that any improvement has nearly as good performance as TCP on low-corruption-loss links. For low-corruption-loss links with many connections, TCP does really well.
Re:Old!=bad (Score:4, Interesting)
The key problem with TCP is that it assumes that most losses happen because of packets being dropped due to congestion, not because of data corruption. Treating every loss as a congestion event worked well in the early days of the Internet, but is counterproductive today where the core of the Internet has plenty of spare capacity and congestion usually happens at the edges.
If ECN were universal, one could ignore losses (for the purpose of congestion control). There are lots and lots of protocols that have been designed by the computer science research community; just look through the proceedings of, say, ACM SIGCOMM [slashdot.org]. You'll find no shortage of new protocols.
Designing one that has big enough advantages to justify the cost of switching is another matter. TCP works fine over today's cable modems and such. It's when you start talking about trans-continental multi-gigabit networks that its limitations become a problem.
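The trans-continental limitation is just the bandwidth-delay product: a sender can have at most one window of data in flight per round trip, so throughput is capped at window/RTT no matter how fat the pipe is. With the classic 64 KB window (no window scaling) on a 100 ms path:

```python
def max_throughput_bps(window_bytes, rtt_s):
    """Upper bound on TCP throughput: at most one full window per round trip."""
    return window_bytes * 8 / rtt_s

# 64 KB maximum window, 100 ms trans-continental RTT:
bps = max_throughput_bps(65535, 0.100)
print(bps / 1e6)  # ~5.2 Mbit/s, even on a 10 Gbit/s link
```

Window scaling raises the ceiling, but then the congestion-control dynamics (how long it takes to grow and recover such a huge window after a loss) become the bottleneck instead.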
Re:Old!=bad (Score:3, Interesting)
This is compounded when you start doing layered networking (PPPoSSH or similar) on top of such a system. Then, the packet dropping problem is replaced with multi-second latency all of a sudden, and performance goes to hell in a handbasket. The backoff design works well in the lab, but doesn't always work well in a real-world environment....
What we need to do, IMHO, is to stop doing backoff after a connection has been established for a few thousand packets. Combine this with preemptible bandwidth reservation, and you have a much more robust connection in terms of maximizing data rate in a real-world network environment.
Basically the idea is this: on connect, you do a bandwidth estimation based on the sliding window results for the first few thousand packets.
You tell the router upstream "Okay, it looks like I'm getting 128Mbps to server A. Please reserve that much bandwidth and notify me if you are no longer able to provide it because something more important or more realtime needs the bandwidth."
If higher priority traffic preempts you, you would get an "ICMP bandwidth reduced" packet or something. You then switch back to the sliding window to establish an estimate of your current bandwidth (or possibly the packet contains that info).
If the app is consistently trying to push more data, the stack might periodically request a higher bandwidth allocation. If it succeeds (ICMP bandwidth increased?), the stack could increase the bandwidth allocated to that particular connection.
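The steps above could be sketched as a connection state machine. Everything here is hypothetical: the "ICMP bandwidth reduced/increased" messages and the reservation signaling don't exist, and all the names are made up for illustration:

```python
class ReservingConnection:
    """Hypothetical connection that estimates bandwidth during a warmup
    phase, then pins a reservation instead of continually backing off."""

    def __init__(self, warmup_packets=3000):
        self.warmup_packets = warmup_packets
        self.packets_seen = 0
        self.bytes_seen = 0
        self.elapsed = 0.0
        self.reserved_bps = None  # None => still estimating

    def on_ack(self, nbytes, interval_s):
        """Feed each acked packet into the estimate during warmup."""
        if self.reserved_bps is not None:
            return  # reservation pinned; no more backoff-style probing
        self.packets_seen += 1
        self.bytes_seen += nbytes
        self.elapsed += interval_s
        if self.packets_seen >= self.warmup_packets:
            # Done estimating: this is where the upstream router would be
            # asked to reserve the observed rate (imaginary signaling).
            self.reserved_bps = self.bytes_seen * 8 / self.elapsed

    def on_bandwidth_reduced(self, new_bps):
        """Higher-priority traffic preempted us: accept the reduced rate."""
        self.reserved_bps = new_bps

# Warm up with 3000 acks of 1460 bytes each arriving every 1 ms:
conn = ReservingConnection()
for _ in range(3000):
    conn.on_ack(1460, 0.001)
print(conn.reserved_bps)  # estimated rate, roughly 11.7 Mbit/s
```

The open question the sketch sidesteps is the hard part: every router on the path would need per-flow reservation state, which is exactly what RSVP-style schemes struggled to deploy.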
Thoughts?
Re:Has anyone considered Decnet? (Score:3, Interesting)
FAST TCP (Score:1, Interesting)