Replacing TCP?
olau writes "TCP, the transport protocol that most of the Internet uses, is getting old. These guys have invented an alternative that combines UDP with rateless erasure codes, which means that packets do not have to be resent. Cool stuff! It also has applications for peer-to-peer networks (e.g. for something like BitTorrent). They are even preparing RFCs! The guy who started it, Petar Maymounkov, is of Kademlia fame."
Has anyone considered Decnet? (Score:4, Interesting)
Re:Has anyone considered Decnet? (Score:3, Informative)
DEC MMJ
Digital Equipment Co.'s proprietary Modified Modular Jack. It is identical to a standard USOC 6-position jack, except that the locking tab has been offset to the right to prevent an MMP (modified modular plug) from fitting into a USOC jack. AMP5-555237-2 Plug
Re:Has anyone considered Decnet? (Score:3, Interesting)
Re:Has anyone considered Decnet? (Score:5, Informative)
http://en.wikipedia.org/wiki/DECnet
John.
nonsense (Score:4, Interesting)
Re:nonsense (Score:4, Interesting)
Somehow, the title of your post is appropriate for the subject of your message.
Did you pull that 50 years out of your..... you know.
Change is not going to happen overnight. They are suggesting a better protocol, which is still in the labs for the most part.
Remember how long IPv6 has been out? Now look at what we are still using. It's all about changing things gradually.
Re:nonsense (Score:4, Insightful)
And what makes this a better protocol? Its vast history of being a solid, reliable protocol, with its massive amount of industrial testing? Oh wait, that's TCP.
Frankly, new stacks that look better than old ones are a dime a dozen. Until you test them in the real world you will never know and why bother changing TCP when it does a damn fine job right now?
IPv6 was created primarily to address the shortage of IP addresses.
Re:nonsense (Score:5, Funny)
Duuude, you can't compare TCP to IP, they're different units! You have to divide them (TCP/IP)!
"/" is read as "over" (Score:3, Insightful)
So TCP/IP is "Transmission control protocol over internet protocol" or just "TCP over IP".
While you don't see it written out much, this also implies "HTTP/TCP/IP" and so forth.
So just as "voice over IP" doesn't make voice and IP the same, TCP over IP doesn't really relate one to the other.
After all, if you use dialup, the first hop of your connection is typically TCP or UDP over PPP (Point to Point Protocol) and the ISP acts as a gateway from the
Re:"/" is read as "over" (Score:3, Insightful)
No. Your connection to your ISP is IP encapsulated in PPP frames. PPP is a session-based, authenticated layer 2 framing type, loosely based on HDLC. The PPP frames are discarded at the ISP's NAS/LNS, which then forwards the inner IP goo to the correct destination (usually its default rou
flamebait? (Score:4, Insightful)
Exactly the same kind of post a bit below gets 'insightful'.
It is simply true. Yes, there are some little drawbacks to TCP, but in the whole article they do not give a compelling reason to switch, let alone why one would *have* to. I mean, RTFA: TCP runs at 1-3% loss and the highest throughput would supposedly be at 3-5% loss... but so what? It's not optimal, but does the article claim anywhere that TCP is doomed because it's not optimal in certain areas?
There are myriad things on the Net that aren't optimal; they have been here for years and will be for years to come. If the only thing lacking is that something is not optimally suited, that is hardly a necessity to switch.
Re:nonsense (Score:3, Insightful)
The article is little more than an advertisement for their socket management software, so it's no more news than what SCO produces, but from what I can gather...
There's no way to obtain data that was truly lost, so they are using an error-correcting encoding of the data (and wasting some of the bandwidth in the process). They're not really doing
Re:nonsense (Score:3, Interesting)
It sounds like a crock (Score:3, Interesting)
I'd still like to see a good protocol that doesn't require in-order packet receipt, in addition to the changes they mentioned; when transferring large volumes of data, why not?
This is exactly why TFTP uses UDP for its data transfer. So use TFTP. In the last decade, though, improvements to TCP stacks have mostly mooted this difference. Once upon a time, when one end of a TCP transfer NACK'd a packet, it meant that packet and every packet after it would need to be re-sent (even those which had been se
Evil Bit? (Score:5, Funny)
A working wikipedia link for Kademlia (Score:4, Informative)
Re:A working wikipedia link for Kademlia (Score:3, Informative)
Old!=bad (Score:5, Insightful)
Re:Old!=bad (Score:5, Interesting)
This problem stems from the fact that TCP calculates the congestion of the channel based on its round-trip time. The round-trip time, however, reflects not only the congestion level, but also the physical length of the connection. This is precisely why TCP is inherently unable to reach optimal speeds on long high-bandwidth connections.
A secondary, but just as impairing, property of TCP is its inability to tolerate even small amounts (1% - 3%) of packet loss.
I find it kind of interesting that these are two competing problems: one having to do with high bandwidth (and presumably high-reliability) connections, the other with low-reliability connections. My home DSL, however, often fits into the latter category: 3% packet loss is not uncommon on bad days. So maybe the two aren't so incompatible after all.
Re:Old!=bad (Score:3, Informative)
For #2, this could simply be retransmission problems. TCP has a strict sequential ordering to packets, so a packet lost in the stream could cause other packets to be discarded, forcing more retransmissions than were technically required. T
Re:Old!=bad (Score:5, Informative)
That's what selective ack is for. It's pretty old hat at this point, everyone has it implemented.
Re:Old!=bad (Score:4, Informative)
And that only works within the TCP window size. For discretely-sized data transfers on the order of megabytes and gigabytes, delaying transfer of one TCP window (on the order of dozens of kilobytes) is not the best way to get maximum throughput.
Re:Old!=bad (Score:5, Informative)
So you're really only concerning yourself with ordered delivery when the delay in the natural order is greater than that of the sliding window (the cache of received packets that you're not going to tell the software about just yet).
This takes care of trivial things where packets arrive "just barely" out of order, but in a true packet loss situation, can block the channel for quite some time.
So the strict sequence you refer to is the sequence in which packets are presented to the next layer up (usually the application), not the sequence in which the packets are received by the machine.
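To make that concrete, here is a minimal sketch of a receive-side reassembly buffer (the `deliver` callback is a hypothetical stand-in for handing data up to the next layer):

```python
# Toy receive-side reassembly buffer: packets may arrive out of order,
# but are handed to the next layer strictly in sequence.
def make_receiver(deliver):
    state = {"next_seq": 0, "buffer": {}}   # out-of-order packets wait in 'buffer'

    def on_packet(seq, payload):
        if seq < state["next_seq"] or seq in state["buffer"]:
            return  # duplicate; ignore
        state["buffer"][seq] = payload
        # Flush everything that is now contiguous with what was already delivered.
        while state["next_seq"] in state["buffer"]:
            deliver(state["buffer"].pop(state["next_seq"]))
            state["next_seq"] += 1

    return on_packet

# Example: packets 0, 2, 1 arrive; the application still sees 0, 1, 2.
recv = make_receiver(lambda data: print("delivered:", data))
recv(0, "A"); recv(2, "C"); recv(1, "B")
```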
Re:Old!=bad (Score:4, Interesting)
The key problem with TCP is that it assumes that most losses happen because of packets being dropped due to congestion, not because of data corruption. Treating every loss as a congestion event worked well in the early days of the Internet, but is counterproductive today where the core of the Internet has plenty of spare capacity and congestion usually happens at the edges.
If ECN were universal, one could ignore losses (for the purpose of congestion control). There are lots and lots of protocols that have been designed by the computer science research community... look through the proceedings of, say, ACM SIGCOMM [slashdot.org]. You'll find no shortage of new protocols.
Designing one that has big enough advantages to justify the cost of switching is another matter. TCP works fine for today's cable modems and such. It's when you start talking about trans-continental multi-gigabit networks that its limitations become a problem.
Re:Old!=bad (Score:5, Insightful)
As they mention on their site, TCP's bandwidth usage is dependent on the latency of the link. This is due to the fact that sliding windows (the number of packets that are allowed to be out on the network) have a limited size. TCP sends a windowful of packets, then waits until one is acknowledged before sending another one. On high-latency links, this can cause a lot of bandwidth to go unused. There is an extension to TCP that allows for larger windows, addressing this problem.
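The effect is easy to quantify: with a fixed window, throughput can never exceed the window size divided by the round-trip time, no matter how fast the link is. A quick back-of-the-envelope check (illustrative numbers, not from the article):

```python
# Maximum TCP throughput is bounded by window_size / RTT (the bandwidth-delay
# product argument), regardless of the raw link capacity.
def max_throughput_mbps(window_bytes, rtt_seconds):
    return window_bytes * 8 / rtt_seconds / 1e6

# Classic 64 KB window on a 100 ms transatlantic path:
print(max_throughput_mbps(64 * 1024, 0.100))   # ~5.2 Mbit/s, even on a gigabit link
# Same window on a 1 ms LAN:
print(max_throughput_mbps(64 * 1024, 0.001))   # ~524 Mbit/s
```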
Another problem with TCP is slow start and rapid back-off. IIRC, a TCP connection ramps up its bandwidth gradually (exponentially during slow start, then linearly), but will halve it immediately when a packet is lost. The former makes for a slow startup, whereas the latter causes a connection to slow down dramatically, even though a lost packet can be due to many factors other than congestion. Slow start has been addressed by allowing connections to speed up more quickly; about rapid back-off I'm not sure.
The solution this company provides seems to play nice with TCP by varying the transmission speed in waves. Apparently, this improves speed over TCP's back-off mechanism, but it obviously doesn't provide optimal bandwidth utilization.
Re:Old!=bad (Score:5, Informative)
This means that a TCP connection uses ~75% of the bandwidth available to it (after all this stabilizes). So if there is only a single TCP connection over a given link, that link will be 75% full at best. However, the whole reason for doing a lot of this is to allow many connections to coexist. If you transmit as fast as possible, you will get the highest throughput possible, but you will end up with a lot of dropped packets and won't play nice with others.
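The ~75% figure falls straight out of the sawtooth: in steady state the window oscillates between W/2 and W, so its average is about 3W/4. A tiny, idealized simulation of the additive-increase/multiplicative-decrease cycle (ignoring slow start and timeouts) shows the same thing:

```python
# Idealized AIMD steady state: grow the window by one segment per RTT until it hits
# the path capacity, then halve it. The long-run average is ~75% of capacity.
def aimd_average_utilization(capacity_segments=100, rtts=10_000):
    cwnd, total = capacity_segments / 2, 0.0
    for _ in range(rtts):
        total += min(cwnd, capacity_segments)
        cwnd += 1                      # additive increase (one segment per RTT)
        if cwnd > capacity_segments:   # "loss" when we exceed the path capacity
            cwnd /= 2                  # multiplicative decrease
    return total / (rtts * capacity_segments)

print(aimd_average_utilization())  # ~0.75
```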
Re:Old!=bad (Score:3, Interesting)
Now, what they're doing isn't quite the same. You wouldn't serve webpages or email over their protocol. You would do something like bittorrent, though, since you can snag the missing frame later or from
Re:Old!=bad (Score:3, Informative)
TCP's window size is not a fixed size. There is a maximum, but most "normal" connections are well below it. (Links with very high bandwidth and high delay may reach this limit, which can be increased somewhat with the right TCP extensions.) TCP's window size, in fact, regulates its bandwidth consumption.
Re:Old!=bad (Score:3, Interesting)
This is compounded when you start doing layered networking (PPPoSSH or similar) on top of such a system. Then, the packet dropping problem is replaced with multi-second late
Is it an open protocol? (Score:5, Insightful)
Re:Is it an open protocol? (Score:5, Informative)
It doesn't look good. Their webpage [rateless.com] says:``We are planning to release Rateless Codes for free, non-commercial use, in open source under the GNU Public License.'' They are seriously confused about the terms of their own license. The GPL doesn't lend itself to ``free, non-commercial use'': it lets the licensee use and distribute freely, at any price, for any use.
Either they'll lose that ``non-commercial use'' crap, or this will go nowhere.
They are confused (Score:4, Interesting)
They're basically stating that now they can flood the connection with packets.
But they've also told you that the packets contain your data in an error correcting encoding. What they don't mention is this:
How much overhead is required by the error correcting encoding?
How many errors can the error correcting encoding handle? (drops 1 packet = ok, drops 400 packets = bad)
How much cpu computation is required to encode and decode the payload?
How is the cpu overhead managed? (how much performance will be lost by context switching, etc.)
So they're just playing the game of distracting people with the best part of their performance measurement, without bothering to mention the performance impact of all of the other trade-offs they admitted to making.
Re:Is it an open protocol? (Score:5, Interesting)
Don't get me wrong, I think they mean well, but they are trying to prohibit commercial use of GPL-linked works. Nobody said the GPL was always friendly to your business plan, but you can either take it or leave it, not have it both ways. I know one of the founders of this company, Chris Coyne - he used to regularly come to my parties in college. Nice guys, and I am sure they mean well. They could use some serious guidance though on licensing and IP issues, not to mention trying to make a viable business out of network software, which is a tough proposition in itself.
Feel free to contact me if you need help guys.
Re:Is it an open protocol? (Score:5, Funny)
There's really no point in not releasing it under a BSD licence in the first place, save letting the BSD guys write a far better version for the world to use.
The RFC is more important than the code anyway, and they're fools for not writing the RFC first.
Re:Is it an open protocol? (Score:5, Informative)
Almost true. It gives you a license to distribute that code, under certain terms. No license is needed to use a copy. Read section 5 of the GPL [gnu.org] for an elegant confirmation of that.
When they say "free, non-commercial use", and they talk about the GPL, they are making sense. The Linux kernel, which is GPL, may use their implementation.
Sorry, you're way off here. Redhat sells Linux, and their customers use it for commercial purposes. Therefore, the Redhat version of the Linux kernel can't use their implementation. The issue is moot, however, since the non-commercial restriction would make their license non-GPL-compatible. Their implementation couldn't be included even in a kernel which would see non-commercial use only.
Microsoft cannot use their implementation in any of its Windows OSes ...
Wrong. If MS wanted to make a version which they would give free of charge to private individuals and charitable organizations, they could use it, since that would be entirely non-commercial, both in distribution and in use.
For MS, that would be a problem, but it's the GPL part, not the non-commercial part, that would bite them. As I said above, the non-commercial restriction would render this license incompatible with the GPL.
The GPL does not lend itself to "commercial use", because the "standard" model of software licensing is paying a price per use/copy of the binary, and the GPL doesn't work this way.
Sorry, you've misconstrued non-commercial. Non-commercial typically means not used in commerce. Believe it or not, it is possible to sell software for non-commercial use. If you run your business using the software, that's commercial use. By the way, you should tell Redhat and Novell that bit about the GPL not working for ``paying a price per use/copy of the binary''; that'll be a revelation to them.
Re:Is it an open protocol? (Score:4, Informative)
Thanks... I'm not sure if you're confused or trolling, but I'll respond anyway:
Microsoft, as long as they didn't change the stack, could use it in their kernel without releasing the full kernel code.
Wrong. The reason the LGPL exists is to address this specific case. If the stack were LGPL licensed, then MS could include it, and only be required to submit any changes they made to the LGPL'ed stack itself.
As the Stack wouldn't be a derivative of the whole kernel.
How could you possibly argue that their stack was a derivative of the MS kernel? I assume you meant to say that the MS kernel was a derivative of the stack... But then you are 100% wrong here. If MS used this code, then their kernel would be a derivative work, there's basically no way to get around it.
Technically, the FSF specifically states that if two programs communicate via a "pipe", it doesn't consider them linked. With any form of dynamic linking, they are considered linked, so you couldn't load the stack as a DLL or driver and consider yourself not to be producing a derivative work. There is perhaps a tiny bit of leeway if you're a lawyer, where you could say "the FSF thinks loading a driver makes the kernel a derivative work, but I don't, so let's argue that in court".
Now to be really nice MSFT might have to make available
You are way off base. GPL apps are specifically allowed to include LGPL stuff. This is a special case exemption from the normal rules of the GPL. LGPL stuff is *not* allowed to include GPL stuff.
Re:Is it an open protocol? (Score:3, Informative)
Assuming there aren't any underlying IP issues. I'm not aware of any patents on the forward error correcting codes they're using, but that doesn't mean they don't exist. And assuming some jackass doesn't say, "Here's a thing and it's not patented; I'll patent it and then license it back to t
Re:Is it an open protocol? (Score:3, Insightful)
Petar's paper was originally presented at the same conference where Digital Fountain also presented approximately the same thing (building on the LT codes that they had been shipping for years).
I'm quite sure that DF has a stack of patents on their version.
This may get interesting.
(Disclaimer: DF laid me off in 2003)
Re: (Score:2)
Re:Is it an open protocol? (Score:2)
What exactly about TCP is getting old? (Score:4, Insightful)
TCP is old...so what? (Score:5, Insightful)
This isn't a general TCP Replacement (Score:5, Insightful)
Re:TCP is old...so what? (Score:3, Insightful)
(Hint: redundant data is the same as packet loss from a time standpoint)
I wouldn't make any mention of bittorrent, etc.. (Score:5, Insightful)
It's good to see innovation though, nonetheless.
Yet another "reliable UDP" layer (Score:3, Insightful)
Plus there are already protocol stacks that work around most of their gripes about TCP (slow performance over long pipes, etc).
Re:Yet another "reliable UDP" layer (Score:5, Insightful)
Rateless Internet (slashdotted) (Score:5, Informative)
Here is a summary of their technology copied from their website:
Rateless Internet The Problem
Rateless Internet is an Internet transport protocol implemented over UDP, meant as a better replacement for TCP. TCP's legacy design carries a number of inefficiencies, the most prominent of which is its inability to utilize most modern links' bandwidth. This problem stems from the fact that TCP calculates the congestion of the channel based on its round-trip time. The round-trip time, however, reflects not only the congestion level, but also the physical length of the connection. This is precisely why TCP is inherently unable to reach optimal speeds on long high-bandwidth connections.
A secondary, but just as impairing, property of TCP is its inability to tolerate even small amounts (1% - 3%) of packet loss. This additionally forces TCP to work at safe and relatively low transmission speeds with under 1% loss rates. Nevertheless, our extended real-life measurements show that highest throughput is generally achieved at speeds with anywhere between 3% and 5% loss.
The Solution
By using our core coding technology we were able to design a reliable Internet transmission protocol which can circumvent both of the aforementioned deficiencies of TCP, while still remaining TCP-friendly. By using encoded, rather than plain, transmission we are able to send at speeds with any packet loss level. Rateless coding is used in conjunction with our Universal Congestion Control algorithm, which allows Rateless Internet to remain friendly to TCP and other congestion-aware protocols.
Universal Congestion Control is an algorithm for transmission speed control. It is based on a simple and clean idea. Speed is varied in a wave-like fashion. The top of the wave achieves near-optimal throughput, while the bottom is low enough to let coexisting protocols like TCP smoothly receive a fair share of bandwidth. The time lengths of the peaks and troughs can be adjusted parametrically to achieve customized levels of fairness between Rateless Internet and TCP.
The Rateless Internet transport is now available through our Rateless Socket product in the form of a C/C++ socket library. Rateless Internet is ideal for Internet-based applications, running on the network edges, that require high bandwidth in a heterogeneous environment. It was specifically built with peer-to-peer and live multimedia content delivery applications in mind.
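The quoted description is vague on specifics; purely as a toy illustration of the "wave" idea (this is not their actual Universal Congestion Control algorithm), a sender could oscillate its target rate between a probing peak and a polite trough on a configurable period:

```python
import math

# Toy illustration of a wave-shaped send rate: NOT the real Universal Congestion
# Control algorithm, just the shape the quoted description suggests. The rate swings
# between a low "polite" floor (leaving room for TCP) and a near-capacity peak.
def wave_rate(t, peak_mbps=90.0, trough_mbps=10.0, period_s=2.0):
    # Sinusoid between trough and peak; the period controls how "fair" we are to TCP.
    phase = (1 + math.sin(2 * math.pi * t / period_s)) / 2
    return trough_mbps + phase * (peak_mbps - trough_mbps)

for t in range(5):
    print(f"t={t}s  target rate ~ {wave_rate(t):.1f} Mbit/s")
```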
Re:Rateless Internet (slashdotted) (Score:3, Informative)
What? (Score:5, Informative)
1. TCP does not use round trip time to calculate any "congestion levels." It increases the connection rate until packets get dropped, presumably because some router in the middle got overloaded.
2. Packet loss is used as a signal to TCP to slow down because it tried to send too fast. The lost packets are subsequently retransmitted, so TCP can indeed not only tolerate but recover from packet loss. The only real case they have is packet loss due to reasons other than TCP's own aggressive sending rate, such as UDP traffic, wireless links, etc.
Given these concerns, I can't help but think that they are inventing a protocol that works well only if used on a small scale. TCP is designed to back down if it thinks it's sending too fast, and is not really optimal. One can always hack a pair of TCP nodes to not play by the rules and get more than the fair share, but the problem is that that solution wouldn't work if it were adopted network-wide.
Re:What? (Score:5, Insightful)
TCP doesn't use round-trip time to calculate a link speed. In fact, it does just the opposite. It uses a sliding-window method so it can send many packets before any have been ACKed. This is done to soften the blow that round-trip times would otherwise have on your send rates!
TCP regulates its send rates by slowly sending faster and faster, then when a packet is dropped, it drops its rate fast. Slow increases and exponential backoffs make for VERY efficient link utilization on reliable networks with many active nodes whether it be a fast office LAN or world wide network.
Their method appears to just spray data without paying attention to what other nodes are doing. It sounds like it is much better suited to point-to-point communications on unreliable networks, e.g. cellular data networks, where packets get dropped much more frequently because of interference rather than congestion. TCP might back off too quickly in this condition because it is optimized to deal with congestion. Their protocol might be great for that first/last hop between your cell phone and the cell tower, but otherwise it undermines the great balance that TCP achieves on our amazing little internet.
Re:What? (Score:3, Informative)
That's true, but TCP does implicitly rely upon the RTT in its ordinary operation; it does not increase the size of its congestion window until it sees an ACK, which implies a delay equal to the RTT. So, TCP doesn't use RTT in an explicit calculation but RTT does affect how quickly you're able to ramp up your utilization of the link after a packet loss.
TCP can indeed not only tolerate but recover from packet loss
Yes, but the questio
Brilliant! (Score:4, Insightful)
A slightly faster equivalent to TCP that I have to pay for and no-one else uses.
Sign me up for that sucker right now.
Not F/OSS (Score:3, Interesting)
Whew! (Score:5, Funny)
Often stories are posted that refer to products or code names, with no description, which is quite annoying.
I'm glad to see this post doesn't run that risk.
Thanks for clearing that up for me.
-d
here is the story, before it gets slashdotted (Score:2, Redundant)
A sec
Is replacing TCP necessary? (Score:5, Insightful)
If you want to replace TCP, you have to do more than just develop a new protocol that is faster. It would have to outperform TCP in speed and reliability, and substantially so, in order to outweigh the costs of ditching a well-established and trusted protocol.
Ugh -- eyes are playing tricks on me. (Score:5, Funny)
The guy who started it, Petar Maymounkov, is of Kademlia fame."
The guy who started it, Petar Maymounkov, is of Chlamydia fame."
I was about to wonder what sort of "fame" you could get from that. Need coffee. Need sleep.
IronChefMorimoto
Respect! (Score:5, Funny)
Find out what it means to me
R-E-S-P-E-C-T
Take care, TCP
Oh socket to me, socket to me,
socket to me, socket to me...
Re:Respect! (Score:5, Funny)
ANY loss level?? (Score:5, Funny)
Re:ANY loss level?? (Score:3, Funny)
Can Anybody Explain? (Score:3, Insightful)
Not gonna work if encumbered (Score:5, Insightful)
2. They don't seem to understand the GPL:
"We are planning to release Rateless Codes for free, non-commercial use, in open source under the GNU Public License."
The GPL doesn't restrict commercial use, and hence the only way that they can do this is either they try to add some conditions to the GPL, or they use another mechanism to restrict commercial use: e.g. patents.
No matter how good this technology is, it's not going to get wide adoption as an alternative to TCP unless it's unencumbered.
John.
Re:Not gonna work if encumbered (Score:5, Insightful)
Re:Not gonna work if encumbered (Score:3, Funny)
Encoded Packets doesn't Solve Problems (Score:5, Insightful)
1) The major problem a TCP packet will face is getting dropped. They mention this problem. They claim their encoding will solve this problem. It won't. No ECC algorithm will allow you to recover a dropped packet.
2) Most packets that are corrupted are corrupted well beyond the repair of most ECCs.
3) ECCs will cause packet size to increase. Not a huge problem, but why do it when ECCs don't help too much to begin with?
Re:Encoded Packets doesn't Solve Problems (Score:5, Informative)
They're actually talking about erasure correction, where each symbol is a packet as a whole. In a very simple form, you send a packet sequence like this:
A B (A^B) C D (C^D)
So if you lose packet A, you reconstruct it from (B ^ (A^B)) = A. This simple scheme increases the number of packets sent by 50%, but allows you to tolerate a 33% loss, presuming you don't lose bursts of packets. There are more sophisticated schemes, of course, and there are various tradeoffs of overhead versus robustness.
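A few lines of code make the recovery concrete (hypothetical packet contents, same A, B, A^B grouping as above):

```python
# Simple XOR parity over groups of two packets: send A, B, and A^B.
# Any single loss within the group can be rebuilt from the other two.
def xor_bytes(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

A = b"packet-A"
B = b"packet-B"
parity = xor_bytes(A, B)          # the third, redundant packet

# Suppose packet A is dropped in transit; B and the parity packet arrived.
recovered_A = xor_bytes(B, parity)
assert recovered_A == A           # B ^ (A ^ B) == A
```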
Re:Encoded Packets doesn't Solve Problems (Score:3, Informative)
However, what do you do when you encounter a burst error longer than your ECC design limit? A more complex code may not be the answer for reasons of computational complexity.
The solution (employed with great success in CD media) is interleaving: instead of sending A1A2A3 B1B2B3 C1C2C3 you send A1B1C1 A2B2C2 A3B3C3. Let's assume the middle of that, A2B2C2, got lost, and you can only recover an error of one third that length
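A small sketch of the interleaving idea (hypothetical symbols; the point is that one burst on the wire costs each group only a single symbol after de-interleaving):

```python
# Interleaving: instead of sending A1 A2 A3, B1 B2 B3, C1 C2 C3 in group order,
# transmit column-wise (A1 B1 C1, A2 B2 C2, A3 B3 C3). A burst that wipes out one
# whole column costs each group only one symbol, which a code that corrects a
# single erasure per group can repair.
groups = [["A1", "A2", "A3"], ["B1", "B2", "B3"], ["C1", "C2", "C3"]]

interleaved = [groups[g][i] for i in range(3) for g in range(3)]
print(interleaved)                    # ['A1','B1','C1','A2','B2','C2','A3','B3','C3']

# A burst loss of the middle three symbols on the wire...
lost = set(interleaved[3:6])          # {'A2', 'B2', 'C2'}
# ...translates into exactly one missing symbol per group after de-interleaving:
for group in groups:
    print([s for s in group if s not in lost])   # each group is missing only its '2'
```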
Re:Encoded Packets doesn't Solve Problems (Score:5, Informative)
Rateless codes are important because they are the first step to a "digital fountain" model of data transfer. The idea is this: say you have a file of 10KB. Using erasure codes, you can "oversample" this file -- that is, turn this file into 40 1KB chunks. Now by collecting ANY 10 of these 1KB chunks, you can reconstruct the original file.
The reason the digital fountain is so cool (and the reason for its name) is because it would make the most amazing BitTorrent client ever. Why? In BitTorrent, you're doing things like choking/unchoking, aggressively seeking the rarest pieces, and hoping that some guy who has the ONLY COPY of piece X doesn't leave the network before it is replicated. Using Erasure Codes, everyone would just blindly get whatever data is sent to them (where each data unit is an "oversample"), and blindly forward it on all outbound links. But if you're getting data in this manner, isn't there a chance you'd get the same piece of data twice (which is like getting the same piece twice in BitTorrent, which is no use)? If you oversample enough, you can make the chance of this happening approach 0. Note in this scheme, there's no choking or unchoking here (so you'll never go through a period of time on a rare torrent where EVERYONE is choking you, and your download rate is 0KB/s), and you don't have to worry about that one guy leaving the swarm with some critical piece -- not just because you can always reconstruct it, but also because that piece is no more critical than any other.
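A rough sketch of that oversampling idea, assuming random XOR combinations of chunks and a brute-force Gaussian-elimination decoder over GF(2). Real fountain codes such as LT, Tornado, or Online codes use carefully tuned degree distributions and much cheaper decoders; the file contents and parameters here are made up:

```python
import random

# Toy "rateless" erasure code: every encoded symbol is the XOR of a random subset of
# the k source chunks, and the receiver decodes with Gaussian elimination over GF(2).
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunks, rng):
    """One encoded symbol: (set of source indices, XOR of those chunks)."""
    k = len(chunks)
    idxs = {i for i in range(k) if rng.random() < 0.5} or {rng.randrange(k)}
    data = bytes(len(chunks[0]))
    for i in idxs:
        data = xor(data, chunks[i])
    return idxs, data

def decode(symbols, k):
    """Gauss-Jordan over GF(2); returns the k chunks, or None if not yet solvable."""
    rows = [(set(idxs), data) for idxs, data in symbols]
    pivots = {}
    for col in range(k):
        pos = next((i for i, (idxs, _) in enumerate(rows) if col in idxs), None)
        if pos is None:
            return None                      # need more symbols
        pidxs, pdata = rows.pop(pos)
        def eliminate(idxs, data):           # XOR the pivot row into rows containing col
            return (idxs ^ pidxs, xor(data, pdata)) if col in idxs else (idxs, data)
        rows = [eliminate(i, d) for i, d in rows]
        pivots = {c: eliminate(i, d) for c, (i, d) in pivots.items()}
        pivots[col] = (pidxs, pdata)
    return [pivots[c][1] for c in range(k)]

# Hypothetical 44-byte "file", split into k = 4 chunks of 11 bytes each.
file_data = b"the quick brown fox jumps over the lazy dog!"
k, chunk_len = 4, 11
chunks = [file_data[i * chunk_len:(i + 1) * chunk_len] for i in range(k)]

rng = random.Random(1)
symbols = [encode(chunks, rng) for _ in range(20)]   # "oversample": more symbols than chunks

# The receiver collects whichever symbols happen to arrive, in any order,
# and decodes as soon as it has enough independent ones.
for n in range(1, len(symbols) + 1):
    decoded = decode(symbols[:n], k)
    if decoded is not None:
        print("recovered after", n, "symbols:", b"".join(decoded) == file_data)
        break
```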
It's beautiful, really. So why aren't we doing it? Simple -- erasure codes are computationally expensive to compute, although
they're getting easier as new sorts of erasure codes are being developed, and PCs get faster. The first big breakthrough was with Tornado codes [ucla.edu], if I remember correctly. Anyway, we'll see what the future holds...
- shadowmatter
Re:Encoded Packets doesn't Solve Problems (Score:4, Insightful)
However, I don't think most people would necessarily enjoy the 50% larger payloads required to make this work. It could be tuned back, but for every decrease in overhead, the effect of losing a frame gets worse. In the end (and this is purely speculative, as I've no real data or math to back it up) it may be that TCP remains more effective, with better throughput.
I'll be honest, I don't see/experience the kinds of lag and retransmission problems that are described in the article, and any large streaming transfers to my home or desk regularly consume 100% of my available bandwidth. So for me, TCP works just fine.
Re:Encoded Packets doesn't Solve Problems (Score:3, Informative)
Congratulations, you have just re-invented TCP with selective acknowledgments (a.k.a. SACK, RFC 2018 [ietf.org], published in October 1996).
For more discussion on this topic, see also: http://www.icir.org/floyd/sacks.html [icir.org].
Re:Encoded Packets doesn't Solve Problems (Score:3, Interesting)
Re:Encoded Packets doesn't Solve Problems (Score:4, Insightful)
You are almost correct, except that doing the error correction on the whole file instead of a single (or a couple of) packets allows the file to be transmitted even if one or several packets are dropped.
However, this kind of error correction is only good if you are exchanging rather large files. They cannot claim to replace TCP if their protocol cannot be used for any kind of interactive work. Take for example SSH or HTTP (the protocol that we are using for reading Slashdot): they are both implemented on top of TCP and they require TCP to work equally well for exchanging small chunks of data (keypress in an SSH session, or HTTP GET request) and exchanging larger chunks of data (HTTP response).
While their new protocol would probably work well for the larger chunks of data, it is very likely to degrade the link performance for the smaller bits exchanged during an interactive session. So they are probably good for file sharing. But TCP has many other uses...
Also, they mention that they can be TCP-friendly by transmitting in "waves". I doubt that these "waves" can be very TCP-friendly if their time between peaks is too short. One thing that TCP does not do well is to adapt its transmission rate to fast-changing network conditions. So if they allow the user or the application designer to set the frequency of these waves, they could end up with a very TCP-unfriendly protocol.
sounds good (Score:3, Interesting)
old - so what then ? (Score:3, Insightful)
All in all, it's good to have new alternative solutions and new technologies at hand, but to state such things as replacing TCP because of its age (maturity, that is
Would this "steal bandwidth"? (Score:2)
Good luck, but it will never happen (Score:5, Insightful)
Re:Good luck, but it will never happen (Score:3, Insightful)
Or rather: the protocol is built on top of UDP, meaning that it will only be implemented at the application layer. There are other fine protocols built on top of UDP, such as RTP/RTCP used for streaming. Are there any operating systems implementing these protocols in the kernel? I don't think so. Are there libraries providing a convenient API allowing application developers to use these protocols
Yawn (Score:5, Insightful)
ASCII is still around, despite its numerous shortcomings. There's this small thing called "backward compatibility" that people/consumers seem to love, for some reason. Well, same thing for TCP/IP. Even IPv6 has trouble taking off in the general public, despite being essentially just a small change in the format, so never mind the YAWN Protocol this article is about...
Doesn't work. Sorry, do not collect $200. (Score:5, Informative)
With Rateless Copy: time between 31 and 41 seconds, average of 200k/s; the resulting file is corrupted. Tried it again to be sure, same result.
Without Rateless Copy (HTTP file download): 8 seconds, average of 490k/s; the resulting file works fine, as expected.
Sorry, but I don't think it's all that great.
Re:Doesn't work. Sorry, do not collect $200. (Score:5, Informative)
Just transferred a ~600MB (Linux CD) ISO from Europe to Australia.
rateless-copy did about 400K/s
scp managed 2.8M/s
At least both files were intact
SCTP (Score:5, Interesting)
Intro here [uni-essen.de]
- SCTP can be used in many "modes"
* Provides reliable messaging (message-oriented like UDP, but reliable)
* Can be used as a stream protocol (like TCP).
* One connection/association can hold multiple streams.
* One-to-many relation for messaging.
* Better at dealing with syn flooding than TCP.
Then again, I guess reinventing the wheel is more "fun"
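For anyone who wants to poke at SCTP, here is a minimal sketch of its one-to-one, TCP-like mode, assuming a Linux kernel with SCTP support and a Python build that exposes socket.IPPROTO_SCTP. The endpoint is a placeholder, and the multi-streaming features need a richer API (e.g. pysctp):

```python
import socket

# One-to-one SCTP socket, used much like a TCP socket. Requires kernel SCTP support.
IPPROTO_SCTP = getattr(socket, "IPPROTO_SCTP", 132)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_SCTP)
sock.connect(("example.org", 9999))     # placeholder endpoint
sock.sendall(b"hello over SCTP")
print(sock.recv(1024))
sock.close()
```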
Re:SCTP (and other neat protocols) (Score:3, Interesting)
SCTP is indeed interesting. I've tangled with it when playing with SIGTRAN, i.e. SS7 over IP. The nightmares ceased a while ago. :-)
One of the more interesting special-purpose protocols I've ever messed with was the PACSAT Broadcast protocol [amsat.org], used for downloading files from satellites.
It makes the assumption that downloaded files are of general interest, so it broadcasts them. Which files it broadcasts is in response to requests from ground stations. Ground stations can also request fills, pieces of fi
Getting Old? (Score:4, Funny)
Really, is TCP flawed?
Better TCP by whose rules? (Score:4, Insightful)
Update TCP, don't add new protocol (Score:4, Informative)
One thing that bothers me is I see ISPs applying policing to their subscriber's bandwidth. Policing is quite unfriendly to TCP, unlike, say, shaping. With policing, a router decides either to pass, drop, or mark a packet based on if it exceeds certain bandwidth constraints. Shaping, on the other hand, will buffer packets and introduce additional latency, thus helping TCP find the sweet spot. Of course shaping will also drop, since nobody provides infinite buffer space.
TCP is relatively easy to extend. There are still some free flag bits and additional fields can be added to the TCP header if needed.
-Aaron
They invented TCP... again!! (Score:4, Informative)
These guys haven't invented anything new. There are many flavours of TCP with different congestion mechanisms and there is a special kind of transport protocol that solves most problems...
I'm talking about SCPS-TP, supported by NASA; it performs very well on high bit-error links (like satellites) and it also copes with high delay. The good thing about SCPS-TP is that it's compatible with TCP, because it is basically an extension of TCP.
There is another problem with using UDP based transport protocols... they usually have low priority in routers (probably because you can use UDP for VoIP...)
Transport latency and TCP (Score:5, Informative)
This work seems to be about two things (which I am not sure I see a strong connection between): lowering transport latency, and using available bandwidth better. The latter has been the subject of many papers in the last few years. There are now several serious proposals of how to fix TCP with respect to long fat pipes. They don't seem to support the idea that retransmissions are harmful. So I'm going to talk about the first issue, transport latency.
The idea of using error-correcting codes (ECC) to eliminate the need for retransmissions is an interesting one. The main benefit is to reduce transport latency (the total time it takes to send data from application A to B). Here is another paper [mit.edu] that proposes a similar idea, applied at a different level of the network architecture.
The root problem here is that network loss leads to increases in the transport latency experienced by applications. In TCP, the latency increases because TCP will resend data that is lost. That means at least one extra round-trip-time per retransmission. This "Rateless TCP" approach uses ECC so that the lost data can be recovered from other packets that were not dropped. In this way, the time to retransmit packets may not be needed. I say may, because there will be a loss rate threshold which will exceed the capability of the ECC, and retransmission will become necessary to ensure reliability. But, as long as the loss rate is below the threshold, then retransmissions will not be necessary. Note that the more "resilient" you make the ECC (meaning supporting a higher loss threshold), the more work will be needed at the ends. So you are not eliminating latency due to packet loss, you are simply moving it away from packet retransmission into the process of ECC. However, if you've got good ECC, the total latency will go down.
The ECC approach may be a nice middle ground. But the ultimate solution for minimizing latency is probably a combination of active queue management (AQM) and explicit congestion notification (ECN). Unlike ECC, this approach really would aim to eliminate packet loss in the network due to congestion, and therefore completely eliminate the associated latency. Either ECC or regular TCP would benefit. In a controlled testbed using AQM and ECN, I've completely saturated a network with gigabits of traffic, consisting of thousands of flows, and had virtually no packet loss.
It should also be noted that retransmission is NOT the dominant source of transport latency in TCP. I am a co-author on a paper [ieee.org] that shows another way (other than eliminating retransmission) to greatly reduce the transport latency of TCP. The basic idea is that the send-side socket buffer turns out to be the dominant source of latency (data sits in the kernel socket buffer waiting for transmission). In the above paper, we show how a dynamic socket buffer (one that tracks the congestion window) can dramatically reduce the transport latency of TCP. We allow applications to select this behaviour through a TCP_MINBUF socket option.
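TCP_MINBUF is a research option from that paper, not something in stock kernels; the closest knob an application has today is capping SO_SNDBUF, which is only a crude, static stand-in for the dynamic buffer described above. A hedged sketch (placeholder host, illustrative buffer size):

```python
import socket

# Cap the kernel send buffer so less data can pile up waiting behind the congestion
# window. This is NOT the paper's TCP_MINBUF mechanism, just the standard knob.
# (Linux doubles the requested SO_SNDBUF value internally.)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 16 * 1024)   # small send buffer
sock.connect(("example.org", 80))                                 # placeholder host
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.org\r\n\r\n")
print(sock.recv(200))
sock.close()
```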
-- Buck
Replacing TCP indeed (Score:5, Insightful)
As far as I can tell (their website could use some more straightforward actual content), this is more like bittorrent, where a file is cut up into blocks, the blocks get distributed across the network, and anyone interested in the file then reconstructs it from available data from all sources, not necessarily having to get the entire file correctly from a single source. Only it does it more efficiently than bittorrent.
The two protocols target very different uses. TCP excels in interactive use, where the data is sent as it is generated, and no error is tolerable in the single sender-to-receiver link. Bittorrent (and other distributed network protocols) target batch jobs, where throughput is more important than reliability (because reliability can be reconstructed on the client through clever hashing schemes), and where responsiveness is entirely irrelevant.
So, this could not possibly replace TCP, since it does not do what TCP is most useful for. At the same time, the criticisms aimed at TCP by the rateless designers are valid, but well known, since TCP is indeed poorly suited for high-volume high-throughput high-delay transmissions of prepackaged data.
Still, good job to them for trying to come up with better protocols for niche or not-so-niche markets. I wish them all the best.
I have a better protocol (Score:5, Funny)
How does it work? Well, it's layered over Rateless Internet, in which (as we all know) packets do not have to be resent. So it carefully loses all packets and relies on Rateless Internet to make sure they arrive safely at the other side and do not have to be resent. Because no packets need to make it from A to B, you don't need any network hardware, and data can be sent just as fast as your machine can drop packets.
Guess I'd better apply for a patent...
Their key error (Score:5, Insightful)
That's just for them. What if all hosts on the entire Internet were, by design, stuffing packets in at a 3-5% loss rate? Meltdown, that's what. Their "real-life" measurements do not scale, suffering from the usual assumed linearity of new designs for complex systems.
Sometimes people fall in love with their new ideas, thinking that the rest of the world missed something obvious.
Re:Their key error (Score:4, Interesting)
Avoiding congestion while maintaining performance is a hard problem. Fortunately, you degrade your own performance if you create congestion and congestion often occurs on the edge. We would really like to avoid the tragedy of the commons with congestion and the Internet. If we cannot, the Internet may truly collapse by 2006.
Re:Their key error (Score:4, Interesting)
It may be a quibble, but isn't it more accurate to say that TCP reacts to packet losses the same no matter what the cause? The packet, or group of packets, is just lost. I haven't looked into the status of RFC 3168 (ECN, router congestion flag) lately, so maybe I'm wrong.
Since you mention it, the tragedy of the commons would be accentuated by pushing the network past saturation by design. By grabbing bandwidth, the 'haves' can effectively lock out the 'have nots' and their slower hardware, which would eventually result in no one having a usable network.
Well... (Score:5, Informative)
So others can have fun slashdotting other technologies, here are some websites. There are probably others, but this should keep those who do really want to move away from TCP happy.
EC over IP I have been doing this for years. (Score:5, Interesting)
It was a 64K shared line with 90% packet loss; I received 60Kbps for the video stream. (I have the video to prove it.)
We even filed preliminary patents on this back in 1996, but they were never followed through with.
Luigi Rizzo (now head of the FreeBSD project) also did some excellent work on this: http://info.iet.unipi.it/~luigi/fec.html [unipi.it]
He calls them erasure codes, which is more accurate, since UDP doesn't have bit errors: a packet either comes across 99.999% perfect or not at all.
So there is more information than in an error situation, where every bit is questionable.
What this means is that only about half the Hamming distance is needed to correct an erasure versus an error.
It turns out the error/erasure-correcting scheme is critical and not obvious. I spent almost 5 years working on this part-time before it started making some real breakthroughs.
My original system was designed for 25% packet loss (not uncommon in 1996).
In the initial idea we added 1 XORed packet for every three data packets, but at 25% packet loss it turned out that this didn't increase reliability at all! Working this out with probabilities was a major eye-opener!
Even when you work the problem out, you realize you will still need some retransmissions to make up for lost packets; there is no possible solution without them.
I have been trying to find people to help open-source this, since I have been working far too hard just to survive since 2000 to even consider taking on another task.
Anyone interested in my research and in carrying this forward, please see my site and contact me.
John L. Sokol
TCP needs a replacement, but... (Score:3, Informative)
So, there IS a need for a TCP replacement. However, the one being developed in the IETF [ietf.org] is DCCP [ietf.org]. Basically, the idea is to separate congestion control from the recovery of lost packets.
Maymounkov and company say that they are preparing RFCs (which implies that they intend to submit this to the IETF), but they have not yet done so. So, maybe if they do, and if they offer an unencumbered license, their technology could be used, but it is way too soon to tell.
HSLink! (Score:4, Interesting)
Who remembers HSLink?? If I recall correctly it was an add-on transfer method for Procomm Plus. It allowed two people connected over a modem to simultaneously send files to each other at basically double the normal speed. I remember thinking it had to be a scam, but me and my friends tested it and were able to send double the usual info in whatever time we were used to. (I forget, 10 minutes a meg I think)
How did this work? Were we fooled or was it for reals? Could something like that be applied to dial-up internet connections?
-Don.
It's hooey... (Score:5, Insightful)
This is a load of fluff, trying to capitalize on the 'p2p craze'. There are plenty of TCP replacements out there that actually make sense. As for TCP not being able to utilize 'today's bandwidth'... again, hooey. Gigabit Ethernet (when backed by adequate hardware, and taking advantage of jumbo frames) moves a HELL of a lot more data (two orders of magnitude) than your typical home broadband connection... using TCP.
Re:UDP... is drowning the internet. (Score:3)
Uh. Wouldn't have thought THAT could happen....