Replacing TCP?

olau writes "TCP, the transport protocol that most of the Internet is using, is getting old. These guys have invented an alternative that combines UDP with rateless erasure codes, which means that packets do not have to be resent. Cool stuff! It also has applications for peer-to-peer networks (e.g. for something like BitTorrent). They are even preparing RFCs! The guy who started it, Petar Maymounkov, is of Kademlia fame."
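For readers unfamiliar with rateless erasure codes: the sender can generate an effectively endless stream of encoded packets, and the receiver rebuilds the original data from any sufficiently large subset of them, so individual losses never trigger a resend. Below is a minimal Python sketch in the spirit of LT codes; it illustrates the general technique only, not Maymounkov's actual codes.

    import os
    import random

    def soliton_degree(k, rng):
        # Ideal soliton distribution: P(1) = 1/k, P(d) = 1/(d*(d-1)) for d >= 2.
        u = rng.random()
        cum = 1.0 / k
        if u < cum:
            return 1
        for d in range(2, k + 1):
            cum += 1.0 / (d * (d - 1))
            if u < cum:
                return d
        return k

    def encode_symbol(blocks, rng):
        # One encoded packet: the XOR of a random subset of source blocks.
        k = len(blocks)
        idxs = set(rng.sample(range(k), soliton_degree(k, rng)))
        sym = bytes(len(blocks[0]))
        for i in idxs:
            sym = bytes(a ^ b for a, b in zip(sym, blocks[i]))
        return idxs, sym

    def peel(received, k):
        # "Peeling" decoder: any symbol with exactly one unknown block
        # reveals that block; subtract known blocks and repeat.
        known = {}
        changed = True
        while changed and len(known) < k:
            changed = False
            for idxs, sym in received:
                unknown = idxs - known.keys()
                if len(unknown) == 1:
                    for j in idxs & known.keys():
                        sym = bytes(a ^ b for a, b in zip(sym, known[j]))
                    known[unknown.pop()] = sym
                    changed = True
        return known if len(known) == k else None

    k = 16
    blocks = [os.urandom(32) for _ in range(k)]
    rng, received = random.Random(4), []
    while True:
        pkt = encode_symbol(blocks, rng)
        if rng.random() < 0.3:            # 30% packet loss: just ignore it
            continue
        received.append(pkt)
        out = peel(received, k)
        if out is not None:
            break
    assert all(out[i] == blocks[i] for i in range(k))
    print("rebuilt all", k, "blocks from", len(received), "received packets")

Note the defining property: it never matters which packets were lost, only that enough encoded packets eventually arrive.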
  • Old!=bad (Score:5, Insightful)

    by gspr ( 602968 ) on Thursday October 21, 2004 @11:40AM (#10587627)
    The submitter says that TCP is getting old, but does that really tell us anything about how well it does its job?
  • by FooAtWFU ( 699187 ) on Thursday October 21, 2004 @11:40AM (#10587628) Homepage
    If it's not an open protocol (if they charge for use) it may find niche applications. If it is, it may proliferate. I wasn't able to find details about this on the site.
  • by Anonymous Coward on Thursday October 21, 2004 @11:40AM (#10587636)
    Some inefficiencies are one thing, but you're going to need a compelling reason to get everyone to switch.
  • by RAMMS+EIN ( 578166 ) on Thursday October 21, 2004 @11:41AM (#10587651) Homepage Journal
    TCP is old, but that doesn't mean it's bad or that replacement is due. Some shortcomings have surfaced and been addressed. For the most part, TCP does a good job at what it was designed to do.
  • by drunkennewfiemidget ( 712572 ) on Thursday October 21, 2004 @11:42AM (#10587663)
    Because then you're going to have the suits trying to push it down, no matter how great or useful it is, in an effort to kill anything that could make pirating easier or more efficient. That's the only way they're going to see it.

    It's good to see innovation though, nonetheless.
  • by syates21 ( 78378 ) on Thursday October 21, 2004 @11:42AM (#10587664)
    This is hardly an innovative idea, and usually by the time you end up considering all the issues you wind up with something that looks a lot more like TCP than you had originally intended.

    Plus there are already protocol stacks that work around most of their gripes about TCP (slow performance over long pipes, etc).
  • Brilliant! (Score:4, Insightful)

    by morgdx ( 688154 ) on Thursday October 21, 2004 @11:42AM (#10587671) Homepage

    A slightly faster equivalent to TCP that I have to pay for and no one else uses.

    Sign me up for that sucker right now.

  • by bigberk ( 547360 ) <bigberk@users.pc9.org> on Thursday October 21, 2004 @11:44AM (#10587707)
    There's no doubt that an alternative to TCP might have technical merits. But as far as communication protocols go, TCP itself is pretty amazing. Modern TCP implementations have been tweaked over decades and have impressive performance and reliability. And modern TCP/IP stacks have rather unspoofable connection establishment, another excellent feature for security.

    If you want to replace TCP, you have to do more than just develop a new protocol that is faster. It would have to outperform TCP in speed and reliability, and substantially so, in order to outweigh the costs of ditching a well-established and trusted protocol.
  • by RAMMS+EIN ( 578166 ) on Thursday October 21, 2004 @11:47AM (#10587762) Homepage Journal
    Can anybody explain to me how this technology works? It reads like marketing speak to me, despite the fact that it will be/is released as open source. How does the technology actually achieve reliability without retransmits? Does it actually achieve it?
  • 1. This is coming from a company that is surely going to want to make money out of it somehow. Part of the reason TCP succeeded is that there was no one to pay.

    2. They don't seem to understand the GPL:

    "We are planning to release Rateless Codes for free, non-commercial use, in open source under the GNU Public License."

    The GPL doesn't restrict commercial use, so the only way they can do this is either to add conditions of their own to the GPL, or to use another mechanism to restrict commercial use: e.g. patents.

    No matter how good this technology is, it's not going to get wide adoption as an alternative to TCP unless it's unencumbered.

    John.
  • by jrumney ( 197329 ) on Thursday October 21, 2004 @11:49AM (#10587794)
    It appears that they get better performance than TCP by considering (all - 1) the issues. Basically, their protocol works and performs better than TCP because the pipes have spare capacity. If the pipes were at capacity, their protocol would break down. TCP has been designed to be robust in all conditions. Protocols like this that rely on "in most cases we can get away with allowing more errors than TCP does" are not going to replace TCP.
  • by kakos ( 610660 ) on Thursday October 21, 2004 @11:49AM (#10587796)
    I did read their website, and it looks like their revolutionary new replacement for TCP is UDP with their proprietary ECC built on top of it. However, there is a good reason why TCP never used ECC (ECCs did exist back then).

    1) The major problem a TCP packet will face is getting dropped. They mention this problem. They claim their encoding will solve this problem. It won't. No ECC algorithm will allow you to recover a dropped packet.

    2) Most packets that are corrupted are corrupted well beyond the repair of most ECCs.

    3) ECCs will cause packet size to increase. Not a huge problem, but why do it when ECCs don't help too much to begin with?
  • flamebait? (Score:4, Insightful)

    by N3wsByt3 ( 758224 ) on Thursday October 21, 2004 @11:52AM (#10587832) Journal
    I fail to see how it is flamebait to say that TCP can go on for another 50 years without problems.

    Exactly the same kind of post a bit below gets 'insightful'.

    It is simply true. Yes, there are some little drawbacks with TCP, but in the whole article they do not give a compelling reason to switch, let alone why one would *have* to. I mean, RTFA: TCP is at 1-3% and the most efficient throughput would supposedly come at 3-5% loss... but so what? It's not optimal, but does the article anywhere claim TCP is doomed because it's not optimal in certain areas?

    There are myriad things that aren't optimal on the Net; that hasn't kept them from being here for years, and they will be here for years to come. Nor is it a necessity to switch, if the only thing lacking is that something isn't optimally suited.
  • Re:Link is dead (Score:1, Insightful)

    by Nightreaver ( 695006 ) <lau...l@@@uritzen...dk> on Thursday October 21, 2004 @11:53AM (#10587844) Homepage
    Well, all links have been Coralized by the author (read more about Coral [nyu.edu]) in the hope of withstanding a slashdotting, but Coral is still under development, so I would assume that's where the problem lies.
  • by l3v1 ( 787564 ) on Thursday October 21, 2004 @11:54AM (#10587856)
    Being "old" doesn't seem reason enough to build a new protocol to replace TCP. Protocols are not living beings, you know; aging doesn't show on their cells. Of course troubles have been popping up over the years, but nothing unsolvable (and nothing on the scale of IP's problems).

    All in all, it's good to have new alternative solutions and new technologies at hand, but to talk of replacing TCP because of its age (maturity, that is :D ) is a bit far-fetched, to say the least.

  • by Ars-Fartsica ( 166957 ) on Thursday October 21, 2004 @11:55AM (#10587859)
    Just look at the adoption rates on IPv6. No one is going to touch a new protocol at this stage. It's not even clear that this is needed. Point me at a specific TCP pain point that is specifically and obviously reducing internet adoption... any takers?
  • Yawn (Score:5, Insightful)

    by Rosco P. Coltrane ( 209368 ) on Thursday October 21, 2004 @11:55AM (#10587864)
    YAWN Protocol --> Yet Another Wonderful New Protocol

    ASCII is still around, despite its numerous shortcomings. There's this small thing called "backward compatibility" that people/consumers seem to love, for some reason. Well, same thing for TCP/IP. Even IPv6 has trouble taking off in the general public, despite being essentially just a small change in the format, so never mind the YAWN Protocol this article is about...
  • by Anonymous Coward on Thursday October 21, 2004 @11:59AM (#10587915)
    Well, some of us are at work and not sitting in our mother's basement. Most companies don't just leave every outgoing port open, and they surely are not going to open a port for us to read an article. Coral is a nice idea with a flawed implementation.
  • by netwiz ( 33291 ) on Thursday October 21, 2004 @12:02PM (#10587954) Homepage
    well, assuming that the dropped frames aren't sequential in large number, some kind of ECC (think RAID5 for IP) could alleviate this issue; see the sketch after this comment. Granted, you'd be sending three packets for every two packets' worth of data, but you could lose any one of them and still be okay.

    However, I don't think most people would necessarily enjoy 50% larger payloads required to make this work. It could be tuned back, but for every decrease in overhead, the effect of losing a frame gets worse. In the end (and this is purely speculative, as I've no real data or math to back this up) it may be that TCP remains more effective with better throughput.

    I'll be honest, I don't see/experience the kinds of lag and retransmission problems that are described in the article, and any large streaming transfers to my home or desk regularly consume 100% of my available bandwidth. So for me, TCP works just fine.
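    A toy version of the "RAID5 for IP" idea above, assuming fixed, equal-sized payloads (an illustration of the parity scheme, not anyone's actual product):

        # For every two data packets, send a third carrying their XOR.
        # Any ONE of the three can be lost and both payloads survive.
        def xor(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        p1 = b"first packet pad"      # equal-length payloads for simplicity
        p2 = b"second packet pd"
        parity = xor(p1, p2)

        # Suppose p1 is dropped in transit; rebuild it from what arrived:
        assert xor(p2, parity) == p1

    The overhead is fixed at 50% here, and two losses in one group are fatal, which is exactly the tuning trade-off the comment above describes.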
  • Re:Old!=bad (Score:5, Insightful)

    by RAMMS+EIN ( 578166 ) on Thursday October 21, 2004 @12:05PM (#10587984) Homepage Journal
    TCP does have its shortcomings.

    As they mention on their site, TCP's bandwidth usage is dependent on the latency of the link. This is because the sliding window (the number of packets that are allowed to be out on the network) has a limited size: TCP sends a windowful of packets, then waits for acknowledgements before sending more. On high-latency links, this can cause a lot of bandwidth to go unused; see the numbers after this comment. There is an extension to TCP that allows for larger windows, addressing this problem.

    Another problem with TCP is slow start and rapid back-off. IIRC, a TCP connection linearly increases its bandwidth, but will halve it immediately when a packet is lost. The former makes for a very slow startup, whereas the latter causes a connection to slow down dramatically, even though a lost packet can be due to many factors other than congestion. Slow start has been addressed by allowing connections to speed up more quickly; about rapid back-off I'm not sure.

    The solution this company provides seems to play nice with TCP by varying the transmission speed in waves. Apparently, this improves speed over TCP's back-off mechanism, but it obviously doesn't provide optimal bandwidth utilization.
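    The window/latency limit described above is easy to put numbers on: throughput can never exceed window size divided by round-trip time. A quick back-of-the-envelope, assuming the classic 64 KB window (the limit that the RFC 1323 window-scaling extension lifts):

        # Without window scaling, TCP's receive window tops out at 64 KB,
        # so throughput is capped at window / RTT no matter how fat the pipe.
        window_bytes = 64 * 1024
        for rtt_ms in (1, 10, 100, 300):
            mbit_per_s = window_bytes * 8 / (rtt_ms / 1000) / 1e6
            print(f"RTT {rtt_ms:>3} ms -> at most {mbit_per_s:8.1f} Mbit/s")
        # At 100 ms RTT that is about 5.2 Mbit/s, however fast the link.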
  • by shic ( 309152 ) on Thursday October 21, 2004 @12:08PM (#10588015)
    When considering protocols for information transport, it is very important to be absolutely sure what assumptions you are making. There are a number of non-independent factors which influence the suitability (and hence efficiency) of network protocols for application demands. Bandwidth, for example, is related to, but doesn't define, the statistical distribution of latencies, the maximum packet rate, and their relationship to packet size. The channel error rate (and the statistical distribution of packet failures) is in turn linked to fragmentation and concatenation of transmitted datagrams, and this in turn affects latencies when considering "reliable" transport protocols. Routing policy and the symmetry of physical links introduce yet more tradeoffs which need to be considered, not to mention the potential problem of evaluating whether the burden of protocol computations outweighs the advantage of an improved strategy for a given physical link. (And I'm not even going to mention security!)

    When considering protocols, the most important thing to consider is the model they assume of the communications infrastructure on which they are to be deployed. TCP is likely the optimal solution given the assumptions TCP makes... if you change those assumptions to more closely fit a particular network infrastructure, you will likely get better performance on that infrastructure, but far worse performance where your new assumptions do not hold.

    I used to be interested in the idea of dynamically synthesizing protocols to best suit the actual physical links in a heterogeneous network... however, my ideas were met with extreme disinterest; I felt my critics demanded I present a protocol which beats TCP under TCP's assumptions, and no amount of hand-waving and explanation would convince them this was a silly request. I still think the idea has merit, but having wasted 3 years of my life trying to push it uphill, I've found other interesting (and far more productive) avenues to pursue.
  • by Raphael ( 18701 ) on Thursday October 21, 2004 @12:17PM (#10588134) Homepage Journal

    You are almost correct, except that doing the error correction on the whole file instead of a single (or a couple of) packets allows the file to be transmitted even if one or several packets are dropped.

    However, this kind of error correction is only good if you are exchanging rather large files. They cannot claim to replace TCP if their protocol cannot be used for any kind of interactive work. Take for example SSH or HTTP (the protocol that we are using for reading Slashdot): they are both implemented on top of TCP and they require TCP to work equally well for exchanging small chunks of data (keypress in an SSH session, or HTTP GET request) and exchanging larger chunks of data (HTTP response).

    While their new protocol would probably work well for the larger chunks of data, it is very likely to degrade link performance for the smaller bits exchanged during an interactive session. So they are probably good for file sharing. But TCP has many other uses...

    Also, they mention that they can be TCP-friendly by transmitting in "waves". I doubt that these "waves" can be very TCP-friendly if their time between peaks is too short. One thing that TCP does not do well is to adapt its transmission rate to fast-changing network conditions. So if they allow the user or the application designer to set the frequency of these waves, they could end up with a very TCP-unfriendly protocol.

  • by skiman1979 ( 725635 ) on Thursday October 21, 2004 @12:20PM (#10588178)
    Their website of the so called "experts" is down, it's slashdotted! (ironic?)

    I've seen this "irony" thing mentioned before for this topic. The article discusses a protocol replacement for TCP that is supposed to be "better," but their site went down. If this new protocol were actually in use, then maybe it would be ironic. But for that to happen, their web server would have to use it, and the rest of us (who want to view the site) would have to use the protocol too. Where's the irony here? That would be like saying that ssh is bad because our FTP clients can't connect to it.

  • by n6mod ( 17734 ) on Thursday October 21, 2004 @12:20PM (#10588180) Homepage
    Actually, there is likely to be a mountain of patent issues. Not sure how legitimate these will turn out to be.

    Petar's paper was originally presented at the same conference where Digital Fountain also presented approximately the same thing (building on the LT codes that they had been shipping for years).

    I'm quite sure that DF has a stack of patents on their version.

    This may get interesting.

    (Disclaimer: DF laid me off in 2003)
  • by jsebrech ( 525647 ) on Thursday October 21, 2004 @12:24PM (#10588270)
    This doesn't provide anything like what TCP provides, namely a connection between two network nodes that allows transfer of arbitrary data with guaranteed reliability, with automated congestion control for optimized use of available network resources.

    As far as I can tell (their website could use some more straightforward actual content), this is more like bittorrent, where a file is cut up into blocks, the blocks get distributed across the network, and anyone interested in the file then reconstructs it from available data from all sources, not necessarily having to get the entire file correctly from a single source. Only it does it more efficiently than bittorrent.

    The two protocols target very different uses. TCP excels in interactive use, where the data is sent as it is generated, and no error is tolerable in the single sender-to-receiver link. Bittorrent (and other distributed network protocols) target batch jobs, where throughput is more important than reliability (because reliability can be reconstructed on the client through clever hashing schemes), and where responsiveness is entirely irrelevant.

    So, this could not possibly replace TCP, since it does not do what TCP is most useful for. At the same time, the criticisms aimed at TCP by the rateless designers are valid, but well known, since TCP is indeed poorly suited for high-volume high-throughput high-delay transmissions of prepackaged data.

    Still, good job to them for trying to come up with better protocols for niche or not-so-niche markets. I wish them all the best.
  • Their key error (Score:5, Insightful)

    by RealProgrammer ( 723725 ) on Thursday October 21, 2004 @12:27PM (#10588332) Homepage Journal
    Nevertheless, our extended real-life measurements show that highest throughput is generally achieved at speeds with anywhere between 3% and 5% loss.

    That's just them. What if all hosts on the entire Internet were, by design, stuffing packets at a 3-5% error rate? Meltdown, that's what. Their "real-life" measurements do not scale, suffering from the usual assumed linearity of new designs for complex systems.

    Sometimes people fall in love with their new ideas, thinking that the rest of the world missed something obvious.

  • Re:nonsense (Score:3, Insightful)

    by ebuck ( 585470 ) on Thursday October 21, 2004 @12:33PM (#10588459)
    Who flamebaited this? It's not derogatory, and it's a valid opinion. TCP's shortcomings are going to constantly be mitigated by better router and switch hardware.

    The article is little more than an advertisement for their socket management software, so it's no more news than what SCO produces, but from what I can gather...

    There's no way to obtain data that was truly lost, so they are using an error-correcting encoding of the data (and wasting some of the bandwidth in the process). They're not really doing something radical; it's just UDP/IP. They just have some software that silently handles the data loss and unpacking, sitting on top of a UDP/IP socket. So they've moved the problem from the transport layer (layer 4) into the application layer (layer 7).

    Their entire product places emphasis on network utilization, but little else. I imagine that they're wasting a portion (but who knows how much) of their "utilized" bandwidth, and they're certainly wasting a lot of CPU time encoding and decoding the packet payload. Their sales pitch is basically "TCP/IP bad, UDP/IP better," but they never clearly address how much CPU time this solution requires or how much of the "utilized" bandwidth is wasted.

    Using UDP/IP with retransmission in software has been done many times. Look at FTP.
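    For the curious, "UDP with retransmission in software" in its simplest form is stop-and-wait ARQ. A minimal illustrative sketch (the 4-byte sequence-number framing here is invented for this example, not taken from any real product):

        import socket

        def reliable_send(sock: socket.socket, dest, payload: bytes,
                          seq: int, timeout=0.5, retries=8) -> bool:
            """Send one datagram, retransmitting until its ACK arrives."""
            sock.settimeout(timeout)
            frame = seq.to_bytes(4, "big") + payload
            for _ in range(retries):
                sock.sendto(frame, dest)
                try:
                    ack, _ = sock.recvfrom(4)
                    if int.from_bytes(ack, "big") == seq:
                        return True       # delivered and acknowledged
                except socket.timeout:
                    pass                  # lost frame or lost ACK: resend
            return False

        # The receiver would echo back each frame's 4-byte sequence number
        # and drop duplicates it has already seen.

    Everything TCP adds beyond this (windowing, congestion control, ordering) exists because stop-and-wait wastes the link while waiting for each ACK.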
  • by 10101001 10101001 ( 732688 ) on Thursday October 21, 2004 @12:33PM (#10588470) Journal
    Two things. One, while modern TCP/IP stacks have, in theory, rather unspoofable connection establishment, programs like nmap show that not enough OSs take advantage of that in a meaningful way, allowing short bursts of faked connections and some spoofed actions. Two, while it sounds like there are various issues with this rateless protocol, the idea of a standard, faster protocol for larger pipes doesn't mean an end to TCP. It does mean that people have realized that single TCP connections have trouble saturating bandwidth at very high speeds, and that alternative protocols for that situation seem appropriate. To act like an alternative to TCP is somehow TCP's doom is to act like no one uses UDP because TCP has all these wonderful features that overcome UDP's limitations. There's clearly a case that a new standard protocol with better file transfer features wouldn't be a bad idea to consider as a third primary protocol. In OS terms, I'd liken it to how a Windows NT/X environment has messages (UDP), streams (TCP), and random file access (the new protocol, which will probably not be rateless).
  • by Have Blue ( 616 ) on Thursday October 21, 2004 @12:39PM (#10588568) Homepage
    If they want it to replace TCP, it will have to be completely unencumbered, and that includes viral licenses. Under BSD, commercial adoption would be possible.
SCTP (Score:2, Insightful)

    by ericzundel ( 524648 ) * on Thursday October 21, 2004 @12:41PM (#10588610) Homepage Journal
    A "real" replacement to TCP is STCP [ucl.ac.be]

    You get datagrams like in UDP, but they are sent reliably... or not, depending on what you want. I believe this has been quietly implemented in Cisco routers and Unix OS vendors' network stacks over the past few years.
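    SCTP is reachable from userland on kernels that support it. A sketch in Python, assuming a Linux kernel with SCTP support and a platform where the socket module exposes IPPROTO_SCTP (the address is a hypothetical test server):

        import socket

        # One-to-one style SCTP association: used much like a TCP stream,
        # but message boundaries are preserved on the wire.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                          socket.IPPROTO_SCTP)
        s.connect(("127.0.0.1", 9999))   # hypothetical local test server
        s.send(b"one discrete message")
        s.close()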

  • by Raphael ( 18701 ) on Thursday October 21, 2004 @12:45PM (#10588672) Homepage Journal
    First, the protocol is built on top of UDP, meaning that it can be implemented at the application layer.

    Or rather: the protocol is built on top of UDP, meaning that it will only be implemented at the application layer. There are other fine protocols built on top of UDP, such as RTP/RTCP used for streaming. Are there any operating systems implementing these protocols in the kernel? I don't think so. Are there libraries providing a convenient API allowing application developers to use these protocols easily? A few, but not that many. It is likely that any other protocol implemented on top of UDP would have a similar fate.

    TCP is cooperative - it will share 'fairly' any congested channel. It will lose to a more greedy protocol.

    You don't seem to understand that most network operators (those who play with the routers at your ISP or somewhere else in the backbone) want protocols to be fair. If too many people start using a protocol that is unfair, then many ISPs will start tweaking their routers and firewalls in order to limit the bandwidth available to the unfair protocol. ISPs have to keep their customers happy (or at least happy enough to pay their bill) and an unfair protocol would make many other customers unhappy.

  • by billstewart ( 78916 ) on Thursday October 21, 2004 @12:46PM (#10588694) Journal
    As far as I can tell from the documentation, this isn't really a TCP replacement. It's basically a set of efficient forward error correction codes, so if you've got a relatively wide open pipe you can transmit on without much packet loss, it can blast away and recover from occasional losses, but it doesn't do any actual congestion control.
    • They have a "TCP-friendliness" option that varies the transmission rate in a way that TCP windowing can probably cooperate with, so you can set the rate knobs to something less than full blast,
    • but nothing they've documented appears to address the problem of multiple users of this application trying to use a transmission path at the same time, and
    • they also don't document anything that does path rate discovery - so it may work fine if you've got a couple of small pipes feeding a fat network, but if you've got a fat pipe on the sending end and a skinny pipe on the receiving end, they don't document anything that figures out what rate is safe to transmit at.
    They also don't document when you would want to use this and when you would want to use TCP and when you would want to use this on top of TCP.
  • It's hooey... (Score:5, Insightful)

    by Frennzy ( 730093 ) on Thursday October 21, 2004 @01:16PM (#10589190) Homepage
    TCP doesn't use RTT to 'calculate congestion'.

    This is a load of fluff, trying to capitalize on the 'p2p craze'. There are plenty of TCP replacements out there that actually make sense. As far as TCP not being able to utilize 'today's bandwidth', again... hooey. Gigabit ethernet (when backed by adequate hardware, and taking advantage of jumbo frames) moves a HELL of a lot more data (two orders of magnitude) than your typical home broadband connection... using TCP.

  • Re:Their key error (Score:1, Insightful)

    by Anonymous Coward on Thursday October 21, 2004 @01:20PM (#10589251)
    Sometimes people fall in love with their new ideas, thinking that the rest of the world missed something obvious.

    And sometimes they are right.
  • by Harik ( 4023 ) <Harik@chaos.ao.net> on Thursday October 21, 2004 @02:11PM (#10590089)
    And their "solution" (forward error correction) really means: introduce 5% packet loss in perfect conditions so even in an up-to-5% packet loss environment you get 100% transmission speed.

    (Hint: redundant data is the same as packet loss from a time standpoint)

  • Re:What? (Score:5, Insightful)

    by panZ ( 67763 ) <matt68000@hotmail.com> on Thursday October 21, 2004 @02:13PM (#10590116)
    You are absolutely correct. I was looking for someone to post this argument before doing it myself. Thanks! Mod this guy up!

    TCP doesn't use round-trip time to calculate a link speed. In fact, it does just the opposite. It uses a sliding window method so it can send many packets before any one has been ACKed. This is done to soften the blow that round-trip times would otherwise have on your send rates!

    TCP regulates its send rate by slowly sending faster and faster; then, when a packet is dropped, it drops its rate fast. Slow increases and exponential backoffs make for VERY efficient link utilization on reliable networks with many active nodes, whether it be a fast office LAN or a worldwide network.

    Their method appears to just spray data without paying attention to what other nodes are doing. It sounds like it is much better suited for point-to-point communications on unreliable networks, e.g. cellular data networks, where packets get dropped much more frequently because of interference rather than congestion. TCP might back off too quickly in this condition because it is optimized to deal with congestion. Their protocol might be great for that first/last hop between your cell phone and the cell tower, but otherwise it undermines the great balance that TCP achieves on our amazing little internet.
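    The "slowly faster and faster, then drop fast" behaviour described above is additive increase / multiplicative decrease. A toy trace of the congestion window (illustrative numbers, not a real stack):

        import random

        random.seed(7)
        cwnd, trace = 1.0, []
        for rtt in range(60):
            trace.append(round(cwnd, 1))
            if random.random() < 0.05:     # a loss this round trip...
                cwnd = max(1.0, cwnd / 2)  # ...halves the window at once
            else:
                cwnd += 1.0                # otherwise +1 segment per RTT
        print(trace)                       # sawtooth: slow climb, sharp drops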

  • Re:nonsense (Score:4, Insightful)

    by Aragorn992 ( 740050 ) on Thursday October 21, 2004 @03:25PM (#10590985)
    They are suggesting a better protocol which is still in the labs for the most part.

    And what makes this a better protocol? Its vast history of being a solid, reliable protocol with its massive amount of industrial testing? Oh wait, that's TCP.

    Frankly, new stacks that look better than old ones are a dime a dozen. Until you test them in the real world you will never know, and why bother changing TCP when it does a damn fine job right now?

    IPv6 was implemented primarily for one reason: addressing the shortage of IP addresses.
  • by Woody77 ( 118089 ) on Thursday October 21, 2004 @03:33PM (#10591069)
    But not from a computation standpoint. Yes, you lose some bandwidth due to FEC, but if you don't drop the packets, you also don't need to compute the missing data from the FEC data.

    AFAIK, the point of FEC is that you need to send less data than you actually lose, but that's getting into lots of weird math theory that I'm still coming up to speed on with FEC (looking at it for an application at work).

    The other advantage of FEC over NACK that I can see is that on a high-latency connection, or a server dealing with lots of clients, the server doesn't need to get involved. The client can compute the missing piece, maybe in less time than the resend would take.
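    Rough goodput arithmetic behind that trade-off, under idealized assumptions (uniform random loss, an ideal code, coding CPU cost ignored):

        link_mbit = 100.0
        loss = 0.03              # 3% random packet loss

        # FEC: spend a fixed slice of the link on repair packets and never
        # stall; an ideal code recovers fully when overhead >= loss.
        fec_overhead = 0.05
        fec_goodput = link_mbit * (1 - fec_overhead)

        # ARQ: spend bandwidth only on what is actually lost, but every
        # loss costs at least one extra round trip to repair.
        arq_goodput = link_mbit * (1 - loss)

        print(f"FEC ~{fec_goodput:.0f} Mbit/s, ARQ ~{arq_goodput:.0f} Mbit/s")

    ARQ wins slightly on raw throughput in this toy case; FEC wins on latency and on not needing the sender in the loop, which is the parent's point.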
  • Re:nonsense (Score:2, Insightful)

    by bogomipz ( 807251 ) on Thursday October 21, 2004 @03:47PM (#10591216)
    Using UDP/IP with retransmission in software has been done many times. Look at FTP.

    Umm, since when is TCP not software?
  • by IBitOBear ( 410965 ) on Thursday October 21, 2004 @04:35PM (#10591805) Homepage Journal
    It's the old math-speak usage of "/", as in "three over four" is "3/4".

    So TCP/IP is "Transmission control protocol over internet protocol" or just "TCP over IP".

    While you don't see it much, this also implies "HTTP/TCP/IP" and so forth.

    So just as "voice over IP" doesn't make voice and IP the same, TCP over IP doesn't really relate one to the other.

    After all, if you use dialup, the first hop of your connection is typically TCP or UDP over PPP (Point to Point Protocol) and the ISP acts as a gateway from the PPP to the IP semantics.

    (I'm guessing the immediate parent knew this, but this reply "hangs best" here in the thread. 8-)
  • by johne_ganz ( 750500 ) on Thursday October 21, 2004 @06:36PM (#10593105)
    Up front, I believe that their product/solution/answer to the "TCP Problem" is specious, at best.

    First order of business is "What is the problem?" This is never really adequately defined, as near as I can tell: some vague references to "problems" with TCP, without any objective data to support the statements. Since a solution is being proposed to a problem, we need some way to measure the problem objectively. How else can you tell if your proposed fix is better or worse than the baseline? The problem is never defined in any detail, certainly never to a point where you can measure it, and there is no objective data; that makes the whole thing totally suspect right from the start. In a nutshell: extraordinary claims without extraordinary proof.

    Regardless, there are some nuggets here and there that we can pick apart (from the website):

    #1 Rateless Internet is an Internet transport protocol implemented over UDP, meant as a better replacement of TCP. TCP's legacy design carries a number of inefficiencies, the most prominent of which is its inability to utilize most modern links' bandwidth.

    #2 This problem stems from the fact that TCP calculates the congestion of the channel based on its round-trip time. The round-trip time, however, reflects not only the congestion level, but also the physical length of the connection. This is precisely why TCP is inherently unable to reach optimal speeds on long high-bandwidth connections.

    #3 A secondary, but just as impairing, property of TCP is its inability to tolerate even small amounts (1% - 3%) of packet loss. This additionally forces TCP to work at safe and relatively low transmission speeds with under 1% loss rates. Nevertheless, our extended real-life measurements show that highest throughput is generally achieved at speeds with anywhere between 3% and 5% loss.

    Point #1 is a claim without any evidence to back it up. Now, most of us may think there is a hint of truth to this from experience. The danger lies in assuming that the problem is TCP. It could be a bad implementation of the stack you're using, some kind of packet loss in the path, or you're just trying to send too much data given the conditions of the test (i.e., a Pentium 90MHz trying to fill a 10Gb/sec pipe). There are just too many variables that influence the amount of link utilization to make such a generalized statement as "TCP is inherently unable to reach optimal speeds on long high-bandwidth connections". There was a time when two Sun machines sitting next to each other couldn't utilize the entire capacity of a half-duplex ethernet, for example. Yet it's not uncommon these days to have more bandwidth to your home via cable/DSL and make full use of it, even on cross-country or intercontinental links, without a whole lot of change to the fundamentals of TCP. Therefore, it's unreasonable to conclude that TCP is the cause of any of the problems, perceived or real, considering the history of TCP and its scalability over time, and the total lack of any effort to rule out other causes.

    Point #2 is a case where a little bit of knowledge is much more deadly than none at all. I'd be willing to chalk this one up to a marketing person's copy rather than any particular design insight by the developers, but it's spooky that it's there. At a fundamental level, queueing theory tells us that the average time for a unit to be processed in a queue is related to the depth of the queue. If there's nothing, or effectively nothing, in the queue when your packet arrives, there is no delay in sending the packet. However, the delay grows as the depth of the queue grows. This shows up as variability in the round-trip times of packets relative to each other, also known as "jitter". Therefore, no jitter means there are no queueing delays, which means that there isn't much traffic competing for access to the links. High jitter means there are a lot of packets competing for access to a link, which is therefore much more likely to be congested.
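    The queueing claim above is textbook: for an M/M/1 queue, the mean time in the system is 1/(mu - lambda), so delay, and with it round-trip jitter, blows up as utilization approaches 1. Illustrative numbers:

        service_rate = 10000.0            # packets/s the link can drain
        for util in (0.1, 0.5, 0.9, 0.99):
            arrival_rate = util * service_rate
            delay_ms = 1000.0 / (service_rate - arrival_rate)
            print(f"utilization {util:4.0%}: mean delay {delay_ms:7.2f} ms")
        # ~0.11 ms at 10% load vs 10 ms at 99%: that growth is exactly what
        # shows up as jitter when a path nears congestion.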

  • by Michael Hunt ( 585391 ) on Thursday October 21, 2004 @11:11PM (#10594832) Homepage
    <quoth the poster>After all, if you use dialup, the first hop of your connection is typically TCP or UDP over PPP (Point to Point Protocol) and the ISP acts as a gateway from the PPP to the IP semantics.</quoth>

    No. Your connection to your ISP is IP encapsulated in PPP frames. PPP is a session-based, authenticated layer 2 framing type, loosely based on HDLC. The PPP frames are discarded at the ISP's NAS/LNS, which then forwards the inner IP goo to the correct destination (usually its default route.)
