Replacing TCP? (444 comments)

olau writes "TCP, the transport protocol that most of the Internet is using, is getting old. These guys have invented an alternative that combines UDP with rateless erasure codes, which means that packets do not have to be resent. Cool stuff! It also has applications for peer-to-peer networks (e.g. for something like BitTorrent). They are even preparing RFCs! The guy who started it, Petar Maymounkov, is of Kademlia fame."
This discussion has been archived. No new comments can be posted.

  • by Andreas(R) ( 448328 ) on Thursday October 21, 2004 @11:42AM (#10587666) Homepage
    The website of these so-called "experts" is down; it's been slashdotted! (Ironic?)

    Here is a summary of their technology copied from their website:

    Rateless Internet

    The Problem

    Rateless Internet is an Internet transport protocol implemented over UDP, meant as a better replacement for TCP. TCP's legacy design carries a number of inefficiencies, the most prominent of which is its inability to utilize most modern links' bandwidth. This problem stems from the fact that TCP calculates the congestion of the channel based on its round-trip time. The round-trip time, however, reflects not only the congestion level, but also the physical length of the connection. This is precisely why TCP is inherently unable to reach optimal speeds on long high-bandwidth connections.

    A secondary, but just as impairing, property of TCP is its inability to tolerate even small amounts (1% - 3%) of packet loss. This additionally forces TCP to work at safe and relatively low transmission speeds with under 1% loss rates. Nevertheless, our extended real-life measurements show that highest throughput is generally achieved at speeds with anywhere between 3% and 5% loss.

    The Solution

    By using our core coding technology we were able to design a reliable Internet transmission protocol which can circumvent both of the aforementioned deficiencies of TCP, while still remaining TCP-friendly. By using encoded, rather than plain, transmission we are able to send at speeds with any packet loss level. Rateless coding is used in conjunction with our Universal Congestion Control algorithm, which allows Rateless Internet to remain friendly to TCP and other congestion-aware protocols.

    Universal Congestion Control is an algorithm for transmission speed control. It is based on a simple and clean idea. Speed is varied in a wave-like fashion. The top of the wave achieves near-optimal throughput, while the bottom is low enough to let coexisting protocols like TCP smoothly receive a fair share of bandwidth. The time lengths of the peaks and troughs can be adjusted parametrically to achieve customized levels of fairness between Rateless Internet and TCP.

    The Rateless Internet transport is now available through our Rateless Socket product in the form of a C/C++ socket library. Rateless Internet is ideal for Internet-based applications, running on the network edges, that require high bandwidth in a heterogeneous environment. It was specifically built with peer-to-peer and live multimedia content delivery applications in mind.
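    The summary above gives no numbers, so here is only a toy rendering of that wave-shaped schedule (Python; every rate and duration below is invented for illustration): the sender alternates between a probing peak rate and a low trough rate, and the peak/trough lengths are the fairness knobs the text mentions.

        def wave_rate(t, peak_rate=10e6, trough_rate=1e6, peak_len=2.0, trough_len=1.0):
            """Piecewise-constant 'wave': hold the peak rate for peak_len seconds,
            drop to the trough rate for trough_len seconds, then repeat."""
            period = peak_len + trough_len
            return peak_rate if (t % period) < peak_len else trough_rate

        # Sampled every 100 ms over one 3-second period: 2 s at the peak, 1 s in the trough.
        samples = [wave_rate(t / 10.0) for t in range(30)]
        print(sum(samples) / len(samples) / 1e6, "Mbit/s average")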

  • by daserver ( 524964 ) on Thursday October 21, 2004 @11:45AM (#10587725) Homepage
    The link in the post is coralised [nyu.edu]. Maybe someone is blocking your outgoing port 8090 for you :-)
  • by DAldredge ( 2353 ) <SlashdotEmail@GMail.Com> on Thursday October 21, 2004 @11:46AM (#10587739) Journal
    Not all of Digital's network tech was sane...

    DEC MMJ

    Digital Equipment Corporation's proprietary Modified Modular Jack. It is identical to a standard USOC 6-position jack, except that the locking tab has been offset to the right to prevent an MMP (modified modular plug) from fitting into a USOC jack. (Plug: AMP5-555237-2.)
  • by RealAlaskan ( 576404 ) on Thursday October 21, 2004 @11:51AM (#10587814) Homepage Journal
    If it's not an open protocol (if they charge for use) ...

    It doesn't look good. Their webpage [rateless.com] says: ``We are planning to release Rateless Codes for free, non-commercial use, in open source under the GNU Public License.'' They are seriously confused about the terms of their own license. The GPL doesn't lend itself to ``free, non-commercial use'': it lets the licensee use and distribute freely, at any price, for any use.

    Either they'll lose that ``non-commercial use'' crap, or this will go nowhere.

  • by jfengel ( 409917 ) on Thursday October 21, 2004 @11:54AM (#10587852) Homepage Journal
    The front page says "not patented". They could claim it as a trade secret, but not if they plan to introduce an RFC. So even if they don't wish to open their sources, there will be open implementations very quickly.

    Assuming there aren't any underlying IP issues. I'm not aware of any patents on the forward error correcting codes they're using, but that doesn't mean they don't exist. And assuming some jackass doesn't say, "Here's a thing and it's not patented; I'll patent it and then license it back to the guys who did the work!" The patent office seems to do a lousy job of checking non-patent prior art.
  • by JohnGrahamCumming ( 684871 ) * <[slashdot] [at] [jgc.org]> on Thursday October 21, 2004 @11:55AM (#10587872) Homepage Journal
    It was quite a complicated set of protocols and IIRC the final versions were using TCP/IP as their transport layer and below. The versions before that were using OSI.

    http://en.wikipedia.org/wiki/DECnet

    John.
  • by Anonymous Coward on Thursday October 21, 2004 @11:58AM (#10587911)
    Tried their "Rateless Copy" utility, transferring a 5.8 MB binary file from my web server in Texas to my local connection in Toronto.

    With Rateless Copy: between 31 and 41 seconds at an average of 200k/s, and the resulting file was corrupted. Tried it again to be sure; same result.

    Without Rateless Copy (plain HTTP download): 8 seconds at an average of 490k/s, and the resulting file works fine, as expected.

    Sorry, but I don't think it's all that great.
  • by Another MacHack ( 32639 ) on Thursday October 21, 2004 @11:59AM (#10587926)
    You'd be right if they were talking about an error correcting code designed to repair damage to a packet in transit.

    They're actually talking about erasure correction, where each symbol is a packet as a whole. In a very simple form, you send a packet sequence like this:

    A B (A^B) C D (C^D)

    So if you lose packet A, you reconstruct it from (B ^ (A^B)) = A. This simple scheme increases the number of packets sent by 50%, but allows you to tolerate a 33% loss, presuming you don't lose bursts of packets. There are more sophisticated schemes, of course, and there are various tradeoffs of overhead versus robustness.
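    A minimal sketch of that scheme in Python (packet contents and sizes made up for illustration): every third packet carries the XOR of the previous two, so any single loss within a group can be rebuilt at the receiver.

        def xor_bytes(a, b):
            """XOR two equal-length byte strings."""
            return bytes(x ^ y for x, y in zip(a, b))

        def encode(packets):
            """For each pair of equal-length packets A, B emit A, B, A^B."""
            out = []
            for a, b in zip(packets[0::2], packets[1::2]):
                out.extend([a, b, xor_bytes(a, b)])
            return out

        def recover_group(group):
            """group holds one (A, B, A^B) triple with None marking a lost
            packet; at most one loss per group is repairable."""
            a, b, p = group
            if a is None:
                a = xor_bytes(b, p)   # A = B ^ (A^B)
            elif b is None:
                b = xor_bytes(a, p)   # B = A ^ (A^B)
            return a, b

        # Lose packet A in transit and rebuild it from the other two.
        a, b = b"hello wor", b"ld, again"
        sent = encode([a, b])
        assert recover_group([None, sent[1], sent[2]]) == (a, b)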
  • by hamanu ( 23005 ) on Thursday October 21, 2004 @12:06PM (#10588000) Homepage
    You are a fucking pompous ass...
    no wait, sorry, you hit one of my buttons.

    ECC codes CAN and DO cover packet loss if you interleave the ECC information across packets, instead of just generating it for a single packet. In this scenario lost packets are called "erasures", and their coding is "rateless erasure coding". It will work just fine.
  • by Oscaro ( 153645 ) on Thursday October 21, 2004 @12:07PM (#10588010) Homepage
    Also, from the site [nyud.net] we know that:

    Rateless Socket will be made available to select companies and institutions on a per-case basis. To find out more, please contact us.
  • Re:Old!=bad (Score:3, Informative)

    by BridgeBum ( 11413 ) on Thursday October 21, 2004 @12:08PM (#10588019)
    TCP is a good protocol, but it is far from perfect. It does have a weakness in its windowing mechanism, which can vastly reduce throughput over long distances or even mild latency on long-duration transfers. (Think FTP.) This is probably the problem they are referring to with #1.

    For #2, this could simply be retransmission problems. TCP has a strict sequential ordering of packets, so a packet lost in the stream could cause other packets to be discarded, forcing more retransmissions than were technically required. This could be considered a 'weakness' of the protocol, since under some circumstances it could be desirable to receive the packets in a random order and allow assembly at the endpoint. (Think BitTorrent.) So it's not unreasonable to examine advances in networking protocols. As for widespread adoption... well, that's another story altogether.
  • Re:Coral Cache? (Score:0, Informative)

    by Anonymous Coward on Thursday October 21, 2004 @12:12PM (#10588061)
    Because YOU are still using TCP to connect to them to view their website. If we were all using their Rateless Internet protocol, perhaps they wouldn't be /.ed.

    [/poor humor]
  • by AaronW ( 33736 ) on Thursday October 21, 2004 @12:13PM (#10588072) Homepage
    While there are a number of issues with TCP, I think it would be much better in the long run to work on fixing TCP rather than replace it. That way all the existing apps can take advantage of the fixes.

    One thing that bothers me is that I see ISPs applying policing to their subscribers' bandwidth. Policing is quite unfriendly to TCP, unlike, say, shaping. With policing, a router decides whether to pass, drop, or mark a packet based on whether it exceeds certain bandwidth constraints. Shaping, on the other hand, will buffer packets and introduce additional latency, thus helping TCP find the sweet spot. Of course shaping will also drop, since nobody provides infinite buffer space. (A toy sketch of the difference follows at the end of this comment.)

    TCP is relatively easy to extend. There are still some free flag bits and additional fields can be added to the TCP header if needed.

    -Aaron
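    To make the policing/shaping contrast concrete, here is a toy token-bucket sketch (Python; the rates and burst sizes are made up): the policer drops whatever arrives without enough tokens, while the shaper queues the excess and releases it later, trading added latency for avoided loss.

        import collections

        class Policer:
            """Token bucket that passes or drops; never queues."""
            def __init__(self, rate_bytes_per_s, burst_bytes):
                self.rate, self.burst = rate_bytes_per_s, burst_bytes
                self.tokens, self.last = burst_bytes, 0.0

            def _refill(self, now):
                self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
                self.last = now

            def packet(self, now, size):
                self._refill(now)
                if size <= self.tokens:
                    self.tokens -= size
                    return "pass"
                return "drop"                      # out-of-profile traffic is simply lost

        class Shaper(Policer):
            """Same bucket, but excess packets wait in a queue instead of dying."""
            def __init__(self, rate_bytes_per_s, burst_bytes):
                super().__init__(rate_bytes_per_s, burst_bytes)
                self.queue = collections.deque()

            def packet(self, now, size):
                self._refill(now)
                self.queue.append((now, size))
                delays = []
                while self.queue and self.queue[0][1] <= self.tokens:
                    arrived, sz = self.queue.popleft()
                    self.tokens -= sz
                    delays.append(now - arrived)   # queueing delay TCP can sense and adapt to
                return delays

        # Offer a 1500-byte packet every millisecond against a 1 Mbyte/s profile:
        pol, shp = Policer(1e6, 3000), Shaper(1e6, 3000)
        verdicts = [pol.packet(i / 1000.0, 1500) for i in range(20)]
        delays = [d for i in range(20) for d in shp.packet(i / 1000.0, 1500)]
        print(verdicts.count("drop"), "drops when policing;",
              len(delays), "packets delivered late but intact when shaping")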
  • by freqres ( 638820 ) on Thursday October 21, 2004 @12:18PM (#10588151)
    I have to agree with you about having doubts about this. Vinton Cerf [wikipedia.org] and the others that came up with TCP were/are real smart fellars and put a lot of thought into the design. Thanks to guys like Shannon [wikipedia.org], Hamming [wikipedia.org], Hartley [wikipedia.org] and Nyquist [wikipedia.org], we have a pretty thorough understanding of the limits of computer networks and sending/receiving information. Unless some wacky/spooky new way is figured out in the far reaches of physics, I don't see big improvements happening in the fundamental protocols.
  • by Bluefirebird ( 649667 ) on Thursday October 21, 2004 @12:18PM (#10588154)
    (NOT) Everyone knows that TCP has problems and for many years people have been developing transport protocols that enhance or replace TCP.
    These guys haven't invented anything new. There are many flavours of TCP with different congestion mechanisms and there is a special kind of transport protocol that solves most problems...
    I'm talking about SCPS-TP, supported by NASA; it performs very well on high bit-error links (like satellites) and it also copes with high delay. The good thing about SCPS-TP is that it's compatible with TCP, because it is basically an extension of TCP.
    There is another problem with using UDP based transport protocols... they usually have low priority in routers (probably because you can use UDP for VoIP...)
  • by buck68 ( 40037 ) on Thursday October 21, 2004 @12:24PM (#10588253) Homepage

    This work seems to be about two things (which I am not sure I see a strong connection between): lowering transport latency, and using available bandwidth better. The latter has been the subject of many papers in the last few years. There are now several serious proposals of how to fix TCP with respect to long fat pipes. They don't seem to support the idea that retransmissions are harmful. So I'm going to talk about the first issue, transport latency.

    The idea of using error-correcting codes (ECC) to eliminate the need for retransmissions is an interesting one. The main benefit is to reduce transport latency (the total time it takes to send data from application A to B). Here is another paper [mit.edu] that proposes a similar idea, applied at a different level of the network architecture.

    The root problem here is that network loss leads to increases in the transport latency experienced by applications. In TCP, the latency increases because TCP will resend data that is lost. That means at least one extra round-trip time per retransmission. This "Rateless TCP" approach uses ECC so that the lost data can be recovered from other packets that were not dropped. In this way, retransmitting packets may not be needed. I say may, because there is a loss rate threshold beyond which the capability of the ECC is exceeded, and retransmission will become necessary to ensure reliability. But as long as the loss rate is below the threshold, retransmissions will not be necessary. Note that the more "resilient" you make the ECC (meaning supporting a higher loss threshold), the more work will be needed at the ends. So you are not eliminating latency due to packet loss, you are simply moving it away from packet retransmission into the process of ECC. However, if you've got good ECC, the total latency will go down.

    The ECC approach may be a nice middle ground. But the ultimate solution for minimizing latency is probably a combination of active queue management (AQM) and explicit congestion notification (ECN). Unlike ECC, this approach really would aim to eliminate packet loss in the network due to congestion, and therefore completely eliminate the associated latency. Either ECC or regular TCP would benefit. In a controlled testbed using AQM and ECN, I've completely saturated a network with gigabits of traffic, consisting of thousands of flows, and had virtually no packet loss.

    It should also be noted that retransmission is NOT the dominant source of transport latency in TCP. I am a co-author on a paper [ieee.org] that shows another way (other than eliminating retransmission) to greatly reduce the transport latency of TCP. The basic idea is that the send-side socket buffer turns out to be the dominant source of latency (data sits in the kernel socket buffer waiting for transmission). In the above paper, we show how a dynamic socket buffer (one that tracks the congestion window) can dramatically reduce the transport latency of TCP. We allow applications to select this behaviour through a TCP_MINBUF socket option.

    -- Buck

  • Re:Old!=bad (Score:5, Informative)

    by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Thursday October 21, 2004 @12:24PM (#10588264)
    TCP has a strict sequential ordering of packets, so a packet lost in the stream could cause other packets to be discarded

    That's what selective ack is for. It's pretty old hat at this point, everyone has it implemented.

  • by RealAlaskan ( 576404 ) on Thursday October 21, 2004 @12:32PM (#10588439) Homepage Journal
    The General Public License gives you a license to use that code under certain terms.

    Almost true. It gives you a license to distribute that code, under certain terms. No license is needed to use a copy. Read section 5 of the GPL [gnu.org] for an elegant confirmation of that.

    When they say "free, non-commercial use", and they talk about the GPL, they are making sense. The Linux kernel, which is GPL, may use their implementation.

    Sorry, you're way off here. Redhat sells Linux, and their customers use it for commercial purposes. Therefore, the Redhat version of the Linux kernel can't use their implementation. The issue is moot, however, since the non-commercial restriction would make their license non-GPL-compatible. Their implementation couldn't be included even in a kernel which would see non-commercial use only.

    Microsoft cannot use their implementation in any of its Windows OSes ...

    Wrong. If MS wanted to make a version which they would give free of charge to private individuals and charitable organizations, they could use it, since that would be entirely non-commercial, both in distribution and in use.

    ... without releasing the OS kernel under terms which are compatible with the GPL.

    For MS, that would be a problem, but it's the GPL part, not the non-commercial part, that would bite them. As I said above, the non-commercial restriction would render this license incompatible with the GPL.

    The GPL does not lend itself to "commercial use", because the "standard" model of software licensing is paying a price per use/copy of the binary, and the GPL doesn't work this way.

    Sorry, you've misconstrued non-commercial. Non-commercial typically means not used in commerce. Believe it or not, it is possible to sell software for non-commercial use. If you run your business using the software, that's commercial use. By the way, you should tell Redhat and Novell that bit about the GPL not working for ``paying a price per use/copy of the binary''; that'll be a revelation to them.

  • Well... (Score:5, Informative)

    by jd ( 1658 ) <[moc.oohay] [ta] [kapimi]> on Thursday October 21, 2004 @12:33PM (#10588458) Homepage Journal
    You could always use the existing "Reliable Multicast" protocols out there. Not only do those work over UDP, but you can target packets to multiple machines. IBM, Lucent, Sun, the US Navy and (yeek!) even Microsoft have support for Reliable Multicast, so it's already got much better brand-name support than this other TCP alternative.


    So others can have fun slashdotting other technologies, here are some websites. There are probably others, but this should keep those who do really want to move away from TCP happy.



  • Re:Old!=bad (Score:5, Informative)

    by ebuck ( 585470 ) on Thursday October 21, 2004 @12:42PM (#10588632)
    Most modern TCP/IP implementations use a sliding window algorithm internally, which caches the out-of-order packets while trying to recover the missing packet.

    So you're really only concerning yourself with ordered delivery when the delay in the natural order is greater than that of the sliding window (the cache of received packets that you're not going to tell the software about just yet).

    This takes care of trivial things where packets arrive "just barely" out of order, but in a true packet loss situation, can block the channel for quite some time.

    So the strict sequence you refer to is the sequence the packets are presented to the next layer (usually the application in Ethernet), not the sequence the packets are received by the machine.
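    A toy receive-side reassembly buffer (Python, simplified to one segment per sequence number) shows the point: out-of-order segments are cached, but nothing past a missing segment is handed up until the hole is filled.

        class ReassemblyBuffer:
            def __init__(self):
                self.next_seq = 0       # next in-order segment the application expects
                self.cached = {}        # out-of-order segments held back

            def receive(self, seq, data):
                """Returns whatever can be delivered in order right now."""
                if seq != self.next_seq:
                    if seq > self.next_seq:
                        self.cached[seq] = data     # a hole is blocking delivery
                    return []
                delivered = [data]
                self.next_seq += 1
                while self.next_seq in self.cached: # drain anything the hole was blocking
                    delivered.append(self.cached.pop(self.next_seq))
                    self.next_seq += 1
                return delivered

        buf = ReassemblyBuffer()
        assert buf.receive(1, "B") == []                 # arrives early, cached
        assert buf.receive(2, "C") == []                 # still blocked by missing segment 0
        assert buf.receive(0, "A") == ["A", "B", "C"]    # hole filled, everything drains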
  • by mbone ( 558574 ) on Thursday October 21, 2004 @12:51PM (#10588775)
    TCP uses packet loss (NOT round trip times - where did that come from?) to signal congestion, and thus to implement congestion control. This does not work well in a typical wireless (802.11X, 802.16, etc) environment, where packet losses are to be expected. TCP also does not co-exist well with lots of UDP streaming.

    So, there IS a need for a TCP replacement. However, the one being developed in the IETF [ietf.org] is DCCP [ietf.org]. Basically, the idea is separating congestion control from packet loss recovery.

    Maymounkov and company say that they are preparing RFCs (which implies that they intend to submit this to the IETF), but they have not yet done so. So, maybe if they do, and if they offer an unencumbered license, their technology could be used, but it is way too soon to tell.
  • What? (Score:5, Informative)

    by markov_chain ( 202465 ) on Thursday October 21, 2004 @12:51PM (#10588778)
    I don't know what their reasoning is, but both their claims about TCP seem incorrect.

    1. TCP does not use round trip time to calculate any "congestion levels." It increases the connection rate until packets get dropped, presumably because some router in the middle got overloaded.

    2. Packet loss is used as a signal to TCP to slow down because it tried to send too fast. The lost packets are subsequently retransmitted, so TCP can indeed not only tolerate but recover from packet loss. The only real case they have is packet loss due to reasons other than TCP's own aggressive sending rate, such as UDP traffic, wireless links, etc.

    Given these concerns, I can't help but think that they are inventing a protocol that works well only if used on a small scale. TCP is designed to back down if it thinks it's sending too fast, and is not really optimal. One can always hack a pair of TCP nodes to not play by the rules and get more than the fair share, but the problem is that that solution wouldn't work if it were adopted network-wide.

  • by Raphael ( 18701 ) on Thursday October 21, 2004 @12:56PM (#10588854) Homepage Journal
    I'd rather add a thin error checking addition and ask for a retransmission for the occasional dropped packets.

    Congratulations, you have just re-invented TCP with selective acknowledgments (a.k.a. SACK, RFC 2018 [ietf.org], published in October 1996).

    For more discussion on this topic, see also: http://www.icir.org/floyd/sacks.html [icir.org].

  • Re:Old!=bad (Score:5, Informative)

    by Gilk180 ( 513755 ) on Thursday October 21, 2004 @01:04PM (#10588968)
    Actually, TCP increases exponentially until the first packet is dropped. It backs off to half, then increases linearly, until another packet is dropped, backs off to half ...

    This means that a TCP connection uses ~75% of the bandwidth available to it (after all this stabilizes). So if there is only a single tcp connection over a given length, it will be 75% full at best. However, the whole reason for doing a lot of this is to allow many connections to coexist. If you transmit as fast as possible, you will get the highest throughput possible, but you will end up with a lot of dropped packets and won't play nice with others.
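    A rough trace of that behaviour (Python; the link capacity in packets per RTT is made up): the window grows exponentially during slow start, then linearly, and is halved whenever the toy link drops a packet. Averaging the sawtooth lands near the ~75% figure above.

        def aimd_trace(capacity=64, rtts=200):
            """One flow; a packet is 'dropped' whenever cwnd exceeds link capacity."""
            cwnd, ssthresh = 1.0, float("inf")
            trace = []
            for _ in range(rtts):
                trace.append(cwnd)
                if cwnd > capacity:         # queue overflow: loss detected
                    ssthresh = cwnd / 2
                    cwnd = ssthresh         # multiplicative decrease
                elif cwnd < ssthresh:
                    cwnd *= 2               # slow start: exponential growth
                else:
                    cwnd += 1               # congestion avoidance: additive increase
            return trace

        trace = aimd_trace(capacity=64)
        steady = trace[50:]                          # skip the slow-start transient
        print(sum(steady) / len(steady) / 64)        # settles around 0.75-0.8 of capacity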
  • by shadowmatter ( 734276 ) on Thursday October 21, 2004 @01:12PM (#10589119)
    I suspect the exact details of the erasure codes they're using can be found here [ucla.edu], which by no coincidence is written by Petar Maymounkov and David Mazieres (both authors of the original Kademlia paper [ucla.edu]). A good primer for erasure codes would be to read up on Reed-Solomon codes [ucla.edu], which seek to provide similar end results to erasure codes.

    Rateless codes are important because they are the first step to a "digital fountain" model of data transfer. The idea is this: say you have a file of 10KB. Using erasure codes, you can "oversample" this file -- that is, turn this file into 40 1KB chunks. Now by collecting ANY 10 of these 1KB chunks, you can reconstruct the original file.

    The reason the digital fountain is so cool (and the reason for its name) is because it would make the most amazing BitTorrent client ever. Why? In BitTorrent, you're doing things like choking/unchoking, aggressively seeking the rarest pieces, and hoping that some guy who has the ONLY COPY of piece X doesn't leave the network before it is replicated. Using Erasure Codes, everyone would just blindly get whatever data is sent to them (where each data unit is an "oversample"), and blindly forward it on all outbound links. But if you're getting data in this manner, isn't there a chance you'd get the same piece of data twice (which is like getting the same piece twice in BitTorrent, which is no use)? If you oversample enough, you can make the chance of this happening approach 0. Note in this scheme, there's no choking or unchoking here (so you'll never go through a period of time on a rare torrent where EVERYONE is choking you, and your download rate is 0KB/s), and you don't have to worry about that one guy leaving the swarm with some critical piece -- not just because you can always reconstruct it, but also because that piece is no more critical than any other.

    It's beautiful, really. So why aren't we doing it? Simple -- erasure codes are computationally expensive to compute, although they're getting easier as new sorts of erasure codes are being developed, and PCs get faster. The first big breakthrough was with Tornado codes [ucla.edu], if I remember correctly. Anyway, we'll see what the future holds...

    - shadowmatter
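    For the curious, a toy "any k symbols suffice" encoder/decoder in Python. It uses dense random XOR combinations plus Gaussian elimination over GF(2), which is much slower than the sparse Online/LT-style codes the paper describes, but it shows the oversampling idea: keep collecting encoded symbols, shrug off losses and duplicates, and reconstruct once k independent ones arrive.

        import os, random

        def xor_bytes(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        def encode_symbol(chunks, rng):
            """XOR together a random non-empty subset of the k source chunks."""
            k = len(chunks)
            mask = 0
            while mask == 0:
                mask = rng.getrandbits(k)
            data = bytes(len(chunks[0]))
            for i in range(k):
                if mask >> i & 1:
                    data = xor_bytes(data, chunks[i])
            return mask, data              # the mask records which chunks were combined

        class Decoder:
            """Incremental Gaussian elimination over GF(2)."""
            def __init__(self, k):
                self.k = k
                self.rows = {}             # pivot bit index -> (mask, data)

            def add(self, mask, data):
                """Fold one encoded symbol in; returns True if it added new information."""
                while mask:
                    pivot = (mask & -mask).bit_length() - 1   # lowest set bit
                    if pivot not in self.rows:
                        self.rows[pivot] = (mask, data)
                        return True
                    pmask, pdata = self.rows[pivot]
                    mask ^= pmask                             # reduce against existing pivot row
                    data = xor_bytes(data, pdata)
                return False                                  # redundant combination

            def done(self):
                return len(self.rows) == self.k

            def recover(self):
                # Back-substitute, highest pivot first, until each row is one plain chunk.
                for pivot in sorted(self.rows, reverse=True):
                    pmask, pdata = self.rows[pivot]
                    for other, (omask, odata) in self.rows.items():
                        if other != pivot and omask >> pivot & 1:
                            self.rows[other] = (omask ^ pmask, xor_bytes(odata, pdata))
                return b"".join(self.rows[i][1] for i in range(self.k))

        k, chunk_len = 10, 1024
        original = os.urandom(k * chunk_len)
        chunks = [original[i * chunk_len:(i + 1) * chunk_len] for i in range(k)]
        rng, dec, sent = random.Random(1), Decoder(k), 0
        while not dec.done():
            mask, data = encode_symbol(chunks, rng)
            sent += 1
            if rng.random() < 0.3:         # simulate 30% packet loss: just drop the symbol
                continue
            dec.add(mask, data)
        assert dec.recover() == original
        print("recovered", k, "chunks after sending", sent, "encoded symbols")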
  • You are correct: you can overcome a burst error of any length with a suitable ECC.

    However, what do you do when you encounter a burst error longer than your ECC design limit? A more complex code may not be the answer for reasons of computational complexity.

    The solution (employed with great success in CD media) is interleaving: instead of sending A1A2A3 B1B2B3 C1C2C3 you send A1B1C1 A2B2C2 A3B3C3. Let's assume the middle of that, A2B2C2, got lost, and you can only recover an error of one third that length. No worries! You've interleaved your data!!

    You deinterleave what you receive from A1B1C1 A3B3C3 into A1<error>A3, B1<error>B3, and C1<error>C3. Each of these can be corrected.

    There are whole families of such interleaved error correcting codes. You can even interleave interleaved codes! CIRC means "cross-interleaved Reed-Solomon code" and covers many of the techniques employed commercially (particularly in CD media).

    But there is a price to pay, and that is latency: you have to have all the data before you can deinterleave, so instead of waiting for, say, three packets, you have to wait for nine.

    Such codes are well-suited for streaming transmissions (if you can tolerate the latency), but I'm not sure that they'd work well for interactive applications.
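    A tiny sketch of that interleaving trick (Python, with toy two-character symbols): interleave three codewords so that a burst wiping out one transmitted group erases only one symbol per codeword, which a per-codeword code able to repair a single erasure can then fix.

        def interleave(codewords):
            """[A1,A2,A3],[B1,B2,B3],[C1,C2,C3] -> A1,B1,C1, A2,B2,C2, A3,B3,C3."""
            return [cw[i] for i in range(len(codewords[0])) for cw in codewords]

        def deinterleave(stream, n_codewords):
            return [stream[i::n_codewords] for i in range(n_codewords)]

        codewords = [["A1", "A2", "A3"], ["B1", "B2", "B3"], ["C1", "C2", "C3"]]
        sent = interleave(codewords)
        # A burst error wipes out the whole middle group (A2 B2 C2) in transit:
        received = [s if s not in ("A2", "B2", "C2") else None for s in sent]
        # After deinterleaving, each codeword is missing only ONE symbol,
        # which is within the repair capability the parent comment assumes.
        for cw in deinterleave(received, 3):
            assert cw.count(None) == 1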

  • by Anonymous Coward on Thursday October 21, 2004 @01:25PM (#10589333)
    Works GREAT! I've used it for some of my customers who need DECnet access and are running workstations and DECnet-enabled applications on Linux. The inaddr structures are virtually identical from the old Digital DECnet-PC libraries, so code is pretty portable.

    However, DECnet Phase IV only has a 16-bit address range. You get larger networks using "hidden areas", which are similar to the 10.* and 192.168.* reserved addresses. It also requires that the NIC MAC address imply the DECnet address, which drops the requirement for ARP support and means all interfaces have the same DECnet address.

    DECnet Phase V, as other posters have noted, has kind of moved from OSI to TCP/IP-style networking.

    DECnet is a nice protocol, but it just doesn't scale easily to Internet-sized networks.
  • by ANTI ( 81267 ) * on Thursday October 21, 2004 @01:41PM (#10589588) Homepage
    Here it was even worse.

    Just transferred a ~600MB (Linux CD) ISO from Europe to Australia.
    rateless-copy did about 400K/s
    scp managed 2.8M/s

    At least both files were intact.
  • by Floody ( 153869 ) on Thursday October 21, 2004 @01:51PM (#10589768)
    This is exactly why FTP uses UDP for its data transfer. So use FTP.

    Right! Well, except for the fact that FTP uses TCP port 20 for data (or the control port minus 1).
  • Re:Old!=bad (Score:3, Informative)

    by j1m+5n0w ( 749199 ) on Thursday October 21, 2004 @01:53PM (#10589809) Homepage Journal
    TCP's bandwidth usage is dependent on the latency of the link. This is due to the fact that sliding windows (the number of packets that are allowed to be out on the network) have a limited size.

    TCP's window size is not a fixed size. There is a maximum, but most "normal" connections are well below it. (Links with very high bandwidth and high delay may reach this limit, which can be increased somewhat with the right TCP extensions.) TCP's window size, in fact, regulates its bandwidth consumption.

    Another problem with TCP is slow start and rapid back-off. IIRC, a TCP connection linearly increases its bandwidth, but will halve it immediately when a packet is lost.

    That's not a bug, it's a feature! AIMD (additive increase, multiplicative decrease) is used because it's been found to work for most people. The multiplicative decrease part may seem drastic (cutting the window in half whenever a packet is lost), but it does do a good job of preventing severe packet loss due to congestion.

    When multiple connections are sharing a link, TCP ends up favoring the lower-latency connection. This happens because when a packet gets lost (often because a link is not fast enough to handle the data being sent over it), the corresponding sender fails to receive an acknowledgement and cuts its window size in half. Window size is incremented by some small value each time a full window of data is acknowledged. The window size of low-latency connections grows much more quickly than that of high-latency connections, so the low-latency connection will have a larger window most of the time.

    This is a known limitation of TCP, but I'm naturally suspicious of anyone who claims to have "fixed it" without offering specific details. Squeezing a little more bandwidth out of TCP is far less important than preventing the Internet from becoming unusable whenever a link becomes congested.

    -jim
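    One crude way to see the RTT bias (a Python simulation; all numbers are invented, and it assumes synchronized losses, which exaggerates the effect compared with real networks): two AIMD flows share one bottleneck, each adding one packet per window per RTT and halving on loss, and the short-RTT flow ends up with most of the bandwidth.

        def rtt_bias(rtt_a=0.01, rtt_b=0.1, capacity=1000.0, seconds=60.0, step=0.001):
            """Two fluid AIMD flows sharing a bottleneck of `capacity` packets/s."""
            cwnd = {"a": 1.0, "b": 1.0}
            rtt = {"a": rtt_a, "b": rtt_b}
            sent = {"a": 0.0, "b": 0.0}
            t = 0.0
            while t < seconds:
                rate = {f: cwnd[f] / rtt[f] for f in cwnd}
                if sum(rate.values()) > capacity:
                    for f in cwnd:              # bottleneck overflows: both flows see a loss
                        cwnd[f] /= 2
                for f in cwnd:
                    sent[f] += rate[f] * step
                    cwnd[f] += step / rtt[f]    # additive increase: +1 packet per RTT
                t += step
            return sent["a"] / sent["b"]

        print(rtt_bias())   # >> 1: the 10 ms flow dwarfs the 100 ms flow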

  • by rick446 ( 162903 ) on Thursday October 21, 2004 @02:06PM (#10590012) Homepage
    Depends on which distro. SourceMage, for instance, bzip2-compresses their ISOs to ~130MB. And that's for a 650MB image, IIRC. Knoppix 3.6, OTOH, contains a 2GB file system compressed down to 700MB, so I wouldn't expect it to compress much (if at all).
  • Re:Old!=bad (Score:4, Informative)

    by Thuktun ( 221615 ) on Thursday October 21, 2004 @02:25PM (#10590293) Journal
    That's what selective ack is for.

    And that only works within the TCP window size. For discretely-sized data transfers on the order of megabytes and gigabytes, delaying transfer of one TCP window (on the order of dozens of kilobytes) is not the best way to get maximum throughput.
  • by dustman ( 34626 ) <dleary@@@ttlc...net> on Thursday October 21, 2004 @02:29PM (#10590352)
    Boy are you idiot

    Thanks... I'm not sure if you're confused or trolling, but I respond anyway:

    Microsoft as long as they didn't change the stack could use it in their kernel without releasing the full kernel code.

    Wrong. The reason the LGPL exists is to address this specific case. If the stack were LGPL licensed, then MS could include it, and only be required to submit any changes they made to the LGPL'ed stack itself.

    As the Stack wouldn't be a derivative of the whole kernel.

    How could you possibly argue that their stack was a derivative of the MS kernel? I assume you meant to say that the MS kernel was a derivative of the stack... But then you are 100% wrong here. If MS used this code, then their kernel would be a derivative work, there's basically no way to get around it.

    Technically, the FSF specifically states that if two programs communicate via a "pipe", it doesn't consider them linked. With any form of dynamic linking they are considered linked, so you couldn't load the stack as a DLL or driver and consider yourself not a derivative work. There is perhaps a tiny bit of leeway if you're a lawyer, where you could say "the FSF thinks loading a driver makes the kernel a derivative work, but I don't, so let's argue that in court".

    Now to be really nice MSFT might have to make available ... an LGPL style wrapper

    You are way off base. GPL apps are specifically allowed to include LGPL stuff. This is a special case exemption from the normal rules of the GPL. LGPL stuff is *not* allowed to include GPL stuff.

  • Re:What? (Score:3, Informative)

    by Percy_Blakeney ( 542178 ) on Thursday October 21, 2004 @06:16PM (#10592939) Homepage
    TCP does not use round trip time to calculate any "congestion levels."

    That's true, but TCP does implicitly rely upon the RTT in its ordinary operation; it does not increase the size of its congestion window until it sees an ACK, which implies a delay equal to the RTT. So, TCP doesn't use RTT in an explicit calculation but RTT does affect how quickly you're able to ramp up your utilization of the link after a packet loss.

    TCP can indeed not only tolerate but recover from packet loss

    Yes, but the question is how quickly does it recover from packet loss? This is a huge problem, especially with the advent of 1-10 gigabit ethernet.

  • Re:HSLink! (Score:1, Informative)

    by Anonymous Coward on Thursday October 21, 2004 @08:51PM (#10594085)
    HSLink was just a modem transfer protocol that supported simultaneous download and upload. That capability has always existed in modems, but traditional protocols like X-, Y- and Zmodem never supported more than 1 transfer at a time. Other simultaneous download and upload protocols were BiModem and Smodem.

    Could something like that be applied to dial-up internet connections?
    Dial-up Internet connections are inherently two-way. Some problems with simultaneous download and upload arise from the way TCP/IP handles acknowledging packets - if you're uploading at full speed, the ACK packets don't get through as well as they should and download speeds suffer. With some OSes (like OpenBSD) you can give ACK packets higher priority and overcome this problem, but anyway... HSLink wasn't magic.
