Turbo Codes Promise Better Wireless Transmission

captain igor writes "IEEE is running a story about two French professors that have created a new class of encoding, called 'Turbo Codes,' that will allow engineers to pass almost twice as much data through a given communications channel, or equivalently, the same amount of data at half the power. The new codes allow the Shannon Limit (the theoretical maximum capacity of a channel) to be approached to within, currently, 0.5 dB. Scientists hope that this breakthrough will revolutionize wireless communications, especially with the coming reclamation of large swaths of the EM spectrum." As the article points out, such codes are already in use, but seem poised for much wider implementation.

  • Finally!!! (Score:4, Funny)

    by JustinXB ( 756624 ) on Wednesday March 10, 2004 @10:34AM (#8521244)
Finally, Milo will be able to complete Synapse for Bill Gates.

    Oh wait...

  • Odd wording (Score:5, Informative)

    by GMontag ( 42283 ) <gmontag@guymontag. c o m> on Wednesday March 10, 2004 @10:35AM (#8521250) Homepage Journal
The story is a bit misleading. "IEEE is running a story about two French professors that have created a new class of encoding" is an odd way of stating that it was invented eleven years ago, as the story states:

    It happened a decade ago at the 1993 IEEE International Conference on Communications in Geneva, Switzerland. Two French electrical engineers, Claude Berrou and Alain Glavieux, made a flabbergasting claim: they had invented a digital coding scheme that could provide virtually error-free communications at data rates and transmitting-power efficiencies well beyond what most experts thought possible.

It also seems to be making a natural progression into new areas, beginning with satellite transmissions 11 years ago and making its way into other digital wireless applications along the way.

    Thanks to timothy for almost clearing this up :)
    • by nosphalot ( 547806 ) <nosphalotNO@SPAMnosphalot.com> on Wednesday March 10, 2004 @11:00AM (#8521489) Homepage
      Yea, I thought that was odd. When I worked at Motorola about 4 years ago we had a turbo encoder/decoder in a prototype of a 3G cellphone.

      Wonder how long until the story about a revolutionary new compression scheme called LZW hits the frontpage?

    • It gets better (Score:5, Interesting)

      by s20451 ( 410424 ) on Wednesday March 10, 2004 @11:00AM (#8521496) Journal
      Briefly, the big problem in data communication is achieving the Shannon limit, which is the maximum theoretical data rate at which information can be transmitted with arbitrarily low probability of error. Shannon proved his result in 1948, but until the Turbo guys, nobody knew how to achieve it.

      The main problem is that optimal decoding of any non-trivial code is NP-hard, which has been known for about 30 years now (i.e., the only known algorithm has exponential complexity in the code length). The Turbo breakthrough was to show that a suboptimal decoder with O(n) complexity for code length n could nonetheless achieve excellent results. This is the so-called "Turbo principle".

      There is an even "newer" class of codes called Low-Density Parity-Check Codes that can beat turbo codes. Turbo codes have a small gap to the Shannon limit, and these new codes can potentially eliminate the gap. Small gains are a big deal; the rule of thumb is that 1 dB of gain is equal to a million dollars of annual revenue for a wireless provider.

      The twist is that these LDPC codes were actually proposed in a 1963 PhD thesis, but were disregarded as beyond the computational abilities of the time. They were only "rediscovered" in 1996, after the Turbo code furore.
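
      If you want to see where those dB figures come from, here is a minimal sketch (plain Shannon theory for the AWGN channel, not tied to any particular code) of the minimum Eb/N0 at a given spectral efficiency and what closing a 0.5 dB gap means:

```python
import math

def shannon_min_ebno_db(eta):
    """Minimum Eb/N0 (in dB) for error-free transmission on an AWGN channel
    at spectral efficiency eta (bits/s/Hz): eta = log2(1 + eta * Eb/N0),
    so Eb/N0 = (2**eta - 1) / eta."""
    return 10 * math.log10((2 ** eta - 1) / eta)

limit = shannon_min_ebno_db(1.0)                  # 1 bit/s/Hz -> 0.0 dB
print(f"Shannon limit at 1 bit/s/Hz : {limit:.2f} dB Eb/N0")
print(f"A code within 0.5 dB of it  : {limit + 0.5:.2f} dB Eb/N0")
print(f"Ultimate limit (eta -> 0)   : {10 * math.log10(math.log(2)):.2f} dB")
```

      That last 0.5 dB is only about a 12% difference in transmit power, which is exactly why the revenue rule of thumb treats single dBs as real money.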

      • Re:It gets better (Score:3, Insightful)

        by avalys ( 221114 ) *
        1 dB of gain is equal to a million dollars of annual revenue for a wireless provider

        So, turbo codes have brought the gain differential (sorry, don't know the proper term) to .5 dB. The .5 dB that's left would bring in only $500,000 of revenue?

        That doesn't sound like terribly much, considering how much money those companies are pulling in already.
        • Re:It gets better (Score:4, Insightful)

          by Short Circuit ( 52384 ) <mikemol@gmail.com> on Wednesday March 10, 2004 @11:49AM (#8521994) Homepage Journal
Just because a company's revenue continues to grow doesn't mean that company's profit margin is a constant percentage... A multi-billion dollar corporation could very easily be making only five hundred thousand in profit. Despite the size of the company, adding another five hundred grand to their profit still doubles what they were making.
        • Re:It gets better (Score:3, Informative)

          by Foxxz ( 106642 )
We're talking about signal loss. Every 3 dB your signal doubles (or halves if you're talking about loss). If you're losing 3 dB of signal, that means you're only using half of the theoretical bandwidth available in a frequency. Which means you can only have half the theoretical number of people using the same cell phone frequency. Therefore, as you use better encoding algos to lessen the loss in dB, you can cram more people into the same frequency and get a better density ratio for your signals. So 1 dB
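
          As a quick sanity check of that rule of thumb (plain decibel arithmetic, nothing specific to turbo codes):

```python
def db_to_power_ratio(db):
    """Convert a decibel figure to a linear power ratio."""
    return 10 ** (db / 10)

for db in (0.5, 1, 3, 6):
    print(f"{db:>4} dB -> x{db_to_power_ratio(db):.2f} in power")
# 3 dB is almost exactly a factor of two (the rule of thumb above);
# 1 dB works out to roughly a 26% change.
```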
      • Re:It gets better (Score:5, Interesting)

        by GuyZero ( 303599 ) on Wednesday March 10, 2004 @11:21AM (#8521693)

        The twist is that these LDPC codes were actually proposed in a 1963 PhD thesis, but were disregarded as beyond the computational abilities of the time. They were only "rediscovered" in 1996, after the Turbo code furore.

        The article also mentions that the latency associated with turbo codes is too high for most voice applications and that LDPC codes, while more computationally intensive, have a low latency. (At least, that's what I remember from the article).

        I thought it was funny that their sponsor, Alcatel or whoever, never patented it in Asia so NTT has been using turbo codes in Japan for years, free.

        • Re:It gets better (Score:5, Informative)

          by Phil Karn ( 14620 ) <karn@@@ka9q...net> on Wednesday March 10, 2004 @06:22PM (#8526496) Homepage
          Actually, the latency associated with even an ideal code is dictated by Shannon. The closer you want to approach the Shannon limit, the larger the code block must be. The larger the code block, the longer it takes to transmit. So at low data rates, you can wait a long time just sending the encoded block even if you can instantly decode it once you get it.

          These large block sizes are not a problem at high data rates or on very long deep space links where the propagation delay still exceeds the block transmission time, but they are a problem with heavily compressed voice where low latencies are required.

          One way to decrease the average latency associated with a forward error correcting code is to attempt a decode before the entire block has been received. If the signal-to-noise ratio is high, the attempt may succeed; if not, you wait, collect more of the frame and try again, and you've only lost some CPU cycles. This is called "early termination", and it's one of the tricks done in Qualcomm's 1xEV-DO system, now deployed by Verizon as BroadbandAccess. 1xEV-DO is probably the first widespread commercial application of turbo codes. A 128-byte code block is used to get good coding gain. This relatively large block size is practical because 1xEV-DO is primarily designed for Internet access rather than voice.
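
          In rough pseudocode, the early-termination loop looks something like this; decode_fn and crc_fn are stand-ins for a real turbo decoder and its integrity check, so this only illustrates the general trick, not Qualcomm's actual 1xEV-DO algorithm:

```python
def receive_with_early_termination(read_symbols, decode_fn, crc_fn,
                                   block_len, chunk=64):
    """read_symbols(n) yields the next n soft symbols from the channel;
    decode_fn() may be run on a partial block and returns a candidate
    frame (or None); crc_fn() says whether that candidate is valid."""
    received = []
    while len(received) < block_len:
        received.extend(read_symbols(min(chunk, block_len - len(received))))
        candidate = decode_fn(received)      # attempt a decode early
        if candidate is not None and crc_fn(candidate):
            return candidate                 # high SNR: stop early, save latency
    return decode_fn(received)               # low SNR: we only lost CPU cycles
```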

          • This relatively large block size is practical because 1xEV-DO is primarily designed for Internet access rather than voice.

Isn't EV-DO designed _only_ for data access? I thought the DO stood for Data Only. (I work on IS-2000 1xRTT applications right now, and I have very little knowledge of EV-DO and EV-DV interworkings)
      • LDPC (Score:3, Insightful)

        by nil5 ( 538942 )
        You must remember that LDPC codes rely upon block (Codeword) lengths of many bits, e.g. over 10,000 bits long in order to achieve performance better than turbo codes. So your parity check matrix is enormous.

        I'm sure there are some efficient implementations, but for certain applications having packets that long can be prohibitive.
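
        Back-of-the-envelope, assuming a rate-1/2 code and a column weight of 3 (typical choices for illustration, not figures from the article):

```python
n = 10_000                    # codeword length in bits
rate = 0.5                    # assumed code rate
col_weight = 3                # assumed number of ones per column of H

rows = int(n * (1 - rate))    # number of parity checks
dense_entries = rows * n
sparse_ones = n * col_weight
print(f"H is {rows} x {n}: {dense_entries:,} entries stored densely,")
print(f"but only ~{sparse_ones:,} nonzeros thanks to the 'low-density' part.")
```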
        • by !ucif3r ( 713159 ) on Wednesday March 10, 2004 @11:59AM (#8522073) Homepage
          I have been dying for an article in my field to post on! Actually new classes of LDPC codes based on finite geometries have shown that you can construct them at almost any size (say 1000 bits, much more practical). These 'algebraic' LDPC codes perform much better than both Turbo codes and other LDPC codes mostly because they have more structure and are not generated by random computer searches.

          Also they don't require the more time consuming decoding algorithm needed for Turbo Codes (although they can be used). Check 'em out, you can search for papers on IEEE Xplore. Also someone is working on research to show Turbo Codes are LDPC codes! Crazy.

          I almost crapped my pants when I saw that statement that Turbo codes were new. ;-)

          -Take that Lisa's beliefs!
          • Hey, I just finished my PhD on LDPC codes. Where do you work?
          • by TheSync ( 5291 ) on Wednesday March 10, 2004 @05:37PM (#8526045) Journal
Right, Gallager worked out LDPC codes in 1963. Then they were forgotten for over 30 years, until people realized that Digital Fountain's "Tornado" codes were LDPC codes.

            LDPC codes will be behind DVB-S2, the new transmission system for digital satellite video distribution. Since they approach the Shannon limit so closely, there will be no DVB-S3.

I should say that the IEEE article is a little over-hyped, in that these codes really only buy about 2-4 dB of additional gain; concatenated RS and convolutional coding were already pretty close to the Shannon limit in AWGN. Those last couple of dB are nice, but the remaining 0.5-1 dB beyond LDPC & Turbo Codes isn't worth much.

Much more important now are ways to handle the "fast fading" channels found in mobile environments; this is what is driving OFDM.

Also, both Turbo Codes and LDPC codes are really computationally intensive to decode. They are currently only decoded at speeds below 20 Mbps, generally implemented in (expensive) FPGAs. We won't see really cheap ASICs for another year or two.
            • Yes, I hear that Space-Time coding is the 'new deal' for Rayleigh type fading channels. A few people in my dept. have been doing research into these codes. I should probably read the work whenever I don't have other projects, assignments, TA duties, marking, etc. to do. ;-).

What amazes me is how far behind these companies are in actually implementing this technology. 10-year-old technology is actually "new."

              A friend of mine was saying that even when new research developments are implemented in new hardware the IT
  • Haha! (Score:4, Funny)

    by lukewarmfusion ( 726141 ) on Wednesday March 10, 2004 @10:35AM (#8521256) Homepage Journal
    Sounds a lot like this story [slashdot.org]...

    Double your hard drive space, your bandwidth, data transfer, penis size...
  • Lucky guys! (Score:4, Funny)

    by lovebyte ( 81275 ) * <lovebyte2000@g[ ]l.com ['mai' in gap]> on Wednesday March 10, 2004 @10:37AM (#8521265) Homepage
    Clever AND good looking [ieee.org] !
  • News? (Score:3, Informative)

    by lightspawn ( 155347 ) on Wednesday March 10, 2004 @10:39AM (#8521295) Homepage
    Like the article says, these codes were introduced in 1993. This would have made a good story - back then.

The problem is that turbo codes are so computationally intensive that using them in consumer electronics is only now becoming feasible.

    • Re:News? (Score:3, Insightful)

      Like the article says, these codes were introduced in 1993. This would have made a good story - back then.

The problem is that turbo codes are so computationally intensive that using them in consumer electronics is only now becoming feasible.


Did you know about it? Did most people know about it? If something is making it more feasible to use, then it's news. Dr. Evil wanted to put laser beams on sharks *years* ago, but no one's really done it. If something made it possible to put frikking laser beams on sh
Well, they were mentioned in one of my college classes way back when, so I would guess that anyone who considers this interesting (and understandable) news already knew about it.
      • Re:News? (Score:5, Insightful)

        by ajagci ( 737734 ) on Wednesday March 10, 2004 @11:02AM (#8521510)
        I knew about it, as did many other people. But you have to realize that coding theory is a pretty funny and insular field. Related techniques had been used in other fields for many years prior to that discovery. Most people who work in this general area of statistics simply don't think about coding and aren't interested in it. One of the obstacles is that people who build communications systems generally are engineers thinking about fast, low-level processing; their first reaction to anything non-trivial and new is that it's too slow to be implemented in practice.

        Turbo coding is ultimately not much of a theoretical breakthrough, but a compromise and algorithmic hack that happens to work fairly well for real-world problems and is expressed in a language that people who work on communications systems understand. But that's nothing to be sneezed at, since it will ultimately mean that we will get higher data rates and other benefits in real-world products.
      • Re:News? (Score:3, Funny)

        by Zakabog ( 603757 )
        I'm sorry the frikking sharks with the frikking laser beams on their heads aren't here yet but for now enjoy our ill-tempered sea bass.
    • Re:News? (Score:2, Insightful)

      It's not that they are just now feasible, it's that few people outside of satcom were hip to them. Processor performance is not much of a factor because you don't do turbo coding in software. It's accelerated in hardware and is performed in line as the datastream passes through the mod/demod sections. There are chips that are not too expensive that do this or a turbo coding core can be dropped into the ASIC that holds the mod/demod sections. That makes it pretty cheap for a consumer device.
    • Re:News? (Score:4, Funny)

      by Waffle Iron ( 339739 ) on Wednesday March 10, 2004 @10:59AM (#8521478)
      Like the article says, these codes were introduced in 1993.

      Ahh, that would explain the "Turbo" thing: It comes from the era of Borland compilers and 486 clone boxes. If it were invented today, it would surely be called something like "iCodes".

      • If you read the article (Not sure if that's possible to do without paying $$$ unless you're an IEEE member, in which case you have it in dead-tree format and can access it for free online), the reason they were called "turbo" codes was because one of the creators of turbo codes was apparently a big automotive racing fan.

Part of the turbo coding system involves a feedback loop between two separate decoders at the receiver. The feedback from each decoder helps the other decoder make a better decision about
    • by akajerry ( 702712 )
      I always enjoy the moment in history when theory becomes practice.

1905: Einstein's E=mc^2 predicts the energy released by nuclear reactions. ~1938: the atom is first split, and the equation proves correct.

FM radio was theorized for many years, but until Armstrong came up with his phase-locked loop, no one could make an FM radio capable of broadcasting more than about a hundred feet.

CDMA has been around for a while too. Qualcomm doesn't own the patent on it, just some techniques for practically implementing it, which wasn't
  • TURBO! (Score:4, Funny)

    by Junior J. Junior III ( 192702 ) on Wednesday March 10, 2004 @10:39AM (#8521303) Homepage
    Finally, by forcing more air into the cylinders than ordinary air pressure would allow, we will be able to achieve more efficient combustion, which will in turn allow us to transmit more data using radio frequencies. ...Don't you just hate it when terminology gets mis-applied to stuff it has nothing to do with?

    Dang it, this software isn't made out of platinum, either.
    • Re:TURBO! (Score:4, Interesting)

      by nosphalot ( 547806 ) <nosphalotNO@SPAMnosphalot.com> on Wednesday March 10, 2004 @11:04AM (#8521525) Homepage
Actually, turbo is an appropriate name for the way these codes work. If I remember correctly (it's been 4 years since it was explained to me), as the data leaves the encoder, some of it gets routed back into the first stage to act as a hint for encoding the next stage of data. So the data exhaust helps compress the data intake, much like a mechanical turbo.

nosphalot is correct: the Turbo breakthrough was that they determined that by using feedback, just as in an engine turbo, you could get arbitrarily close to Shannon's limit.

        The key idea is that this feedback gives you an infinite impulse response, i.e. in theory all bits ever transmitted through such a channel will continue to affect it for ever after.

        Even if you do limit the feedback time to more reasonable levels, you can still get a very useful increase in channel capacity.

        It is also important to notic
Actually, Turbo is a great word for this. It's a system with great chaos, and the root "turb-" means chaos or confusion. If anything, a turbocharger, the device you're talking about, uses the wrong word, because it has nothing to do with a turbulent system (it's more about taking hot exhaust gases to turn an impeller to move more cool air into the engine).
      • Actually, "turbo" in turbocharger derives from the fact that a turbocharger uses an exhaust-driven turbine to provide power to the compressor.

        As explained earlier, turbo codes were named because they use a positive feedback loop to improve their performance, just like a turbocharger, even though in the strictest sense of using the word "turbo", it is inappropriate since "turbo" implies the use of a turbine somewhere.
  • Can this be applied to wired transmissions too?
    • Re:Question... (Score:3, Informative)

      by Garak ( 100517 )
Yep, it has been; that's what ADSL uses, I believe...
    • I suppose there are too few errors in wire transmission to justify the extra complexity and latency of these codes.
Not true at all; over a distance there can be a lot of error in wire transmission. ADSL is a good example of this: with ADSL you're trying to get a lot of signal through an unshielded pair, you've got to keep the power down to keep it from interfering with other pairs, and you also get a lot of noise caused by the other pairs, so you have a low SNR, which these codes can help get a lot of data through.
        • Yep you're right :)

          Maybe there is a problem with the latency or the processing power required. It could not be used in UMTS for voice communication because latency would have been too big (see the article).

          Surely it will be used as soon as it becomes cheap to do so for the operators :)
  • Reception Quality. (Score:3, Interesting)

    by Omni Magnus ( 645067 ) on Wednesday March 10, 2004 @10:46AM (#8521357)
    I live in a rural area, where we are sometimes lucky that we have a signal, let alone a good one. This could improve the reception for us a lot. Either that, or it doubles the battery life of the cell phones. Either way, I am happy. Although I wonder if this coding could be used in wireless devices as well. Hopefully this could somehow be used to help limit the battery drain of WiFi on a laptop/PDA.
  • by Doppler00 ( 534739 ) on Wednesday March 10, 2004 @10:50AM (#8521401) Homepage Journal
    Could turbo codes be used with a 56K modem giving somewhere around 80kbps of bandwidth?
    • by distributed ( 714952 ) on Wednesday March 10, 2004 @11:07AM (#8521551) Journal

well they are not exactly a compression technique... but basically an error-correcting technique, like Hamming codes. And thus they help you transfer more reliably at the same power.

      see this small writeup [ed.ac.uk]

besides, changes like the one you talk about would require hardware changes... as I said, it's not compression
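
      Since Hamming came up, a toy Hamming(7,4) encoder/decoder shows the idea in a few lines; it's the standard textbook construction, vastly weaker than a turbo code, but the "spend extra bits to fix errors" principle is the same:

```python
def hamming74_encode(d):                      # d = [d1, d2, d3, d4]
    """3 parity bits protect 4 data bits against any single bit flip."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]       # codeword positions 1..7

def hamming74_decode(r):
    s1 = r[0] ^ r[2] ^ r[4] ^ r[6]            # re-checks positions 1,3,5,7
    s2 = r[1] ^ r[2] ^ r[5] ^ r[6]            # re-checks positions 2,3,6,7
    s3 = r[3] ^ r[4] ^ r[5] ^ r[6]            # re-checks positions 4,5,6,7
    err_pos = s1 + 2 * s2 + 4 * s3            # 0 means no error detected
    if err_pos:
        r = list(r)
        r[err_pos - 1] ^= 1                   # flip the bad bit back
    return [r[2], r[4], r[5], r[6]]           # recover d1..d4

code = hamming74_encode([1, 0, 1, 1])
code[5] ^= 1                                  # the channel flips one bit
assert hamming74_decode(code) == [1, 0, 1, 1]
```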

No. A POTS line has 64,000 bps of bandwidth (minus 8,000 bps for signalling). That is the maximum number of bits you can push across the connection per second.
No. But if you assume that 56K is the maximum amount of information that can be carried on a phone line at current power levels, then if all the phone companies change all their equipment and everyone buys a new modem, turbo codes would let us reach nearly 112K without changing the power level. Or we could do the same thing without turbo codes by doubling the power level.

      Or we could reach close to 1,000K by calling it DSL.
    • by Ungrounded Lightning ( 62228 ) on Wednesday March 10, 2004 @02:57PM (#8524150) Journal
      Could turbo codes be used with a 56K modem giving somewhere around 80kbps of bandwidth?

      Nope. (They'd actually reduce the data rate if used.)

Turbo codes are members of the class of "Forward Error Correction" codes, which are used to correct errors that creep in during signal propagation from a sender to a receiver over a noisy channel. They work by sending EXTRA bits (typically 3 times as many with turbo), then processing what you get to correct the errors.

Turbo codes are typically used on noisy analog channels - such as radio links. Because they make the data bits less susceptible to noise, you can reduce the amount of power you use to transmit them. It's like saying the same sentence three times in a noisy room, rather than screaming over the noise level just once, so the guy at the next table can hear exactly what you said.

Turbo codes (and other FEC codes) send more bits. But because the underlying data is so much less likely to be received incorrectly, the sender can reduce the amount of power used for each bit by MORE than enough to make up for the extra bits. You can trade this "coding gain" either for more BPS at a given power level or a lower power consumption at a given BPS.

      But adding bits also increases the amount of information you're sending - either by broadening the bandwidth or more finely dividing the signaling symbol (for instance: more tightly specifying and measuring the voltage on a signal). So if your bitrate is limited by the channel capacity rather than the noise level you're stuck. The coding scheme will "hit the wall" and after that the extra bits come right out of your data rate. Not a problem with Ultra Wideband, or with encoding more bits-per-baud to get closer to a noise floor. But a big problem with telephone connections.

      If you had a pure analog telephone connection they would be useful for increasing your bandwidth utilization. And in the old days you did. And analog modems struggled to push first 110 BPS, then 300, then 1200 through it despite the distortion and noise. Then they got fancy and cranked it up to 9.6k and beyond by using DSPs.

      But these days you don't have an analog connection all the way. Your analog call is digitized into 8,000 8-bit samples per second and transmitted at 64,000 BPS to the far end. No matter WHAT you do to your signal you can't get more than that number of bits through it.

      (This is actually very good, by the way, if the connection is more than a couple miles long. The digital signal is propagated pretty much without error, while an analog signal would accumulate noise, crosstalk, and distortion. So for calls outside your neighborhood you usually get a better signal-to-noise ratio with the digital system for the long hops than with an analog system. That's why cross-continent calls these days sound better than cross-town calls in the '50s.)

      In practice it's worse than 64,000 BPS - because the system sometimes steals one of the bits from one sample in six for signaling about dialing, off-hook, ringing, etc. And you don't know WHICH sample. So you can only trust 7 of the bits. 56,000 BPS max. (It's a tad worse yet, because some combinations of signals are forbidden due to regulatory restrictions on how much energy you can put on a phone line - a particular signal could slightly exceed the limit. This makes the actual bit rate a little lower. And you have to sacrifice a little bandwidth to keep the receiver synchronized, too.)
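
      In numbers (plain multiplication, using the assumption above that only 7 of the 8 bits per sample can be trusted end to end):

```python
samples_per_sec = 8_000
bits_per_sample = 8

raw_bps = samples_per_sec * bits_per_sample            # 64,000 bps
trusted_bps = samples_per_sec * (bits_per_sample - 1)  # 56,000 bps
print(f"raw digital channel : {raw_bps:,} bps")
print(f"usable for a modem  : {trusted_bps:,} bps (before regulatory limits)")
```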

      And you only get the approximately 56k in the downlink direction, because to use it you have to have a digital connection at the head end to make full use of the digital transmission. At your uplink you can't be sure enough of the sampling moment to create a waveform that would force exactly the right bit pattern out of the A-to-D converter. So you have to fall back to a modulation scheme that allows some slop when you're transmitting at the POTS end of the link. (If you had a digital connection at both ends - i.e. ISDN - yo
  • Not really news... (Score:5, Informative)

    by bsd4me ( 759597 ) on Wednesday March 10, 2004 @10:51AM (#8521409)

    Turbo codes aren't really new. As the article states, they were invented in 1993. I have a big stack of papers on them at home.

Turbo codes have a few problems, though. One, they are a pain to implement and consume a lot of resources. Two, turbo codes are SNR dependent, which makes them harder to use in varied channels.

The article also makes it seem like there were no coding advances since Shannon published his original paper on channel capacity. Ungerboeck's papers on trellis coded modulation (TCM) from 1982 or so radically altered digital communications, and he should have been mentioned. Some of the current turbo code research is trying to unify TCM and Turbo Coding (TTCM), which has great promise once it is practical.

Error correction again seems like one of those bottomless pits... like trying to achieve zero kelvin... or a perfect vacuum... and of course this [zyvex.com] landmark talk by Feynman.

Another thing that worries me is why all preposterous claims are met with so much resistance.... relativity, quantum mechanics, secure-crashfree-windows (oops)...
    strange world we live in.

Another thing that worries me is why all preposterous claims are met with so much resistance....

      If it's true, it will be accepted in the end. Scientists don't want to live in a fantasy world. It seems logical that preposterous claims are met with resistance, otherwise every nutcase who makes an outrageous statement would cause a massive waste of time as people try to validate it.

      And anyway, nothing about turbo codes is preposterous. It's been around for years, and I really don't understand why this arti

  • Moores Law? (Score:3, Interesting)

    by scum-e-bag ( 211846 ) on Wednesday March 10, 2004 @10:53AM (#8521422) Homepage Journal
Is there a similar law to Moore's Law (or a combination of such) that could be applied to this compression of data and the effective use of the spectrum? As time goes on, I feel we are going to see the need for the available amount of wireless transmission medium to increase. How long before we hit the theoretical limit of data transmission and the planet is saturated?

    Just interested...
    • Re:Moores Law? (Score:3, Informative)

      by metamatic ( 202216 )
      Compression is weird.

      You can calculate the best possible compression scheme for handling arbitrary data. That is, the compression algorithm which, if you fed it every possible combination of input data, would compress the data the best. The algorithm is called Huffman coding.

      The problem is, in actual use Huffman coding is crap. Why? Because real data isn't random, it doesn't cover the entire space of possible data.

      So useful compression algorithms take advantage of the non-randomness of actual data, to do
      • Re:Moores Law? (Score:5, Informative)

        by pclminion ( 145572 ) on Wednesday March 10, 2004 @11:39AM (#8521888)
        That is, the compression algorithm which, if you fed it every possible combination of input data, would compress the data the best. The algorithm is called Huffman coding.

        No. Huffman coding is only optimal if the block size grows to infinity. In that case, it can approach the Shannon limit. For finite data sets (i.e., all data sets), arithmetic coding performs slightly better on a per-symbol basis, because it is able to use fractional numbers of bits to represent symbols.

Huffman is never used on its own, except as a demonstration of basic compression algorithms. It's commonly used as the final step in entropy-coding the output symbols of another compressor, often a lossy compressor like JPEG, MP3, or MPEG.

        The problem is, in actual use Huffman coding is crap. Why? Because real data isn't random, it doesn't cover the entire space of possible data.

        In actual coding, it is crap, but not for the reasons you listed. It's crap because it's infeasible to allow the block size to grow without bound, since the number of possible codes increases exponentially.

        As someone else has pointed out in the threads, if you allow your compression scheme to be useful only for particular files, and allow the compression and decompression software to be arbitrarily complex, there's essentially no limit to how tight you can compress data.

        There's a very clear limit, given by the Shannon entropy (which can vary widely, depending on the transformations you apply to the data). It is also limited by the quite obvious pigeonhole principle. You cannot pick an algorithm which will compress a given data set arbitrarily well.

        So in general, there's no way to tell what the technical limit is on compression.

        No. The limit is definitely the Shannon entropy. You cannot magically represent data using fewer bits than necessary (and Shannon's entropy tells you how many that is). What changes between compression algorithms is the transformations which are applied to the data, or in other words, the different ways of looking at it which reveal a lower entropy than other ways of looking at it. However, the limit is always the Shannon entropy. The transformations can toss out data which is irrelevant. Your playing card example is a good one -- the information which is relevant is which card it is and possibly its angle. You've applied a transformation to the data -- transformed it from a bitmapped image to a series of card values and positions. You can then compress those pieces of information with Huffman codes (if the entropy allows it).
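
        To put a number on that: here is a minimal sketch of the zeroth-order Shannon entropy of a byte stream, which is the floor (in bits per symbol) for any coder that treats bytes independently under that model - a different transformation of the data, like the card example, can of course expose a lower entropy:

```python
import math
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Zeroth-order Shannon entropy: -sum(p * log2(p)) over byte frequencies."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

text = b"abracadabra" * 100
h = entropy_bits_per_byte(text)
print(f"{h:.2f} bits/byte -> at best ~{h / 8:.0%} of the original size "
      f"for a coder that treats bytes independently")
```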

        • Yeah, I skipped the discussion of block sizes and memory requirements, just like people generally skip such things when talking about Turing machines.

          Your distinction between "transformation" and "compression" is unclear and appears arbitrary. For the purposes of the original question, "put data in, get less data out that represents original data losslessly" is compression; whether the method used is technically known as compression or coding or transformation or notational re-engineering is irrelevant.
          • Yeah, I skipped the discussion of block sizes and memory requirements, just like people generally skip such things when talking about Turing machines.

            In the case of a Turing machine, it's not really relevant to mention how long the tape is (for example). It's definitely relevant to mention block sizes when discussing Huffman compression, because it has a major impact on its effectiveness.

            Your distinction between "transformation" and "compression" is unclear and appears arbitrary.

            I wasn't drawing a d

          • That still leaves your claim: "The problem is, in actual use Huffman coding is crap. Why? Because real data isn't random, it doesn't cover the entire space of possible data. So useful compression algorithms take advantage of the non-randomness of actual data, to do significantly better than Huffman". Which is just plain wrong. Any form of compression, including Huffman, only works because "real data isn't random".
      • >You can calculate the best possible compression scheme for handling arbitrary data. That is, the compression algorithm which, if you fed it every possible combination of input data, would compress the data the best.

        That algorithm is a simple "copy" operation, compressing the input to an output that is the same size.
        It is not possible to construct a compression algorithm that will compress every possible combination of input data with a better result.
    • Re:Moores Law? (Score:5, Informative)

      by Garak ( 100517 ) <chrisNO@SPAMinsec.ca> on Wednesday March 10, 2004 @11:19AM (#8521658) Homepage Journal
This technique only applies to one very small specific part of the spectrum: a communication channel. Communication channels can vary from a few kHz in size to a few MHz; I believe 6 MHz is the biggest, which was originally for TV, but channels originally designated for TV are now being used for 802.11b and g.

The total spectrum goes from 30 Hz right up past light at some crazy big number. Frequencies below 30 MHz are called HF; these frequencies are pretty full, but the total band is only 30 MHz - that's only 6 TV channels, or a lot of small 30 kHz voice channels. This band is so saturated because it's not limited to line of sight and it's possible to communicate right around the world on it. It's also the easiest to transmit and receive on, because filters for these low frequencies have very high Q (very narrow bandpass; basically, they tune better). At these frequencies things are already pretty saturated, but they are strictly regulated, along with most of the spectrum up to 300 GHz.

Frequencies from 30 MHz to 1 GHz have been the primary frequencies for local communications for the past 50 years or so, ever since superheterodyning has been around, which allows you to "move" a frequency down to a lower intermediate frequency where you have high-Q filters. This range is pretty full too; not much room for new communications. Aircraft, broadcast TV and radio, police, cell phones - everything is in this range. Old UHF TV channels were recently given up to make room for cell phones. These bands are strictly LOS - well, RF LOS, which is a little better than optical line of sight, since lower frequencies refract and diffract more than higher-frequency light.

Above 1 GHz has been very expensive to use up until recently. With the advent of new semiconductors and processes it's now viable to use these frequencies. There is lots of bandwidth up here to use; a lot of it is tied up in radar and with the military, which is just legal BS because 99.999% isn't being used. It's more political than technical. There is also quite a bit of it being wasted by older technology that uses something like 20 MHz for a 6 MHz signal.

Also, a lot of companies own rights to these microwave frequencies (above 1 GHz), have switched to fiber optics, and are just letting them sit idle.

Frequencies above 1 GHz can be used again and again in different areas, provided they are separated by mountains or the curvature of the earth. The problem with 802.11b is that there are only 3 channels that don't overlap.

Also, with frequencies above 1 GHz you can use a dish to focus the signal in one area rather than using more power in all directions. This way you will only interfere with people using the same frequencies in the direction that your dish is pointing. It's like a flashlight vs. a room lamp: the flashlight can light up a small area far away, and so can the room lamp, but the room lamp uses more power because it also lights up the full room.

In short, there is tons of spectrum left; we just need to get some of it back from the military and wasteful users.
    • FYI, this applies not to compression of data, but to error correction. It's assumed with turbo codes that there is no redundancy in the information being encoded. (In reality there may be, but that's a completely different problem.)

      i.e. a communications system is always optimized to maximize performance with a bitstream that is assumed to be non-redundant.

      Turbo codes are a method for error correction, not compression. In fact, they do the exact opposite of compression - they ADD redundant data.
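
      The simplest possible illustration of "adding redundant data": a rate-1/3 repetition code, where every bit is sent three times and the receiver takes a majority vote (turbo codes spend their extra bits enormously more cleverly, but the direction is the same):

```python
def repeat3_encode(bits):
    """Send every data bit three times."""
    return [b for b in bits for _ in range(3)]

def repeat3_decode(received):
    """Majority-vote each group of three received bits."""
    return [int(sum(received[i:i + 3]) >= 2) for i in range(0, len(received), 3)]

tx = repeat3_encode([1, 0, 1, 1])   # 4 data bits become 12 channel bits
tx[4] ^= 1                          # the channel flips one of them
assert repeat3_decode(tx) == [1, 0, 1, 1]
```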

      As to a
  • Delay (Score:3, Informative)

    by kohlyn ( 17192 ) on Wednesday March 10, 2004 @10:58AM (#8521462)

As other posters have commented, turbo codes have been around a long time. Any digital modulation course will cover them.

Turbo Codes do have one major drawback ... they add a huge delay into the data stream. It works by overlapping data on the same bit (sounds implausible, but that's how it works), and in essence it sends the same data several times. Turbo codes are used with some satellites, and add an extra 10-20 minutes delay to the communication; for satellites this isn't too bad ... for cell phones this is dreadful.
    • Re:Delay (Score:2, Informative)

      10-20 minutes delay!

      What are they using for the decoding? Monkeys turning hand cranks?

      Modern turbo coding cores (and I mean modern as in several years ago) introduce a latency of only a couple frames.
    • Whoa, way false (Score:5, Informative)

      by Srin Tuar ( 147269 ) <zeroday26@yahoo.com> on Wednesday March 10, 2004 @12:22PM (#8522307)

Gee - that's funny. I work at a satellite internet company, and we use Turbo codes in our FPGAs. The delay from TPC is in the low millisecond range - trust me.

      Anything close to one second would have been unacceptable.

I don't know where you got your numbers from.
      • Cut him some slack, he just wrote minutes instead of milliseconds ;-)
      • Re:Whoa, way false (Score:3, Informative)

        by Matt_Bennett ( 79107 )
        I remember reading one of the original papers on turbo coding- (maybe 5 or 6 years ago) and turbo codes are great, but they add latency (which was the point of the original poster in this thread)- the more latency they add, the better they get- they spread the "energy" of a bit of information over a long time period- as you approach infinite latency (useless, of course), you approach Shannon's limit. A little bit of noise only messes up a little bit of the energy in that bit of information, most of the inf
  • High-powered? (Score:4, Informative)

    by Lehk228 ( 705449 ) on Wednesday March 10, 2004 @10:58AM (#8521466) Journal
After partially reading the FA, it seems that this scheme is particularly well suited for what it is currently doing: satcom and deep space probes, where you have a lot of computational and analysis power at the receiving end but no good way to ask for a re-transmission, and where you have multiple-second (or multiple-hour) latency in your communication system. But really, this is not going to be in a cell phone for quite a while; cell phones are built to make lots of connections from one tower, not to give your phone insane coverage or to make your battery last longer. Especially the stuff about predicting bit value based on analog value - so now we turn every digital receiver into a digital receiver plus an x-bit analog-to-digital receiver.... Honestly, I think most consumer applications are not worth the added cost. Keep in mind it has been 11 years since this tech came out... communications companies aren't stupid... it WOULD be on the market if it was feasible/affordable.
    • In the specialized market for data communications in the long-range but noisy HF spectrum (roughly 3-30MHz), there's at least one modem that relies on analog sampling of the data signal.

      It's a really clever hack -- when the protocol asks for a retry, the modem compares the analog levels of the second copy of the packet to the analog levels of the first copy. At first I thought they were signal-averaging, but a telecom engineer told me it's doubtless more sophisticated.

      So, even before applying ECC to corre
  • Turbo Codes? (Score:3, Insightful)

    by Czernobog ( 588687 ) on Wednesday March 10, 2004 @10:59AM (#8521477) Journal
    People have been working on them for ages and yes there are significant advantages.
However, the latest word on source and channel coding is Space-Time Coding. Convolutional S-T codes especially are very, very promising and quite naturally perform even better than block S-T codes...

What I don't understand is: why this, now? It's like running a feature on GSM instead of writing about TDD or FDD in 3G, or even the discussion going on about how 4G will shape up...

Yesterday, Nextel finalized a deal to add a pretty big chunk of spectrum to their business (in exchange for some sort of administration requirements for police and other channels which currently fit in that band).

    Are there any of you who could comment on whether this will reduce the value of such a chunk of the spectrum?
  • LDPC codes (Score:5, Interesting)

    by auburnate ( 755235 ) on Wednesday March 10, 2004 @11:05AM (#8521540)
    Lots of /.ers have been quick to point out that turbo codes have been around since 1993. However, the IEEE article points out that LDPC ( low density parity check) codes were invented in the early 1960s. Researchers have gotten the LDPC codes to outperform the turbo codes, and to top it off, the LDPC patents have all expired, meaning no royalty fees like turbo codes. My first slashdot post ... be gentle!!!
    • Re:LDPC codes (Score:3, Interesting)

      by Wimmie ( 446910 )
      Lots of /.ers have been quick to point out that turbo codes have been around since 1993. However, the IEEE article points out that LDPC ( low density parity check) codes were invented in the early 1960s. Researchers have gotten the LDPC codes to outperform the turbo codes, and to top it off, the LDPC patents have all expired, meaning no royalty fees like turbo codes.

The patent issue might be one of the reasons why it is probably not as widespread in use today. In space communication, various methods of FE
    • The neat thing about Turbo codes is that they're being used in the 3G cell phone standards (along with various other codes.) Turbo codes are powerful because they are iterative: they make several approximations at the information sent across the wire (or lack of wire, heh).

      This allows the system to do a good job of guessing what the original message is quickly. If you're interested in Turbo Codes, one of my former professors has done a lot of work with them, and has links to other turbo code sites on h
    • Re:LDPC codes (Score:3, Interesting)

      by TheSync ( 5291 )
      Be careful, there are some specific LDPC codes that have been patented (such as the Digital Fountain "tornado" codes).
  • by Anonymous Coward
    I hope they get cracking on

    Up, Up, Down, Down, Left, Right, Left, Right... etc.
  • by Smallpond ( 221300 ) on Wednesday March 10, 2004 @11:11AM (#8521589) Homepage Journal
    "much as when, in a crowded pub, you have to shout for a beer several times"

    I was wondering when they would get to the practical applications.
  • TURBO! (Score:2, Funny)

    by deltwalrus ( 234362 )
    Didn't the word "turbo" go out several years ago, along with the useless PC case button of the same name?
  • Shannon's Limit.... (Score:5, Informative)

    by dustinmarc ( 654964 ) on Wednesday March 10, 2004 @11:33AM (#8521815)
I've seen that some people are curious about Shannon's limit, so I thought I would give a little insight into it. It starts with Nyquist's theorem, which states:

max data rate = 2H log2(V) bits/sec

Here, H is the bandwidth available, usually through a low-pass filter, and V is the number of discrete signal levels (V = 2 in a straight binary system).

This equation however is for a noiseless channel, which doesn't exist. So Shannon updated the formula for a channel with a given signal-to-noise ratio. This turns out to be:

max bits/sec = H log2(1 + S/N)

Here the log is again base 2, H is still the bandwidth (in Hertz), and S/N is the signal-to-noise ratio (as a plain power ratio, not in dB). Notice that the parameter V is missing. This is because no matter how many discrete symbols you have, or how often you sample them, this is still the maximum number of bits/sec that you will be able to attain.

    Most coding schemes, or modulation techniques as they are also called, rely on shifting signal amplitude, frequency, and phase to transmit more than one bit per symbol. The problem is that the more symbols there are, the harder it is to detect them correctly with the addition of noise. Basically, when you fire up your modem, and you hear all that weird buzzing and beeping, a lot of that is time being spent for your modem trying to determine just how noisy the channel is, and what the best modulation scheme it can use is while still being able to detect symbols correctly.
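
    Plugging numbers into that formula gives a feel for it; the 3,000 Hz and 30 dB below are the usual textbook assumptions for a voice-grade phone line, not measurements of any particular line:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon capacity C = H * log2(1 + S/N), with S/N as a power ratio."""
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr)

print(f"{shannon_capacity_bps(3000, 30):,.0f} bps")   # roughly 30 kbps
```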
  • by fygment ( 444210 ) on Wednesday March 10, 2004 @12:00PM (#8522082)
    .... was the fate of Shannon [st-and.ac.uk]. He is the father/creator of information theory which he shared with the world in a brilliant paper. [bell-labs.com] He was afflicted by Alzheimer's disease, and he spent his last few years in a Massachusetts nursing home.

  • by UnknowingFool ( 672806 ) on Wednesday March 10, 2004 @01:30PM (#8523196)
    If turbo codes get used in cell phones, does that mean that new cell phones will have a Turbo button? Great, now I have to use advanced duct-tape technology so that the turbo button is always activated.
  • Not new (Score:2, Informative)

    by bullestock ( 556584 )
    If you RTFA, you will discover that turbo coding is not new - e.g. it is currently used on at least one well-known packet-switched satellite phone system.

    The article simply says that turbo coding is about to go mainstream.

  • by pe1chl ( 90186 ) on Wednesday March 10, 2004 @03:46PM (#8524719)
Why are they not all migrating to VMSK, the magical modulation that promises to exceed the Shannon limit by orders of magnitude? Approaching it to within 0.5 dB seems like not much of an accomplishment, compared to that.

Check out http://www.vmsk.org/ where all the claims are made. Supposedly you could compress all communication into 1 Hz of bandwidth, or so.

    [of course I do not believe this]
  • Interesting article (Score:2, Informative)

    by cavebear42 ( 734821 )
    I RTFLA and the attached pdf [ieee.org]. First, let me say that the PDF sums it up, you can avoid the article. Second, I would like to know, according to this pdf, how did the system rate a +8 on a scale of -7 to +7?
  • by Anonymous Coward
    The original paper is here

Claude Berrou, Alain Glavieux, and Punya Thitimajshima, "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes," in Proceedings of the IEEE International Conference on Communications (Geneva, Switzerland), pp. 1064-1070, May 1993.
http://gladstone.systems.caltech.edu/EE/Courses/EE127/EE127B/handout/berrou.pdf [caltech.edu]

    (with 400 citations, as counted by CiteSeer ResearchIndex
    http://citeseer.ist.psu.edu/context/31968/0 [psu.edu]
    -- yeah, it is since 1993 .. not "new", but currently sti
