
60 Years of Hamming Codes

swandives writes "In 1950 Bell Labs researcher Richard W. Hamming made a discovery that would lay an important foundation for the modern computing and communications industries — coming up with a method for performing computing operations on a large scale without errors. Hamming wrote about how self-checking circuits help eliminate errors in telephone central offices. He speculated that the 'special codes' he proposed — which became known as Hamming codes — would only need to be applied to systems requiring unattended operation for long periods, or to 'extremely large and tightly integrated' systems where a single failure would incapacitate the entire installation. Hamming codes were the first discovery in an immense field called coding theory. This article looks back on the history of Hamming codes and their applications, and includes interviews with Todd Moon, professor of electrical and computer engineering at Utah State University, and David MacKay, professor of natural philosophy in the Department of Physics at the University of Cambridge and chief scientific adviser to the UK Department of Energy and Climate Change. An interesting read about a little-known but fundamental element of information theory."
  • Easy to understand (Score:2, Interesting)

    by elbow_spur ( 749878 ) on Thursday November 25, 2010 @02:50PM (#34344266)
    Hamming codes are practical things, while Shannon's analyses of codes were more abstract (though still hugely useful and important).
    Consider the checksum (parity) bit. It helps to catch errors, but there are two problems. First, if there is a double error (more likely if the checksum covers a longer string), the error isn't caught. Second, even if we know there is an error, we can't recover; we have to resend.
    The easiest error-correcting code is to replace every bit with a triple copy of itself. So
    101 becomes 111000111
    This way, we can recover from any single error, but the scheme is very inefficient.
    Hamming's simplest code takes a 4-bit message and adds 3 very special parity bits (think partial checksums), arranged so cleverly that any one-bit error can be isolated and corrected.
    That's the basic idea. The details of this Hamming(7,4) code are explained in many places.
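    The (7,4) scheme described above can be sketched in a few lines of Python. This is only an illustration, not library code, and the function names are mine: the three parity bits sit at positions 1, 2, and 4 of the 1-based codeword, each covering the positions whose binary index contains that power of two, so the three recomputed parities read out as a "syndrome" that is exactly the position of any single flipped bit.

    ```python
    # Minimal sketch of the Hamming(7,4) code: 4 data bits, 3 parity bits.
    # Bit positions are numbered 1..7; parity bits live at positions 1, 2, 4.

    def hamming74_encode(data):
        """Encode 4 data bits (a list of 0/1) into a 7-bit codeword."""
        c = [0] * 8                      # index 0 unused, so positions are 1-based
        c[3], c[5], c[6], c[7] = data    # data bits go in the non-power-of-2 slots
        c[1] = c[3] ^ c[5] ^ c[7]        # parity over positions with bit 0 set
        c[2] = c[3] ^ c[6] ^ c[7]        # parity over positions with bit 1 set
        c[4] = c[5] ^ c[6] ^ c[7]        # parity over positions with bit 2 set
        return c[1:]

    def hamming74_decode(word):
        """Correct any single-bit error, then return the 4 data bits."""
        c = [0] + list(word)
        s1 = c[1] ^ c[3] ^ c[5] ^ c[7]
        s2 = c[2] ^ c[3] ^ c[6] ^ c[7]
        s4 = c[4] ^ c[5] ^ c[6] ^ c[7]
        syndrome = s1 + 2 * s2 + 4 * s4  # 0 if clean, else the flipped position
        if syndrome:
            c[syndrome] ^= 1             # flip it back
        return [c[3], c[5], c[6], c[7]]
    ```

    So seven transmitted bits protect four data bits against any single error, versus twelve bits for the triple-copy scheme above.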
  • by rrohbeck ( 944847 ) on Thursday November 25, 2010 @03:30PM (#34344574)

    Shannon only proved that those codes exist. Hamming gave the first examples. So it's fair to say that Shannon was about information theory, not coding theory.

  • by refactored ( 260886 ) on Thursday November 25, 2010 @05:22PM (#34345266) Homepage Journal
    ... is excessively vague and handwaving, requiring literally hours and pages of close work to fill in the gaps between equation N that he shows and the next equation N+1 that he says follows from it.

    Yup, a brilliant guy I'm sure, but not the guy I want teaching me. At the end of a course (call me greedy), _I_ want to know how to do everything in the course, not merely have a warm fuzzy WEE-WOW feeling that something exciting just went by that I can't quite reproduce.

    Give me Richard Hamming's books instead of Feynman's any day. Ok, they won't make you go "WEEE WOW!!!", but on the other hand you will have an excellent understanding of the material AND be able to reproduce and USE any result in them yourself.

"To take a significant step forward, you must make a series of finite improvements." -- Donald J. Atwood, General Motors