
Using Redundancies to Find Errors 338

gsbarnes writes "Two Stanford researchers (Dawson Engler and Yichen Xie) have written a paper (pdf) showing that seemingly harmless redundant code is frequently a sign of not so harmless errors. Examples of redundant code: assigning a variable to itself, or dead code (code that is never reached). Some of their examples are obvious errors, some of them subtle. All are taken from a version of the Linux kernel (presumably they have already reported the bugs they found). Two interesting lessons: Apparently harmless mistakes often indicate serious troubles, so run lint and pay attention to its output. Also, in addition to its obvious practical uses, Linux provides a huge open codebase useful for researchers investigating questions about software engineering."
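For a flavor of the pattern the paper describes, here is a hypothetical example (not one taken from the paper): a self-assignment is harmless on its own, but it usually means the programmer intended to assign something else.

    #include <string.h>

    struct packet {
        const char *payload;
        size_t      len;
    };

    /* Hypothetical helper: fill in a packet descriptor. */
    void fill(struct packet *p, const char *payload)
    {
        p->payload = payload;
        p->len = p->len;   /* redundant self-assignment; almost certainly
                              meant: p->len = strlen(payload);            */
    }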
This discussion has been archived. No new comments can be posted.

  • Here's a text link (Score:4, Informative)

    by Anonymous Coward on Wednesday January 22, 2003 @11:27PM (#5141214)
    PDF usually crashes my computer (crappy adobe software). So here's a convenient text link!

    http://216.239.37.100/search?q=cache:yuZKW8CjTqIC:www.stanford.edu/~engler/p401-xie.ps+&hl=en&ie=UTF-8 [216.239.37.100]
  • More details
    Appeared in FSE 2002. Finds funny bugs by looking for redundant operations (dead code, unused assignments, etc.). From empirical measurements, code containing such redundancies is 50-100% more likely to have hard errors. Also describes how to check for redundancies to find holes in specifications.

    Link to the PostScript file for easy viewing/printing: File [stanford.edu]
  • by Anonymous Coward on Wednesday January 22, 2003 @11:28PM (#5141218)
    This really old Slashdot logo [slashdot.org] is still in use over on Team Slashdot's page [distributed.net] on distributed.net.
  • Patience! (Score:5, Funny)

    by houseofmore ( 313324 ) on Wednesday January 22, 2003 @11:29PM (#5141220) Homepage
    "...dead code (code that is never reached)"

    Perhaps it's just shy!
  • html (Score:3, Informative)

    by farnsworth ( 558449 ) on Wednesday January 22, 2003 @11:30PM (#5141228)
    html version is here [216.239.53.100].
  • Given enough lints, all bugs are shallow!
  • redundant? (Score:3, Interesting)

    by 1nv4d3r ( 642775 ) on Wednesday January 22, 2003 @11:32PM (#5141236)
    To me 'redundant' implies duplication of something already there. (a=1; a=1;)

    a=a; and dead code aren't so much redundant as they are superfluous. It's still a sign of possible errors, for sure.
    • Re:redundant? (Score:2, Insightful)

      by drzhivago ( 310144 )
      Redundant code would be more akin to something like this:


      if (a == 1)
      {
          [some chunk of code]
      }
      else
      {
          [same (or almost exact) chunk of code]
      }


      where the same code block appears multiple times in a file/class/project. Having the same block of code appear in several places increases the chance of an error creeping into one of the copies. It's easily fixed by moving the repeated code into a parameterized function, as sketched below.
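      A minimal sketch of that refactoring (names invented for illustration):

      #include <stdio.h>

      /* The formerly duplicated chunk lives in one parameterized helper,
       * so a fix made here reaches every caller. */
      static void report(const char *label, int value)
      {
          printf("%s: %d\n", label, value);
      }

      void handle(int a)
      {
          if (a == 1)
              report("primary path", a);    /* was: a pasted copy of the block */
          else
              report("fallback path", a);   /* was: an almost-identical copy   */
      }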
    • Re:redundant? (Score:4, Insightful)

      by Karl Hungus, nihilis ( 644003 ) on Wednesday January 22, 2003 @11:52PM (#5141355)
      Steve McConnell in the excellent book Code Complete talks about this sort of stuff. One of the big things was unused variables. Completely harmless, but a good indication that the code may have problems. If whoever is maintaining the code didn't bother to remove useless variables, what else are they not bothering to take care of?

      It's not "keep the code squeaky clean because cleanliness is next to godliness"; it's "keep the code clean so it's easy to read." Keep it clean because it's a discipline that will pay off when it's time to spot and fix the real errors in the code.

    • And, excuse me for replying again, but ALL GOOD CODE SHOULD HAVE AT LEAST ONE a=a; STATEMENT! Lest we forget our rational principles.
  • so run lint and pay attention to its output.

    pardon me, but DUH???

  • by webword ( 82711 ) on Wednesday January 22, 2003 @11:33PM (#5141247) Homepage
    Unfortunately, this paper doesn't really offer any practical advice. It is probably a little useful to very good or great programmers. However, for new or moderately good programmers, it probably won't be very useful. It is certainly interesting in the academic sense, but I always want to see more practical advice. (I suppose that good practical advice flows down from good theoretical advice.)

    What are some of the best ways to learn to avoid problems? I know that experience is useful. Trial and error is good, mentoring is good, education is good. What else can you think of? What books are useful?

    Also, I wonder about usability problems. In other words, this article mainly hits on the problems of "hidden" code, not the interface. I'd like to see more about how programmers stuff interfaces with more and more useless crap, and how to avoid it. (Part of the answer is usability testing and gathering useful requirements, of course.) What do you think about this? How can we attack errors of omission and commission in interfaces?
    • by trance9 ( 10504 ) on Wednesday January 22, 2003 @11:46PM (#5141328) Homepage Journal

      I think the lesson here is basically that the compiler is your friend. Turn on all the error checking you possibly can in your development environment and pay attention to every last warning.

      If there is something trivial causing a warning in your code, fix it so it doesn't warn, even though it wasn't a "bug". If your compiler output is always pristine, with no warnings, then when a warning does show up (and it might be a real bug) you'll notice it.

      Kind of common sense if you ask me--but maybe that's just a lesson I learned the hard way.
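      A minimal sketch of the idea with gcc-style switches (the function is made up, and flag names may differ on other compilers):

      /* Build with, e.g.:  gcc -Wall -W -Werror -c scale.c
       * -Wall and -W turn on most diagnostics; -Werror makes any warning fail
       * the build, so the output stays pristine and new warnings cannot be missed. */
      int scale(int x)
      {
          int unused;   /* -Wall reports "unused variable"; with -Werror the
                           build breaks until this line is deleted */
          return x * 2;
      }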
    • I think the lesson is just what they're saying. Applying these tests on a body of code is a good way to find high-level errors, even though the tests just check for low-level mistakes.

      That seems pretty practical to me.

      The issue you raise about interfaces is only tangentially related. There, you get into the problem (a very real problem) of confusing coding. This paper does not deal with the issue of whether the code is written well from the point of view of other programmers who need to work with it.

    • What do you think about this? How can we attack errors of omission and commission in interfaces?

      There are many products that do code coverage testing such as PureCoverage [rational.com]. Basically they analyze code as it is running and make note of which parts of your code have been executed and which haven't. This is very useful in making sure all possible code paths are tested, and if a code path isn't hit at all it gives you a good indication that you've got a 'dead' patch of code that you might want to look more closely at. Sadly such tools are generally very expensive.
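      As an aside, gcc's bundled gcov does a basic version of the same thing for free; a minimal sketch (the program itself is made up):

      /* Build with coverage instrumentation, run, then ask gcov which lines ran:
       *   gcc -fprofile-arcs -ftest-coverage demo.c -o demo
       *   ./demo
       *   gcov demo.c      (writes demo.c.gcov with per-line execution counts)
       */
      #include <stdio.h>

      static void rarely_called(void)
      {
          printf("never reached in a normal run\n");   /* shows up with count 0 */
      }

      int main(int argc, char **argv)
      {
          if (argc > 10)           /* not taken in a normal run ...             */
              rarely_called();     /* ... so gcov reports these lines unexecuted */
          printf("done\n");
          return 0;
      }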

    • I'd like to see more about how programmers stuff interfaces with more and more useless crap, and how to avoid it.

      Well, identifying it as useless crap is a good first step.

      And for managers:

      while (economy.isDown())
          if (newInterface.isUselessCrap())
          {
              fireEmployee();
              hireNextTelecomRefugee();
          }

      If you want a serious answer, one reason programs get filled with so much useless crap is because 80% of programmers program so they can collect a paycheck. They don't give a flying fuck if their code is good or not. That was a big eye-opener for me when I first got out of school. I couldn't believe how many people just didn't care.

      If you are interested at all in not muddying the interface, you are most of the way there. Give it some thought, consult with your peers, and try to learn from mistakes.

      Don't be afraid to refactor code every so often, because, schedule or no schedule, new requirements move the 'ideal' design away from what you drew up last month. That's (to my mind) the second largest contributor. Even good coders crumble to cost and schedule, and band-aid code that just plain needs to be rethought. In some environments, that's a fact of life. In others you will have to fight for it, but you can get code rewritten.
      • by sbszine ( 633428 ) on Thursday January 23, 2003 @12:31AM (#5141535) Journal

        Don't be afraid to refactor code every so often, because, schedule or no schedule, new requirements move the 'ideal' design away from what you drew up last month. That's (to my mind) the second largest contributor. Even good coders crumble to cost and schedule, and band-aid code that just plain needs to be rethought. In some environments, that's a fact of life. In others you will have to fight for it, but you can get code rewritten.

        In my experience, programming for an employer is the process of secretly introducing quality. This usually consists of debugging and refactoring on the sly while your pointy-haired boss thinks you're adding 'features'.

        Is it just me, or is this the way it's done most places?

        • by ConceptJunkie ( 24823 ) on Thursday January 23, 2003 @10:02AM (#5142999) Homepage Journal
          Sometimes it's worse. I quit a job after 15 months, because I was constantly getting in trouble for trying to fix the cause of the problems rather than the symptoms.

          I was working on a 300,000-line Windows application, and I am not exaggerating here, it was about 5/6 redundant code. 100-line functions would be copy-and-pasted a dozen times and two lines changed in each. Plus, there were numerous executables to this project and often the same code (with minor variations of course) would exist in different executables.

          It was originally written in Borland C++ and the back-end was ported to Visual C++, but all the utility and support functions still existed in _both_ the Borland code and Microsoft code. Worse, they were not identical. Even worse, there was substantial use of STL, which doesn't work the same in Borland C++ (which is an ancient version... circa 1996) and Visual C++.

          That, and the fact that using strcpy would have been a step up in maintaining buffer integrity: usually they just copied buffers, and if one #define was different from a dozen others in completely different places, memory would be toast, and we all know how that manifests.

          Worse, there was UI code written by someone who completely confused parent windows and base classes, such that the child window classes had all the data elements for the parent window, because they were derived from the parent window class!

          I spent an entire week once reviewing every single memcpy, etc., in the entire codebase (which was spaghetti code in the extreme) just to eliminate all the buffer overruns I was discovering. The program used a database with about 100 tables (a nightmare of redundancy in itself), and there was a several-thousand-line include file of all the field sizes, maintained by hand (with plenty of typos). Eventually, I wrote a util to generate that include file automatically, which of course wasn't appreciated.

          I was trying to overcome these difficulties while being barraged with support calls, sometimes critical because this was software to manage access control systems for buildings, meaning I spent 80% of my time fighting fires. You know, situations like: "Our system is down... you need to fix this bug because we've had to hire guards for each door until we can get it back up again."
          There was only one other person working with me, and he quit in disgust and was not replaced for about 3 months.

          Finally, after struggling mightily (in time, effort, and at the expense of emotional well-being) to overcome the sheer incompetence put into this project (parts of which were 10 years old), I gave notice after it looked like my unscrupulous boss (who wrote a lot of this code) was doing everything he could to make it look like my fault to the client (even though they knew better and really liked working with me, precisely because I was not a BS artist)... and after 15 years of never needing more than two weeks to find good work, I have been unemployed since May 2002.

          There's a moral here, but it escapes me.

    • On the contrary. This paper is most practical.
      Just use this mechanism and you can find thousands of errors in an already tested system.
      What impressed me most was something like this.
      if (complex statement);
          do=that;

      Notice the semicolon! These kinds of errors are very hard to spot, and they can stay in the code forever.
      I will propose using a code-checker like this in our software to improve the quality.
    • I have only one piece of advice I give in answer to this kind of question. If you observe something in an application (and this applies to big ones more than little ones, but it is worthwhile for all software) that you cannot explain, it is a problem, almost certainly a bug, and at the very least a problem waiting to bite you in the ass.

      We spent almost two years with software that was live, and on about three occasions during that time we observed a problem that was moderately serious (critical but for the redundancy of the architecture). Eventually we found the problem, a design flaw/bug, but the evidence for it was there for all to see a year before the system went live, even in testing: an apparently benign difference in counters between different instances of the same component that none of us could adequately explain, since they should all have been the same. All the other counters, meanwhile, were identical (those other counters were of a coarser grain and counting different things). If we had found the cause of the difference at the time, we would have found the flaw at the time (guaranteed, since it was obvious once one knew the source of the deviation).

      That one is the best example I can think of to demonstrate the "if you can't explain it, it's broken" aspect of software behaviour.
    • "Unfortunately, this paper doesn't really offer any practical advice."

      It looks like it's intended for the people who program GCC, perl, kaffe, etc., as they can use this information to build better checking into their respective compilers, rather than for programmers.
  • by chewtoy-11 ( 448560 ) on Wednesday January 22, 2003 @11:33PM (#5141249)
    Writing repetitive code only once offers the same benefits as using Cascading Style Sheets for your webpages. If there is a serious error, you only have to track it down in the one place where it exists versus every single place you re-wrote the code. Also, it makes adding features much simpler as well. I'm an old school procedural programmer that is making the rocky transition to OOP programming. THIS is where it starts coming together...
    • I'm an old school procedural programmer that is making the rocky transition to OOP programming

      I agree whole-heartedly with what you say about code re-use. However I wouldn't see this as being a feature solely of OOP. Get the design right, and you can have some equally tight, highly-reusable procedural code.

    • by ArthurDent ( 11309 ) <meaninglessvanity@gmail. c o m> on Thursday January 23, 2003 @08:29AM (#5142451) Homepage Journal
      While OOP can be one method of solving repetitive code, good design is always the best way to solve it. What I've found is *any* time you're tempted to use the cut and paste functions within code, you need to ask yourself: Is there a common function that I can factor out to make only one opportunity for errors rather than two?

      You'll have more functions and the code might be a little harder to follow for the unfamiliar, but it will be much easier to debug if there is only one function that does a particular task.

      Ben
  • by kruetz ( 642175 ) on Wednesday January 22, 2003 @11:34PM (#5141250) Journal
    They also found that:

    Russian errors cause code
    Incorrect code causes errors
    Missing code causes errors
    Untested code causes errors
    Redundant codec causes redundancies
    Driver code causes headaches
    C code causes buffer overflows
    Java code causes exceptions
    Perl code causes illiteracy
    Solaris code causes rashes
    Novell code causes panic attacks
    Slashdot code causes multiple reposts
    Slashdot articles cause poor-quality posts
    Microsoft code causes exploits
    Apple code causes user cults
    Uncommented code causes code rage
    RIAA code causes computers to stop functioning
    (Poor idea causes long, desperate post)
  • lint is horrible (Score:3, Insightful)

    by Anonymous Hack ( 637833 ) on Wednesday January 22, 2003 @11:35PM (#5141259)

    It really is. It's a redundant holdover from ye old BSD versions. Granted, there are one or two times I've used it when -Wall -pedantic -Werror -Wfor-fuck's-sake-find-my-bug-already doesn't work, but a lot of the time it comes up with a LOT of complaints that are really unnecessary. Am I really going to have to step through tens of thousands of lines of code casting the return of every void function to (void)? Come on.


    • Why the hell would a void function have a return to begin with?
      • Perhaps he just ends with a return :)
      • I think the fellow was talking about casting the return of calls like printf; they return a value, but most of the time you are not interested in it.
      • by Anonymous Hack ( 637833 ) on Thursday January 23, 2003 @12:06AM (#5141421)

        Lint? Lint complains if you call a function that returns an int and you ignore the int. This is particularly irritating in the case of strcpy() and similar functions, where you would normally do:

        strcpy(buf, "hello");

        except you're supposed to do:

        (void) strcpy(buf, "hello");

        or...

        buf = strcpy(buf, "hello");

        And that's just the beginning...

    • by RhettLivingston ( 544140 ) on Wednesday January 22, 2003 @11:50PM (#5141349) Journal
      I enforced a policy of eliminating all Lint info messages on a 1.5 million line, from-scratch project. And I do mean from scratch: we wrote our own operating system, ANSI C library, and drivers, and ran it on hardware that we designed and produced. In the first 2 years of deployment, only five bugs were reported. Lint was only part of the reason, but it was a total part.
    • Re:lint is horrible (Score:3, Informative)

      by Ed Avis ( 5917 )
      Check out Splint [splint.org] (formerly LCLint). Whereas traditional lint is less needed now that compilers have -W switches, splint has a whole bunch of extra stuff which gcc won't warn about. The only trouble with it is that by default it wants you to add annotations to your program to help it be checked (for example, if a function parameter is a pointer, you can annotate whether it is allowed to be null). If you go along with that, then splint can give lots of help in finding places where null pointer dereferences could happen, and other bugs. But even if you don't want to annotate and you use the less strict checking, it's still a handy tool. (OK, maybe C99 has some of this stuff too, but splint has more.)
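      A minimal sketch of the kind of annotation meant here (real splint syntax; the function itself is invented):

      #include <stddef.h>
      #include <string.h>

      /* Marking the parameter as possibly-null makes splint verify that every
       * dereference of it is guarded. */
      size_t name_length(/*@null@*/ const char *name)
      {
          if (name == NULL)    /* drop this check and splint reports a possible
                                  null dereference at the strlen() call */
              return 0;
          return strlen(name);
      }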
    • by pmz ( 462998 ) on Thursday January 23, 2003 @11:51AM (#5143757) Homepage
      lint is horrible

      No, it isn't. I took a legacy application that I began maintaining and used lint to eliminate hundreds of lines of code and several real never-before-detected bugs. It also encouraged me to remove dozens of implicit declarations and redundant "extern" statements in favor of real header files. The application really is better for it, and to do this work without lint would have been very very tedious. Granted, my experience is with Sun's compiler's lint, so I can't say whether other implementations are as good.

      ...it comes up with a LOT of complaints that are really unnecessary.

      Actually, all of lint's complaints are about a potential problem. You just have to decide what is worth the time to fix.

      Using lint is a deliberate process that should take several days or weeks for a large application (on the first time through). After that initial investment, using lint is still an important part of the ongoing health of the program, but it should become less and less of an effort each time.
  • Additional support was provided by DARPA under contract MDA904-98-C-A933 [darpa.mil]

    Must be for that new lean, mean killing machine they've been asking for.
  • Redundant code means the coder wasn't thinking. Hence more bugs.
  • Three letters: NIH.

    Now, if you'll excuse me, I've got to get back to my text editor project.

  • by Amsterdam Vallon ( 639622 ) <amsterdamvallon2003@yahoo.com> on Wednesday January 22, 2003 @11:39PM (#5141293) Homepage
    Isn't this the job of that smart dude down the hall who runs Lunix computers and reads some Slash Period website or something?

    Well, at least that's how I finish all my projects.
  • by 1nv4d3r ( 642775 ) on Wednesday January 22, 2003 @11:46PM (#5141329)
    I've seen this (they were fired the next month):

    // and now to boost my LOC/Day performance...
    x += 0;
    x += 0;
    x += 0;
    x += 0;
    x += 0;
    x += 0;

    It actually caused a bug 'cuz they accidentally left the '+' off one of the lines. What an idiot.

  • Redundancy? (Score:3, Funny)

    by kubrick ( 27291 ) on Wednesday January 22, 2003 @11:48PM (#5141341)
    Ummm... surely if any story should be duped in the near future, it's this one. Please submit story suggestions accordingly.

  • Saw his talk at FSE (Score:5, Interesting)

    by owenomalley ( 103963 ) <omalley&apache,org> on Wednesday January 22, 2003 @11:53PM (#5141362) Homepage
    I saw Dawson's talk at FSE (Foundations of Software Engineering). He uses static flow analysis to find problems in the code (like an advanced form of pclint). The most interesting part of his tool is the ranking of the problem reports. He has developed a couple of heuristics that sort the problems by order of importance, and they supposedly do a very good job. Static analysis tools find most of their problems in rarely run code, such as error handlers. Such problems are troublesome and sometimes non-deterministic, which makes them extremely hard to find with standard testing and debugging. (This is especially true when the program under consideration is a kernel.) Dawson also verifies configurations of the kernel that no one would actually compile, because he tries to include as many drivers at the same time as he can. The more code, the better the consistency checks do at finding problems.

    By making assumptions about the program and checking the consistency of the program, his tool finds lots of problems. For instance, assume there is a function named foo that takes a pointer argument. His tool will notice how many of the callers of foo treat the parameter as freed versus how many treat the parameter as unfreed. The bigger the ratio, the more likely the 'bad' callers are to represent a bug. It doesn't really matter which view is correct. If the programmer is treating the parameter inconsistently, it is very likely a bug.
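    A minimal sketch of the kind of disagreement being scored (all names invented):

    #include <stdlib.h>

    /* Hypothetical API: release_buf() frees its argument. */
    static void release_buf(char *buf)
    {
        free(buf);
    }

    void caller_a(char *buf)
    {
        release_buf(buf);     /* treats buf as freed: never touches it again */
    }

    void caller_b(char *buf)
    {
        release_buf(buf);
        buf[0] = '\0';        /* treats buf as still live: if the other callers
                                 are right this is a use-after-free, and the
                                 disagreement itself is what gets ranked as a
                                 likely bug */
    }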

    He also mentioned that, counter to his expectations, the most useful part of his tool was finding 'local' bugs. By local, I mean bugs confined to a single procedure. They are easier for the tool to find, more likely to actually be bugs, and much easier for the programmer to verify as being, in fact, bugs.

    He analyzed a couple of the 2.2.x and 2.4.x versions of the kernel and found hundreds of bugs. Some of them were fixed promptly. Others were fixed slowly. Some were fixed by removing the code (almost always a device driver) from the kernel. For others, he couldn't find anyone who cared about the bug enough to fix it. He was surprised at the amount of abandonware in the Linux kernel.
    It is extremely frustrating that Dawson won't release his tool to other researchers (or even better to the open source community at large). Without letting other people run his tool (or even better modify it), his research ultimately does little good other than finding bugs in linux device drivers. *heavy sigh* Oh well, eventually someone WILL reimplement this stuff and release it to the world.

    On a snide note, if he were a company he would no doubt have been bought by Microsoft already. Intrinsa was doing some interesting stuff with static analysis, and now, after they were bought a couple of years ago, their tool is only available inside of Microsoft. *sigh*
    • It is extremely frustrating that Dawson won't release his tool to other researchers (or even better to the open source community at large).

      There's probably a bunch of reasons why he hasn't done this. The most likely one is that he's using it as a research tool, and he doesn't want someone else to beat him to the punch in his research. A second is that it's probably not really in a fit state for sharing as yet (the tool is not the goal of the research, after all).

      He's got a bunch of papers up describing how the tool works, so it can be reimplemented. Also, if he's like most academics, he'll probably talk your ear off if you ask him how it works. :)
      • by CH-BuG ( 55283 )
        A second is that it's probably not really in a fit state for sharing as yet (the tool is not the goal of the research, after all).

        This is not a valid reason; I know a lot of people (including me) who would be happy to improve his research code into something useful for the community at large. I remember a paper describing a gcc extension to write semantic checks (for instance, re-enabling interrupts after disabling them). This program found an amazing number of bugs in the Linux kernel. I really wish I could have something like that at hand!
  • I suppose they've hit my pet peeve. I've seen many simple problems turned into hideous monstrosities with many bugs by people trying to handle bugs that can't ever happen and imaginary special cases, because they were never taught how to abstract a function. Perhaps it can't be taught. In 20+ years of programming, it's been a very rare time when I've picked up code and not been able to cut out large chunks without replacing them.
  • Until i realized that at 164% zoom all the equals signs looked like minus signs. I spent several minutes trying to figure out how that code could compile at all before i thought to turn the zoom up to 200% :)

    Interestingly enough, 163% zoom doesn't cause the problem, nor does 165%. After a bit of experimenting i couldn't find any other isolated case that had the same results. There's a sudden transition to illegibility at 131%, but everything below that is also illegible. 164% is just odd, strange that that happened to be picked as the default when i opened it.

  • Is that why there are so many double postings of articles on Slashdot? Trying to use redundancies to find errors?
  • At the risk of being modded redundant (hah!) I have to point out that a correlation between sloppy coding and errors does, in fact, exist. Many of us who write software have suspected this for a long time, and it is good to know that our hypothesis is supported by concrete research from the academic community, who seem to have finally proven that "redundancies seemed to flag confused or poor programmers who were prone to other error types."

    Hopefully, we can expect much more of such valuable breakthroughs from the academic community in the future, complete with papers full of badly formatted C code!

  • ...but seriously tho, I've always found that it's best if you go out of your way to make sure that code is duplicated as little as possible. Sometimes it takes some major refactoring to move a method when you discover that it's needed someplace else, but it's almost always worth it in the time saved testing, debugging and keeping the methods in sync.
  • Linux for research (Score:2, Informative)

    by Champaign ( 307086 )
    Using Linux for academic research is hardly a new idea. In my group alone one of the profs has been publishing papers and giving talks about research using Linux since 2000.

    An example of such is http://plg.uwaterloo.ca/~migod/papers/evolution.pdf [uwaterloo.ca] - about the evolution of Linux
  • heed all warnings (Score:3, Insightful)

    by fermion ( 181285 ) on Thursday January 23, 2003 @12:36AM (#5141556) Homepage Journal
    There are two practical upshots of this that I use in my own code. First, it is best to treat all warnings as bugs. Warnings are an indication that the compiler or programmer can get confused with the code. Neither is a good situation. Code that generates warnings should be rewritten in a more understandable manner. Some would say this stifles creativity. This may be true, but we can't all be James Joyce, and, as much as we may like to read his work, few of us would enjoy wading through such creative code.

    Second, use standard idioms. For some, that may mean learning the standard idioms. These should become second nature. Programmers should express their creativity in the logic, structure, and simplicity of the code, not in non-standard grammar. Standard forms allow more accurate coding and easier maintenance.
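    One concrete example of both points, assuming gcc-style diagnostics (the input function is a made-up stub):

    #include <stdio.h>

    /* Stub input source so the example stands alone. */
    static int get_next(void)
    {
        static int remaining = 3;
        return remaining > 0 ? remaining-- : 0;
    }

    int main(void)
    {
        int c;

        /* "while (c = get_next())" compiles but draws the classic warning
         * "suggest parentheses around assignment used as truth value".
         * The standard idiom below states the intent unambiguously. */
        while ((c = get_next()) != 0)
            printf("%d\n", c);
        return 0;
    }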

  • ...What's been happening to Slashdot's servers for the past 3 hours? Did Kuro5hin take them down?
  • Smatch (Score:4, Interesting)

    by Error27 ( 100234 ) <error27@g[ ]l.com ['mai' in gap]> on Thursday January 23, 2003 @03:00AM (#5141595) Homepage Journal
    The Stanford Checker is great. I was blown away when I read their papers last year. Their checker is not released yet, so I wrote a similar checker (smatch.sf.net [sf.net]) based on their publications.

    The poster mentions Lint, but I did not have any success using Lint on the kernel sources. The source is too unusual.

    Also, Lint does not look for application-specific bugs. For example, in the Linux kernel you are not supposed to call copy_to_user() with spinlocks held. It took me about 10 minutes to modify one of my existing scripts [sourceforge.net] to check for that in the Linux kernel last Monday. It found 16 errors. (It should have found more, but I was being lazy.)
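    A minimal sketch of the pattern that check looks for (2.4-era kernel APIs; the function and variables around them are invented, and this is deliberately the buggy form):

    #include <linux/spinlock.h>
    #include <linux/errno.h>
    #include <asm/uaccess.h>

    static spinlock_t state_lock = SPIN_LOCK_UNLOCKED;
    static int state;

    int read_state(int *user_ptr)
    {
        int err = 0;

        spin_lock(&state_lock);
        /* BAD: copy_to_user() can sleep (it may fault in a userspace page),
         * so calling it with a spinlock held is exactly what the script flags.
         * The usual fix: copy into a local, drop the lock, then copy out. */
        if (copy_to_user(user_ptr, &state, sizeof(state)))
            err = -EFAULT;
        spin_unlock(&state_lock);

        return err;
    }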

    A lot of the time, you can't tell what will be a common error until you try looking for it. One funny thing was that there were many places where people had code after a return statement. On the other hand, I didn't find even one place where '=' and '==' were backwards.

    It's fascinating playing around with this stuff. I have been learning a lot about the kernel through playing around with Smatch.
    • Re:Smatch (Score:4, Interesting)

      by JohnFluxx ( 413620 ) on Thursday January 23, 2003 @03:39AM (#5141686)
      Did you get the bugs that you found fixed?
      It wasn't clear whether you submitted them.

      Btw, I'm mulling over the idea of writing a write-time checker that would do a lot of this sort of stuff as you are coding. That way your favourite editor can underline errors (like a spell checker does).
      One of the things I was most interested in doing was number-ranges. Basically if you have a for-loop that loops x between 0 and say 10, then inside that for-loop you know x=[0-10]. Then you can check if you access an array outside of those bounds.
      Do you have any idea how useful this would be? Or any ideas if it has been done, or anything?
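      A minimal sketch of the kind of bug such range tracking could prove (made-up code, deliberately out of bounds):

      #include <stdio.h>

      int main(void)
      {
          int counts[10] = {0};
          int i;

          for (i = 0; i < 16; i++)   /* the checker infers i is in [0,15] ...   */
              counts[i]++;           /* ... so this index can exceed 9: overflow */

          printf("%d\n", counts[0]);
          return 0;
      }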

      It is an area that really interests me, but that I have no knowledge about :)

      JohnFlux
  • by mrflip ( 11712 ) <flip@m r f l i p . com> on Thursday January 23, 2003 @03:21AM (#5141641) Homepage
    This made me immediately think of the 'redundant/unused code' conundrum in biology (Sorting through DNA [slashdot.org]). Surprisingly little of the DNA string seems to be 'active' code, where active means 'codes for a gene.' Most of it either has no use, or has uses that are not clear to us now. One pedestrian use for this 'dead' code is simply to separate the genes logically and spatially; this reduces the probability of simultaneous defects in multiple genes.

    DNA code also has high redundancy, which allows error-correcting transcription and other hacks ( see Parity Code And DNA [slashdot.org] or DNA's Error Detecting Code [slashdot.org])

    In both cases factors yielding robust DNA code are found to indicate bad digital computer code.

    flip

    (background: Ars Technica's Computational DNA primer [arstechnica.com])

  • by rufusdufus ( 450462 ) on Thursday January 23, 2003 @03:54AM (#5141719)
    So many people have made silly comments about this being obvious, useless or whatever. This is probably because they did not actually READ the paper.

    The paper is not about obvious code redundancy bugs, it is about subtle errors which are not as simple as just duplicate code. It is about code that *appears* to be executed but actually is not.

    Go take a look at the examples and see how long it takes you to notice the different errors... now imagine having a thousand pages of code to peruse: would you catch it? For many of them, probably not.

    The conclusion of the paper is basically that errors cluster around errors; finding trivial, suboptimal syntactic constructions tends to point to real bugs.

    Where there's smoke, there's fire.

  • This is widely known:

    1. Every program contains bugs.
    2. Every program contains redundancies and so can be made smaller without changing behavior.
    Therefore the empty program is redundant but still buggy.
  • CQual [berkeley.edu]

    It's been used to find security holes.
  • by Skuto ( 171945 ) on Thursday January 23, 2003 @04:58AM (#5141856) Homepage
    They made fools out of themselves with this one:

    if (!cam || !cam->ops)
        return -ENODEV;    /* make this _really_ smp-safe */

    if (down_interruptible(&cam->busy_lock))
        return -EINTR;

    if (!cam || !cam->ops)
        return -ENODEV;

    Their comment: 'We believe this could be indication of a novice programmer...blabla...shows poor grasp of the code'.

    BZZZZZZZZZT

    Nice try kids, but unlike you, this piece of code was probably written by an experienced guy that has actually written code for parallel systems before. Since it's tricky, you would be excused if not for the 'novice programmer' comment above and the fact that the code itself says it's there for SMP safety.

    Here's a hint: UNTIL you acquire the lock on 'cam', any other process can change the value, including at the point BETWEEN the first check and the acquisition of the lock.

    --
    GCP
    • Gee, reread yourself... Sorry but they're 100% right. If they could acquire the lock, cam and cam->ops cannot be NULL due to the first check.

      BTW, in 2.4.18, the code now looks like this for this same function:

      struct cam_data *cam = dev->priv;
      int retval;

      if (!cam || !cam->ops)
          return -ENODEV;

      DBG("cpia_mmap: %ld\n", size);

      if (size > FRAME_NUM*CPIA_MAX_FRAME_SIZE)
          return -EINVAL;

      /* REDUNDANT! */
      if (!cam || !cam->ops)
          return -ENODEV;

      /* make this _really_ smp-safe */
      if (down_interruptible(&cam->busy_lock))
          return -EINTR;

    • unlike you, this piece of code was probably written by an experienced guy that has actually written code for parallel systems before.

      I suggest you check out Dawson Engler's resume; he has almost certainly done 10x more parallel-systems development than you have. This particular code example might be a bad one, because the analysis that supports the author's conclusion [slashdot.org] is omitted from the article, but the basic point is still valid: code that contains duplicate condition checks like those in the example is more likely to contain real bugs than less duplicative code, and the "low-hanging fruit" can be identified automatically. It's not hard at all to see how deeper analysis, different rules, or annotation could do a better job of weeding out false duplicates without compromising the tool's ability to flag legitimate areas of concern.

      You're arguing about low-level implementation, when the author was trying to make a point about a high-level principle. That's the hallmark of an insecure junior programmer.

    • by pclminion ( 145572 ) on Thursday January 23, 2003 @11:08AM (#5143413)
      You're one of those guys who thinks that anyone who doesn't grasp precisely all the different technical fields you do must be a fool.

      These researchers obviously have a good hold on compiler technology, since they implemented their checkers with xgcc. They also seem to understand logic quite well, since their code uses and extends on gcc's control-flow analysis algorithms. And they do, actually, understand what's going on here.

      As for your particular example, the check really is redundant, but it was almost definitely intentional. It's true that another processor could change the cam variable between the first check and the lock -- but taking the first check out would have no impact on the functionality or correctness of the code. It's just a performance enhancement so that the routine can exit early in the error case, without the overhead of locking the lock. Removing the bit of redundant code would just add a little overhead to the error case.

      In short, their checker found a true redundancy. They may have not realized its purpose since they don't have specific experience with this kind of parallel programming, but it's a redundancy. If you had actually read the paper instead of merely glancing over it, you would have seen that their checker respects the volatile nature of variables declared as such -- the checker is fully aware that a second thread can change the value between one operation and the other -- and it still figures out that the check is redundant.

      Here's a hint: don't go around claiming people are fools unless you've got some evidence. These guys had hundreds and hundreds of bugs to go through, and expecting them to perfectly analyze every last one of them is unfair.

      Oh, and -10 points for using "BZZZZZZT".

  • Focusing on Linux has a certain appeal, but it causes the article to miss out on a powerful tool available to Java developers: AspectJ [eclipse.org]. AspectJ (and other aspect-oriented tools - see [aosd.net].) lets you reorganize object-oriented code, establishing uniformity where you need it, greatly compacting your code base, and eliminating redundancy.
  • Error in paper (Score:4, Insightful)

    by CaptainCarrot ( 84625 ) on Thursday January 23, 2003 @05:35AM (#5141928)
    In the spirit of open-source style community debugging, I'd like to point out that one of their examples doesn't illustrate what they say it illustrates. It's the discussion of Figure 10 in Section 5, "Redundant Conditionals". For those who just can't bring themselves to RTFA, I reproduce the figure here. Pseudocode and editorial comments are as in the paper.

    /* 2.1.1/drivers/net/wan/sbni.c:sbni_ioctl */
    slave = dev_get_by_name(tmpstr);
    if (!(slave && slave->flags & IFF_UP && dev->flags & IFF_UP))
    {
        ... /* print some error message, back out */
        return -EINVAL;
    }
    if (slave) { ... }
    /* BUG: !slave is impossible */
    else {
        ... /* print some error message */
        return -ENOENT;
    }

    This is characterized in the caption as an example of "overly cautious programming style" for the redundant "else" in the "if (slave)..." statement. The text goes on to mention that these are "different error codes for essentially the same error, indicating a possibly confused programmer."

    The mistake the authors make here is that the two error codes are intended to flag rather different conditions, which the caller of this function may well be set up to handle in different ways. I'm not familiar with Linux device drivers, so I'm making some guesses here about the purpose of what various components of this code are doing. dev_get_by_name appears to be looking up the name of a device in a table and returning a pointer to a structure containing information about that device. Clearly, ENOENT is intended to indicate that no device of the name supplied was found in the table, and EINVAL was supposed to indicate that a device was found, but that there was some condition that invalidated the information.

    I don't think this programmer was confused, but he *was* sloppy, likely rushed, and trying to add a feature in the fewest lines possible. The probable scenario was as follows: The check for (slave) being non-NULL was in place. Then a programmer comes along to add the checks against the IFF_UP mask. (IFF_UP: I Find Fuck-UPs?) The code that's executed if "slave" is non-NULL *shouldn't* be executed if this check finds a problem, so he puts the check first. But he doesn't want to dereference a NULL pointer, so he does the reflexive C thing and places a check for the nullity of "slave" at the beginning of the logical expression. (The first term in an "&&" operation is evaluated first, and if it's false the rest of the expression is ignored.) He simply failed to notice the effect on the "else" clause following "if (slave)". Or perhaps he didn't see it; the author of the paper cut out all the code there, and the "else" could have been many lines down the page.

    What the programmer should have written was:

    if (slave) {
        if (!(slave->flags & IFF_UP && dev->flags & IFF_UP)) {
            /* print some error message, back out */
            return -EINVAL;
        }
        /* omitted code goes here */
    }
    else {
        /* print some error message */
        return -ENOENT;
    }

    Perhaps this makes it clearer that the function was in fact trying to check for two very distinct error conditions here.
  • by AlecC ( 512609 ) <aleccawley@gmail.com> on Thursday January 23, 2003 @05:55AM (#5141973)
    Also, in addition to its obvious practical uses, Linux provides a huge open codebase useful for researchers investigating questions about software engineering.

    And confirms the "many eyes make few bugs" feature of Open Source.

    A pity these guys won't release their tester for general use. However, according to other posters, there are similar tools which are Open Source. Would it be worthwhile setting up a facility on Sourceforge to run such tools automatically on the compile farm? Obviously, Project owners would have to sign up for the service, but the habit of regularly running checkers is surely to be encouraged.

    It would probably only work for people like me who treat all warnings as serious. I believe in the "Quiet, dark cockpit" as Airbus put it - if the normal state is no messages but all messages are enabled, you hear of problems much earlier in the development cycle.

  • It sounds like a cliché, but I think this is a useful one, even for beginning programmers.

    If a programmer learns to be extremely picky about redundancy, and structures most of his code with the goal to be non-redundant, he will automatically avoid the great majority of programming problems. He will have highly maintainable code, as his code will be perfectly factored (as the XP folk would say), and he will get a very decent bottom up design without even having to think about it (perfectly factored code often entails good abstraction, and abstraction at the right level).

    It isn't a panacea but I sincerely believe it is one of the most important things for any programmer. Most programmers will claim that avoiding redundancy is "obvious", but very few apply this rule consistently and thoroughly.

  • Couldn't you also search for seemingly copy-and-pasted code blocks as a sign of errors (or at least of code that needs refactoring)?
  • Dead code (Score:2, Insightful)

    by xyote ( 598794 )
    Dead code is most likely found in really old code that has been modified many many times. Does really old code that has been modified many many times have lots of bugs? Quite likely.


    Is this dead code going to get removed?


    No.


    Why not?


    Because, one, it's only an opinion that it's dead code. There could be some obscure case that no one imagined that could use it. Two, if some programmer removed it and it turned out that it was needed or the programmer screwed up the removal, the programmer would be blamed and take a lot of grief for it. If it ain't broke, don't fix it.


    Now, it could be that the dead code doesn't work properly for the obscure case. But how could you tell? Do you want to write a test case for code that no one can figure out how it gets invoked?

  • by Paul Lamere ( 21149 ) on Thursday January 23, 2003 @07:49AM (#5142296) Homepage Journal
    Try this: pmd [sourceforge.net]
  • by wowbagger ( 69688 ) on Thursday January 23, 2003 @08:06AM (#5142354) Homepage Journal
    I have a saying:
    If a line of code doesn't exist, then it cannot contain a bug.


    Like most aphorisms, you can argue this, but my point is this: every line of code in a program is a potential bug. Every line of code requires a bit more grey matter to process, making your code just that much more difficult to understand, debug, and maintain.

    So I ruthlessly remove dead code. Often, I'll see big blocks like this:


    #ifdef old_way_that_doesnt_work_well
    blah;
    blah;
    blah;
    #endif


    And I will summarily remove them. "But they were there for archival purposes - to show what was going on" some will say. Bullshit! If you want to say what didn't work, describe it in a comment. As for preserving the code itself - that is what CVS is for!

    By stripping the code down to the minimum number of lines, it compiles faster, it checks out of and in to CVS faster, and it is easier to understand and maintain.

    I will often see the following in C++ code:


    void foo_bar(int unused1, int unused2)
    {
        unused1 = unused1; // silence compiler warning
        unused2 = unused2; // silence compiler warning
    }


    And I will recode it thus:

    void foo_bar(int, int)
    {
    }


    That silences the "unused variable" warning, and makes it DAMN clear in the prototype that the function will never use those parameters. (True, you cannot do this in C.)

    Code should be a lean, mean state machine with no excess fat. (NOTE: this does NOT mean remove error checking, #assert's, good debugging code, or exception handlers.)
