Automated Linux Error Checking

Caydel writes "In a recent message to the Linux Kernel Mailing List (LKML), Ben Chelf, CTO of Coverity, Inc., announced an internal framework to continually scan open source projects for source defects and provide the results of the analysis back to the developers of those projects. The Linux kernel is one of 32 open source projects monitored by Coverity. Coverity is looking for a few group-nominated maintainers to access the reports, so that the bugs found can be patched before they are announced to the general public. For those not familiar with Coverity, it is a small company out of Stanford that monitors source code correctness through automated static source code analysis."
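To give a feel for what such automated analysis reports, here is a small, hypothetical C example of one classic defect pattern (a pointer dereferenced before its NULL check). The function and field names are invented for illustration and are not taken from the kernel or from Coverity's reports:

#include <stddef.h>

struct request {
    int flags;
    int length;
};

/* A static checker would flag this: 'req' is dereferenced on the first
 * line, so the later NULL test is either dead code or, if req really can
 * be NULL, comes too late to prevent a crash. */
int request_length(struct request *req)
{
    int flags = req->flags;   /* dereference happens here... */

    if (req == NULL)          /* ...NULL check only happens afterwards */
        return -1;

    return (flags & 1) ? req->length : 0;
}

int main(void)
{
    struct request r = { .flags = 1, .length = 42 };
    return request_length(&r);
}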
  • LinuxSoft (Score:3, Funny)

    by ExE122 ( 954104 ) * on Monday March 06, 2006 @12:12PM (#14858412) Homepage Journal

    Linux has caused an error and will be terminated

    • Send Error Report
    • Don't Send
  • ....he makes some polite, reasonable replies [iu.edu] to the answers to his post. Nice to see.
  • static_analysis++ (Score:5, Interesting)

    by tcopeland ( 32225 ) * <tom AT thomasleecopeland DOT com> on Monday March 06, 2006 @12:20PM (#14858512) Homepage
    I've worked on an open source Java code analysis [sf.net] project for the past few years; static analysis can be a very handy tool. Having an automated check for things that aren't even bugs, but are just overly wordy code blocks:
    public boolean foo(int x) {
      if (x>2) {
        return true;
      } else {
        return false;
      }
    }
    is quite helpful. Changing the above code to "return x>2" will save four bytes and will read a bit smoother, too. There are many other examples [blogs.com] of this sort of thing.

    Lots more on all that in my book [pmdapplied.com] - there's a downloadable free chapter there on using static analysis to improve JUnit tests if you want to get a feel for things.

    • Mod parent up. Tom, I just love PMD and have used it in several projects. I've put it in the context menu of JDeveloper, run it on the command line, and used it to report on nightly builds. It's easy and doesn't get in the way with a gazillion useless warnings.
    • by jreiser ( 590600 )
      If it saves 4 bytes then the compiler's optimizer needs some improvement. The code as written has some advantages for debugging of control flow at low optimization levels. [I did quite a lot of such by-hand code improvement--25 years ago.]
      • > the compiler's optimizer needs some improvement.

        Quite true! But until Sun improves the Java compiler, it's nice to have PMD around to catch such things. And anyhow, the readability improvements might make that change worthwhile, regardless of bytecode savings.
  • by jd ( 1658 ) <imipak@ y a hoo.com> on Monday March 06, 2006 @12:33PM (#14858634) Homepage Journal
    Coverity has been scanning the Linux kernel for a while now and has been sending in periodic bug reports. I cannot be sure they are scanning all architectures, though, as I've been trying for some time to get a MIPS64 version of their checker and that has been proving difficult. In consequence, I am unsure how much of the architecture code and architecture-specific drivers are currently being monitored. As unusual architectures don't get much testing under normal conditions, I hope Coverity can clear this up - preferably by guaranteeing that they cover the more obscure parts of the kernel.


    Coverity's software has actually been in use for a considerable time. It was first used under the name of the "Stanford Checker" and made an absolutely staggering difference. I believe that was back in the 2.1.x era, though Linux historians can feel free to correct me on that.


    Because it is not a run-time check, but a compile-time check, it is unclear to me what (if any) tests they have for violation of invariants, probable infinite loops, validating the parameters of functions passed as pointers, and other strictly run-time gremlins.


    Because its documentation does say it's only really good for large programs, it is also unclear how effective it will be for debugging strictly external drivers or small pieces of support code. Many of the libraries and utilities out there for Linux are way too small to be reliably tested with this program. (IIRC, they recommend a minimum of something like half a million lines of code.)


    Although the Linux kernel has been tested, bugs in compilation will render testing worthless. I do not know how extensively either GCC or the binutils package has been checked. I'm not even sure there is anything in binutils big enough to be checked. Presumably GNU libc should also be tested - otherwise it's unclear whether the checker or the compiler can be trusted... and how do you make sure that the absence of errors in libc isn't due to a problem in the checker caused by a bug in libc?


    (Checking the checker is relatively easy. Checking longer loops is a harder problem, as there are more interdependencies.)

    • by iabervon ( 1971 ) on Monday March 06, 2006 @01:47PM (#14859428) Homepage Journal
      Since it's all static checks, it should be able to check architectures other than the one the program runs on without any problem. Remember that this isn't even supposed to find cases where the program behaves consistently but does something that's wrong (like the PCI spec not actually letting you do what you're doing).

      They actually do a lot with violation of invariants: it's looking for cases where it can't prove that the invariant is maintained. Of course, there's always the chance of false positives, where the code is sure to work but only for some tricky reason the checker can't figure out; these are often cases where the code should be made more obvious anyway, because somebody's likely to accidentally change it in ways that don't work if it's subtle. There's also a lot of extra-detailed type checking along particular code paths, so it can identify, for example, what it's safe to turn a particular void * into in a particular function, and then verify (e.g.) that ext3_release_file is only called on struct files where private_data is a struct dir_private_info. (Which is true, because nothing could create a struct file with something else there and with the ext3 fops, or create another set of fops with ext3_release_file.)
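      A rough sketch of that kind of path-sensitive type invariant, using stand-in struct definitions rather than the real kernel headers (the layouts, the helper name, and the free() call are made up for illustration; this is not the actual ext3 code):

      #include <stdlib.h>

      /* Stand-in for ext3's directory-private data. */
      struct dir_private_info { long cookie; };

      /* Stand-in for the kernel's struct file: private_data is an opaque
       * pointer whose real type depends on which file_operations own it. */
      struct file { void *private_data; };

      /* Installed only in ext3's file_operations, so every path that can
       * reach it has private_data set by ext3's own open routine.  The
       * checker's job is to prove the implicit cast below is safe on all
       * of those paths. */
      static int ext3_release_file_sketch(struct file *filp)
      {
          struct dir_private_info *info = filp->private_data;
          free(info);
          filp->private_data = NULL;
          return 0;
      }

      int main(void)
      {
          struct file f = { .private_data = malloc(sizeof(struct dir_private_info)) };
          return ext3_release_file_sketch(&f);
      }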

      Checking external drivers shouldn't be a problem. The reason they need a lot of code is that they're determining invariants from actual usage, and they need the actual usage to cover the possibilities well (e.g., it tries to find a lock that almost always protects a particular bit of data, and then gives errors on violations; if you only have a couple of places that access something, and they don't match, it can't tell which is the real lock and which spots are wrong). External drivers are small, but their behavior can be compared against the rest of the kernel, because they do a lot of calling of functions that the base kernel defines and uses. The hard case is really something like libpng, where the API is wide and there's no way for the program to determine what rules users of the library are told by the documentation to follow. (In this case, it may make sense to check the library combined with a number of common programs and use that to generate the checker's expectations of library usage.)
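      As a user-space illustration of that statistical lock-inference idea (the kernel uses spinlocks, but a pthreads sketch keeps this self-contained; all names here are hypothetical):

      #include <pthread.h>

      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
      static int count;

      /* Most accesses take the lock, so the checker infers the rule
       * "lock protects count"... */
      void increment(void)
      {
          pthread_mutex_lock(&lock);
          count++;
          pthread_mutex_unlock(&lock);
      }

      void reset(void)
      {
          pthread_mutex_lock(&lock);
          count = 0;
          pthread_mutex_unlock(&lock);
      }

      /* ...and flags the statistical outlier: an access to 'count' that
       * skips the lock almost everyone else holds. */
      int read_count(void)
      {
          return count;
      }

      int main(void)
      {
          increment();
          reset();
          return read_count();
      }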

      It's also not particularly useful to check things like gcc, because the checker doesn't have any way to determine if the compiler is compiling correctly or not; it can only tell if it might do crazy things in odd situations. Compilers are generally written sufficiently conservatively that problems tend to be safe-but-does-the-wrong-thing code, and the bugs you'd find with the checker would generally be "internal compiler error" bugs, not ones that lead to incorrect output (or any output at all...)
      • by jd ( 1658 )
        Compilers are generally written sufficiently conservatively that problems tend to be safe-but-does-the-wrong-thing code, and the bugs you'd find with the checker would generally be "internal compiler error" bugs, not ones that lead to incorrect output (or any output at all...)

        Dunno why, but nearly all the errors I've found are of the "internal compiler error" kind. (Why can't I have a NORMAL error for once?) For Fortran code, for example, I'm generally using G95 because gfortran barfs on a lot of the comput

        • The gcc FAQ covers this. An internal compiler error when compiling the kernel almost always means there's something wrong with your computer. In fact, it's considered a great way to stress-test new memory, motherboards, CPUs, etc.
        • That's probably better, really. It's at least easier to debug than having your program misbehave in subtle ways. Or not so subtle ways; I've had gcc generate code that tried to use an instruction incorrectly, causing it to jump to somewhere totally random (GCC 3.3.2 for the AVR trying a tablejump).
  • by iabervon ( 1971 ) on Monday March 06, 2006 @02:20PM (#14859771) Homepage Journal
    Since Linux development is all in the open, even more than most OSS projects, due to using git so extensively, they should be able to check stuff that hasn't yet been merged into the mainline, and therefore report new bugs before they actually affect anyone. For that matter, they'd be able to identify the commit which contains the bug. If they wanted to be really slick, they could watch for the patch being posted to the mailing list and reply inline to the posting with a report, just like human patch-readers do.

    Interestingly, the discussion on the mailing list so far has primarily been complaints that the announcement failed to take into account the fact that Coverity has already been doing this for a while. In fact, the only thing that's new is that they've put together internal infrastructure that lets them also handle other projects conveniently, and they have therefore moved their kernel result info into it. It looks a bit like they sent a form letter aimed at newly scanned projects to the one project they were already scanning, which is clearly a bit of a faux pas.

  • I've generally been a believer in static analysis tools. I remember using lint on quite a bit of code back in the day, and I remember it not only finding basic syntax errors, but also educating me about the code itself and even turning up some bizarre bugs that normal peer reviews would not have found. The code was better for it, even if it took days to really get through all the analysis output and do something about it.
  • It delights me that the wxWidgets framework ( http://wxwidgets.org/ [wxwidgets.org]) also belongs to this group of the 30 top projects. It shows me that concentrating on wxWidgets in my application development guideline project wyoGuide (http://wyoguide.sf.net/ [sf.net]) is the right step, and it gives me confidence that applications written this way will have a usable GUI and superb code quality. It pleases me that my approach is correct and gives me hope that the future of free choice of computer (http://wyoguide.sf.net/papers/Cro [sf.net]
  • Great, now DHS can go back to library policing? Not. DHS could still be involved in something useful with cyber security; maybe they could even look at how good Coverity's process is. Just don't tell me DHS will still spend millions just to duplicate work... now they'll learn what a community really is.
