Coverity Report Finds OSS Bug Density Down Since 2006

eldavojohn writes "In 2008, static analysis company Coverity analyzed security issues in open source applications. Their recent study of 11.5 billion lines of open source code reveals that between 2006 and 2009, static analysis defect density in open source went down. The numbers show that open source defects have dropped from one per 3,333 lines of code to one per 4,000 lines of code. If you enter some basic information, you can get the complimentary report, which has more analysis and puts four of the 280 open source projects in the top tier for quality: Samba, tor, OpenPAM, and Ruby. While Coverity has developed automated error checking for Linux, their static analysis seems to be indifferent toward open source."
  • Fewer but bigger (Score:1, Interesting)

    by Anonymous Coward on Wednesday September 23, 2009 @02:31PM (#29519235)

    Why would Samba and Linux have gotten so unstable over the years, then?

  • by MosesJones ( 55544 ) on Wednesday September 23, 2009 @02:34PM (#29519283) Homepage

    The question, of course, is whether one defect per 4,000 lines is good, average, or bad, and it can't be answered because closed-source companies just aren't going to publish this sort of information.

    So what we can say is that the quality of OSS is trending upwards, but we can't say whether this makes it better than, equivalent to, or worse than its closed-source competitors.

    What are the odds on any of them taking up the challenge?

  • Survivorship bias (Score:5, Interesting)

    by vlm ( 69642 ) on Wednesday September 23, 2009 @03:04PM (#29519689)

    Survivorship bias

    The projects that were alive back then, and still are now, are obviously more mature, and thus would have fewer bugs. Unless you believe in the spontaneous generation of bugs at a constant rate in unchanged code (which, in my experience, is actually not too unbelievable for old C++ compiled by the newest g++, thanks to specification drift).

  • by jc42 ( 318812 ) on Wednesday September 23, 2009 @10:51PM (#29524909) Homepage Journal

    There can be serious methodology problems in many definitions of "bugs", problems that can badly confuse the bug counters.

    An example that I like to use is a project I worked on in the late 1990s. An important part of the package that I delivered included a directory of several hundred C source files, mostly small, with at least one bug in each. The project's leaders got some chuckles out of mentioning this at meetings, commenting that they had no intention of letting me fix any of the bugs, since they were an important contribution to the project. This produced much confusion among the higher ups, who took some time understanding what was going on and how to account for it.

    Some readers might have guessed what my task was: Building a regression-testing suite for the C compiler. The directory in question was for testing the diagnostics in the compiler. Each source file had one or more carefully designed "bugs". The makefile ran the C compiler on each, and sent the stderr output to a validator that verified that the compiler had successfully identified the bug and produced the right error message.
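    The validator described above can be sketched in a few lines. This is a hypothetical reconstruction, not the poster's actual harness: the file names, expected-diagnostic patterns, and the sample stderr line are all invented for illustration, and the real suite drove a C compiler from a makefile rather than feeding it canned output.

```python
import re

# Each deliberately buggy C source file is paired with a regex that the
# compiler's stderr output is expected to match. (Hypothetical entries.)
EXPECTED_DIAGNOSTICS = {
    "missing_semicolon.c": r"expected .?;.?",
    "undeclared_var.c": r"undeclared|not declared",
}

def validate(filename: str, compiler_stderr: str) -> bool:
    """Return True if the compiler flagged the planted bug correctly."""
    pattern = EXPECTED_DIAGNOSTICS[filename]
    return re.search(pattern, compiler_stderr) is not None

# In the real harness, a makefile would compile each file and capture
# stderr; here a canned diagnostic line stands in for that output.
stderr_sample = "missing_semicolon.c:3:14: error: expected ';' before 'return'"
print(validate("missing_semicolon.c", stderr_sample))  # True
```

    The point of the check is exactly the inversion the poster describes: the test fails not when the "bug" is present, but when the compiler stops reporting it correctly.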

    We had a bit of fun confusing people by asking them whether these test files really contained "bugs" or not. According to the C standard, they certainly did. But according to the test procedure, these weren't bugs; they were tools for testing the compiler. If they were "fixed", the test scripts would no longer be able to validate the compiler's error messages.

    The higher-ups did finally understand the value of this, and agreed that although this batch of files was full of "bugs", they shouldn't be counted as such in the bug reports.

    I also sometimes listed my job as the project "bugger". It's always fun to construct new words by stripping prefixes off words that usually have them. But I wasn't sure what term was best for the task of making sure that a routine actually contains the bug that the specs say it should have. "Debugging" doesn't seem right when the job is making sure that the right bug is there.

    (Actually, I mostly thought that the project had a minor management problem, since any competent software development manager should understand the value of making sure that the software's error messages are correct and useful. But we all know how rarely this is actually done well. How often does your compiler point to the right place in the code when it produces an error message? And how often does the message describe the actual error?)

Mathematicians stand on each other's shoulders while computer scientists stand on each other's toes. -- Richard Hamming