Coverity Report Finds OSS Bug Density Down Since 2006

eldavojohn writes "In 2008, static analysis company Coverity analyzed security issues in open source applications. Their recent study of 11.5 billion lines of open source code reveals that between 2006 and 2009 static analysis defect density is down in open source. The numbers say that open source defects have dropped from one in 3,333 lines of code to one in 4,000 lines of code. If you enter some basic information, you can get the complimentary report that has more analysis and puts three projects in the top quality tier of the 280 open source projects: Samba, tor, OpenPAM, and Ruby. While Coverity has developed automated error checking for Linux, their static analysis seems to be indifferent toward open source."

  • Re:Three? (Score:1, Insightful)

    by Anonymous Coward on Wednesday September 23, 2009 @02:33PM (#29519271)

    TFA says four.

    So, not only are the /. summaries merely paragraphs copied from the article nowadays, they're paragraphs copied incorrectly.

  • Re:Three? (Score:5, Insightful)

    by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Wednesday September 23, 2009 @02:59PM (#29519627) Journal

    TFA says four.

    So, not only are the /. summaries merely paragraphs copied from the article nowadays, they're paragraphs copied incorrectly.

    So if my summary was "merely paragraphs copied from the article" then where did I get the 1 in 3,333 and 1 in 4,000 numbers from?

    Also, if all I did was copy/paste the article, I'd be plagiarizing and -- not only that -- I would have copy/pasted the correct count of the projects in Rung 3 status. Instead, I skimmed the report and was thinking "Rung 3" when I wrote that sentence, so the three was put in instead of the four. Doesn't make me any less wrong, but I hate anonymous non-constructive criticism that's modded up. I apologize for my human error; obviously the human editor also missed it. Since you're anonymous, I can't assume you're human and beg you to relate to my plight of errors. I'm sure my error made the summary completely unreadable. I'm also certain that you've published hundreds of articles on Slashdot without so much as a single error in any of them.

    You do know that of the submissions I've had recently, almost all have had some flaw or error in them, simply because there's no reward for fact checking and no penalty for getting an error published. So, assuming the summary sells to eyeballs and there's no error large enough to get it rejected, the next thing that matters is timing. I've written submissions that were beaten out by a few minutes and marked "dupe" by the firehose. That pushes me from taking 10-15 minutes to create a summary down to 2-3 minutes. Oh well, the worst penalty is that if I respond to the article (like this) I get modded down by righteous moderators. Doesn't really bother me.

    If the editors aren't catching the errors and I've got no incentive to reduce the errors, do you think they're going to go away?

  • Re:Umm yeah (Score:4, Insightful)

    by Volante3192 ( 953645 ) on Wednesday September 23, 2009 @03:13PM (#29519791)

    If they check 1 line of code every second it would take 133,101.85 days to check 11.5 billion lines of code. At 1000 lines of code every second you are looking at 133.10 days to check that much code. At 4000 lines of code every second (e.g. 4GHz) you are looking at 33.3 days to check that much code.

    And if they were only using one system to do this, I'd imagine that would be a problem. I wonder, though, if you spread the processing across, oh, say, 512 processors, if you could get that time down under a month...
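
    A quick back-of-the-envelope program makes the scale concrete (a sketch of my own; the only inputs are the 11.5 billion-line figure and the scan rates quoted above):

        #include <stdio.h>

        int main(void) {
            const double lines = 11.5e9;                  /* lines analyzed, per the report */
            const double rates[] = {1.0, 1000.0, 4000.0}; /* lines scanned per second */

            for (int i = 0; i < 3; i++) {
                double days = lines / rates[i] / 86400.0; /* 86,400 seconds in a day */
                printf("%6.0f lines/s -> %12.2f days (%8.2f years)\n",
                       rates[i], days, days / 365.25);
            }
            return 0;
        }

    At one line per second the job would run about 364 years, and even that disappears once you split it across a few hundred machines.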

  • Re:Three? (Score:1, Insightful)

    by Anonymous Coward on Wednesday September 23, 2009 @03:27PM (#29520065)

    You have an excuse. Mistakes happen.

    Mistakes like this are why we have editors. The post you replied to was somewhat out of line, though as a general rule I'd say the accusation would have been more accurate than it was in this case. Most submissions ARE copied directly from TFA.

    The real issue is that this was a blatantly obvious, easy-to-catch mistake. We're not talking about to/too or their/there issues that a technically-oriented person might not pick up on at first glance; we're talking about something that the original writer probably would not catch [psychologically, it's hard to spot a mistake you made yourself until enough time has passed for you to forget writing it] but that the editor would, in any professional system, get reamed for missing.

  • Re:Umm yeah (Score:5, Insightful)

    by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Wednesday September 23, 2009 @04:01PM (#29520743) Journal

    A: We know they didn't check the code by hand.

    Of course not, do you know what static code analysis [wikipedia.org] is? I repeatedly said that in the summary.

    B: The methodology didn't classify defects (cosmetic, security, minor, major, etc.)

    From the report, which is linked to in the article and you obviously didn't care to read before criticizing:

    NULL Pointer Dereference
    Resource Leak
    Unintentional Ignored Expressions
    Use Before Test (NULL)
    Use After Free
    Buffer Overflow (statically allocated)
    Unsafe Use of Returned NULL
    Uninitialized Values Read
    Unsafe Use of Returned Negative
    Type and Allocation Size Mismatch
    Buffer Overflow (dynamically allocated)
    Use Before Test (negative)

    They then go on to discuss Function Length and Complexity Metrics.
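
    To make those categories concrete, here is a deliberately buggy C fragment (a made-up illustration, not code from the report) containing three of the listed defect patterns:

        #include <stdlib.h>
        #include <string.h>

        /* Deliberately defective; exists only to show what the checkers flag. */
        void examples(const char *name) {
            char *buf = malloc(16);
            strcpy(buf, name);    /* buffer overflow (dynamically allocated):
                                     no length check against the 16-byte buffer */
            free(buf);
            buf[0] = '\0';        /* use after free: buf was freed on the line above */

            char *p = malloc(32);
            p[0] = '\0';          /* unsafe use of returned NULL: malloc can fail,
                                     so this may dereference a NULL pointer */
            free(p);
        }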

    C: The numbers aren't normalized or broken down by application size.

    I don't understand how this is statistically relevant. The summary I gave reports static code defects per line of code and looks at function length. Of course a project with 4 million lines of code would have more defects than one with 4 thousand lines of code. Defects per line of code is the normalization!
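
    Converting the summary's figures into defects per thousand lines makes the normalization explicit (my arithmetic, not numbers quoted from the report):

        #include <stdio.h>

        int main(void) {
            /* Defect density in defects per 1,000 lines of code (KLOC). */
            printf("2006: %.2f defects/KLOC\n", 1000.0 / 3333.0); /* ~0.30 */
            printf("2009: %.2f defects/KLOC\n", 1000.0 / 4000.0); /*  0.25 */
            return 0;
        }

    Project size divides right out of that ratio, which is the whole point.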

    D: The use of a bug reporting database needs to be measured in regards to a baseline filing/fix % not a total volume (as we need to correlate new lines of code being added)

    Does it make any difference to the end user whether 90% of the project is new lines of code or 9% of the project is new lines of code?

    It reads like something from the Onion.

    You didn't read the report, so you can't really speak to it.

    Dear Lord journalism is dead...

    Says the poster who didn't read or understand the report.

  • Re:Three? (Score:3, Insightful)

    by evanbd ( 210358 ) on Wednesday September 23, 2009 @05:51PM (#29522649)
    We're bitching about the slashdot editors, not you. It's their job to catch submitter mistakes. That is what an editor does. The really annoying thing is they're as likely to "edit" the summary to introduce mistakes as to remove them.
