Programming Open Source

Code Quality: Open Source vs. Proprietary 139

just_another_sean sends this followup to yesterday's discussion about the quality of open source code compared to proprietary code. Every year, Coverity scans large quantities of code and evaluates it for defects. They've just released their latest report, and the findings were good news for open source. From the article: "The report details the analysis of 750 million lines of open source software code through the Coverity Scan service and commercial usage of the Coverity Development Testing Platform, the largest sample size that the report has studied to date. A few key points: Open source code quality surpasses proprietary code quality in C/C++ projects. Linux continues to be a benchmark for open source quality. C/C++ developers fixed more high-impact defects. Analysis found that developers contributing to open source Java projects are not fixing as many high-impact defects as developers contributing to open source C/C++ projects."
  • Re:Not a surprise (Score:1, Interesting)

    by Anonymous Coward on Wednesday April 16, 2014 @06:33PM (#46774503)

    Only if there are actually enough people looking at the code. OpenSSL proves that there aren't.

  • by mtippett ( 110279 ) on Wednesday April 16, 2014 @06:44PM (#46774589) Homepage

    The report doesn't really go into an important measure.

    What is the defect density of the new code that is being added to these projects?

    Large projects, and old projects in particular, will demonstrate good scores through polishing: cleaning out old defects that are already present. The new code being injected into the project is really where we should be looking. Coverity has the capability to measure this, but it doesn't seem to be reported.

    Next year it would be very interesting to see "new code defect density" as a separate metric. Currently it is "all code defect density," which may not reflect whether open source is *producing* better code; the report only shows that the collection of *existing* code is getting better each year.
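
    To make this distinction concrete, a hedged illustration with invented numbers (Coverity reports defect density as defects per thousand lines of code, KLOC):

        \mathrm{density}_{\mathrm{all}} = \frac{D_{\mathrm{total}}}{\mathrm{KLOC}_{\mathrm{total}}}, \qquad \mathrm{density}_{\mathrm{new}} = \frac{D_{\mathrm{new}}}{\mathrm{KLOC}_{\mathrm{new}}}

    For example, a 1,000 KLOC project with 590 open defects has an all-code density of 0.59. If the 50 KLOC added this year contain 45 of those defects, the new-code density is 0.90: the aggregate number keeps improving even though the code being *produced* is getting worse.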

  • Useless analogy (Score:5, Interesting)

    by Virtucon ( 127420 ) on Wednesday April 16, 2014 @07:08PM (#46774777)

    This is a useless analogy. Code quality is a function of both skill and the stewardship of the team supporting the code. Tools help as well, but you can write elegant, high-quality code regardless of the language chosen. You can also write some real shit. Ultimately, how many defects a piece of software has comes down to the design and testing that go with it.

    Some bodies of work get rigorous testing, and OpenSSL's recent problem wasn't about deficient design; it was about a faulty implementation. Faulty implementations happen all the time, and some bugs just take a while to become known. Even test-driven development and code-analysis tools probably couldn't have found this particular issue, but that it sat in the code base so long without anybody questioning it goes back not only to stewardship by the team, but to the rest of the world using the code. If anything, this situation shows that FOSS can have vulnerabilities just like proprietary software. The advantage is that with FOSS you can get them fixed much more quickly, and because anyone can see the implementation, it can be scrutinized by folks outside the team that develops and maintains it.

    In the case of Heartbleed the system worked. A problem was found and fixed; now it's a matter of rolling out the fix, with regression tests put in place to help ensure it doesn't happen again. Another gaping hole in our privacy was closed, and since the "bad guys" may have stolen data, roll out the fix ASAP. What was actually stolen is a matter of research and conjecture at this point; your guess is as good as mine, and I doubt the bad guys will tell us what they gained by exploiting it. Let's also remember that systems with the bug remain vulnerable until they're patched, so cleanup on aisle 5.

    To be honest, it's a bit naive to assume that FOSS software handling security has no potential vulnerabilities. It's equally naive to assume proprietary code has it licked, given the past year's revelations of NSA spying. Numerous nefarious companies sell vulnerabilities to anybody who can pay, so unless you're the one buying them, you'll probably never know what is exposed until somebody trips over it. Only if those vulnerability-selling companies went out of business could you assume your software is free of the easier-to-exploit vulnerabilities, and governments will always use all their tools to gather intelligence, including subverting standards and paying off companies that can give them access to what they want.

  • Re:heartbleed (Score:5, Interesting)

    by Anonymous Coward on Wednesday April 16, 2014 @07:11PM (#46774809)

    I'm not going to yell about the OpenSSL guys.

    I'm going to be honest here: they deserve yelling at, and I'm an open source fan. The error they made is exactly the same mistake everyone else has made in years past when dealing with SSL: x509 and the SSL protocol demand [lengthofstring][string], "pascal" style. This is how everyone (open and closed source) got hit with that domain-validation bug where the certificate said "(26)bank.com\0.blahblahblah.com". Certificate signers looked at the domain at the end of the string, "blahblahblah.com", and validated it. Client programs treated it like a C string and thought it was a certificate for "bank.com". Not a single person anywhere said "whoa there, null bytes are not part of a valid hostname!"
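
    To make that null-byte bug concrete, here is a hypothetical C sketch (mine, not code from any real TLS stack) of the buggy C-string comparison versus a length-honoring one:

        /* The certificate name is length-prefixed ("pascal" style), but the
           buggy check treats it as a NUL-terminated C string. */
        #include <stdio.h>
        #include <string.h>

        /* Buggy: strcmp stops at the embedded NUL and sees only "bank.com". */
        int matches_buggy(const unsigned char *name, size_t name_len,
                          const char *expected) {
            (void)name_len;                 /* declared length ignored -- the bug */
            return strcmp((const char *)name, expected) == 0;
        }

        /* Correct: honor the declared length and reject embedded NULs. */
        int matches(const unsigned char *name, size_t name_len,
                    const char *expected) {
            if (memchr(name, '\0', name_len) != NULL)
                return 0;                   /* NUL is never valid in a hostname */
            return name_len == strlen(expected) &&
                   memcmp(name, expected, name_len) == 0;
        }

        int main(void) {
            /* 26 bytes: "bank.com\0.blahblahblah.com" */
            const unsigned char evil[] = "bank.com\0.blahblahblah.com";
            printf("buggy:   %d\n", matches_buggy(evil, sizeof(evil) - 1, "bank.com"));
            printf("correct: %d\n", matches(evil, sizeof(evil) - 1, "bank.com"));
            return 0;
        }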

    The attack asks the server to respond with "(65535)Hello" and the server replies with 65535 bytes of data. Falling for this attack is exactly like the guy who points and laughs at the person who just fell off their bike, seconds before falling off his own bike. They should have known better, especially given how high-profile these attacks were in the past.
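
    And a heavily simplified, hypothetical sketch of that over-read pattern (not OpenSSL's actual code):

        #include <stdint.h>
        #include <stdlib.h>
        #include <string.h>

        /* Buggy: trusts the attacker's declared payload length. If the request
           says 65535 but only 5 bytes arrived, memcpy reads ~64KB of adjacent
           heap into the reply. */
        unsigned char *build_reply_buggy(const unsigned char *payload,
                                         uint16_t payload_len) {
            unsigned char *reply = malloc(payload_len);
            if (reply != NULL)
                memcpy(reply, payload, payload_len);  /* over-read happens here */
            return reply;
        }

        /* Fixed: drop the record when the declared length exceeds what was
           actually received (RFC 6520 says to discard it silently). */
        unsigned char *build_reply(const unsigned char *payload,
                                   uint16_t payload_len, size_t received_len) {
            if ((size_t)payload_len > received_len)
                return NULL;
            unsigned char *reply = malloc(payload_len);
            if (reply != NULL)
                memcpy(reply, payload, payload_len);
            return reply;
        }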

    The bit about writing their own malloc implementation, poorly, was just icing on the cake.
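
    On the allocator point, a hypothetical sketch (again mine, not OpenSSL's actual code) of why a caching freelist undermines whatever protections a hardened system malloc provides:

        #include <stdlib.h>

        #define BUF_SIZE 4096                /* one fixed buffer size, for brevity */

        struct node { struct node *next; };
        static struct node *freelist = NULL;

        /* Freed buffers are handed straight back with their old contents
           intact, so an over-read lands on recycled secrets instead of a
           guard page or freshly unmapped memory. */
        void *buf_alloc(void) {
            if (freelist != NULL) {
                struct node *n = freelist;   /* reuse without clearing */
                freelist = n->next;
                return n;
            }
            return malloc(BUF_SIZE);
        }

        void buf_free(void *p) {
            struct node *n = p;              /* no poisoning, no release to OS */
            n->next = freelist;
            freelist = n;
        }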

  • Re:Not a surprise (Score:5, Interesting)

    by K. S. Kyosuke ( 729550 ) on Wednesday April 16, 2014 @07:25PM (#46774959)

    It's called stochastic sampling. There are "never enough" stochastic samples if you want to get to zero error, but given an arbitrary acceptable error, there are usually acceptable sample numbers and sampling strategies.
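
    To put numbers on that, a standard back-of-the-envelope bound (my figures, not the parent's): to estimate the proportion \hat{p} of defective units to within error \varepsilon at confidence level z,

        n \ge \frac{z^{2}\,\hat{p}(1-\hat{p})}{\varepsilon^{2}}

    With the worst case \hat{p} = 0.5, \varepsilon = 0.02 (±2%), and z = 1.96 (95% confidence), n \approx 2401 sampled units suffice, no matter how large the codebase is.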
  • by Squiggle ( 8721 ) on Wednesday April 16, 2014 @07:29PM (#46774993)

    I would expect open source code to be of approximately equal quality to proprietary code. In each ideology you will get people who care about quality and people who don't, in approximately equal proportions; the same goes for skill, ingenuity, and passion for the work.

    The difference is that proprietary software is constrained by the number of developers able to view and work on the code. An open source project may have a similar or smaller set of core developers, but a much larger pool of developers who can spot problems, suggest alternatives, or fix the one bug that is affecting them. Having a more diverse set of developers increases the chances that the software improves.

    You could also make an argument about the motivations of the developers. Open source projects are often communities of people passionate about what they are building, with a strong incentive to make their code readable by others. By the nature of open source, a developer's reputation is on the line with every bit of code they make public. I've met far more developers scared to make their horrible code public than developers worried about getting fired for equivalently horrible code.

  • Re:Managed langauges (Score:5, Interesting)

    by Anonymous Coward on Wednesday April 16, 2014 @08:05PM (#46775275)

    You mean this one [securitytracker.com], lol?

    Solution: The vendor has issued a fix for the Android OpenSSL implementation and has distributed patches to Android Open Handset Alliance (OHA) partners.

    Oh, that notorious piece of Java code, OpenSSL!

  • Re:Managed langauges (Score:5, Interesting)

    by Darinbob ( 1142669 ) on Wednesday April 16, 2014 @08:18PM (#46775397)

    I also think that developers using a low-level language are more aware of potential problems than developers using high-level languages. In some sense this is due to the types of programs being developed. C/C++ today is commonly used for embedded systems, operating systems, runtime libraries, compilers, security facilities, and so forth, so it's systems programmers versus application programmers. The systems programmers are forced to take a close look at the code and must be mindful of how it affects the system. If such a comparison had been done back in the '80s, I think the numbers would have been different, because many more application programmers were using C/C++ then.

    I.e., the interview for a systems programmer: do you know about priority inversion, do you understand how the hardware works, do you know the proper byte order to use, what does the stack look like, etc.
    The interview for the modern applications programmer: have you memorized the framework and library facilities?
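
    As a toy illustration of what "proper byte order" means in that first interview (my example, not the parent's): parsing a big-endian length field portably instead of casting the buffer and hoping.

        #include <stdint.h>

        /* Portable read of a 32-bit big-endian (network order) field.
           Casting the buffer to uint32_t* would give the wrong value on
           little-endian machines and may fault on alignment-strict ones. */
        uint32_t read_be32(const unsigned char *p) {
            return ((uint32_t)p[0] << 24) |
                   ((uint32_t)p[1] << 16) |
                   ((uint32_t)p[2] <<  8) |
                    (uint32_t)p[3];
        }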

  • by Anonymous Coward on Thursday April 17, 2014 @07:10AM (#46777731)

    Actually it's the reverse: Fengguang Wu does automatic defect reporting, so new bugs get found and reported within a week. We've had great success with this.

    But if we delay then here is the timeline:
    - original dev moves to a new project and stops responding to bug reports (2 months).
    - hardware company stops selling the device and doesn't respond.
    - original dev gets a new job and his email is dead (2 years).
    - driver is still present in the kernel and everyone can see the bug but no one knows the correct fix.
    - driver is still in the kernel but everyone assumes that all the hardware died a long time ago. (Everything in drivers/scsi/ falls under this category. Otherwise it takes 8 years.)

    Each step of the way decreases your chance of getting a bug fix. I am posting anonymously because I have too much information about this and don't want to be official. :)
