Programming Open Source

Code Quality: Open Source vs. Proprietary

just_another_sean sends this followup to yesterday's discussion about the quality of open source code compared to proprietary code. Every year, Coverity scans large quantities of code and evaluates it for defects. They've just released their latest report, and the findings were good news for open source. From the article: "The report details the analysis of 750 million lines of open source software code through the Coverity Scan service and commercial usage of the Coverity Development Testing Platform, the largest sample size that the report has studied to date. A few key points: Open source code quality surpasses proprietary code quality in C/C++ projects. Linux continues to be a benchmark for open source quality. C/C++ developers fixed more high-impact defects. Analysis found that developers contributing to open source Java projects are not fixing as many high-impact defects as developers contributing to open source C/C++ projects."
This discussion has been archived. No new comments can be posted.

Code Quality: Open Source vs. Proprietary

Comments Filter:
  • Not a surprise (Score:4, Insightful)

    by Tontoman ( 737489 ) * on Wednesday April 16, 2014 @06:18PM (#46774379)
    Sunlight is the best bleach.
    • by Anonymous Coward

      You can't fix a problem that you can't see!

    • Yeah, tell that to the OpenSSL team, it will cheer them up.

  • Managed languages (Score:5, Insightful)

    by Anonymous Coward on Wednesday April 16, 2014 @06:24PM (#46774427)

    Java project developers participating in the Scan service only fixed 13 percent of the identified resource leaks, whereas participating C/C++ developers fixed 46 percent. This could be caused in part by a false sense of security within the Java programming community, due to protections built into the language, such as garbage collection. However, garbage collection can be unpredictable and cannot address system resources, so these projects are at risk.

    This is especially amusing in light of all the self-righteous bashing that C was getting over OpenSSL's problems. Seems it's true that using a "safe" language just makes the programmer lazy.

    • by Anonymous Coward

      Resource leaks are hardly as critical as "undefined behavior" (read: buffer overflows and all kinds of other nastiness).

      At best a resource leak gets you a DoS.

      • by Anonymous Coward

        You're right. Exploiting the swiss cheese of the JVM is a much better target.

        • by Anonymous Coward

          Swiss cheese security, that is.

        • by Anonymous Coward

          Guess which language the JVM is mostly written in? Dumbass.

    • Re: (Score:3, Insightful)

      Resource leak in Java = DoS, as mentioned already
      Resource leak in C = Heartbleed.

      Personally, I'd rather my application crash than expose my private keys and other data that was supposed to be encrypted.

      • Re:Managed languages (Score:4, Informative)

        by Anonymous Coward on Wednesday April 16, 2014 @07:16PM (#46774869)

        Apparently you missed the crypto flaws in Android's Java crypto library from last year that exposed private keys. Apparently writing things in Java guarantees jack and shit.

        • by sexconker ( 1179573 ) on Wednesday April 16, 2014 @07:35PM (#46775035)

          Apparently you missed the crypto flaws in Android's Java crypto library from last year that exposed private keys. Apparently writing things in Java guarantees jack and shit.

          No, writing things in Java guarantees your shit will be jacked.

        • by Anonymous Coward

          Java calls C for anything performance-critical, anyway.

        • Re:Managed languages (Score:5, Interesting)

          by Anonymous Coward on Wednesday April 16, 2014 @08:05PM (#46775275)

          You mean this one [securitytracker.com], lol?

          Solution: The vendor has issued a fix for the Android OpenSSL implementation and has distributed patches to Android Open Handset Alliance (OHA) partners.

          Oh, that notorious piece of Java code, OpenSSL!

          • Re: (Score:3, Informative)

            by Anonymous Coward
            The underlying bug was in the Android PRNG handling, not a flaw in OpenSSL.
    • Re:Managed languages (Score:5, Interesting)

      by Darinbob ( 1142669 ) on Wednesday April 16, 2014 @08:18PM (#46775397)

      I also think that with a low-level language more developers are aware of potential problems than developers using high-level languages. In some sense I think this is also due to the types of programs being developed. C/C++ today is commonly used for embedded systems, operating systems, runtime libraries, compilers, security facilities, and so forth. So it's systems programmers versus application programmers. The systems programmers are forced to take a close look at the code and must be mindful of how the code affects the system. I think that if such a comparison had been done back in the 80s, the numbers would be different, because many more application programmers were using C/C++.

      I.e., an interview for a systems programmer: do you know about priority inversion, do you understand how the hardware works, do you know the proper byte order to use, what does the stack look like, etc.
      An interview for the modern applications programmer: have you memorized the framework and library facilities?

      • by reanjr ( 588767 )

        I've never been in an interview where they asked for memorized framework and library facilities. As a web developer, I get questions about data normalization, graph theory, and complicated SQL JOINs.

      • The problem with low-level languages isn't anything technical about the languages.
        It is about a common attitude among programmers.

        As kids, we learn things by taking steps up.
        We walk/run, then we ride a bike, then we can drive a car. It is a simplistic way of viewing things: one is better than the other, and you need to be better to use the better method.

        The same idea goes with programming languages. (I'll show my age here)
        You code in Basic, then you go to Pascal, then you can do C finally you will be ab

    • by Anonymous Coward

      Another explanation is that the leaks left in managed programs are likely to be the harder ones to fix, because a silly programming error (oops, I didn't free this pointer) isn't a source of leaks.

      Or we could moralize it and pretend managed developers are lazy.

  • by RonBurk ( 543988 ) on Wednesday April 16, 2014 @06:35PM (#46774511) Homepage Journal

    First, we shouldn't confuse Coverity's numerical measurements with actual code quality, which is a much more nuanced property.

    Second, this report can't compare open source to proprietary code, even on the narrow measure of Coverity defect counts. In the open source group, the cost of the tool is zero (skewing the sample versus the commercial world) and Coverity reserved the rights to reveal data. Would commercial customers behave differently if they were told Coverity might reveal to the world their Coverity-alleged-defect data?

    Again, having good Coverity numbers can't be presumed to be causally related to quality. For example, Coverity failed to detect the "heartbleed" bug, demonstrating that the effect of bugs on quality is very nonlinear. 10 bugs is not always worse than 1 bug; it depends on what that one bug is.

    • by dkf ( 304284 )

      First, we shouldn't confuse Coverity's numerical measurements with actual code quality, which is a much more nuanced property.

      Yeah, but good quality might well correspond to some sort of measurable anyway. Provided you've got the right measure. Maybe some sort of measure of the degree of interconnectedness of the code? The more things are isolated from each other, across lots of levels (in a fractal dimension sense, perhaps) the better things are likely to be.

      Maybe that would only apply to a larger project, and I'm not sure what effect system libraries (and other externals) would have. Yet the fact that it might be a scale-invaria

      • The more things are isolated from each other, across lots of levels (in a fractal dimension sense, perhaps) the better things are likely to be.

        Language has a lot to do with that.
        If your project is written in a managed language, allocated memory is always initialised first, there is no pointer arithmetic and array bounds are always checked, so it's impossible to read random data from memory.
        If your project is written in C, all code has access to all memory.

        • by Anonymous Coward

          "If your project is written in a managed language, allocated memory is always initialised first, there is no pointer arithmetic and array bounds are always checked, so it's impossible to read random data from memory."

          Except when you forget to remove some reference to an object, so it's still sitting around in a list somewhere because it can't be garbage-collected, and some code then uses whatever objects happen to be in that list.

          No language is safe for an unthinking programmer to use.

        • by gbjbaanb ( 229885 ) on Thursday April 17, 2014 @03:58AM (#46777193)

          are you sure about that?

          unsafe
          {
              // srcPtr and destPtr are IntPtrs pointing to valid memory locations
              // size is the number of bytes to copy (a C# long is 8 bytes, not 4)
              long* src = (long*)srcPtr;
              long* dest = (long*)destPtr;
              for (int i = 0; i < size / sizeof(long); i++)
              {
                  dest[i] = src[i];
              }
          }

          that's valid C#, all you need to do is inject something like that into the codebase and let the JIT compile it (using all the lovely features they added to support dynamic code) and you're good to get all the memory you like.

          Now I know the CLR will not let you do this so easily, but there's always a security vulnerability lying around waiting to be discovered that will, or an unpatched system that already has such a bug found in the .NET framework, for example this one [cisecurity.org] that exploits... a "buffer allocation vulnerability", and is present in Silverlight. [mitre.org]

          The moral is ... don't think C programs are somehow insecure and managed languages are perfectly safe.

          • Yes, so your argument is that you can, with great difficulty, cause a possible security issue in C#. But in order to do so, you have to basically say "I'm about to do something possibly bad, please don't check whether what I'm doing is bad", then change the compiler defaults to allow said code to compile, then put it into a fully trusted assembly so it bypasses all security checks, and THEN you might have an issue.

            and this is in comparison to C/C++, where you can write an exploit in 2 li

        • by Anonymous Coward

          A managed language would not have protected against Heartbleed, because the program maintained its own freelist to prevent memory from being deallocated. If it had not done this, then being written in a managed language would have prevented Heartbleed; but then again, if it had not done this, the C code wouldn't have been vulnerable either.

    • Another problem with the comparison is that the average closed-source project is four times as big as the average open-source project. I'd expect defect density to go up with the size of the codebase. (Of course, this may not be an issue with what Coverity detects, but if so, that emphasizes that Coverity doesn't find all the important defects.)

  • Yeah, I have seen the source code to the Windows 7 OS, CISCO's iOS and LINUX of course.

    They all suck equally.

    However, that being said, I am currenrlty running a version of the LINUX OS I built and modified for my customers use in a PostGRES server which is quite large.

    Open Source wins again because I can correct the suck. :-)

    • by Anonymous Coward

      I am currenrlty running a version of the LINUX OS I built and modified for my customers use in a PostGRES server which is quite large.

      Assuming you mean Linux, that's a kernel and not an OS. Are you saying you run a custom kernel, or are you actually insane enough to run LFS on a server? If the latter, WHY?

    • by raymorris ( 2726007 ) on Wednesday April 16, 2014 @08:58PM (#46775691) Journal

      Your four-sentence comment has five glaring errors that make it obvious that you have absolutely no idea what you're talking about. You very much remind me of the job applicant who told me he has experience in C, C+, and C++.

      • Maybe they meant C#? The [+] and [#] keys might be very close together on the custom keyboard layout your applicant used to type their resume...
        • That's an interesting thought. Had it been typed, it might be a typo. I was thinking of a guy who said that, out loud, face-to-face. That's not the only comment that made it clear he was claiming four times as much as he in fact knew.

          Of course, in an interview I give someone leeway - my mind went blank once in an interview when I was asked "what are the four pillars of object oriented programming?". At the time, I could have implemented objects in C using the preprocessor*, but interview stress caused a brai

          • Yeah, but "plus" and "sharp" (or should that be "hashtag"?) are almost the same word. I mean, anyone could make that mistake!
      • Your four-sentence comment has five glaring errors that make it obvious that you have absolutely no idea what you're talking about. You very much remind me of the job applicant who told me he has experience in C, C+, and C++.

        Well, since that time, I also learned C+++.

    • by hackus ( 159037 )

      First of all, built means compiled and modified means I switched off u32 support as well as targeting a Xeon processor class.

      I did not need to write code, the kernel had to be rebuilt and the binary replaced/modified for the target processor and memory architecture.

      You use the .config for that and rebuild the kernel tree. You don't need to write code.

      SO! The included Redhat kernel was way too generic for the application performance required.

      Among other things. But the point is, you can't do that with an OS

    • How can software that has any bugs be considered good quality??? I guess that if guns are legal in your country, then buggy software may be too.

  • by mtippett ( 110279 ) on Wednesday April 16, 2014 @06:44PM (#46774589) Homepage

    The report doesn't really go into an important measure.

    What is the defect density of the new code that is being added to these projects?

    Large projects and old projects in particular will demonstrate good scores in polishing - cleaning out old defects that are present. The new code that is being injected into the project is really where we should be looking... Coverity has the capability to do this, but it doesn't seem to be reported.

    Next year it would be very interesting to see the "New code defect density" as a separate metric - currently it is "all code defect density" which may not reflect if Open Source is *producing* better code. The report shows that the collection of *existing* code is getting better each year.

    • Next year it would be very interesting to see the "New code defect density" as a separate metric - currently it is "all code defect density" which may not reflect if Open Source is *producing* better code. The report shows that the collection of *existing* code is getting better each year.

      This is exactly what I would expect. Odds are that open source and closed source software start out with similar defect densities. The difference is that open source software, over time, is available for more people to

    • by Anonymous Coward on Thursday April 17, 2014 @07:10AM (#46777731)

      Actually it's the reverse. Fengguang Wu does automatic defect reporting so new bugs get found and reported within a week. We've had great success with this.

      But if we delay then here is the timeline:
      - original dev moves to a new project and stops responding to bug reports (2 months).
      - hardware company stops selling the device and doesn't respond.
      - original dev gets a new job and his email is dead (2 years).
      - driver is still present in the kernel and everyone can see the bug but no one knows the correct fix.
      - driver is still in the kernel but everyone assumes that all the hardware died a long time ago. (Everything in drivers/scsi/ falls under this category. Otherwise it takes 8 years).

      Each step of the way decreases your chance of getting a bug fix. I am posting anonymously because I have too much information about this and don't want to be official. :)

  • Ah, Coverity (Score:5, Insightful)

    by ceoyoyo ( 59147 ) on Wednesday April 16, 2014 @06:58PM (#46774709)

    Coverity: Hey you, proprietary software developer with the deep pockets. Yeah, you. We've got this great tool for finding software defects. You should buy it.

    Proprietary software developer: get lost.

    Coverity: Hey, open source dudes, we've got this great defect scanner. Want to use it? Free of course!

    Open source dudes: Meh, why not?

    Coverity: Hey proprietary software developer, did we mention those dirty hippie neck beards are beating the stuffing out of you in defect (that we detect)-free code?

    PSD: Fine, how much?

  • Useless analogy (Score:5, Interesting)

    by Virtucon ( 127420 ) on Wednesday April 16, 2014 @07:08PM (#46774777)

    This is a useless analogy. Code quality is a function of both skill and the stewardship of the team supporting the code. Tools help as well, but you can write elegant, high-quality code regardless of the language chosen. You can also write some real shit, but ultimately how many defects a piece of software has comes down to the design and testing that go along with it. Some bodies of work get rigorous testing, and OpenSSL's recent problem wasn't about deficient design; it was about a faulty implementation. Faulty implementations in logic happen all the time, and some bugs just take a while to become known. Even test-driven development and tools for code analysis probably couldn't have found this particular issue, but how long it sat in the code base without somebody questioning it goes back to stewardship, not only by the team but by the rest of the world using the code. If anything, this situation points out that FOSS can have vulnerabilities just like proprietary software. The advantage is that with FOSS you can get it fixed much more quickly, and because other people can see the implementation, it can be scrutinized by folks outside the team that develops and maintains it.

    In the case of Heartbleed the system works. A problem was found and fixed; it's now just a matter of rolling out the fix, and regressions are put into place to help ensure that it doesn't happen again. What it means is that another gaping hole in our privacy was closed and that "bad guys" may have stolen data, so roll out the fix ASAP. What was stolen is a matter of research and conjecture at this point; your guess is as good as mine, and I doubt the bad guys will tell us what they gained by exploiting it. Let's also be sure that until the systems with the bug are patched, they're vulnerable, so cleanup on aisle 5.

    To be honest, it's a bit naive if we all assume that FOSS software that handles security doesn't have potential vulnerabilities. Likewise, it's naive to assume that proprietary code has it licked, given the revelations of NSA spying for the past year. Given that there are numerous nefarious companies that sell vulnerabilities to anybody who can pay for them, unless you're the one buying them you probably will never know what is exposed until somebody trips over it. Only when those vulnerability-selling companies are out of business can you assume that your software is free of the easier-to-exploit vulnerabilities; governments will always use all their tools to get intelligence, including subverting standards and paying off companies who can give them access to what they want.

  • Now all we have to do is get Dice to Open Source their stuff so we can FIX IT!
    • it already is open sourced, and that is what the Soylent News guys did, but the community didn't follow.

      • One big reason is that they never turned on the D2 discussion system. So right now Soylent News is even more clunky to use than the Slashdot Beta. You get directed to another page every time you want to reply or moderate.
      • it already is open sourced, and that is what the Soylent News guys did, but the community didn't follow.

        Yes, SlashCode is open source, but the latest public release is 5 years old and not at all what's running on slashdot now.

        It would be very nice if Dice released a newer version of the code, not only for SoylentNews [soylentnews.org], but also for the Japanese slashdot.jp [slashdot.jp] and the Spanish barrapunto.com [barrapunto.com], both of which are still using the old version.

  • by Lawrence_Bird ( 67278 ) on Wednesday April 16, 2014 @07:17PM (#46774879) Homepage

    with nearly 2x the LOC.

  • Code repositories were compromised by the NSA (or other capable group)

  • by jonwil ( 467024 ) on Wednesday April 16, 2014 @08:48PM (#46775631)

    With all the noise about OpenSSL lately, running this Coverity test on it (and other security software like GNUTLS) and sharing the results seems like it would be a good thing...

  • If you have good quality people, especially a good leader, your code will be good.

    Even if the people are relatively inexperienced.

    At this point, just about everything in IT/CS is a research project, not innovation.

    So it's a matter of diligently doing the work based on past archetypes.

  • Coverity is not the best "yardstick". Too many false negatives, and too expensive.
  • by Murdoch5 ( 1563847 ) on Wednesday April 16, 2014 @10:55PM (#46776241) Homepage
    Some open source projects will have better code than closed source projects and vice versa; you can't just draw a clean line.
    • Hey, you're trying to find a reasonable and truthful middle ground. That prevents all the juicy flame wars. Someone call the guards!
    • Yes, how could we ever compare two groups that might have a significant amount of overlap? It can't be done! There's no branch of mathematics that would allow us to do such a thing! It's impossible.

  • The only problem with this is, of course, that what they claim to be doing (automatically examining code for defects) is literally impossible.
  • ...home?

    Most people will put more effort into something that will be public (both out of positive motivation and the negative motivation of shaming.)

    Open Source will always, in general, be better than closed source. Again - in general. There are people who will engineer things properly irrespective of whether or not someone will be browsing their GitHub account or checking it out of the company's private server... Too bad there's not more of them ;).

  • One advantage Open Source has is that there are no deadlines and a good project leader can simply reject sub par code. For commercial code no company is going to pay a programmer big bucks and simply throw away his output because it sucks.
  • This is the same broken metric that Coverity has been mis-using year after year.
    "Defect density (defects per 1,000 lines of software code) is a commonly used measurement for software quality, and a defect density of 1.0 is considered the accepted industry standard for good quality software."

    In other words, if you double the size of the code base by adding no-op code, you increase your quality score.
    Also, if you leave the bugs in, but reduce code size, you are reducing your quality score.
