MySQL & Open Source Code Quality 446

dozek writes "Perhaps another rung for the Open Source model of software development, eWeek reports that an independent study of the MySQL source code found it to be "in fact six times better than that of comparable commercial, proprietary code." You can read the eWeek write-up or the actual research paper (reg. required)."
  • Six times better? (Score:5, Insightful)

    by pyite ( 140350 ) on Tuesday December 23, 2003 @09:40AM (#7794144)
    Six times better? I didn't know it was possible to quantify code quality in that manner. Interesting.
    • If you would RTFA... (Score:5, Informative)

      by Theatetus ( 521747 ) * on Tuesday December 23, 2003 @09:46AM (#7794195) Journal

      ...they quantified it by dividing verified defects by lines of code. MySQL had 0.09 bugs/KLOC while the "commercial" defect density was 0.53 bugs/KLOC. (Their use of the term "commercial" confused me since MySQL is, after all, a "commercial" project, just an open-source one.)
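      To make the arithmetic concrete, here is a minimal sketch of the defect-density metric, assuming the figures quoted elsewhere in the thread (21 verified defects in roughly 236,000 lines; the exact line count is an assumption):

      ```c
      /* Defect density as used in the study: verified defects per
       * thousand lines of code (KLOC). The inputs below are the
       * thread's numbers: 15 null dereferences + 3 leaks + 3
       * uninitialized reads, in nearly a quarter million lines. */
      double defect_density(int defects, long lines_of_code)
      {
          return (double)defects / ((double)lines_of_code / 1000.0);
      }
      ```

      defect_density(21, 236000) comes out near 0.09, and dividing the commercial average of 0.57 by it yields the headline "six times" figure.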

      • by pyite ( 140350 ) on Tuesday December 23, 2003 @09:49AM (#7794220)
        "Defect" is also a difficult term to define. Some errors are much worse than others. It's not all about numbers, folks. Don't get me wrong, I'm not saying that MySQL isn't a great product. I just get skeptical when I hear things talked about in terms of "better" and "best."
        • by SonicBurst ( 546373 ) on Tuesday December 23, 2003 @10:02AM (#7794333) Homepage
          Not only is it hard to define defect (and it is very obvious that some defects are worse than others), but this code review sounds like it only spots "grammatical" or style errors in the code. It doesn't sound like it could find a defect in an algorithm implementation or logic. To me, these are where the true defects are, in the logic/reasoning breakdowns.
          • by B'Trey ( 111263 ) on Tuesday December 23, 2003 @10:21AM (#7794464)
            I'm not sure what you mean by "grammatical" or style errors. If you're talking about syntax errors, those should prevent the code from compiling. I'm not aware of how coding style can be an error (unless you're programming in Python).

            The specific errors in MySQL were dereferencing null pointers, failure to deallocate memory (memory leaks), and use of uninitialized variables. These aren't the only bugs that such an analysis can find; they're the ones that were found in MySQL. And they're definitely errors in logic.

            Certainly, there are bugs that such an analysis can't find. If you define PI as 3.15, your calculations are going to be off. If you create a function to determine the circumference of a circle as 2 * PI * Diameter, you've got a bug. I suspect that those are the types of errors in logic that you were referring to, and you're right that they will not be caught by a code analysis. However, that doesn't mean that comparing the frequency of the errors that CAN be caught between two programs is an invalid act. From my experience, programmers who make fewer of the former errors also make fewer of the latter. Analyzing catchable errors is a good metric for the frequency of errors in a given source tree, even if all errors aren't caught.
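            The three defect classes named above can be sketched in C. These are hypothetical minimal functions, not MySQL code, showing the defensive idioms whose absence a checker like Reasoning's flags:

            ```c
            #include <stdlib.h>
            #include <string.h>

            /* 1. Null-pointer dereference: check before use. */
            size_t safe_strlen(const char *s)
            {
                if (s == NULL)      /* without this, strlen(NULL) crashes */
                    return 0;
                return strlen(s);
            }

            /* 2. Memory leak: every malloc needs a matching free on all paths. */
            int sum_copy(const int *src, size_t n)
            {
                int *buf = malloc(n * sizeof *buf);
                int total = 0;
                if (buf == NULL)
                    return -1;
                memcpy(buf, src, n * sizeof *buf);
                for (size_t i = 0; i < n; i++)
                    total += buf[i];
                free(buf);          /* forgetting this is the leak a checker flags */
                return total;
            }

            /* 3. Uninitialized variable: initialize at declaration. */
            int max_of(const int *a, size_t n)
            {
                int best = a[0];    /* a bare 'int best;' would be a
                                       read-before-write if n were 0 */
                for (size_t i = 1; i < n; i++)
                    if (a[i] > best)
                        best = a[i];
                return best;
            }
            ```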
          • by Anonymous Brave Guy ( 457657 ) on Tuesday December 23, 2003 @11:48AM (#7795230)
            Not only is it hard to define defect (and it is very obvious that some defects are worse than others), but this code review sounds like it only spots "grammatical" or style errors in the code.

            It does indeed sound a bit like that, and with good reason. If you notice, the "independent review" was carried out by Reasoning, Inc., and we've heard of them before in these parts.

            For the benefit of those who haven't seen this trollfest^H^H^H^H^H^H^H^H^Hstory in its previous incarnations, Reasoning's services spot what some people call "systematic" errors, things like NULL pointer dereferencing or the use of uninitialised variables. As many people note every time this subject comes up, any smart development team will use a tool like Lint to check their code anyway, as a required step before check-in and/or as a regular, automated check of the entire codebase, and so any smart development team should find all such errors immediately. IOWs, it's grossly unfair to compare open and closed source "code quality" on this basis. Any project that has errors like this in it at all isn't serious about quality, and it shouldn't take an external study to point this out.

            Serious code quality is not dictated by how many mechanical errors there are that slip through because of weaknesses in the implementation language. Rather, it is indicated by how many "genuine" logic errors -- cases where the output differs unintentionally from the specifications -- there are. Of course, no automated process can identify those, but to get a meaningful comparison of code quality, you'd need to investigate that aspect, rather than kindergarten mistakes.

            There are other objections to their principal metric as well. For starters, source code layout is not normally significant in C, C++ or Java, so any metric based on line count is going to be flawed at best. But the big objection is that they're talking about childish mistakes, and comparing supposedly world class software based on childish mistakes isn't helpful (except to dispel the myth that some big name products have sensible development processes).

        • by Greyfox ( 87712 ) on Tuesday December 23, 2003 @12:52PM (#7795794) Homepage Journal
          According to the article (You DID read the article right?) they found (in the mysql code) 15 null pointer dereferences, 3 memory leaks and 3 usages of uninitialized variables. Apparently they look for comparable defects in commercial code and I think everyone who programs will agree that those are fairly major defects.

          The code scanners I've looked at will flag potential errors even if it's impossible to reach the error condition in code, so it's possible that some or all of that stuff may never have actually happened, but it's generally better to program defensively anyway. All it takes is for some bozo to change your if condition and all of a sudden you're segving all over your customer's important data. 15 null pointer dereferences in nearly a quarter million lines of code is a pretty low number though. I've seen more than that in a single thousand line file written by "professionals."

      • by rembem ( 621820 ) on Tuesday December 23, 2003 @10:17AM (#7794438)

        0.09 vs 0.53 bugs/KLOC can also mean mysql has six times the amount of code per line, compared to an average "commercial" program. Those numbers should be divided by a code-density-factor.


      • by Tassach ( 137772 ) on Tuesday December 23, 2003 @10:42AM (#7794645)
        No defects != good software.

        A flawless implementation of a crap algorithm is still crap. I don't care if your bubble-sort routine has no memory leaks or buffer overruns; it still scales O(N^2). Likewise, a so-called "database" which does not implement key features like transactions and stored procedures is fundamentally flawed even if there are zero coding errors.
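        The bubble-sort point can be made concrete. The sketch below is a correct, leak-free implementation that any defect scanner would pass clean, yet it still does O(N^2) comparisons:

        ```c
        #include <stddef.h>

        /* A perfectly "defect-free" bubble sort: no bad pointers, no
         * leaks, nothing for a mechanical checker to flag. It is still
         * a crap algorithm for large N -- which is the point: zero
         * scanner defects does not mean good software. */
        void bubble_sort(int *a, size_t n)
        {
            for (size_t i = 0; i + 1 < n; i++)
                for (size_t j = 0; j + 1 < n - i; j++)
                    if (a[j] > a[j + 1]) {
                        int tmp = a[j];
                        a[j] = a[j + 1];
                        a[j + 1] = tmp;
                    }
        }
        ```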

        MySQL may be well-written, but it's still a piece of crap by the standards of any professional DBA.

        • MySQL does have transactions, and has had them for quite some time. Stored procedures are due in a future version.

          • by Sxooter ( 29722 ) on Tuesday December 23, 2003 @11:06AM (#7794832)
            Sorry, but until MySQL has a mode where ALL tables are transaction safe, or at least throws an error when you try to create a fk reference to a non-transaction-safe table, its transactions are too prone to data loss due to human error.

            It's a good data store, but the guys programming it have to "get it" that transactions can't be optional in certain types of databases, and neither can constraints, or fk enforcement.

            MySQL has a tendency of failing to do what you thought it did, and failing to report an error so you know. This is a legacy left over from being a SQL interpreter over ISAM files. It makes MySQL a great choice for content management, but a dangerous choice for transactional systems.
            • A funny thing to add to this...

              I'm doing my first MySQL work (done a lot of Oracle and a little PostgreSQL) and I was *flabbergasted* when I realized that, when you update a table but the data has not actually changed, you get success and zero rows updated.

              Which is exactly what you get (and should get) when you try to update and no rows are found to update.

              I suppose with no triggers anyway, it might be a tiny bit faster to skip the actual update when the data hasn't changed, but to real DB folks this is
        • by scrytch ( 9198 ) <chuck@myrealbox.com> on Tuesday December 23, 2003 @11:47AM (#7795217)
          If only it were MySQL just lacking features that would, after much mudslinging at the ideas themselves, be grudgingly retrofitted into a new table type. MySQL's brokenness goes deeper than that [sql-info.de].

          MySQL's attitude toward data integrity can be summed up as "if the constraint can't be satisfied, do it half-assed anyway". I find myself having to write application code to manage data integrity with MySQL, something I can take for granted with a real database.
        • by neelm ( 691182 )
          So what you are saying is you would rather have your DB crash over not supporting some feature in a way which is only applicable in select situations?

          As a real world programmer (versus someone living in an academic world of theory) I prefer the what-I-have-works-and-I'm-Working-on-the-rest approach. In the real world, stability and performance are paramount to feature set. Also, when you consider the domain of creating web driven applications, some features of a DB become less important because the state
        • by Greyfox ( 87712 ) on Tuesday December 23, 2003 @01:02PM (#7795889) Homepage Journal
          Yeah, and the 3 users on the planet who actually need a full fledged SQL database can install Oracle or DB2. Although I've had my indexes corrupted and other horrible things with both those database packages.

          I've worked on several projects interacting with SQL databases and I've only seen one really take advantage of the power of the database. Most of them are using Oracle as a glorified DBASE III, and as a glorified DBASE III, MySQL is much less expensive. And I've seen entire companies built around DBASE III applications.

        • by eloki ( 29152 )
          A flawless implementation of a crap algorithm is still crap.

          No.. a flawless implementation of a crap algorithm just doesn't scale well. Of course bug rate is not the only criterion used when evaluating software, but people spend hundreds of man-hours fixing bugs.

          It demonstrates that the quality of open source code is not automatically worse than professional proprietary code (which some people believe is the case). The important thing is that it's at least an attempt at formal study (and not simply pers
      • by Dun Malg ( 230075 ) on Tuesday December 23, 2003 @10:59AM (#7794779) Homepage
        they quantified it by dividing verified defects by lines of code.

        Problem with that is that it assumes the same "code density". Granted, it's probably not going to differ by a factor of six, but remember the old question about programmer productivity:
        who's more productive: the coder who solves a given problem with 100 lines of code written in one hour, or the coder who solves it with 10 lines in two hours?

        I mean, simple stuff like doing this:

        bool function(int i);
        int main(void)
        {
            int i = 0;
            if (function(++i))
                //blah blah blah
        }
        ...instead of:
        bool function(int i);
        int main(void)
        {
            int i = 0;
            bool foo;
            foo = false;
            i++;
            foo = function(i);
            if (foo)
                //blah blah blah

        }

        ...will give you a threefold difference in line count (specifically counting lines in the main() function). Throw in an identical line using malloc in each, both forgetting to free it later, and you've got a "bug density" of .33 for the former, and .14 for the latter. Heck, you could have two un-freed malloc's in the latter and it'd still only be at .25! I'm not saying the study is wrong-- I'd rather have the code out where I can see it, no matter WHAT the "bug density"-- I'm just saying that I wouldn't take any statistic that is derived using "lines of code" as a variable as a serious, hard number.
      • by jc42 ( 318812 ) on Tuesday December 23, 2003 @12:36PM (#7795672) Homepage Journal
        ...they quantified it by dividing verified defects by lines of code.

        If I write a script to go through my C and perl code, and make sure that there's a newline before and after every brace, that will approximately double the lines of code, and will thus cut my error rate in half.

        This isn't a joke; I've done this on a couple of projects where they measured output by lines of code, just to illustrate the real impact of such measures.
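        That brace trick can be sketched in a few lines of C (hypothetical helpers, just to show the effect on the line count):

        ```c
        #include <stddef.h>

        /* Count newline-terminated lines in a buffer. */
        size_t count_lines(const char *src)
        {
            size_t n = 0;
            for (; *src; src++)
                if (*src == '\n')
                    n++;
            return n;
        }

        /* The LOC-inflation trick described above: put every brace on
         * its own line. Caller must size 'dst' to at least
         * 3 * strlen(src) + 1 bytes. A sketch, not a real formatter. */
        char *explode_braces(const char *src, char *dst)
        {
            char *out = dst;
            for (; *src; src++) {
                if (*src == '{' || *src == '}') {
                    *out++ = '\n';
                    *out++ = *src;
                    *out++ = '\n';
                } else {
                    *out++ = *src;
                }
            }
            *out = '\0';
            return dst;
        }
        ```

        Run over a one-line function, the "line count" roughly quintuples while the code -- and its bug count -- is unchanged, so the measured defect density drops accordingly.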

        OTOH, if I deleted the comments from my code, that would approximately double my error rate, so I guess I won't do that.

        I'm also reminded of a project that I worked on a while back in which nearly every routine had some sort of error, sometimes several, and I didn't fix any of them. This would look really bad, I know. But you can probably guess what my task was. I was writing a test suite for a compiler. Most of the tests were to verify that the compiler would catch a particular kind of error. So of course my code contained that error, and the test script verified that the result was the proper error message.

        This is one of the fundamental problems with nearly every definition I've ever seen of "quality code". They usually don't measure the suitability of the code for the task. If your task is to measure a system's response to failures, your code will of course intentionally produce those errors in order to determine the system's responses. So what is an error in other situations is exactly correct code. Counting errors detected without asking what the task was gives you exactly the wrong results in such a case.

        I'm not sure I'd want my name associated with a project that didn't include this sort of test code in the basic distribution. If there are problems with an installation, I want to know about them before the users start using the stuff. And I want to know in a manner that will pinpoint the problems, not from the usual bug report that typically describes some symptom that is only remotely related to the actual problem. So nearly everything that I work on has a component with a high error rate, run under the control of a script that verifies the correctness of the error messages. If the installation doesn't handle the errors correctly, the users are given output that will tell me what the problem is.

        I'd only be impressed by a study that handles such a test suite correctly. One that counts such "errors" is worse than useless; it actively discourages useful test suites.

        (Actually, just before reading this /. article, the task I was working on was adding some more tests to a test suite for a package that I'm porting to a number of different systems.)

    • by doowy ( 241688 ) on Tuesday December 23, 2003 @09:48AM (#7794214) Homepage
      It was based purely on "defect density" - the number of errors per thousand lines of code.

      MySQL had a defect density of 0.09 and the industry standard was found to be 0.57 defects per thousand lines of code.

      The MySQL development team has since fixed all of the 'defects' that were found in the study (which ranged from a few variables used before initialization to memory leaks).
    • Not really. Just an independent company hired to inspect the source calculated a visible defect rate that was 1/6th the average of the other products they had inspected, or 1 bug found per 10000 lines of code.
    • Re:Six times better? (Score:4, Informative)

      by man_of_mr_e ( 217855 ) on Tuesday December 23, 2003 @09:57AM (#7794297)
      Sadly, this isn't what most people assume it means. Reasoning's software only finds "obvious" defects, such as null pointer assignments. It doesn't (and can't) determine if a bit of code does what it's supposed to do, only that it does whatever it does without any danger of crashing.

      Basically, it's no different from running your code through BoundsChecker or CodeWizard, or any number of other such tools that check for obvious errors (Null pointers, obvious buffer overflows, dangling references, etc..)

      While I have no doubt that MySQL's code is perhaps "cleaner" than your typical unpublished code, I have plenty of doubt that MySQL's code is "better" than unpublished code in terms of efficiency, logic errors, etc..
  • by cableshaft ( 708700 ) <cableshaft@@@yahoo...com> on Tuesday December 23, 2003 @09:42AM (#7794165) Homepage
    ...until I release my MySQL source code to the open source community. Then that 6x multiplier will drop down to 2x.

    Yeah, it's really that bad. Gets the job done, though. Hell to maintain. Probably would've helped if I documented any of it.

    Maybe I should read that Code Complete book I keep meaning to read sometime.
  • by grub ( 11606 ) <slashdot@grub.net> on Tuesday December 23, 2003 @09:43AM (#7794168) Homepage Journal

    Perhaps another rung for the Open Source model of software development

    Uhh... no.

    It's a glowing report for this particular open source project, but that brush shouldn't be used to paint all open source. That will just lull open source developers into a false sense of euphoric contentment. Code quality didn't get this far by having a fixed target; that target should be a carrot on a stick that will never quite be reached.
    • by MartinG ( 52587 ) on Tuesday December 23, 2003 @09:50AM (#7794235) Homepage Journal
      The point is that for some folks it's unfortunately still taken as given that open source software is automatically worse than proprietary software, despite some of us knowing how outdated and wrong that idea is.

      The "rung" in question here is the one where open source progresses in those people's minds from "must be worse" to "can be as good or better"

      There's no suggestion of "all open source is better" anywhere.


      • it's still unfortunately the case that open source software is automatically worse than proprietary software

        All software sucks, the degree of suckiness is what matters. :)
  • Through its analysis, Reasoning concluded that the commercial average defect density--covering 200 recent projects and totaling 35 million lines of commercial code--came to 0.57 defects per thousand lines of code

    Um, so they just guessed that the code was six times better. Okay.
  • Measurements (Score:5, Insightful)

    by Stiletto ( 12066 ) on Tuesday December 23, 2003 @09:44AM (#7794180)

    Undoubtedly()
    {
    when();
    you = measure(quality);
    in.defects();
    per->lines_of(code, anyone);
    can = write(good, solid, code);
    }
    • by Walterk ( 124748 ) <slashdot@duble t . o rg> on Tuesday December 23, 2003 @10:01AM (#7794329) Homepage Journal
      Post:2: warning: return-type defaults to `int'
      Post:2: In function `Undoubtedly':
      Post:3: warning: implicit declaration of function `when'
      Post:4: `you' undeclared (first use in this function)
      Post:4: (Each undeclared identifier is reported only once
      Post:4: for each function it appears in.)
      Post:4: warning: implicit declaration of function `measure'
      Post:4: `quality' undeclared (first use in this function)
      Post:5: `in' undeclared (first use in this function)
      Post:6: `per' undeclared (first use in this function)
      Post:6: `code' undeclared (first use in this function)
      Post:6: `anyone' undeclared (first use in this function)
      Post:7: `can' undeclared (first use in this function)
      Post:7: warning: implicit declaration of function `write'
      Post:7: `good' undeclared (first use in this function)
      Post:7: `solid' undeclared (first use in this function)
      Post:8: warning: control reaches end of non-void function

      • How did I know someone would run this through a compiler?? Actually, I forgot to include the appropriate header, so it's only one error!
  • This article must have been written by supporters of closed software. The ratio of 0.57/0.09 is 6.333~ and the article states it is 6. Clearly FUD. Let the flaming begin!
  • by the real darkskye ( 723822 ) on Tuesday December 23, 2003 @09:47AM (#7794204) Homepage
    And line of code for line of code there are fewer known errors in MySQL than there are assumed/predicted/mean errors in their commercial counterparts, but that doesn't answer the question of how MySQL compares performance-wise to Oracle or <flameretardant coating>MS SQL 2003</flameretardant coating>

    Just my 0.03 (adjusted for inflation)
    • Dude, I read that as a "flame-retarded coating", which only adds to the irony.

      I think a wet donkey in a vat of Jello with post-it notes attached to its back has better performance than the MS SQL servers of old that I encountered, but to be fair, I haven't seen the 2003 version.
  • Lines of Code? (Score:3, Interesting)

    by ksa122 ( 676905 ) on Tuesday December 23, 2003 @09:47AM (#7794212)
    Reasoning performed its independent analysis using defect density as a prime quality indicator. Defined as the number of defects found per thousand lines of code, MySQL's defect density registered as 0.09 defects per thousand lines of source code.
    Can any measurement that uses lines of code to compare code that could be written in different languages or for different types of applications be very accurate?
    • Re:Lines of Code? (Score:5, Insightful)

      by nojomofo ( 123944 ) on Tuesday December 23, 2003 @09:52AM (#7794250) Homepage
      I'm under the impression that most "bugs" in software (certainly most bugs in my code) aren't bugs like these in the article (null dereferences, uninitialized variables, etc), but they're algorithm bugs. As in, there's a subtle interplay between different parts of complicated algorithms that can be easy for programmers to miss. Those types of bugs are going to be much harder to find, and certainly not going to be found in analysis such as this one.
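      A hypothetical illustration of such an algorithm bug: the function below has no null pointers, leaks, or uninitialized reads for a scanner to flag, yet its logic is wrong because it ignores the Gregorian century rules:

      ```c
      /* Looks clean to any mechanical checker -- but 1900 was not a
       * leap year, so the algorithm itself is the defect. */
      int is_leap_naive(int year)
      {
          return year % 4 == 0;
      }

      /* The correct Gregorian rule, for comparison. */
      int is_leap(int year)
      {
          return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
      }
      ```

      Only the second kind of analysis -- checking output against the specification -- catches the first function, and no line-by-line scan will do that.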
  • by JSkills ( 69686 ) <jskills.goofball@com> on Tuesday December 23, 2003 @09:49AM (#7794224) Homepage Journal
    All that's missing - to go along with the defects per lines of code comparison - is a comparison of features and performance benchmarking to other commercially built database products. Now that would be the complete comparison.

    As a strong proponent of MySQL, I'd be very curious to see how it stacks up in those regards.

  • Stanford Checker (Score:5, Interesting)

    by eddy ( 18759 ) on Tuesday December 23, 2003 @09:49AM (#7794229) Homepage Journal

    Anyone know how this one [stanford.edu] is faring? Will it ever be released? It's based on GCC, right? How many students can it pass between until it's "distribution"?

    The reason I'm asking is because I saw that one member of the team has jumped over to a company called Coverity [coverity.com] where one can read:

    Originally developed by a team of researchers in the Computer Systems Lab at Stanford University, Coverity's patent-pending source code analysis technology successfully detected over 2000 bugs in Linux including hundreds of security holes.

    I just think it'd be horrible if they used the GPL'ed GCC to develop their methods (having access to a full portable compiler onto which to do research and development is hardly a "small thing"), and then lock these same methods away from the community.

    I'm grateful for their work on checking linux, but really... this smells bad, IMHO.

    (If you don't know what I'm talking about, don't assume it's off-topic, okay? The Stanford Checker is a related topic to the Reasoning analysis of MySQL, and I'm not sure we'll ever have a _better_ fitting topic to discuss this)

    • Well, considering that there's nothing that prevents one from using GCC to create commercial software, this is probably a non-issue. The libraries are LGPL so unless those are modified (or there are unknown issues), these guys are free and clear to create all the non-open software they want.

      This arrangement is a good thing - it allows everyone to use GCC as they see fit with a minimum of restrictions.
    • Re:Stanford Checker (Score:5, Interesting)

      by Error27 ( 100234 ) <error27@[ ]il.com ['gma' in gap]> on Tuesday December 23, 2003 @10:07AM (#7794366) Homepage Journal
      I wrote a similar tool to the Stanford Checker called smatch.

      I post the bugs and stuff that it finds on kbugs.org. [kbugs.org] The most recent kernel that I've posted is 2.6.0-test11.

      One thing that I was working on a couple weeks ago was invalid uses of spinlocks. Here [kbugs.org] are my results from that. I found quite a few places that don't unlock their spinlocks on error paths etc.

  • Debatable scale (Score:5, Insightful)

    by Basje ( 26968 ) * <bas@bloemsaat.org> on Tuesday December 23, 2003 @09:49AM (#7794230) Homepage
    I do believe that Open Source is better than proprietary. Faults per 1000 lines of code may seem like a valid scale, but I think it is indicative at best, not proof.

    * It does not take into account the design of the software. This is often as important as the actual quality of the code.
    * It does not take into account the kind of errors. This is related to the first, but a buffer overflow that allows root access is worse than a failed instruction.
    * It does not even take the length of lines into account. Shortening the lines could lower the number, without actually changing anything.

    So, small victory, but the race goes on.
    • Re:Debatable scale (Score:5, Insightful)

      by ebuck ( 585470 ) on Tuesday December 23, 2003 @10:11AM (#7794391)
      Good points, and I agree.

      Also if "lines of code" are going to be part of any code comparisions, then a standard should be propsed that does (at a minimum) the following:

      1. Formats the code consistently. We don't want one project to have more lines of code (and therefore less bug density) because they put a brace or parenthesis on a separate line while others do not.

      2. Strip the comments. Someone could decrease bug density by heavy, heavy commenting. Comments are a vital part of coding (and more is usually better), but they have no impact on the bugginess of the code.

      3. Format conditionals, blocks, and function calls consistently, or better yet, ditch the line counting and count bugs per (function call, assignment operation, operation).

      Lines are easy to count, but they hold so little meaning in determining code quality.
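      Point 3 can be sketched: counting statements instead of lines makes the denominator immune to reformatting and heavy commenting. The version below naively counts semicolons (it ignores strings, comments, and for-loop headers, so it's a toy sketch of the idea, not a real parser):

      ```c
      #include <stddef.h>

      /* Naive statement counter: one ';' per statement. Unlike a line
       * count, exploding braces onto their own lines or padding the
       * file with comments leaves this number unchanged. */
      size_t count_statements(const char *src)
      {
          size_t n = 0;
          for (; *src; src++)
              if (*src == ';')
                  n++;
          return n;
      }
      ```

      The same two-statement fragment scores 2 whether it is written on one line or spread over ten with comments in between.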
    • Re:Debatable scale (Score:5, Interesting)

      by Zathrus ( 232140 ) on Tuesday December 23, 2003 @10:17AM (#7794436) Homepage
      Faults per 1000 lines of code may seem like a valid scale, but I think it is indicatory at best, not proof.

      It's actually a really miserable scale because of your 3rd point. If they ran the code bases through something like cindent and standardized the code formatting and removed all comments and whitespace then it's a somewhat more valid comparison. I didn't look at the actual research paper -- maybe they did. Odds are, your other two points are valid though.

      Additionally, they only say that the commercial code is "comparable". What does that mean (again, maybe answered in the paper)? Do they have roughly the same features? Are the query optimizers of roughly the same quality? Do they support the same platforms? I can't think of a major commercial database that doesn't exceed MySQL in all of these areas (ok, excepting SQL Server which fails on the 3rd only). Maybe it was a minor player in commercial databases. Dunno.

      These are the kinds of points that are raised when someone bashes OSS. There's no reason that they shouldn't be raised when the inverse is true as well. MySQL has progressed nicely and is worthy of consideration for light to moderate database loads now, I don't question that. All I'm saying is don't take things at face value.

      So, small victory, but the race goes on.

      The nice thing is that this is small and succinct -- it's suitable for showing to upper level management. That's a big win IMHO -- because normally the text bites they read are biased against free/open software.
      • I can't think of a major commercial database that doesn't exceed MySQL in all of these areas (ok, excepting SQL Server which fails on the 3rd only).

        I agree that MySQL is not the best database around, but here you are clearly exaggerating.

        Moreover, more and more of the sore points (proper transaction support, foreign keys, online backups, InnoDB tables, ...) are getting fixed in the newer 4.x releases. Recent MySQL versions are actually quite decent.

        So, implying that MySQL sucks worse than SQL Server excep

    • Re:Debatable scale (Score:3, Interesting)

      by G4from128k ( 686170 )
      It is very true that we can measure the "quality" of software along many different dimensions. The parent post's suggestions of assessing design, error type, and parsimony (lack of dilution of errors with verbose code) are good.

      But the existence of alternative scales does not detract from the original assessment of defects/line unless we have separate knowledge that OSS is unfavorably biased. Do we have reason to believe that OSS is more poorly designed than commercial software, or that OSS has more ser
  • 6 times better? (Score:5, Insightful)

    by kjba ( 679108 ) on Tuesday December 23, 2003 @09:50AM (#7794234)
    I don't see how you can make the statement that MySQL is 6 times better than the proprietary code from the facts that the defect densities are 0.09 and 0.54 per 1000 lines respectively.

    This just looks like some quasi-scientific statement, trying to express things as a number that really don't fit such a representation. For example, as the number of defects decreases, it becomes increasingly more difficult to find the ones that are left. And is code that contains no bugs at all infinitely much better than code that contains a single bug which hardly ever occurs?

    • Re:6 times better? (Score:3, Interesting)

      by Urkki ( 668283 )
      • And is code that contains no bugs at all infinitely much better than code that contains a single bug which hardly ever occurs?

      Fortunately for the "model", there is no substantial piece of code that contains just one rarely occuring bug, let alone code that contains no bugs at all. Therefore such infinities never need to be considered in real life cases.

      But if you think of it theoretically, if that one rarely occurring bug potentially causes your company to go bankrupt (like being sued for huge damages), t

  • by rafael_es_son ( 669255 ) <rafael@human-ass ... ENnfo minus poet> on Tuesday December 23, 2003 @09:51AM (#7794242) Homepage
    The main difference between open and *MOST* closed code is the fact that the early release of closed code means mucho mas money to corporate pigs and dogs, thus, proper requirements analysis, design, coding and testing are usually pummeled in the name of happy-go-lucky capitalism. "It will be ready when it is ready." -Carmack "I love America!" -Murphy
  • by the_mad_poster ( 640772 ) <shattoc@adelphia.com> on Tuesday December 23, 2003 @09:56AM (#7794283) Homepage Journal

    Neener neener!

    Now, I'm sure we can all be very mature about this...

  • Don't generalize! (Score:4, Informative)

    by Junks Jerzey ( 54586 ) on Tuesday December 23, 2003 @09:56AM (#7794286)
    This "proves" that MySQL is better than commercial offerings. Good. A lot of people knew that. Hats off to the developers. But...

    1. This cannot be generalized into a property of all open source projects.
    2. It's more a tribute to the architecture and original core developers of MySQL than anything else.
    3. Realize that even though MySQL is an open source product, MySQL AB is the *company* that organizes and pays for MySQL development. So, again, you can't generalize this into something that covers late night hackers working on personal projects in their basements (the open source geek fantasy).

    MySQL is awesome! But let's be careful about this story, okay? It's the over-generalization that gives OSS/Linux advocates a bad name ("The Gimp is equivalent to Photoshop!").
    • by SuperBanana ( 662181 ) on Tuesday December 23, 2003 @11:17AM (#7794917)
      This "proves" that MySQL is better than commercial offerings. Good.

      No it doesn't. It "proves" that on average, by line, MySQL has fewer errors in code. It says nothing of the severity of the errors in either package.

      Furthermore- MySQL is not even close to being equal in feature set to almost any commercial DB; replication/backup sucks, it's not ACID compliant, it had no transaction support until recently, no stored procedures, no triggers.

      How on earth could you possibly compare it to almost any commercial SQL DB which has all these...and say MySQL is better?

      A lot of people knew that.

      No, every two-bit web designer thinks it's the greatest thing since sliced bread, since they think a select w/group+sort is an advanced query. Every professional DBA I've met refuses to work with MySQL and/or hates it, and they can go on for an hour about why. When are you people going to realize that PostgreSQL is so much better than MySQL, save some incredibly risky performance options?

      MySQL is awesome! But let's be careful about this story, okay? It's the over-generalization that gives OSS/Linux advocates a bad name ("The Gimp is equivalent to Photoshop!").

      But you just said "This proves that MySQL is better than commercial offerings!"

      • by ajs ( 35943 )
        Most of your points on MySQL are out of date. Its featureset has progressed a great deal since, apparently, you last looked into it. Even way back when this was a hot topic (when PostgreSQL, another excellent open source DB, was an up-and-comer), MySQL developers were already saying that most of people's concerns were being addressed in upcoming releases... Those releases have since come and gone (mostly in the form of 4.0 [mysql.com], though IMHO, 4.1 [mysql.com] is MySQL's finest moment, and its current release status as alpha
  • How many lines of code are there in a Library Of Congress?!
  • by jordan ( 17131 ) on Tuesday December 23, 2003 @09:57AM (#7794301) Homepage
    Because there are portions of the MySQL code that are just painful to look at.

    Take for instance the part that takes as input the key index size and calculates internal buffer sizes. The option's size is an unsigned long long, but they cast it to an unsigned long all over the place, do in-place bitshifting on the cast (and cause it to wrap -- try specifying 4G for your key index sometime and you'll get 0), and the quality of code in that case is just painfully horrible to look at or even figure out what it's doing.

    I could only shudder to think what the quality of the commercial product looked like, in comparison. Hell, I'll have nightmares if I consider the quality of MySQL++ as a comparison..

    --jordan
  • Total Crock (Score:3, Insightful)

    by nberardi ( 199555 ) * on Tuesday December 23, 2003 @10:00AM (#7794322) Homepage
    So how many of the eWeek people do you think saw the code to MS SQL Server or Oracle SQL? I am highly doubting that they were even able to get to the front door to knock on either of the doors to ask if they could see the code. I mean, this just looks like pure propaganda to anybody that has half a brain and keeps up with the industry.

    Don't get me wrong, I love MySQL, but these types of articles are just as bad as the people saying that MacOS X isn't that secure because of the fewer users on it. Or the guy claiming that MS is way superior in the Internet Server world. These types of articles are just there to cause controversy and separate us as a community, Mac/Windows/Linux combined.

    I am not putting any merit in this article and neither should you.
  • Wasn't it 6.56 times better?
  • Hardly fair (Score:3, Insightful)

    by stinkyfingers ( 588428 ) on Tuesday December 23, 2003 @10:03AM (#7794340)
    There's a hell of a difference between 235,667 lines of code and 35 million lines of code. Just like there's a difference between 1000 lines of code and 235,667 lines of code. That is, the more lines of code, the more likely a defect will survive.
  • Remember the 'Open Source' IE patch that came out recently? That had a few bugs in - buffer overflows, that sort of thing. Luckily, being Open Source, they got spotted quickly.

    Now apply the 'Rule of 6 times' to Microsoft's closed source IE patches...

    • by thebatlab ( 468898 ) on Tuesday December 23, 2003 @10:22AM (#7794476)
      That open source patch was quite shoddily and hastily written. It wasn't even a patch really. Using it as representative of open source is not fair in any way whatsoever to other successful open source products.

      "Now apply the 'Rule of 6 times' to Microsoft's closed source IE patches..."

      There is no 'Rule of 6 times'. An analysis concluded that MySQL had a very limited number of defects in their code base. Kudos to them. This doesn't define a rule to be used in the open source vs. closed source holy war.
  • by ad0le ( 684017 ) on Tuesday December 23, 2003 @10:12AM (#7794399)
    First off, I think MySQL is a fantastic product. It's the perfect mix of speed and ease of use, well suited for small to medium sized datastores where speed and reliability are a must. That being said, I think it's unfair to describe this product alongside others such as Oracle, MSSQL (blow me guys, it's a great product) and even PostgreSQL and SAP DB (which is the best OpenSource option in my opinion). The codebase for MySQL will never achieve the magnitude of the aforementioned products, so it should be judged that way. Just my 2 cents.
  • So.. if they can find x number of "defects" per y lines of code...

    Why not fix them?
  • by rlp ( 11898 ) on Tuesday December 23, 2003 @10:19AM (#7794451)
    I'm in the midst of upgrading a SQL Server 2000 installation. MS issued their latest patch in August - a mere 56 MB patch. Hopefully that will fix some of the flakiness I've been seeing.
  • FUD (Score:5, Insightful)

    by Kenneth Stephen ( 1950 ) on Tuesday December 23, 2003 @10:28AM (#7794532) Journal

    This is proof positive that the marketing engine has started churning in the Linux / Open Source arena. The quoted statistics are meaningless. Here is a short list of things (in no particular order) that are wrong with this "study" (who paid for it anyway?):

    Lines of code is meaningless as a reliable measure of anything. The most this number can be used for is for assessing the high level complexity (i.e. simple, non-trivial, or hard) of an application / code construct. It is absolutely pointless to compare two different applications against each other by lines of code. This means that you can say that one is non-trivial and the other is complex or you can say that both are complex, but there is no valid way of determining (by using this particular metric) that one application is more complex than the other. I believe this is the fundamental flaw in this "study".

    The study ignores capabilities. If application A has features a, b, and c, and application B has features a, b, c, d, e, f, g, h, is it even meaningful to compare the number of defects detected between applications A and B? And no - normalizing it by lines of code is not valid (see previous point).

    Testing methodology: from the defects quoted in the article, it appears as if the "study" did white box testing on MySQL. This is hardly complete. While null pointer dereferences are certainly terrible, I would also be very, very concerned about bugs pertaining to SQL capabilities, data integrity, performance, etc. If I go out and do a comparison of RDBMSs for a client, my report wouldn't be complete at all without covering these areas. How come the "study" doesn't mention any of these things?

    Let's face it: this is a paid propaganda article by the marketing machinery. Much like Microsoft has done in the past.

  • by jhines ( 82154 ) <john@jhines.org> on Tuesday December 23, 2003 @10:29AM (#7794537) Homepage
    It is really embarrassing to have bad code with your name on it, released to the public.

    Not only that, but there is a small percentage of coders who, when presented with an ugly solution to a problem, will pretty it up just "because". And it is a good way to get known in the OSS world.

    In the corporate world, by contrast, working but ugly code is hidden deeper and deeper, and people go out of their way to avoid it.
  • Toy DBMS (Score:3, Interesting)

    by leandrod ( 17766 ) <l@d u t r a s . org> on Tuesday December 23, 2003 @10:32AM (#7794572) Homepage Journal
    Seen lots of intelligent comments about length of lines and potential bloat skewing the results, but there is one more issue to consider: design.

    No matter how good the coding itself, if the design is broken, the tool is broken, period.

    And MySQL has a broken design. So broken that the upgrade path isn't MySQL X or something of the like, but MaxDB -- in fact, rebranded SAPdb. That SAPdb is at most at Oracle v7.2 levels says a lot about MySQL.

    I could be more specific, but do your own research in Google -- lack of SQL compliance, lack of features to enable declarative coding at the server instead of procedural client code, and so on.

    Now, the interesting part. Suppose MySQL AB had a sudden insight and repented of their un-SQL, anti-relational ways. Unlikely, you say; yet possible. Now suddenly they have to recode, or change the current code drastically. The resulting tool will probably be much bigger than the current one, because SQL is baroque; or even worse than much bigger, because of MySQL backwards compatibility.

    The sheer bloat will make even this faulty measure of bugs/KLoC skyrocket. Now, run the comparison again...

    Not to say SQL compliance shouldn't be attained. In fact, bloat in the SQL DBMS is a more than good enough tradeoff against bloat in the application. The ideal would be a RDBMS, but while there isn't a MyDataphor a SQL DBMS should do.

    Even today, I don't care about comparing to, say, Oracle or MS SQL Server. IBM DB2 would be a better baseline, but best of all the real competitors: PostgreSQL and Alphora Dataphor.
    • Re:Toy DBMS (Score:3, Informative)

      by kpharmer ( 452893 ) *
      > Even today, I don't care about comparing to, say, Oracle or MS SQL Server. IBM DB2 would be a better
      > baseline, but best of all the real competitors: PostgreSQL and Alphora Dataphor.

      I think you've got your DBMSs mixed up:
      Oracle, Informix, and DB2 are all of comparable complexity and power: Oracle's partitioning is the simplest and its clustering the most complex. DB2 & Informix have more complex partitioning - but can scale beowulf-style to hundreds (if not thousands) of separate servers.

      SQL
  • by Anonymous Coward on Tuesday December 23, 2003 @10:36AM (#7794605)
    I'm a little confused. I thought I understood how to make profit with the GPL, but now I'm not sure.

    MySQL GPL'ed all their products. (presumably so they could get developers and bug-fixes to their product for no charge.) However, they offer "commercial" licenses for people who want to integrate MySQL into their software, but don't want to GPL it. How can they do that? Presumably, any improvements/bugfixes/modifications that came from the community would be GPL, and therefore cannot be re-integrated under a more restricted license. I'm a little confused here. How can they take code that has been released under the GPL and turn around and release it under a more restrictive license?
  • by Bruha ( 412869 ) on Tuesday December 23, 2003 @11:24AM (#7794982) Homepage Journal
    That they're filing suit against MYSQL for violating their IP on code quality.
