Software Code Quality Of Apache Analyzed

fruey writes "Following Reasoning's February analysis of the Linux TCP/IP stack (putting it ahead of many commercial implementations for its low error density), they recently pitted Apache 2.1 source code against commercial web server offerings, although they don't say which. Apparently, Apache is close, but no cigar..."
  • by mao che minh ( 611166 ) * on Monday July 07, 2003 @10:27AM (#6382660) Journal
    I suppose now we have to question the severity of the defects (and also factor in the implementation and use of the code). If Apache and, say, IIS are roughly equivalent in terms of code defects, you have to ask yourself "well, why does IIS have so many more general problems and security flaws than Apache, when they both carry the same general amount of coding defects?". Is IIS just inherently insecure because it is used on a Windows platform? Is it because hackers generally target IIS and not Apache (most people will rush to this conclusion)?

    But here's the kicker: the vast majority runs Apache on either BSD or Linux. All of this code, from the kernel to the library that tells Apache how to use PHP, is open source. Every hacker on the planet has full access to the code - which means that they can review it and find vulnerabilities in it. Not many people have access to Windows or IIS code. So why do IIS and Windows come out as far less secure, and get exploited so much more?

    I think the answer lies in the severity of the code defects, and the architecture and design of the operating system that powers the web server. And yes, I know that Apache can run on Windows.

  • Wait a second (Score:4, Insightful)

    by Knife_Edge ( 582068 ) on Monday July 07, 2003 @10:27AM (#6382661)
    Has Apache 2.1 been released as a stable, non-developmental release? If not I would say testing it for defects is a bit premature.
  • 2.1 ? (Score:4, Insightful)

    by Aliencow ( 653119 ) on Monday July 07, 2003 @10:27AM (#6382667) Homepage Journal
    Wouldn't that be unstable? I thought the latest was 2.0.46 or something.. If I'm not mistaken, it would be a bit like saying "FreeBSD 4.8 has fewer bugs than Linux 2.5!"
  • by SystematicPsycho ( 456042 ) on Monday July 07, 2003 @10:28AM (#6382670)
    So basically they offer a service like lclint [virginia.edu], only many times more advanced? What is to say they haven't missed anything?

    This is probably a publicity stunt for them although a good one. I think it would be a good idea for them to sell software suites of their product if they don't already.
  • by TheRaven64 ( 641858 ) on Monday July 07, 2003 @10:28AM (#6382673) Journal
    Hmm, so they looked at 58,944 lines of code, and found 31 defects? Did they find every defect? Can they prove this? What about those found in commercial code? If it were possible to find all of the defects in a piece of code this big in a small amount of time, then there would be no defects, since they would all be identified and fixed before release.

    As far as I can see, this article says 'We have two arbitrary numbers, and one is bigger than the other. From this we deduce that Apache is not as good as commercial software.'

  • Apache 2.1...? (Score:5, Insightful)

    by bc90021 ( 43730 ) * <`bc90021' `at' `bc90021.net'> on Monday July 07, 2003 @10:28AM (#6382675) Homepage
    According to Apache.org [apache.org], Apache's latest stable version is 2.0.46. Is that a typo on their part, or are they testing a development version? Also, since 1.3.27 is widely used, it would have been interesting to see how that stacked up as well, having been developed longer.

    Either way, to have only 31 errors in close to 60,000 lines of code is impressive!
  • "Defect Density"? (Score:5, Insightful)

    by sparkhead ( 589134 ) on Monday July 07, 2003 @10:29AM (#6382676)
    A key reliability measurement indicator is defect density, defined as the number of defects found per thousand lines of source code.

    Since LOC is a poor metric, a "defect density" measurement based on that will be just as poor.

    Yes, I know there's not much else to go on, but something along the lines of putting the program through its paces, stress testing, load testing, etc. would be a much better measurement than a metric based on LOC.
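    For concreteness, the arithmetic behind that metric is nothing deeper than this (a trivial sketch using the 31-defect / 58,944-line figures quoted in other comments here; it is not code from the report):

    #include <stdio.h>

    int main(void)
    {
        /* Figures quoted elsewhere in this thread: 31 reported defects
         * in 58,944 inspected lines of Apache 2.1-dev. */
        const double defects = 31.0;
        const double lines   = 58944.0;

        /* defect density = defects per thousand lines of source code (KLOC) */
        double density = defects / (lines / 1000.0);
        printf("defect density: %.2f per KLOC\n", density);  /* prints 0.53 */
        return 0;
    }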


  • What bothers me about these articles is that there is more to software quality than the # of flaws-per-unit-"whatever".

    Like design.

    It seems to me most of the problems with Apache's main competitor in terms of software quality are the result of design and engineering choices made by MS's IIS development team.

    In other words, it does exactly what they designed it to do, but what they designed it to do was a very bad idea.

  • No cigar, my ass. (Score:5, Insightful)

    by KFury ( 19522 ) * on Monday July 07, 2003 @10:32AM (#6382710) Homepage
    The article claims Apache's error density, based on a meager 5100 lines of code, is 0.53, while that of 'comparable commercial applications' is 0.51.

    The problems with this are:
    • 5100 lines of code does not give you a confidence range of less than 0.02, especially when the error rate can be expected to be heterogeneous across the code base, as would be the case in an open-source product where different code pieces are created by entirely different groups. (A rough sketch of the arithmetic appears after this list.)
    • 'Comparable' my ass. If they can't provide details of what software they're comparing to (I somehow doubt they got a look at IIS source code) then the stats are worthless, because anyone who's ever programmed knows that quality control isn't a constant factor across commercial products any more than it is among open-source products.
    • What's the error rate of their 'defect analysis'? If they're so good at finding defects, why aren't they out there writing perfect software? If their defect detection rate is less than 98% accurate, then the difference between a rate of 0.51 and 0.53 is meaningless anyhow.
    • There's a big difference between caught coding exceptions and fundamental security problems. The first can cause code to run a little slower, the second can destroy your company. This testing methodology doesn't even look at the second.
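    A rough back-of-the-envelope sketch of the first point, assuming defect counts behave roughly like a Poisson process (an assumption the report does not state): the uncertainty on a measured rate of k defects in n KLOC is about sqrt(k)/n, which dwarfs a 0.02 gap at these sample sizes.

    #include <math.h>
    #include <stdio.h>

    /* Rough Poisson-style uncertainty on a defect-density estimate:
     * rate = k/n defects per KLOC, standard error ~ sqrt(k)/n. */
    static void report(double defects, double kloc)
    {
        double rate = defects / kloc;
        double se   = sqrt(defects) / kloc;
        printf("%.0f defects in %.1f KLOC: %.2f +/- %.2f per KLOC\n",
               defects, kloc, rate, se);
    }

    int main(void)
    {
        report(3.0, 5.1);    /* ~0.59 +/- 0.34: the noise swamps a 0.02 gap */
        report(31.0, 58.9);  /* ~0.53 +/- 0.09: still far above 0.02        */
        return 0;
    }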
  • Re:Apache 2.1...? (Score:3, Insightful)

    by jbp4444 ( 193803 ) on Monday July 07, 2003 @10:33AM (#6382713)
    I was quite impressed by the fact that Apache can cram all the functionality into ~59k lines. So besides defect rate, I would like to know how many lines of code the commercial package had ... 0.51 defects per 1000 lines sounds good, unless there are 1,000,000 lines more code in the commercial package.
  • by Sikmaz ( 686372 ) on Monday July 07, 2003 @10:35AM (#6382725)
    This looks like it was just an ad/demo of their code testing software.

    I am trying to get the main analysis downloaded now, but they must have been prepared for a slashdot posting ;)
  • code defects? (Score:1, Insightful)

    by Anonymous Coward on Monday July 07, 2003 @10:36AM (#6382728)
    I see the point in automatically checking the source code for common programming errors, but how can such a system ever find semantic errors, such as complicated protocol handling issues? It seems to me that those just happen to be strong points of open source software.
  • by Anonymous Coward on Monday July 07, 2003 @10:36AM (#6382731)
    For the same reason that windows boxes get hacked more often. The more a platform is used the more attacks on it.
  • by frankthechicken ( 607647 ) on Monday July 07, 2003 @10:37AM (#6382738) Journal
    Completely and utterly agree. I mean, hell, I could write fifty thousand lines of code, each line completely and utterly meaningless, run it through the checker, and produce 0 defects, except for one overall defective piece of software. Does this article have any point whatsoever? Even if the results had any meaning, what on earth is the point of comparing a known to an unknown?
  • by siskbc ( 598067 ) on Monday July 07, 2003 @10:37AM (#6382745) Homepage
    If Apache and, say, IIS are roughly equivalent in terms of code defects, you have to ask yourself "well, why does IIS have so many more general problems and security flaws than Apache, when they both carry the same general amount of coding defects?". Is IIS just inherently insecure because it is used on a Windows platform? Is it because hackers generally target IIS and not Apache (most people will rush to this conclusion)?

    First, are all of IIS's issues "software errors" per se? I'm wondering if all security problems would have been caught, or if that was really the goal of the analysis. Perhaps it was, but I'm not sure. One could contend that IIS leaves a lot of things unprotected, but that this doesn't constitute a software error.

    And as you say, severity would be another issue. It's always been typical open-source style to get the mission-critical parts hardened against nuclear attack while leaving the other bits a tad soft. I wouldn't be surprised to learn that was the case with Apache.

    One thing I want to know - did MS (or whoever) give these guys source or were they analyzing the binaries?

  • Dubious (Score:5, Insightful)

    by cca93014 ( 466820 ) on Monday July 07, 2003 @10:39AM (#6382754) Homepage
    Is it just me, or does this entire concept of "code defects per 1000 lines" sound like bullshit?

    If the company has developed proprietary tools to enable them to identify defects in medium-sized software projects, which of the following business models do you think is more effective:

    1. Design proprietary tools to identify defects in medium-sized software projects.
    2. Fix defects
    3. Profit

    or

    1. Design proprietary tools to identify defects in medium-sized software projects.
    2. Sit around mumbling about defects, Open Source software, closed source software and why farting in the bath smells worse
    3. ???
    4. Profit

    Secondly, where on earth did they get hold of a closed-source, enterprise-level (which Apache undoubtedly is) web server codebase?

    "Hi, is that BEA? Do you mind if we take a copy of your entire code base so that we can peer review it against Apache's? What's that? Yes, Apache might come out on top, and we will make the results public..."

    How do they define a defect anyway? A memory leak? A missing overflow check? A tab instead of 4 spaces?

    It just sounds like bullshit to me...

  • by NotClever ( 635709 ) on Monday July 07, 2003 @10:39AM (#6382757)
    When the same group said that the IP stack in Linux was cleaner than a comparable one, everyone was screaming from the rooftops that it validated the open source model. When they say that an open source project and a closed source project are roughly comparable, all of a sudden everyone criticizes the methodology of the report!

  • by brlewis ( 214632 ) on Monday July 07, 2003 @10:40AM (#6382762) Homepage
    Another post seems to indicate this was done via software to automatically detect defects. Many (most?) security defects cannot be detected automatically, as they involve using the software in an unintended way.
  • So the error level in Apache's pre-release development code is equivalent to the error level in post-release commercial web serving software. Sounds like an endorsement to me.
  • Bad Statistics... (Score:5, Insightful)

    by FunkZombie ( 322039 ) on Monday July 07, 2003 @10:42AM (#6382787)
    Also keep in mind that defect density is just an average. If you have 31 defects in 60k lines of code, that is potentially 31 security risks, or out-of-operation risks. If the other software tested had double the lines of code (120k), the density would imply that it had slightly less than double the defects, say 58 or 60. That implies _58_ potential security or uptime risks. In this case, imho, defect density is not a good indicator of the reliability of the software.

    My general rule is that if someone is quoting statistics to you, they are lying. At least on average. :)
  • by sterno ( 16320 ) on Monday July 07, 2003 @10:42AM (#6382791) Homepage
    This doesn't indicate that the commercial equivalents are better. You've got the DEVELOPMENT branch of Apache, which is derived from the 2.0.x code, which is a complete rework of the original 1.x branch of code. So it's a rather new code base, and it's showing similar defect rates to a code base that has been around for a while. I'd say this proves that open source is better.
  • Wrong Math (Score:5, Insightful)

    by bstadil ( 7110 ) on Monday July 07, 2003 @10:45AM (#6382803) Homepage
    You've got the math reversed.

    The longer your lines and the more content you have per line, the higher the likelihood of an error per line.

    As an example, with one error in 100 lines you get a 1% error rate. Imagine you could do the whole thing in one line. Now you have a 100% error rate.

  • by dkh2 ( 29130 ) <dkh2@WhyDoMyTits I t c h .com> on Monday July 07, 2003 @10:48AM (#6382824) Homepage
    Sure, they found them, but did they catalog them in any way? 0.53 errors/KLOC translates to approximately 1 error every 1886 LOC on average. On top of that, on further investigation, which of these are actual errors and which only look like errors?

    I'm just glad I'm not the poor go-coder who has to go through the code to find and fix these few "errors."
  • Don't assume IIS (Score:5, Insightful)

    by m00nun1t ( 588082 ) on Monday July 07, 2003 @10:49AM (#6382828) Homepage
    OK, IIS is the obvious choice, being the second most popular web server after Apache. But I hardly think Microsoft would be letting these guys all over the IIS source code.

    It could also be Zeus, SunOne or one of the other lesser known web servers out there.
  • by defile ( 1059 ) on Monday July 07, 2003 @10:50AM (#6382833) Homepage Journal

    The test may be more interesting if applied to Apache 1. As someone who has had to migrate a mod_perl site from Apache 1 to Apache 2, I can tell you that Apache 2 is a very new beast, and it doesn't shock me at all that there are dozens of bugs that still need to be shaken out. Fewer users are running Apache 2 in a production environment as well, since it's considered a development branch. See the "fewer eyeballs" rule.

  • by the eric conspiracy ( 20178 ) on Monday July 07, 2003 @10:51AM (#6382847)
    This study makes a lot of sense to me - that the defect rate is tied to the maturity of the code base. I have long felt that Microsoft's business model, where they redo the operating system in order to churn their user base and induce cash flow, will always result in more defects and security problems than a model where software change is driven on a solely technical basis.

    I think the next step for these folks would be to take a project that has a long history, say perhaps Apache 1.x and show defect rates over the life of the project.

  • by David McBride ( 183571 ) <david+slashdot&dwm,me,uk> on Monday July 07, 2003 @10:53AM (#6382862) Homepage
    Well, the reports simply state that, in the 360 files they checked (most of them header files), they found 29 cases of a potential NULL pointer dereference and 2 potentially uninitialized variables. This is from the Apache 2.1 codebase as of 31st Jan this year, about 58k lines of code.

    Their automated checker also searched for out-of-bounds array accesses, memory leaks, and bad deallocations. It found none.

    They also state that they ran the same checks against other codebases, and found that they did marginally better, on average.

    In short, this report says that OLD development code for an unreleased opensource project is nearly as good as current commercial offerings. That's at best, when you consider the huge gamut of possible defects that this checker won't pick up. That margin probably disappears in the +/- of the sampling if you were to do a proper statistical analysis.

    The report is fairly useless. It certainly should not be taken as a reason to not trust Apache; to do so would be foolhardy particularly given Apache's track record.

    Oh, and Reasoning's webserver is being pounded into the ground. You can get my local copy of the reports from here [ic.ac.uk].
  • by jdh-22 ( 636684 ) on Monday July 07, 2003 @10:58AM (#6382887)
    Every hacker on the planet has full access to the code - which means that they can review it and find vulnerabilities in it. Not many people have access to Windows or IIS code.
    To quote Bruce Schneier: "If I take a letter, seal it in a locked vault, and hide the vault somewhere in New York, then tell you to read the letter, that's not security, that's obscurity. If I take a letter, seal it in a vault, give you the blueprints of the vault, the combinations of 1000 other vaults, and access to the best locksmiths in the world, then tell you to read the letter, and you still can't, that's security." Open source does have an upper hand on holes and bugs, but the code isn't where we should be looking.

    The majority of the security holes are from the people setting up the web servers. The holes are usually abused by "wanna-be" hackers, or script kiddies. The problem is that people are not educated enough to run some of these programs. Being able to understand Apache, and how to make it operate correctly, is not everyone's top priority. As long as it works, people don't care how it works (as goes for many other things in this world).
  • by sterno ( 16320 ) on Monday July 07, 2003 @10:59AM (#6382893) Homepage
    The thing that always kills IIS is the integration it has with Windows. This isn't a defect in IIS, or Windows, per se, but rather a defect that arises because of how they integrate with each other. A script executes on IIS in a way that's not innately a bug, but then when it interacts with Windows, Exchange, etc., suddenly it becomes one.

    Apache is just a webserver, and that's all. PHP, JSP, etc, are all separate applications treated separately. The integration does make things more efficient, yes, but also more prone to problems.
  • by XaXXon ( 202882 ) <xaxxon&gmail,com> on Monday July 07, 2003 @11:03AM (#6382919) Homepage
    I have to play the BS card here.

    There is no magic "defect detector" for software. If there were such a thing, they would be making a helluva lot more money than they get for doing little defect tests.

    It is very difficult to prove a program correct, and there are a lot of REALLY smart people who have tried.

    Maybe these people have stuff that can look for buffer overflows and the like, but actually being able to tell if Apache is returning the correct results requires far more than generic tests.

    And I'll all but guarantee they didn't get together an entire development team to understand the code base and how it works, as Apache is a very large and complex code base.

    Maybe they take what they find from their generic tests and extrapolate that, if they find more generic problems, there are probably more specialized errors as well, but they make it very clear in the report that the difference between .51 and .53 defects / KLoC (thousand lines of code) is statistical noise.

    Anyways, I'm not saying the entire thing is worthless, just not to read too much into it -- either this one that puts Apache slightly behind some unnamed commercial implementation or the one that put the Linux TCP/IP stack ahead of some other commercial implementation (though I'd say it would probably be easier to test a TCP/IP stack for correct behaviour than a web server).

  • Re:Magic software (Score:2, Insightful)

    by Eustace Tilley ( 23991 ) * on Monday July 07, 2003 @11:08AM (#6382948) Journal
    Ok, pretend you are the magic software and you see this code:
    int ar[50];
    for (int i = 0; i <= 50; i++) { ar[i] = 1; }
    How are you going to "automatically" fix that? Change the comparison operator? Change the array size? Replace the loop with a library function?

    "Fixing" requires understanding the code's intent.
  • by UnknowingFool ( 672806 ) on Monday July 07, 2003 @11:14AM (#6382978)
    Numbers can mean anything. It's the interpretation that matters. 31 errors in 58,944 lines. Hmmm. Even if we take Reasoning's word that these are errors and not "features", that's a defect density of 0.53 per thousand lines. The unnamed commercial software had a density of 0.51. So what does that prove?

    1) Apache 2.1 has more bugs than some unknown commercial competitor. If the version is correct, a development (not-ready-for-release) build was pitted against a released commercial build. Not a fair playing field.

    2) Reasoning does not detail the severity or kind of the bugs. Certainly, a web server not being able to handle a type of format (pdf, csv, ogg vorbis) is less severe than a security hole. Pitted against IIS, I would trust Apache even if it had more bugs, because historically it has had fewer security patches. Check out Apache's 2.0 known patches [apacheweek.com] vs IIS 5.0 [microsoft.com]

  • by MisterFancypants ( 615129 ) on Monday July 07, 2003 @11:15AM (#6382983)
    None of that bug report is at all useful if there is no logical way for all of those preconditions they listed to actually be met.

    I mean, yeah, it would be nice if code would explicitly check for a NULL before dereferencing, but if there's no earthly way for the pointer to actually BE a NULL pointer at that time (barring memory corruption -- in which case all bets are off and your code is doomed anyway) then I wouldn't call those errors.

    This whole exercise seems very suspect to me.

  • Re:every program. (Score:2, Insightful)

    by lucas_gonze ( 94721 ) on Monday July 07, 2003 @11:20AM (#6383025) Homepage Journal
    that's not just reductio ad absurdum, it's actually useful. you should always write the least code possible, and since features mean code, you should have as few features as you can get away with.
  • Re:Apache 2.1...? (Score:3, Insightful)

    by pmz ( 462998 ) on Monday July 07, 2003 @11:21AM (#6383026) Homepage
    I was quite impressed by the fact that Apache can cram all the functionality into ~59k lines.

    Agreed. It would be interesting to know whether this low LOC is accomplished through good architecture that emphasizes simplicity and maintainability or "clever" hacks that compress a 10-line loop down into a three-line abomination of pointer arithmetic. I genuinely hope it is not the latter.

    Regardless, 59K lines is a small enough program that, given a good architecture, it can be studied and debugged relatively easily by one or two people. I'd estimate that this is why Apache is known for its low number of exploits in spite of its enormous web server market share.
  • So the error level in Apache's pre-release development code is equivalent to the error level in post-release commercial web serving software. Sounds like an endorsement to me.

    That, too, but I'm damn certain that they must have tried it on the recent stable 2.0.46-ish release as well. The question is, why weren't those results made public?

    I'm guessing it's because the results were something that would've placed their "defect detection software" in a bad light. I.e., nothing as fancy as the aforementioned "use of uninitialized variable" and "dereference of a NULL pointer" (which strikes me as really odd in the first place).

    Naturally the other explanation is endorsement. It wouldn't be the first time, so I won't even bother... but I wouldn't bet that this is the case here, because the defect counts were only compared to production-release code averages (which strikes me as the other extremely dubious part of this whole "experiment").
  • by MisterFancypants ( 615129 ) on Monday July 07, 2003 @11:22AM (#6383042)
    Every hacker on the planet has full access to the code - which means that they can review it and find vulnerabilities in it.

    Do you know how long it takes to read someone else's code on something like an Apache-level webserver and understand it to the point where you can make useful changes and fixes? The big lie of the "all bugs are shallow" argument is that such a thing is simple, when in fact it is not.

    Fixing a non-obvious bug in a 100k or so line C or C++ project is hard enough when you wrote the code yourself. If someone else wrote the code, it is harder still.

  • RTFAdvertising (Score:4, Insightful)

    by tanguyr ( 468371 ) <tanguyr+slashdot@gmail.com> on Monday July 07, 2003 @11:24AM (#6383054) Homepage
    As has been pointed out a couple of times in other comments, 2.1 is the development branch of the Apache web server - i.e. "beta", "buggy", "work in progress", etc. Instead of reading this as "Apache has roughly as many defects as closed source web servers", let's read this as "the development version of Apache has as many defects as... well, some unidentified (beta? shipping?) version of some unknown (iPlanet? IIS?) web server". But you can be *much* more confident that these defects will be fixed in Apache than in the *other* product.

    Heck, forget confidence - YOU CAN JUST CHECK.

    The fact that Reasoning didn't have to go and get permission from Apache to run this test - coupled with the fact that we don't even know what Apache is being compared to - is the *real* point behind this "article". /t

    ps: IANAL but don't they have to include a copy of the Apache License given that they publish fragments of the source code in their defect report?
  • by Bazman ( 4849 ) on Monday July 07, 2003 @11:38AM (#6383176) Journal
    Take the null pointer dereferencing thing. All this program seems to do is see if there's a possible path for null-pointer dereferencing. It has no clue as to whether this is logically going to happen. For example:
    2815 while (1) {
    2816     ap_ssi_get_tag_and_value(ctx, &tag, &tag_val, 1);
    2817     if ((tag == NULL) && (tag_val == NULL)) {
    2818         return 0;
    2819     }
    2820     else if (tag_val == NULL) {
    2821         return 1;
    2822     }
    2823     else if (!strcmp(tag, "var")) {
    2824         var = ap_ssi_parse_string(r, ctx, tag_val, NULL,
    2825                                   MAX_STRING_LEN, 0);
    The software claims that tag could be null on line 2823. But that's only if, on return from ap_ssi_get_tag_and_value, tag is a NULL pointer and tag_val is non-NULL. If ap_ssi_get_tag_and_value can't return that combination, then this is not a defect. If anything it's a red flag, in case the return values of ap_ssi_get_tag_and_value could satisfy that condition.

    I suspect the following code will be flagged as a defect:

    char *tag=NULL;
    doOrDie(&tag);
    strcmp(tag,"do");
    As long as doOrDie() does its job and never returns a NULL, where's the defect? The guys who wrote this tester seem to want you to check any pointer against NULL before dereferencing it - I might already be doing that in my doOrDie() function; I don't want to have to do it twice.
  • OSS Standards (Score:2, Insightful)

    by pmiller396 ( 457575 ) on Monday July 07, 2003 @11:44AM (#6383219)
    Okay, we've beat to death the fact it was a pre-release version. But look at it this way:

    When Open Source software is about the same quality as closed source, the developers consider it unstable and warn people that they may run into problems.

    It shows a big difference, to me, in the quality standards that OSS developers (and users) expect.
  • by mystran ( 545374 ) on Monday July 07, 2003 @11:44AM (#6383220)
    I don't know; some of these defects might be actual problems, but unless the analysis software is really good, it's always possible that the flagged cases can never actually happen, even though automated software counts them as "defects".

    As a rather "stupid" example, I had to initialize a Map to an empty HashMap just last week to get Sun's Java compiler accept my code, although the only two references to the Map where within two if-blocks, within the same function, both of which depended on the same boolean value, which wasn't changed in the whole function.

    There's a difference between a defect and a bug. Tools that help in finding problems are great, but after all, they can only point out possibly unsafe spots. Of course, it's good to write code that doesn't trigger any such warnings in the first place.

  • by MROD ( 101561 ) on Monday July 07, 2003 @11:54AM (#6383275) Homepage
    Of course, this test of the code is purely a test of coding errors rather than errors in the code logic.

    The most worrying errors in programs are generally not coding errors, as those are either terminal (i.e. crash) or benign (the error may cause memory corruption in a place where it does no harm). Of course, there are exceptions such as buffer overflows, but I'd class those, in general, into the logic error category.

    Logic or algorithmic errors are far more dangerous as they can be well hidden and are more likely to make the code do things unintended. The code itself may be perfect, but if the algorithm is faulty then there's a major problem.
  • by Anonymous Coward on Monday July 07, 2003 @11:57AM (#6383294)
    The funny thing is that this "bug" doesn't appear to actually be one...

    Note that current_provider is set to conf->providers on line 257. The loop starts and neither current_provider nor conf->providers changes. Then on line 287 there's a conditional break if conf->providers is NULL.

    If current_provider is going to be NULL at line 291, then conf->providers must be as well, so the conditional break will happen and the NULL dereference will be skipped.

    Or am I missing something else?
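    For readers without the report open, here is a stripped-down, hypothetical reconstruction of the pattern described above (invented struct and function names, not the actual Apache source), showing why the flagged dereference cannot be reached with a NULL pointer:

    #include <stdio.h>

    struct provider { const char *name; };
    struct conf     { struct provider *providers; };

    /* current_provider starts equal to conf->providers and neither changes
     * before the check, so the break on a NULL conf->providers also rules
     * out a NULL current_provider at the "flagged" dereference below. */
    static void report(struct conf *conf)
    {
        struct provider *current_provider = conf->providers;  /* "line 257" */

        while (1) {
            if (conf->providers == NULL) {                     /* "line 287" */
                puts("no providers configured");
                break;
            }
            printf("provider: %s\n", current_provider->name);  /* "line 291" */
            break;  /* one pass is enough for the illustration */
        }
    }

    int main(void)
    {
        struct conf empty = { NULL };
        struct provider p = { "file" };
        struct conf one = { &p };

        report(&empty);  /* takes the conditional break, no dereference */
        report(&one);    /* dereference is safe: providers is non-NULL  */
        return 0;
    }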
  • by bwt ( 68845 ) on Monday July 07, 2003 @11:58AM (#6383304)
    I agree completely. Any metric based on lines of code is a harmful metric. Any metric based on defect counts is also harmful. Both of these are leftovers from attempts to (mis)apply statistical process control. Control of crappy metrics gives crappy quality.

    Suppose I had 100K lines of code with 100 defects (1.00 defects per KLOC). After reviewing my code I discovered that I could refactor it to 80K lines, and suppose further that doing so had no effect on the defect count. Defect density would now be 1.25 per KLOC: the metric looks worse after an improvement.

    Also, given that this is an automated program, I have to ask how they calibrate and validate its results. How many of the 31 errors found actually aren't errors? How many existing known bugs were not found by this program? I really can't accept these results as anything more than fluff with numbers.
  • by fnorky ( 16067 ) on Monday July 07, 2003 @12:02PM (#6383330) Homepage
    I found it interesting that they used a 1/31/03 version of Apache 2.1-dev. This wasn't mentioned anywhere in the article - neither that it was a development version nor that their analysis was of a snapshot from five months ago. It would be interesting to see how far 2.1 has progressed since then.

    After reading the review I came away with the impression that the reviewers were trying to hide this very fact. No mention that this is a development version of Apache. No mention of what the "several commercial equivalents" are. Not much to back up their claim "Apache http server V2.1 code has defect density rate similar to the average found within commercial applications - Findings differ from previous Open Source Study".

    I dare say that at first glance this seems to be a case of FUD.

  • by Door-opening Fascist ( 534466 ) <skylar@cs.earlham.edu> on Monday July 07, 2003 @12:03PM (#6383338) Homepage
    Why did they use the development branch of Apache, when only a handful of sites are running it? I would have found an analysis of the stable 1.3 branch, which 60% [netcraft.com] of the web-serving world uses, to be more informative.
  • by Foofoobar ( 318279 ) on Monday July 07, 2003 @12:19PM (#6383456)
    Errors in coding mean next to nothing when it is a machine that is checking the syntax of your code. Variations in coding techniques that are perfectly acceptable often show up as errors merely because the program doing the code checking does not understand your syntax. I've seen it happen time and again with error checkers and one could even say that 2% of all errors found by error checkers are mere differences in syntax.

    My wife who is a lead QA tester could vouch for that...

  • by marcink1234 ( 556931 ) on Monday July 07, 2003 @12:19PM (#6383459) Homepage
    I have just read the first 'null dereference' claim and it seems to me that in fact it is not possible. Maybe what we got is a count of Reasoning's bugs?
  • by schon ( 31600 ) on Monday July 07, 2003 @12:26PM (#6383495)
    Every time I hear the "obscurity is not security" mantra I chuckle. Of course it isn't, but that doesn't make publishing the information a good idea.

    Nobody's saying that the information should be published - what they're saying is that you can't rely on that information being a secret.

    Is Fort Knox secure? Probably. If so, then why don't they publish the blueprints, guard rotation schedule and security policies?

    That's pretty much the point you're missing - even if that information were published, it wouldn't diminish the security of Fort Knox.

    If the people in charge relied on the fact that they don't publish those details, that would be obscurity, because it would lead them to make errors elsewhere. (Oh, it's OK if we leave the main vault open tonight - nobody knows that there will be no guards around it for 10 minutes at 3:30 AM tonight.)
  • by Anonymous Coward on Monday July 07, 2003 @12:26PM (#6383496)
    My quibble with explicitly checking for NULL pointers is that you're only going to catch the case when the pointer is NULL. Just about any other bad value is going to give you a segmentation fault (which is exactly what a NULL pointer is also going to give you). I would consider such a check of more value if you also bothered to check all the other pointer values it shouldn't be, but that's something which is mainly only practical at the kernel level. Otherwise, I find all the extra NULL checking pedantic.

    The only place where I like to put NULL checks is where passing a NULL pointer has some sort of meaning in the API (in which case, it's obviously necessary). Doing so helps signal to anyone reading the code (mainly myself) that a NULL pointer value has significance beyond a possible segmentation fault. That would be drowned out if I put a NULL pointer check everywhere just to return a marginally useful error code, which I would also have to check for, rather than the program crashing in a clean and spectacular manner (the fail fast mentality).
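    A minimal sketch of that kind of meaningful NULL check, with a hypothetical function (not Apache's API; the format string only mimics a common log format):

    #include <stdio.h>

    #define DEFAULT_FORMAT "%h %l %u %t \"%r\" %>s"

    /* Hypothetical example: a NULL 'fmt' means "use the default log format",
     * so the NULL check documents a real API contract rather than papering
     * over a possible crash. */
    static void log_request(const char *client, const char *fmt)
    {
        if (fmt == NULL)            /* NULL is meaningful here */
            fmt = DEFAULT_FORMAT;
        printf("logging %s with format \"%s\"\n", client, fmt);
    }

    int main(void)
    {
        log_request("192.0.2.1", NULL);     /* caller asks for the default */
        log_request("192.0.2.1", "%h %t");  /* caller supplies its own     */
        return 0;
    }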
  • by apankrat ( 314147 ) on Monday July 07, 2003 @12:29PM (#6383513) Homepage
    .. But tomorrow, a new coder will add something that modifies the preconditions and suddenly that pointer can indeed be NULL.

    That's what assert() exists for. And the 'preconditions' you are referring to are actually 'invariants', so if "suddenly that pointer can indeed be NULL" it means that someone broke a fundamental design assumption and should not be tweaking the code anyway.

    And for those who haven't seen this trick before, a nice habit to get into is to write your checks like so:..

    I found this trick pretty annoying. First of all, any decent compiler can catch this with a warning. Second, if you are in fact mixing up == and = so often that you need a special habit to fight it, then perhaps you should look at what you type :) There are plenty of C language constructs that can ruin your code with a single misplaced character:

    "xFF" vs "\xFF"
    comma operator; for instance, f(param) vs f,(param)
    misplaced structure initializers
    etc, etc

    It does not mean the programmer needs to guard against all of these too; it just means that the code must be proofread as it's being written, which is a reasonable thing to expect from a professional developer.
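    To illustrate the assert() point above, a minimal sketch with hypothetical names (this is not Apache code): the assert documents the invariant, so a future change that violates it fails loudly in debug builds instead of being papered over with a second NULL check.

    #include <assert.h>
    #include <string.h>

    /* Hypothetical helper: by design it either fails (non-zero) or leaves
     * *tag pointing at a valid string (never NULL on success). */
    static int get_tag(const char **tag)
    {
        *tag = "var";   /* stand-in for real parsing */
        return 0;       /* 0 == success */
    }

    int main(void)
    {
        const char *tag = NULL;

        if (get_tag(&tag) == 0) {
            /* Document the invariant instead of adding a redundant NULL
             * check: a later change that breaks the "never NULL on success"
             * assumption trips this immediately. */
            assert(tag != NULL);
            return strcmp(tag, "var") != 0;
        }
        return 1;
    }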
  • by DASHSL0T ( 634167 ) on Monday July 07, 2003 @12:34PM (#6383547) Homepage
    Everybody take a deep breath.

    Their conclusion is that while the INITIAL defect rate of Apache is roughly equivalent to that of a closed source product (since they are testing a development release), the Open Source methodology reduces the defects to a greater extent and results in code with fewer defects over time.

    They are saying that Open Source coding methods are producing _better_ code in the long run.
  • by Tony-A ( 29931 ) on Monday July 07, 2003 @12:37PM (#6383569)
    It's always been typical open-source style to get the mission-critical parts hardened against nuclear attack, but leaving the other bits a tad soft.

    IMNSHO, that ought to be standard for any mission-critical software. Bugs and the places that bugs live in are not created equal. The beauty of Apache (at least 1.13) is that the overall system can be very robust and reliable with rather buggy modules. I suspect the problem with IIS is that everything assumes everything else is perfect, which overall doesn't quite work so well.
  • by Jeremy Erwin ( 2054 ) on Monday July 07, 2003 @12:42PM (#6383610) Journal
    The earlier study was of polished code, many iterations after release. This latest study is of an unpolished developer's snapshot. I suppose that you might be able to divine some kind of wisdom about the development of open-source software: development branches shall be as stable as commercial code; release branches shall be more so.

    The metrics report does mention the version number (dev-1/31/03), though the fact that this is development code is not explicitly noted. No mention is made of who commissioned this study. Perhaps the company is simply fishing for clients.
  • by pclminion ( 145572 ) on Monday July 07, 2003 @12:48PM (#6383637)
    Considering that Brian Kernighan, co-author of The C Programming Language, advocates this coding style in his book The Practice of Programming [bell-labs.com], I think it might be you who's the moron (and the 12-year-old). This is a classic error that thousands of programmers have made and continue to make. It's the difference of a single repeated keystroke.

    So shut up, you little twerp.

  • by Daniel_Staal ( 609844 ) <DStaal@usa.net> on Monday July 07, 2003 @01:01PM (#6383734)
    I don't think the poster meant to dis commercial QA work: he was, instead, of the opinion that commercial software will value the widgets and so on more than open source does.

    That is: he is sure that *both* processes take into account severity and priority of bugs. The poster just felt that their priorities were different. (Polish being more important for commercial code, absolute correctness for open source. The question of the 'correct' balance is left up to the reader.)
  • by sabat ( 23293 ) on Monday July 07, 2003 @01:36PM (#6383956) Journal

    Why did they use the development branch of Apache

    Let me restate this: why are they comparing pre-alpha software with production releases?

    Simplest answer: because they wanted to find flaws. The second most popular web server software is IIS. This looks like a Microsoft tactic: anonymously hire this company to "evaluate" code so that the results look unbiased. Everyone will likely realize that the competitor is Microsoft's IIS, so it doesn't need to be stated bluntly. MS wins; another (small) battle for mindshare is won.

  • Re:It's not fair! (Score:2, Insightful)

    by Drathos ( 1092 ) on Monday July 07, 2003 @01:57PM (#6384120)
    That's what compiler errors are for.. How else are you supposed to find typos when vim doesn't have a spellchecker? :)
  • by f00zbll ( 526151 ) on Monday July 07, 2003 @02:00PM (#6384151)
    The report hardly takes down OSS or Apache. The report is reasonable and doesn't over-extrapolate about quality. For me, the report is encouraging: MS has something like 80 programmers working on IIS, while Apache is made up of volunteers with far fewer resources, and that is pretty darn impressive for alpha code. I haven't looked at the list of active committers lately, but I know it's nowhere near 80. Draw your own conclusions.
  • by yaphadam097 ( 670358 ) on Monday July 07, 2003 @02:24PM (#6384340)
    I've worked on open source projects and I've also worked in commercial development shops. I think that their findings are accurate but misleading:
    1. In my experience there are generally fewer bugs in pre-release code on a commercial project because there is a stronger culture of code ownership, and most if not all code is independently reviewed before being committed.
    2. There are generally a high number of defects in pre-release open source code, because developers commit early and commit often. Independent review happens more often in open source projects, but it usually happens after the code has already been committed to the dev branch (Before that, the geographically dispersed dev team has no access to it.)
    3. The quality of code released to production in a commercial environment is usually very similar to the quality of code in the development branch. Once it is reviewed and committed it enters a QA cycle where an independent team tries to find any bugs. At this point there is invariably strong pressure to release. So, bug fixes happen quickly and quality suffers (I've always found it ironic that we called this "Quality Assurance.")
    4. Once an open source project has been completed (Meaning all of the features have been developed) it enters a much longer period of code review, bug hunting, and alpha release. For a project like Apache it was over a year before anyone started to use 2.0 in production. Most commercial companies can't afford nearly that much "QA" time, because they are spending money to make money.
  • by Major Tom ( 164687 ) on Monday July 07, 2003 @02:24PM (#6384344) Homepage
    There is no need to freak out about this being some sort of attack on open source software or agonize over what the unnamed commercial product used for comparison was.

    The article seems to indicate that the .51 error density for "commercial software" is talking about commercial software in the abstract. Presumably, this isn't the error density of some secret web server, but the average density of all the commercial products they've analyzed so far.

    This report is simply an attempt to prove a simple hypothesis about OSS: it gets increasingly refined as it matures.

    Reasoning believes they've proved the hypothesis because Apache, a middle-aged project, I suppose, has an error density comparable to commercial software, while the TCP/IP stack, a mature project, has a significantly lower density.

    This isn't intended to be a comparison of web servers (come on, people, *of course* they didn't have access to IIS); it is intended to be a mildly interesting observation about the life cycle of open source software.

    It would be a lot more interesting if we could see an analysis of whether or not commercial software goes through a similar maturing process. Maybe commercial products also grow refined with age. Maybe not. If so, which matures faster?
  • by LilMikey ( 615759 ) on Monday July 07, 2003 @03:03PM (#6384666) Homepage
    This is a pointless study. While yes, the slight possibility that one may dereference a NULL pointer is a bad thing, it's minuscule compared to bad design. A perfectly programmed web server designed poorly will have bazillions more bugs and security flaws than a slightly buggy, well-designed one. An objective code-scanning bug-finder can't fix stupid.
  • by the_duke_of_hazzard ( 603473 ) on Monday July 07, 2003 @03:07PM (#6384702)
    "The defect density of the Apache code inspected was 0.53 per thousand lines of source code, while the commercial average defect density came to 0.51 per thousand lines of source code."

    A simple reductio ad absurdum from this: if you produce thousands and thousands of lines of harmless, simple code to do something that could be done in a line, then your more verbose code is "better" than the concise one by this metric.

    This is assuming that it is possible to reliably statically test for errors in the first place, and that one "error" is equivalent to another... All seems a little suspect to me.

    This signature is intentionally pointless.

  • 0.53 errors per 1000 for Apache, vs. 0.51 per 1000 for "commercial equivalents" (note that they fail to say how many equivalents were used to generate the average, or which ones)? That's definitely within the margin of error. Not only that, but Apache is a less mature FS/OSS project, so the comparison seems to favor the FS/OSS model.

    Furthermore, while presumably many commercial equivalents were used to generate the commercial average, only one project, Apache, was used to generate the FS/OSS average error density. Again, very crappy statistics.

    Even if 100 different FS/OSS projects like Apache were used to generate that 0.53 average, and 100 different commercial equivalents used to generate the commercial average, it's probably still within the margin of error (or standard deviation).

    In short, this study = completely insignificant. Likewise, so was their previous study showing that FS/OSS has a lower bug-density, as it only used one FS/OSS project. To get useful statistics, you need hundreds of data-points -- not one.
  • by aborchers ( 471342 ) on Monday July 07, 2003 @03:40PM (#6385013) Homepage Journal
    Sorry to get pedantic, but char* buffers are not error prone. Programmers are prone to make errors when using them. Lack of maturity (so to speak) in the language and bad programmer form are not the same. Bad form is bad form in C or Java. That one lacks array bounds checking that the other provides is irrelevant. Languages that protect the programmer from errors may make bad form less likely to result in a failure, but failing to employ best practices in code design can still lead to hard-to-detect logic bombs.

    In this case, the bad form in using early returns is that using them leads one to not look at the whole routine as a cohesive whole where all the antecedents and consequents are correctly considered and accounted for. It's similar to why:

    if (a) { ...
    }
    else if (b) { ...
    }

    is bad form compared to

    if (a) { ...
    }
    else {
    if (b) { ...
    }
    }

    From tracing point of view, they are indistinguishable. They may even compile to the same set of instructions. The second, however, shows a level of diligence on the part of the engineer that all the possible routes are considered and there is no dangling consequent.

    Disclaimer: The real reasons why these things are bad form are practically impossible to convey in an example that doesn't make use of real code. i.e. it's the "..." bit that provides the opportunity for the bad-form constructs to leak bugs.

  • Re:Apache 1.3? (Score:3, Insightful)

    by Piquan ( 49943 ) on Monday July 07, 2003 @04:47PM (#6385770)

    I keep hearing this, and I'm not convinced.

    I didn't see anything in the article about what versions of closed-source codebases they used for comparison. But I'd hypothesize that it's code that they've been contracted to analyze. That means it's probably development code in that event, too.

    We can't gritch about them using Apache 2.1-dev unless we have reason to believe they didn't compare against dev versions. We can gritch about not having this information.

"Gravitation cannot be held responsible for people falling in love." -- Albert Einstein

Working...