Code Quality In Open and Closed Source Kernels 252

Diomidis Spinellis writes "Earlier today I presented at the 30th International Conference on Software Engineering a research paper comparing the code quality of Linux, Windows (its research kernel distribution), OpenSolaris, and FreeBSD. For the comparison I parsed multiple configurations of these systems (more than ten million lines) and stored the results in four databases, where I could run SQL queries on them. This amounted to 8GB of data, 160 million records. (I've made the databases and the SQL queries available online.) The areas I examined were file organization, code structure, code style, preprocessing, and data organization. To my surprise there was no clear winner or loser, but there were interesting differences in specific areas. As the summary concludes: '..the structure and internal quality attributes of a working, non-trivial software artifact will represent first and foremost the engineering requirements of its construction, with the influence of process being marginal, if any.'"
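
For readers who want to poke at the published databases, here is a minimal sketch of the kind of query one might run (the database file, table, and column names below are illustrative assumptions, not the actual schema, which is documented with the published data):

    # Illustrative sketch only: the file, table, and column names are
    # assumptions, not the actual schema of the published databases.
    import sqlite3

    conn = sqlite3.connect("kernel_metrics.db")    # hypothetical file name
    query = """
        SELECT kernel, AVG(loc) AS avg_file_loc, MAX(loc) AS max_file_loc
        FROM file_metrics                           -- hypothetical table
        GROUP BY kernel
        ORDER BY avg_file_loc
    """
    for kernel, avg_loc, max_loc in conn.execute(query):
        print(f"{kernel:12} avg LOC/file: {avg_loc:8.1f}  largest file: {max_loc}")
    conn.close()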
  • Re:Is it just me? (Score:5, Insightful)

    by Anonymous Coward on Friday May 16, 2008 @11:58AM (#23434836)
    That is if you can figure out which of the 12 links are the actual FA and which are supporting material.
  • by Anonymous Coward on Friday May 16, 2008 @12:00PM (#23434874)
    I can't wait to see what kind of ignorant anti-Windows screeds the 13-year-olds who post on Slashdot during recess will come up with in response to this article! I'm betting five minutes, tops, until someone posts a comment that spells "Microsoft" with a "$". Don't let me down, Slashbots!
  • by wellingtonsteve ( 892855 ) <wellingtonsteve AT gmail DOT com> on Friday May 16, 2008 @12:05PM (#23434962)
    ..that Open Source code is of quality, but at least the point of things like the GPL is that you have the power to change that, and improve that code..
  • CScout Compilation (Score:5, Insightful)

    by allenw ( 33234 ) on Friday May 16, 2008 @12:08PM (#23435028) Homepage Journal
    "The OpenSolaris kernel was a welcomed surprise: it was the only body of source code that did not require any extensions to CScout in order to compile."

    Given that the Solaris kernel has been compiled by two very different compilers (Sun Studio, of course, and gcc), it isn't that surprising. Because of the compiler issues, it is likely the most ANSI compliant of the bunch.
  • by davejenkins ( 99111 ) <slashdot@NOSPam.davejenkins.com> on Friday May 16, 2008 @12:09PM (#23435034) Homepage
    If I am understanding correctly, you were looking for 'winners' and 'losers' (weasel words in and of themselves, but anyway...) in terms of 'quality' (another semi-subjective term that could make someone go crazy and drive motorcycles across the country for the rest of their lives).

    You found that '..the structure and internal quality attributes of a working, non-trivial software artifact will represent first and foremost the engineering requirements of its construction, with the influence of process being marginal, if any.' -- or in plain English: "the app specs had a much bigger influence when compared to internal efficiencies".

    I would wonder if you're just seeing a statistical wash-out. Are you dealing with data sets (tens of millions of lines and thousands of functions) that are so large, that patterns simply get washed out in the analysis?

    Oh dear, my post is no more clear than the summary...
  • Re:Is it just me? (Score:5, Insightful)

    by raddan ( 519638 ) on Friday May 16, 2008 @12:18PM (#23435184)
    It's not a very good summary, but the paper is well-written, which is interesting considering that the author is the one who submitted the summary to Slashdot. I suspect that he assumes we have more familiarity with the subject than we actually do.
  • Really? (Score:3, Insightful)

    by jastus ( 996055 ) on Friday May 16, 2008 @12:19PM (#23435200)
    I'm sorry, but if this is what passes for serious academic computer-science work, close the schools. This all appears to boil down to: quality code (definition left to the reader) is produced by good programmers (can't define, but I know one when I see his/her code) who are given the time to produce quality code. Rushed projects by teams of average-to-crappy programmers result in low-quality code. All the tools and management theories in the world have little impact on this basic fact of life. My PhD, please?
  • by ivan256 ( 17499 ) on Friday May 16, 2008 @12:25PM (#23435320)
    It's obvious what the results would be.

    Half-completed, unpolished commercial software usually stays unreleased and safe from this sort of scrutiny. However, many of the same types of projects are left out in the open, easily visible to everybody, when developed as open source. The low code quality of these projects would drag down the average for open source projects as a whole.

    On the lighter side, you could say that you'd only consider software that was "out of beta" or version 1.0 or greater, but that would leave out most open source projects and commercial "Web 2.0" products....
  • KLOCs? (Score:5, Insightful)

    by Baavgai ( 598847 ) on Friday May 16, 2008 @12:28PM (#23435380) Homepage
    If good code and bad code were a simple automated analysis away, don't you think everyone would be doing it? What methodology could possibly give a quantitative weighting for "quality"?

    "To my surprise there was no clear winner or loser..." Not really a surprise at all, actually.

  • by indifferent children ( 842621 ) on Friday May 16, 2008 @12:35PM (#23435516)
    So open source software is not of 'markedly' higher quality. If it is of even 'slightly' higher quality, or 'exactly the same quality' as closed source software, then the fact that it costs less, and gives users freedoms that they don't have with closed source software, means that closed source is doomed.
  • by abolitiontheory ( 1138999 ) on Friday May 16, 2008 @12:35PM (#23435518)
    Does anybody see that these results are still in favor of open source? The fact is, it's actually a beautiful thing that the difference in quality is marginal. This equality then becomes the rubric by which to judge other elements of the design process, and choices about whether to develop and deploy programs as open source or closed source.

    People make claims about the need for closed source all the time, usually revolving around the need for a predictable level of quality, or some other factor. The fact is, this result proves that it's a wash whether you choose open or closed--so why not choose open?

    There's a deep significance here I'm failing to capture completely. Someone else word it better if they can. But there didn't need to be some blow-out victory of open source over closed source for this to be a victory. All open source needed to do was hold its own against closed source--which it clearly did--in terms of value, to secure its worth.

  • Re:Is it just me? (Score:2, Insightful)

    by smittyoneeach ( 243267 ) * on Friday May 16, 2008 @12:43PM (#23435672) Homepage Journal
    I'll attempt a radical paraphrase: "Form follows function, not process."
  • by Yosi ( 139306 ) on Friday May 16, 2008 @12:48PM (#23435754) Journal
    The piece of Windows they had did not include drivers. It says:

    Excluded from the kernel code are the device drivers, and the plug-and-play, power management, and virtual DOS subsystems. The missing parts explain the large size difference between the WRK and the other three kernels.


    Much of the code in Linux, for instance, is drivers.
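
    A quick back-of-the-envelope way to check how much of a Linux tree is driver code (a rough sketch; the path is a hypothetical checkout location, and the count includes comments and blank lines):

        # Rough line count of drivers/ versus the whole of a Linux source tree.
        # Counts every .c and .h file, comments and blank lines included.
        import os

        def count_lines(root):
            total = 0
            for dirpath, _, filenames in os.walk(root):
                for name in filenames:
                    if name.endswith((".c", ".h")):
                        with open(os.path.join(dirpath, name), errors="ignore") as f:
                            total += sum(1 for _ in f)
            return total

        tree = "linux-2.6.25"                    # hypothetical checkout location
        drivers = count_lines(os.path.join(tree, "drivers"))
        everything = count_lines(tree)
        print(f"drivers/: {drivers} lines ({100.0 * drivers / everything:.1f}% of {everything})")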
  • Re:Is it just me? (Score:5, Insightful)

    by Diomidis Spinellis ( 661697 ) on Friday May 16, 2008 @12:50PM (#23435794) Homepage
    I didn't write the last part when I submitted the story, and, yes, the summary given here is hard to comprehend, because it appears out of context. What the sentence '..the structure and internal quality attributes of a working, non-trivial software artifact will represent first and foremost the engineering requirements of its construction, with the influence of process being marginal, if any.' means is that when you build something complex and demanding, say a dam or an operating system kernel, the end result will have a specific level of quality, no matter how you build it. For this reason the differences between the software built with a tightly controlled proprietary process and that built using an open-source process are not that big.
  • Re:Really? (Score:3, Insightful)

    by jjohnson ( 62583 ) on Friday May 16, 2008 @01:12PM (#23436162) Homepage
    If you'd RTFA, you'd know that there's a lot more to what the author said than that. He says nothing about a relationship between the quality of programmers and the quality of code; he says nothing about the time taken to develop code, and makes no conclusions about its effect on code quality.

    What he says is that a cluster of metrics that collectively say something general about code quality (e.g., better code tends to have smaller files with fewer LOC; worse code has more global functions and namespace pollution) shows little difference between four kernels with diverse parentage. (A rough illustration of such metrics appears after this comment.)

    He speculates (and says he is speculating) that obvious differences in process might account for small variances in where each kernel scores well or badly.
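
    A rough illustration of the kind of file-level metrics the parent comment lists (an editorial sketch, not the paper's definitions; the "non-static function" count in particular is a crude stand-in for namespace pollution):

        # Crude per-file metrics: lines of code, plus a naive count of functions
        # whose body's opening brace sits in column 0 (kernel style) and whose
        # declaration line is not marked static -- a rough stand-in for
        # "global functions" / namespace pollution.
        import os, sys

        def metrics(path):
            lines = open(path, errors="ignore").readlines()
            nonstatic = 0
            for i, line in enumerate(lines):
                if line.startswith("{") and i > 0:
                    decl = lines[i - 1].lstrip()
                    if decl and not decl.startswith("static"):
                        nonstatic += 1
            return len(lines), nonstatic

        for dirpath, _, names in os.walk(sys.argv[1]):
            for name in names:
                if name.endswith(".c"):
                    loc, funcs = metrics(os.path.join(dirpath, name))
                    print(f"{loc:6} LOC  {funcs:3} non-static functions  {name}")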
  • Re:Stupid metrics (Score:4, Insightful)

    by Diomidis Spinellis ( 661697 ) on Friday May 16, 2008 @01:15PM (#23436214) Homepage
    It took me about two months of work to collect these metrics. Yes, also running the code of the four kernels through a static analysis tool would have been even better, but that would have been considerably more work: you need to adjust each tool to the peculiarities of the code, add annotations in the code, weed out false positives, and then again you only get one aspect of quality, the one related to bugs, like deadlocks and null pointer dereferences.

    Using one of the tools you propose, you will still not obtain results regarding the analysability, changeability or readability of the code.

  • by malevolentjelly ( 1057140 ) on Friday May 16, 2008 @01:19PM (#23436276) Journal

    It's a well-known fact that code will always resemble the institution that produced it, to some extent. To describe the Microsoft code as "poorly structured" is likely a bit out of touch.

    The best kernel code is generally extremely beautiful and descriptive when dealing with the system's abstractions (with nice, long descriptive names for each function) and then unbelievably hellish and ugly in the sections that deal with hardware. Kernels represent an intersection between idealistic system code and hideously complex, inhuman machine-interaction code. For this reason, we gauge the value of these systems by how cleanly they compile into assembly, by their performance, and ideally by how well they do what they were written to do.

    Kernel code fills such a complex role in the computer science paradigm that it is likely impossible to gauge the value or quality of any of them through any sort of automated means. What we have here is a mess of a research paper that comes to no obvious conclusions because they didn't really discover anything. If it were of any value, its final summary and conclusions wouldn't be so obfuscated. The researcher may or may not have mastered the art of understanding the zeitgeist of kernels but he certainly hasn't mastered the research paper.

  • by Llywelyn ( 531070 ) on Friday May 16, 2008 @01:19PM (#23436278) Homepage
    There is a company at the heart of whose business exists a 6000-line SQL statement that no one understands, that no one can modify, and that occasionally fails without anyone knowing why, though a restart of the program seems to take care of it.

    It has lasted that way for a very very long time.

    Is it good code simply as function of its survival and (sort of) working?

    I tend to think of good code like good engineering or good architecture. Surely you wouldn't define good architecture as "a building that remains standing," would you? The layout of the rooms, how well that space is used, how well it fits the needs of the users, how difficult it is to make modifications, etc. all factor into "good design" and have nothing to do with whether the building "works."

    I am not sure you can put a metric to it any more than I could put a metric to the quality of abstract expressionism or to how well a circuit is laid out--there may be metrics to aid in the process, but in the end one can't necessarily assign a numerical rating to the final outcome.

    That doesn't mean that there isn't such a thing as good quality and bad quality code.
  • by FishWithAHammer ( 957772 ) on Friday May 16, 2008 @01:34PM (#23436528)

    Generally speaking, commercial desktop apps are still way ahead of their open counterparts, with the exception of code development tools and anything that directly implements a standard (browsers, mail clients, etc.)
    Code development tools? VS says hi. (And somebody is now going to leap in and say that that monstrosity Eclipse is somehow "better" than VS...this will be amusing.)
  • by raddan ( 519638 ) on Friday May 16, 2008 @01:37PM (#23436568)
    With regard to the guy who went crazy and drove his motorcycle across the country-- I think the point of the book was to demonstrate that "subjective" and "objective" are specious terms. Science gets all hot and bothered when words like "good" and "bad" are used, but not when words like "point" are used. So if we can make allowances for axiomatic terms, why not so-called "qualitative" terms? After all, the word "axiom" means, according to Wikipedia:

    The word "axiom" comes from the Greek word axioma a verbal noun from the verb axioein, meaning "to deem worthy", but also "to require", which in turn comes from axios, meaning "being in balance", and hence "having (the same) value (as)", "worthy", "proper". Among the ancient Greek philosophers an axiom was a claim which could be seen to be true without any need for proof.
    Indeed, if you look at many of our "quantitative" measures, they are, at their heart, a formalization of "goodness" and "badness". If you're a mathematician, you might argue that this is not true (since there are loads of mathematical constructs whose only requirement is self-consistency rather than conformance to any external phenomenon), but if you're an engineer, your whole career balances on the fine points of "goodness" and "badness". It is an essential concept!

    My personal opinion is that if the statistics are a wash-out in general, then the researcher is asking the wrong questions. I know that the author pre-defined his metrics in order to avoid bias, but that's not necessarily good science. A study should be directed toward answering specific questions, and the investigatory process must allow the scientist to ask new questions based on new data.

    There is clear non-anecdotal evidence that these operating systems behave differently (and, additionally, we assign a qualitative meaning to this behavior), so the question as I understand it is: is this a result of the development style of the OS programmers? The author should seek to answer that question as unambiguously as possible. If the answer to that question is "it is unclear", then the author should have gone back and asked more questions before he published his paper, because all he has shown is that the investigatory techniques he used are ill-suited to answering the question he posed.
  • Re:Is it just me? (Score:5, Insightful)

    by legutierr ( 1199887 ) on Friday May 16, 2008 @01:37PM (#23436570)
    How useful is it to write something about computers that needs to be translated for the slashdot audience? Jargon is a great way to provide specialized information to insiders quickly and efficiently, but this is slashdot. If slashdot readers need you to restate your description of a problem or observation related to the Linux kernel (even if that description is taken out of context), could the paper have been written in a more accessible manner? The quote you provided from your paper seems to speak to a narrow audience; how narrow must your audience be, however, if it excludes a good portion of slashdot's readers?

    If I seem overly critical, I do not mean to be; it is only that I hate to see good, useful research made less accessible to non-academics by the use of academic language.
  • by Mongoose Disciple ( 722373 ) on Friday May 16, 2008 @01:41PM (#23436624)
    So open source software is not of 'markedly' higher quality. If it is of even 'slightly' higher quality, or 'exactly the same quality' as closed source software, then the fact that it costs less, and gives users freedoms that they don't have with closed source software, means that closed source is doomed.

    The problem with your conclusion is that it assumes that code quality as measured in this sort of way is the only or even the most important thing.

    It's like buying clothes. Sure, the quality of the clothes you're buying does matter some in making your choice. So does price (which you mention above). But what the clothes look and feel like is also often important, and something like whether or not they actually fit you can trump all of those concerns.

    In general I would say the open source world (as represented by the best known / flagship projects) produces higher quality code. It's better at finding and fixing bugs. It's often better at fixing inefficient algorithms and the like.

    What it's generally not as good at are higher-level or market-driven concerns, like whether a UI is just bad and needs to be replaced whole-cloth, whether a key feature that a developer would never use but that most users will want is present, or whether documentation is provided and of sufficient quality.

    As long as that's true, both open and closed source projects will continue to exist. They're best at different kinds of things, and I would argue they exist in a kind of symbiosis.
  • Re:Is it just me? (Score:3, Insightful)

    by Diomidis Spinellis ( 661697 ) on Friday May 16, 2008 @01:49PM (#23436828) Homepage
    This is a very good point...
  • by samkass ( 174571 ) on Friday May 16, 2008 @01:49PM (#23436838) Homepage Journal
    Yes, but there is absolutely no evidence that open source is any better in this respect than commercial software (in fact the actual evidence points to it being little different in this respect). And when it DOES crash, a 1-800 number is often better than a pile of badly commented code.

    It will, in the end, come down to a value proposition. The value proposition of freedom to modify code is very hard to quantify, so that will probably factor into the eventual success of open source not at all. The actual quality, usability, documentation, trainability, ease of install, compatibility with existing infrastructure (usually Microsoft), etc., will probably be the deciding factors, and I don't see open source having a clear-cut advantage in those metrics.
  • Re:KLOCs? (Score:3, Insightful)

    by Diomidis Spinellis ( 661697 ) on Friday May 16, 2008 @01:54PM (#23436978) Homepage
    You can automatically recognize some bad smells of poor quality code. However, this will still let through poor quality code that has been explicitly written to guard against the bad smells. So, you can say for sure that some code stinks, but you can't (and I suspect you will never be able to) tell that some code excels.
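
    As a minimal illustration of the kind of automated "bad smell" check being described, a sketch along these lines flags code that trips a couple of crude thresholds and says nothing about code that passes (the thresholds and the brace-tracking heuristic are arbitrary assumptions):

        # Minimal "bad smell" flagger for C files: warns about over-long lines
        # and over-long function bodies.  Thresholds are arbitrary; the brace
        # tracking assumes kernel-style layout and ignores strings and comments.
        import sys

        MAX_LINE = 100    # characters per line
        MAX_FUNC = 200    # lines per function body

        def check(path):
            in_func, func_start, depth = False, 0, 0
            for lineno, line in enumerate(open(path, errors="ignore"), 1):
                if len(line.rstrip("\n")) > MAX_LINE:
                    print(f"{path}:{lineno}: line longer than {MAX_LINE} characters")
                depth += line.count("{") - line.count("}")
                if not in_func and line.startswith("{"):
                    in_func, func_start = True, lineno
                elif in_func and depth <= 0:
                    if lineno - func_start > MAX_FUNC:
                        print(f"{path}:{func_start}: function body longer than {MAX_FUNC} lines")
                    in_func = False

        for path in sys.argv[1:]:
            check(path)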
  • by Diomidis Spinellis ( 661697 ) on Friday May 16, 2008 @02:09PM (#23437296) Homepage
    Ten years ago I wrote an article [spinellis.gr] criticizing the Windows API. Most of what I wrote then continues to be true today. Based on that external view of Windows, and the BSODs I regularly see, I was expecting to find many worse things in the kernel. The header file you mention is a clear manifestation of an inappropriate design, and I suspect that at higher levels of system functionality (say OLE or the GDI) there will be more parts of similarly bad quality.
  • Weird logic (Score:3, Insightful)

    by BokLM ( 550487 ) <boklm@mars-attacks.org> on Friday May 16, 2008 @02:34PM (#23437816) Homepage Journal

    "An earlier study on the distribution of the maintainability index [4] across various FreeBSD modules [35,Figure 7.3] showed that their maintainability was evenly distributed, with few outliers at each end. This makes me believe that the WRK code examined can be treated as a representative subset of the complete Windows operating system kernel."
    How are FreeBSD and Windows related? You conclude something about Windows source code based on things you saw in FreeBSD source code?
  • by daedae ( 1089329 ) on Friday May 16, 2008 @02:42PM (#23437944)
    You: "There's obviously a problem with a study that takes 8GB of data and concludes that there's no difference in quality between kernels with legendary uptimes and those that can't manage memory well enough to stay up more than a few weeks." From the summary: "The areas I examined were file organization, code structure, code style, preprocessing, and data organization." These have no direct correlation to uptime. Yes, indirect, perhaps, as in "a better-organized kernel is easier to understand, debug, and reason about", but not direct as in "implementing the scheduler in 3 files instead of one guarantees stability." That said, what defines this as an interesting but impractical study? Doesn't it say something that there's something more fundamental than just high-level software engineering principles at work in the relative qualities of kernels?
  • by KutuluWare ( 791333 ) <kutulu.kutulu@org> on Friday May 16, 2008 @04:45PM (#23439718) Homepage
    You haven't been paying attention to many Open Source proponents if you haven't ever seen them claim that Open Source code is of vastly superior quality to proprietary code. Hell, ESR's claim to fame is a whole paper he wrote on that exact topic. For example, the OSI itself puts this claim at the very top of their advocacy document on selling OSS to your management:

    The foundation of the business case for open-source is high reliability. Open-source software is peer-reviewed software; it is more reliable than closed, proprietary software. Mature open-source code is as bulletproof as software ever gets.
      Open Source Case for Business [opensource.org]
    There is a pretty clear divide in the F/OSS community between the OSI-side-people, who view Open Source as a development model that leads to better software with fewer bugs and quicker turnaround; and the FSF-side-people who think of Free Software as a moral imperative that leads to more freedom in addition to better software with fewer bugs and quicker turnaround.

    Having worked heavily in both areas of software development, I think this particular article's conclusion was obvious: code quality depends on the people who wrote it, not the process they used to license it. But only people who have done extensive proprietary and open-source development could really see that first-hand, and our opinions are automatically dismissed as coming from pro-Microsoft shills. Thus, I predict this paper will be roasted over an open flame, crushed into a tiny ball, soaked in gasoline, lit on fire, and ejected into deep space by the most devoted open source proponents in both camps.
  • by nguy ( 1207026 ) on Friday May 16, 2008 @09:56PM (#23442510)
    Those statistics are useless; nobody has any idea how these measures translate into correctness, robustness, or performance.

  • by CrazedWalrus ( 901897 ) on Friday May 16, 2008 @10:02PM (#23442564) Journal
    Um, yes.

    You seem to forget that the Linux forums are generally stellar for resolving HOW-TO questions. Additionally, there are FAQs and instructional blog posts that are readily accessible through Google. In other words, "Toggle That Doohicky" is easily obtained in the FOSS environment as well, and can be done WITHOUT sitting on hold and taking your chances with the quality of the rep who answers.

    Additionally, if the source were available, features you want could be added, someone ambitious enough could actually investigate *why* the damn thing works the way it does, and it'd probably have working Linux and Mac versions by now.
     
  • by xenocide2 ( 231786 ) on Friday May 16, 2008 @11:42PM (#23443030) Homepage

    On the lighter side, you could say that you'd only consider software that was "out of beta" or version 1.0 or greater, but that would leave out most open source projects and commercial "Web 2.0" products....


    Then restrict yourself to "what Fedora ships" or "what Canonical supports in main". These are the presumably viable software products with a living upstream.

    But you missed an interesting problem: failed commercial programs sometimes convert into open source projects. It's not clear to me whether this is a positive or negative effect. Are there more s out there [aaronbishopgames.com] or Blenders [blender.org]? Is the OpenOffice.org software good or bad?
  • Re:Is it just me? (Score:3, Insightful)

    by CherniyVolk ( 513591 ) on Saturday May 17, 2008 @03:37AM (#23444012)
    ... means is that when you build something complex and demanding, say a dam or an operating system kernel, the end result will have a specific level of quality, no matter how you build it.

    First, Mr. Spinellis, I found the report to be rather intriguing and captivating. I much respect the work put into it, and I think it'll prove a valuable resource for study or reference.

    With that said, the above quote struck a chord with me.

    Let's take fire control systems for weapons of mass destruction. Without going into detail, the basic fact is that if a thermonuclear warhead is launched from Russia, France, the UK, Israel, the US, or any other country, it was and will be a deliberate act. The systems are far too complex and far too reliable for error or mistake. In other words, regardless of political position, no ruling body is going to ignore the possibility of an oopsie launch.

    So I do understand that certain objectives, though they may be approached and implemented vastly differently, will have strong similarities in the end result and in how they were successfully applied.

    My problem is with this: what you're saying is that there is no quality difference between Windows and Linux, and that is the discordant chord it struck.

    I cannot extrapolate the agreeable portions of your thought to the seemingly obvious shortcomings of the Windows operating system. On every facet, whether it is security, stability, functionality, or reliability, Windows is far behind on all fronts... aside from secrecy, from a Microsoft point of view.

    I once told my boss, who well understood, that he would never get the same quality of code from me in the workplace as I might submit to the Open Source realm. It's just painfully obvious that I will, at some point, hack an improper solution together to meet his deadline. And it's the nature of business that, after the product is built, no one wants to change it unless they have to. So, while the economics all come into play here in why Microsoft might choose to fix one bug rather than another, the fact is that in Open Source a bug will be fixed on the merit of a fix being available and acknowledgment that the bug is in fact a bug. Regardless of whether it's economically sound or feasible to fix the bug, in Open Source it will be fixed.

    While the end results that you present are interesting, I cannot accept the proposition that the Windows kernel is so similar in quality. All one has to do is actually use the blasted thing, and no amount of numbers can be convincing enough to outweigh all the pitfalls plainly perceived from actually using the dreadful software.

    I think you have overlooked overwhelming variables that directly affect the quality of software. Or, perhaps, the WRK received meticulous attention at Microsoft before its release... this is quite possible, as it is widely known, from nearly all examples of closed-source proprietary software being released as open source, that it takes years just to clean the code up and prepare it for the ultra-high standards of the open source community.
  • Re:Is it just me? (Score:3, Insightful)

    by Allador ( 537449 ) on Saturday May 17, 2008 @05:14AM (#23444266)
    I think you're vastly overestimating the overall knowledge, experience, and education of the /. readership in this context.

    Only a small portion of the /. readers are programmers/developers (or in an academic speciality in this field).

    Even amongst that population, only a small percentage are qualified in large operating systems development.

    Similarly, a small percentage of the programmer subpopulation of /. will be familiar with the various theory, metrics and approaches to measuring and analyzing code quality.

    The target population of the paper is people who fit into all three of these groups. That's not very many people.

    I've been doing software development for a living for almost two decades, but I am not familiar with many of the metrics and approaches used to analyze the source code (some, of course, I am familiar with: cyclomatic complexity, line lengths, operands per line, and so on; a rough complexity-counting sketch follows this comment).

    This is incredibly narrowly focused stuff, even for /.
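
    For the curious, a very rough way to approximate one of those measures, cyclomatic complexity, is to count decision points; this keyword-counting sketch is only an illustration, not how any real tool (or the paper) computes it:

        # Very rough cyclomatic-complexity estimate for a whole C file: one plus
        # the number of decision points, approximated by counting branch keywords
        # and short-circuit operators.  Real tools work on parsed code.
        import re, sys

        DECISIONS = re.compile(r"\b(?:if|for|while|case)\b|&&|\|\||\?")

        def rough_complexity(path):
            text = open(path, errors="ignore").read()
            return 1 + len(DECISIONS.findall(text))

        for path in sys.argv[1:]:
            print(f"{rough_complexity(path):6}  {path}")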

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...