Code Quality In Open and Closed Source Kernels 252
Diomidis Spinellis writes "Earlier today I presented at the 30th International Conference on Software Engineering a research paper comparing the code quality of Linux, Windows (its research kernel distribution), OpenSolaris, and FreeBSD. For the comparison I parsed multiple configurations of these systems (more than ten million lines) and stored the results in four databases, where I could run SQL queries on them. This amounted to 8GB of data, 160 million records. (I've made the databases and the SQL queries available online.) The areas I examined were file organization, code structure, code style, preprocessing, and data organization. To my surprise there was no clear winner or loser, but there were interesting differences in specific areas. As the summary concludes: '..the structure and internal quality attributes of a working, non-trivial software artifact will represent first and foremost the engineering requirements of its construction, with the influence of process being marginal, if any.'"
Is it just me? (Score:5, Interesting)
Of course, I could try to RTFA, but hey, this is Slashdot, after all...
Re:Is it just me? (Score:5, Insightful)
Re:Is it just me? (Score:5, Interesting)
% style conforming lines: FBSD:77.27 LIN:77.96 SOLARIS:84.32 WIN:33.30
% style conforming typedef identifiers: FBSD:57.1 LIN:59.2 SOLARIS:86.9 WIN:100.0
% style conforming aggregate tags: FBSD:0.0 LIN:0.0 SOLARIS:20.7 WIN:98.2
(I'm far too lazy to clean up the rest)
% of variable declarations with global scope: FBSD:0.36 LIN:0.19 SOLARIS:1.02 WIN:1.86
% of variable operands with global scope: FBSD:3.3 LIN:0.5 SOLARIS:1.3 WIN:2.3
% of identifiers with wrongly global scope: FBSD:0.28 LIN:0.17 SOLARIS:1.51 WIN:3.53
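To make that last metric concrete, here is a toy illustration (my own made-up example, not code from any of the four kernels) of what "wrongly global scope" means:

    #include <stdio.h>

    /* Wrongly global: buf is visible to every translation unit,
       although only format_name() ever touches it. */
    char buf[64];

    const char *format_name(int id)
    {
        snprintf(buf, sizeof buf, "node-%d", id);
        return buf;
    }

    /* What the metric rewards: the same state at function scope. */
    const char *format_name_fixed(int id)
    {
        static char fixed_buf[64];
        snprintf(fixed_buf, sizeof fixed_buf, "node-%d", id);
        return fixed_buf;
    }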
Re:Is it just me? (Score:4, Interesting)
Re: (Score:2, Informative)
Re:Is it just me? (Score:5, Insightful)
Re:Is it just me? (Score:5, Interesting)
Re:Is it just me? (Score:4, Interesting)
Yup, and the author of the paper is Diomidis Spinellis, who wrote the excellent book Code Reading [spinellis.gr]. This is a great study of code analysis and familiarization techniques. He also wrote a fine article on C preprocessors... in Dr. Dobb's Journal, I think.
Re:Is it just me? (Score:4, Interesting)
Re: (Score:2)
> is the one I used for parsing the code of this study.
That was a great article; it really showed the complexity of handling those macros. Maybe something for "Beautiful Code II"...
Re: (Score:2, Insightful)
Re:Is it just me? (Score:5, Insightful)
Re:Is it just me? (Score:5, Insightful)
If I seem overly critical, I do not mean to be; it is only that I hate to see good, useful research made less accessible to non-academics by the use of academic language.
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
Only a small portion of the
Even amongst that population, only a small percentage are qualified in large operating systems development.
Similarly, a small percentage of the programmer subpopulation of
Re: (Score:3, Insightful)
First, Mr. Spinellis, I found the report rather intriguing and captivating. I much respect the work put into it, and I think it'll prove a valuable resource for study or reference.
With that said, the above quote struck a chord with me.
Let's take fire control systems for weapons of mass destruction. Without going into
Re:Is it just me? (Score:4, Informative)
Re: (Score:2, Insightful)
It's obvious (Score:5, Funny)
Oh. Wait. This is about propeller-head stuff rather than management stuff. Lemme get my "Handbook of postmodern buzz words"...
Not that surprising (Score:5, Interesting)
Interesting, but not shocking for those who have worked with disciplined commercial teams. I wonder what the results would be in less critical areas than the kernel, say certain types of applications.
Re:Not that surprising (Score:4, Insightful)
Half-completed, unpolished commercial software usually stays unreleased and safe from this sort of scrutiny. However, many of the same kinds of projects are left out in the open, easily visible to everybody, when developed as open source. The low code quality of these projects would drag down the average for open source projects as a whole.
On the lighter side, you could say that you'd only consider software that was "out of beta" or version 1.0 or greater, but that would leave out most open source projects and commercial "Web 2.0" products....
Re: (Score:3, Insightful)
Then restrict yourself to "what Fedora ships" or "what Canonical supports in main". These are the presumably viable software products with a living upstream.
But you missed an interesting problem: failed commercial programs sometimes convert into open source projects. It's not clear to me whether this is a
Re: (Score:3, Insightful)
Re: (Score:3, Interesting)
Generally speaking, commercial desktop apps are still way ahead of their open counterparts, with the exception of code development tools and anything that directly implements a standard (browsers, mail clients, etc.)
One reason for this is that code quality as measured in th
Re:Not that surprising (Score:4, Insightful)
Re: (Score:2)
Re:Not that surprising (Score:4, Informative)
Is it because of intellisense? That's kind of nice, especially when your code is so disorganized that you don't remember where stuff is defined. Or if you can't open stuff up in two different windows to see where it is defined, like VS prevents you from doing (yeah, you sort of can, but it's stuck in the main VS window).
Is it because of the debugger? Sure, the debugger is nice, and I like it, but it only helps get rid of the easy bugs. The bugs that really eat your development time are the ones that only manifest themselves after the program has been running for a few hours/days, and usually a debugger doesn't help much with that. Besides, every other IDE comes with a debugger, even GDB works fine if you can handle arcane keystroke combinations.
And on top of it, Solutions and projects in VS are horrible. Why does VS try to save the solution every time I quit? Makefiles have some awful syntax, but at least when they change, I know it's because of what I've done, and I know how to fix it.
That said, I don't consider VS to be a bad IDE, it is reasonably decent. I just don't understand the logic of these guys who think that VS is the greatest IDE ever. It's a question, not a flame.
Re: (Score:3, Insightful)
The problem with your conclusion is that it assumes that code quality as measured in this sort of way is the only or even the most important thing.
It's like buying clothes. Sure, the quality of the clothes
Re:Not that surprising (Score:5, Interesting)
Re: (Score:2)
Putting a fresh coat of paint on a pig will only work for so long (and producing a fatter, less attractive, less useful, more expensive pig after 5 years of effort to produce something other than a pig, is not a win). Marketing and support can only compensate for high cost of low quality for so long; every day, more people realize that software that doesn't crash is better than software that has a 1-800 number that you
Re:Not that surprising (Score:4, Insightful)
It will, in the end, come down to a value proposition. The value proposition of freedom to modify code is very hard to quantify, so that will probably factor into the eventual success of open source not at all. The actual quality, usability, documentation, trainability, ease of install, compatibility with existing infrastructure (usually Microsoft), etc., will probably be the deciding factors, and I don't see open source having a clear-cut advantage in those metrics.
Re: (Score:2)
Bahahahahahahaha. Hahahahahahaha. Hahahahahaha. Hehehehehe. *snicker*
It's a shame that the "Funny (+1)" rating no longer earns karma, because you deserve it.
Re: (Score:2)
And... really. REALLY? You'd rather have the source for Outlook Server than have a 1-800 person tell you "Oh, yeah, just toggle the doohickey and it'll come right back up."
Personally, my understanding of the revenue model for companies like RedHat and other Linux distros was "Let's take the most difficult to install, use, and generally obtuse operating system in common use today, give it away for free, then charge people for support! Brilliant!"
Not that I
Re: (Score:3, Insightful)
You seem to forget that the Linux forums are generally stellar for resolving HOW-TO questions. Additionally, there are FAQs and instructional blog posts that are readily accessible through Google. In other words, "Toggle That Doohickey" is easily obtained in the FOSS environment as well, and can be done WITHOUT sitting on hold and taking your chances with the quality of the rep who answers.
Additionally, if the source were available, features you want could be added, someone ambitious enough could act
Why would that be a surprise? (Score:2)
Interesting, but not shocking
Considering that few open source developers are strictly open source, that's hardly a surprise. I'd be willing to bet many open source developers are also part of disciplined commercial teams.
The flip side of that coin is just as intriguing. Open source development models don't produce software of notably inferior quality either. That should send a shiver through Castle Redmondore.
Re: (Score:2)
Maybe a better way of saying this is that open source programmers aren't better programmers than closed source ones.
But nobody ever said open source programmers are better. The argument is that open source software gets continually better from a user's perspective. If it doesn't for enough users, somebody else gives them what they want. If you aren't happy with SUSE's direction, you can go to RHEL and vice versa without creating a lot of fuss. Chances are some
Too few kernels studied. (Score:2, Redundant)
This leaves just o
Re: (Score:3, Interesting)
Other methodological limitations of this study are the small number of (admittedly large and important) systems studied, the language specificity of the employed metrics, and the coverage of only maintainability and portability from the space of all software quality attributes. This last limitation means that the study fails to take into account the large and important set of quality attributes that are typically determined at runtime: functionality, reliability, usability, and efficiency. However, these missing attributes are affected by configuration, tuning, and workload selection. Studying them would introduce additional subjective criteria. The controversy surrounding studies comparing competing operating systems in areas like security or performance demonstrates the difficulty of such approaches.
From the end-user perspective, functionality, reliability, usability, and efficiency are pretty much the entire thing. Most users couldn't care less that a piece of software is hard to maintain as long as it does what they want reliably, consistently, and with a minimal amount of cognitive load. So this paper is aimed more at applying traditional software engineering metrics to four pieces of real-world software. The
Re: (Score:2)
I wonder what the results would be if a real, commercial Windows kernel were tested instead of a research-oriented one. I doubt if the research kernel was ever subjected to the "have to release it on time" programming deadlines.
Re: (Score:2)
No-one has ever claimed (Score:4, Insightful)
Re:No-one has ever claimed (Score:4, Insightful)
Open Source Case for Business [opensource.org]
Having worked heavily in both areas of software development, I think this particular article's conclusion was obvious: code quality depends on the people who wrote it, not the process they used to license it. But only people who have done extensive proprietary and open-source development could really see that first-hand, and our opinions are automatically dismissed as being pro-Microsoft shills. Thus, I predict this paper will be roasted over an open flame, crushed into a tiny ball, soaked in gasoline, lit on fire, and ejected into deep space by the most devoted proponents in both camps.
Re: (Score:2)
With a large userbase, even a large kernel development team is going to represent "no one". This is just the nature of the numbers.
The fact that anyone can be "no one" and that these "no ones" can then benefit everyone one else, is the whole point of Free Software.
The libre kernels undoubtedly have much more diverse and spread out development teams.
They represent more than one corporate culture and more than one approach to software.
CScout Compilation (Score:5, Insightful)
Given that the Solaris kernel has been compiled by two very different compilers (Sun Studio, of course, and gcc), it isn't that surprising. Because of the compiler issues, it is likely the most ANSI compliant of the bunch.
statistical wash-out? (Score:5, Insightful)
You found that '..the structure and internal quality attributes of a working, non-trivial software artifact will represent first and foremost the engineering requirements of its construction, with the influence of process being marginal, if any.' -- or in plain English: "the app specs had a much bigger influence when compared to internal efficiencies".
I would wonder if you're just seeing a statistical wash-out. Are you dealing with data sets (tens of millions of lines and thousands of functions) that are so large, that patterns simply get washed out in the analysis?
Oh dear, my post is no more clear than the summary...
Re: (Score:3, Interesting)
It sounds more like they're saying "If someone built it, and someone else is using it, and it's important, then the code quality is going to be pretty good. If it matters, it's going to get attention and be improved."
Of course, I can think of a bunch of counter-examples in Windows where something was important *to me* and mattered *to me* and no one at Microsoft saw fit to do anything about it for decad
Re: (Score:2)
Of course, I can think of a bunch of counter-examples in Windows where something was important *to me* and mattered *to me* and no one at Microsoft saw fit to do anything about it for decades.
For example ?
Re: (Score:2)
OK, sure that's fair. Here's one:
When copying/moving a bunch of files, if Windows encounters an error that prevents a file from being read or written, the entire copy aborts at the point where the problem occurred.
I'd love Windows to automatically fail the problem file while continuing to try to copy/move the remaining files, then present you with some kind of error report at the end saying "These files couldn't be copied for [reason]."
There are file management utilities that you can use to get around t
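A minimal user-land sketch of the behavior the parent wants (copy_file() and the report format are made up for illustration; a real shell would also handle directories, permissions, and retries):

    #include <stdio.h>

    /* Hypothetical helper: copy one file, return 1 on success. */
    static int copy_file(const char *src, const char *dst)
    {
        FILE *in = fopen(src, "rb");
        FILE *out = in ? fopen(dst, "wb") : NULL;
        char buf[8192];
        size_t n;
        int ok = (in != NULL && out != NULL);

        while (ok && (n = fread(buf, 1, sizeof buf, in)) > 0)
            ok = (fwrite(buf, 1, n, out) == n);
        if (in && ferror(in))
            ok = 0;
        if (in)  fclose(in);
        if (out) fclose(out);
        return ok;
    }

    /* Fail the problem files, keep copying, report at the end. */
    void copy_all(const char *src[], const char *dst[], int count)
    {
        int failed = 0;
        for (int i = 0; i < count; i++)
            if (!copy_file(src[i], dst[i])) {
                fprintf(stderr, "could not copy %s\n", src[i]);
                failed++;
            }
        if (failed > 0)
            fprintf(stderr, "%d of %d files were not copied\n", failed, count);
    }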
Re: (Score:2)
For those of us who have read this [wikipedia.org], it is
Re: (Score:2)
Re:statistical wash-out? (Score:4, Insightful)
My personal opinion is that if statistics are a wash-out in general, then the researcher is asking the wrong questions. I know that the author pre-defined his metrics in order to avoid bias, but that's not necessarily good science. An investigation should be directed toward answering specific questions, and the investigatory process must allow the scientist to ask new questions based on new data.
There is clear non-anecdotal evidence that these operating systems behave differently (and, additionally, we assign a qualitative meaning to this behavior), so the question as I understand it is: is this a result of the development style of the OS programmers? The author should seek to answer that question as unambiguously as possible. If the answer to that question is "it is unclear", then the author should have gone back and asked more questions before he published his paper, because all he has shown is that the investigatory techniques he used are ill-suited to answering the question he posed.
Really? (Score:3, Insightful)
Re: (Score:2)
Re: (Score:3, Insightful)
What he says is that a cluster of metrics that collectively say something general about code quality (e.g., better code tends to have smaller files with fewer LOC; worse code has more global functions and namespace po
The 99% Solution (Score:5, Interesting)
This would explain some things like lower LOC count - after all, if you just have a bunch of global functions there's no need for a lot of API wrapping, you just call away.
I do hate to lean on LOC as any kind of metric but - even besides that, the far lower count of Windows made me wonder how much there, is there. Is the Windows kernel so much tighter or is it just doing less? That one metric would seem to make further conclusions hard to reach since it's such a different style.
Also, on a side note, I would say another conclusion you could reach is that open source tends to be more readable, with the WRK having 33.30% adherence to code style and the others being 77-84%. That meshes with my experience working on corporate code, where over time coding styles change more on a whim, whereas in an open source project it's more important to keep a common look to the code for maintainability. (That's important for corporate code too - it's just that there's usually no-one assigned to care about it.)
Re: (Score:2, Insightful)
Much of the code in Linux, for instance, is drivers.
Re:The 99% Solution (Score:4, Informative)
Re:The 99% Solution (Score:5, Informative)
KLOCs? (Score:5, Insightful)
"To my surprise there was no clear winner or loser..." Not really a surprise at all, actually.
Re: (Score:2, Funny)
writeGoodCode(int numberOfLines, float ouncesOfCoffeeConsumed)
Re: (Score:3, Insightful)
The winner is still open source (Score:3, Insightful)
People make claims about the need for closed source all the time, usually revolving around the need for a predictable level of quality, or some other factor. The fact is, these results prove that it's a wash whether you choose open or closed--so why not choose open?
There's a deep significance here I'm failing to capture completely. Someone else word it better if they can. But there didn't need to be some blow-out victory of open source over closed source for this to be a victory. All open source needed to do was compare--which it did, clearly--with closed source, in terms of value, to secure its worth.
Re: (Score:2)
So.... (Score:3, Interesting)
Closed Source Developer: I will try to do the best job I possibly can so I can keep my job and make money, because that is what I value.
Open Source Developer: I will try to do the best job I possibly can so I can help the community and feel better about myself/get noticed in the community/have something cool to put on my resume... because that is what I value.
How people choose to license their software, open source vs. closed source, says nothing about their programming ability. There are a bunch of really crappy GNU projects out there as well as a bunch of crappy closed source projects. Yeah, there is the argument of millions of eyes fixing problems, but when you get millions of people looking at the same thing you get both good and bad ideas: the more good ideas you get, the more bad ideas you get, and the more people involved, the harder it gets to weed out the good ones from the bad ones. Closed source is often affected by a narrow level of control where bad ideas can be mandated. All in all, everything really balances out, and the effects of the license are negligible.
Re: (Score:3, Interesting)
An interesting point.. (Score:5, Interesting)
From my personal experiences, it typically seems code is written to solve a specific need. Said another way, in the pursuit of solving a given problem, whatever engineering is required to solve the problem must be accomplished - if existing solutions to problems can be recognized, they can be used (for example, Gang of Four/GOF patterns), otherwise, the problem must have a new solution engineered.
Seeing as how there are teams successfully developing projects (with both good and bad code quality) using traditional OO/UML modeling, the software development life-cycle, the capability maturity model, scrum, agile, XP/pair programming, and a myriad of other methods, it would seem that what the author is saying is that it didn't necessarily matter which method was used; it was how the solution was actually built (the robustness of the engineering) that mattered.
Further clarification on the difference between engineering and "process" would strengthen this paper.
I went to a Microsoft user group event some time ago - and the presenter described what they believed the process of development of code quality looked like. They suggested the progression of code quality was something like:
crap -> slightly less crappy -> decent quality -> elegant code.
Sometimes, your first solution at a given problem is elegant.. sometimes, it's just crap.
Anyways, just my two cents. Maybe two cents too many..
SixD
Re: (Score:2)
That's only the first half of the life-cycle. The rest would be along the lines of
-> elegant code with a few special case hacks -> special case hacks with a few lines of elegant code -> crap.
What I took from it was: (Score:2, Interesting)
Stupid metrics (Score:4, Interesting)
The metrics used in this paper are lame. They're things like "number of #define statements outside header files" and such.
Modern code quality evaluation involves running code through something like Purify, which actually has some understanding of C and its bugs. There are many such tools. [wikipedia.org] This paper is way behind current analysis technology.
Re:Stupid metrics (Score:4, Insightful)
Using one of the tools you propose, you will still not obtain results regarding the analysability, changeability or readability of the code.
"Code quality" is bunk (Score:5, Interesting)
The worst looking piece of spaghetti code could have fewer bugs, be more efficient, and be easier to maintain than the most modular object oriented code.
What is the "real" measure of quality or productivity? Is it LOC? No. Is it overall structure? No. Is it the number of "globals"? Maybe not.
The only real measure of code is the pure and simple darwinian test of survival. If it lasts and works, it's good code. If it is constantly being rewritten or is tossed, it is bad code.
I currently HATE (with a passion) the current interpretation of the bridge design pattern so popular these days. Yeah, it means well, but it fails in practice by making implementation harder and inflating the LOC benchmark. The core idea is correct, but it has been taken to absurd levels.
I have code that is over 15 years old, almost untouched, and still being used in programs today. Is it pretty? Not always. Is it "object oriented"? Conceptually, yes, but not necessarily. Think of the "fopen," "fread," file operations. Conceptually, the FILE pointer is an object, but it is a pure C convention.
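A sketch of that convention with made-up names (counter_* is mine, for illustration): an opaque struct plus functions that take it as their first argument is object orientation by agreement rather than by language support.

    #include <stdlib.h>

    /* In a header, clients would see only this declaration: */
    struct counter;

    /* The "private" layout lives in a single .c file. */
    struct counter { long value; };

    struct counter *counter_open(void)
    {
        struct counter *c = malloc(sizeof *c);
        if (c)
            c->value = 0;
        return c;
    }

    void counter_add(struct counter *c, long n) { c->value += n; }
    long counter_read(const struct counter *c)  { return c->value; }
    void counter_close(struct counter *c)       { free(c); }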
In summation:
Code that works -- good.
Code that does not -- bad.
Re:"Code quality" is bunk (Score:4, Insightful)
It has lasted that way for a very very long time.
Is it good code simply as a function of its survival and (sort of) working?
I tend to think of good code like good engineering or good architecture. Surely you wouldn't define good architecture as "a building that remains standing," would you? The layout of the rooms, how well that space is used, how well it fits the needs of the users, how difficult it is to make modifications, etc all factor in to "good design" and have nothing to do with whether the building "works."
I am not sure you can put a metric to it any more than I could put a metric to measuring the quality of abstract expressionism or how well a circuit is laid out--there may be metrics to aid in the process, but in the end one can't necessarily assign a numerical rating to the final outcome.
That doesn't mean that there isn't such a thing as good quality and bad quality code.
Re: (Score:3, Interesting)
"sort" of working is not "working."
exists a 6000 line SQL statement that no one understands
This is "bad" code because it needs to be fixed and no one can do it.
Surely you wouldn't define good architecture as "a building that remains standing,"
I'm pretty sure that is one of the prime criteria for a good building.
Your post ignores the "works" aspect of the rule. "Works" is subtly different than "functions." "Works" implies more than mer
Re:"Code quality" is bunk (Score:4, Interesting)
This example [ioccc.org] is code that works and also has some nice quality attributes: 96% of the program lines (631 out of the 658) are comment text rendering the program readable and understandable. With the exception of the two include file names (needed for a warning-free compile) the program passes the standard Unix spell checker without any errors.
This example [ioccc.org] is also code that works, and is quite compact for what it achieves.
I don't consider either of the two examples quality code. And sprucing up bad code with object orientation, design patterns, and a layered architecture will not magically increase its quality. On the other hand, you can often (but not always) recognize bad quality code by looking at figures one can obtain automatically. If the code is full of global variables, gotos, huge functions, copy-pasted elements, meaningless identifier names, and automatically generated template comments, you can be pretty sure that its quality is abysmal.
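A made-up fragment (mine, not from any of the systems studied) packing most of those automatically countable markers into a dozen lines:

    #include <stdio.h>

    /* TODO: add comment */            /* auto-generated template comment */
    int g_tmp, g_tmp2, g_flag;         /* meaningless names, global state */

    void proc(int d)
    {
    top:
        if (g_flag) goto done;         /* gotos steering the control flow */
        g_tmp = d * 2;
        g_tmp2 = g_tmp + d;
        g_tmp = d * 2;                 /* copy-pasted line */
        if (g_tmp2 > 100) { g_flag = 1; goto top; }
    done:
        printf("%d\n", g_tmp2);
    }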
Re: (Score:2)
A philosophical point, if I may. What is the purpose of code?
I think the problem is that many engineers think that code should fit some form of aesthetic that is really irrelevant for the purpose. So far, the attributes that people use for judging software code quality and productivity have almost nothing to do with what the actual code does or how it works.
Algorithms are hard to quantify unless you know about algorithms. Code "hardness," or the ability
Re: (Score:2)
Re: (Score:3, Interesting)
Not to put too fine a point on it, but this is too much concern over stuff that does not always matter.
I agree "functional" and "reliable" are absolutely important.
"efficient?" Only if efficiency is required or of any concern. How efficient is efficient? It is a balance of efficiency against economy.
"Maintainable?" Sure, most of the time, but not always. Sometimes we toss stuff on
Re: (Score:2)
The only real measure of code is the pure and simple darwinian test of survival. If it lasts and works, it's good code. If it is constantly being rewritten or is tossed, it is bad code.
For me, good code is really easy to modify and update and fix problems and add to. So people actually do these things, and the result is that good code gets rewritten and 'tossed' a lot.
On the other hand, take things like ld (and friends) which iirc has code in "func(int) int param; { body }" style... which hasn't been used for what 20+ years? There are so many improvements I would like to make to it, but the code is too crusty to easily make them. I'd call that bad code... even though it 'works' just f
This is virtually baseless (Score:2, Insightful)
It's a well-known fact that code will always resemble the institution that produced it, to some extent. To describe the Microsoft code as "poorly structured" is likely a bit out of touch.
The absolutely best kernel code is generally extremely beautiful and descriptive when dealing with the system's abstracts (with nice, long descriptive names for each function) and then unbelievably hellish and ugly in the sections that deal with hardware. Kernels represent an intersection between the idealistic system code
Analogy (Score:2)
Did I get that right?
So now the question is how long it took to get that level of quality. And maybe there is a difference in quality, but the measurement used is not sensitive enough or not appropriate, or his conclusion isn't quite correct: he measured a difference, it just didn't show in his conclusion.
No Clear Winner, but... (Score:3, Funny)
Statements like this: "Indeed the longest header file (WRK's winerror.h) at 27,000 lines lumps together error messages from 30 different areas; most of which are not related to the Windows kernel" allow me to feel smug in my anti-Windows bias.
Re: (Score:3, Insightful)
Re: (Score:2)
Not sure how much work it would be to analyse yet another system, but I'd love to see how NetBSD fares in this comparison...
NetBSD is the one to compare here (Score:3, Interesting)
Isn't NetBSD the system filled with academics who insist upon clean, manageable, and portable code above all other standards? Too bad the NetBSD kernel didn't get judged here, I suspect it would have taken the cake.
I still recall this exhaustive report [bulk.fefe.de] comparing several kernels' performance back in 2003 in which NetBSD pretty much beat the pants off of everybody else (note the two updates with separate graphs). The initial poor performance was due to an old revision, and upon seeing that there were some places in which the newer revision wasn't so hot, the developers fixed them and in only two weeks, NetBSD beat out FreeBSD on every scalability test. Their pragmatism and insistence on quality code finally paid off.
Ever since seeing those charts, I've been waiting for Debian/NetBSD [debian.org] to come out...
Re: my kernel comparison link (Score:3, Informative)
Weird logic (Score:3, Insightful)
Re: (Score:2)
Think of my argument as looking at the people living in China and seeing that there are no areas occupied by giants or dwa
Preprocessing: here we go again (Score:3, Interesting)
That subjective conclusion is precisely the effect of reading too much into the metrics.
Sun and Microsoft programmers each need to support just two platforms (Sun: SPARC and AMD64; M$: IA32 and AMD64). All their portability concerns are of boolean complexity.
But FreeBSD and Linux run on dozens of platforms. I do not know how it is in BSD land, but in Linux the first and foremost requirement for platform support is that it have no negative side-effects on other platforms. Consequently, for example, under Linux most (all? - all!!) locking is still implemented as macros: on a uni-processor system with the preemptive kernel feature disabled, all in-kernel synchronization miraculously (thanks to the preprocessor) disappears from the whole code base. That guarantees that on such a platform the kernel runs as efficiently as possible, without any locking overhead, because all the locking is simply not needed anymore.
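A simplified sketch of the idea (my paraphrase; the real Linux headers are more involved):

    /* On SMP builds the macro expands to a real lock operation; on a
       uni-processor, non-preemptive build it vanishes at preprocessing
       time, so the compiled kernel carries zero locking overhead. */
    #ifdef CONFIG_SMP
    #  define spin_lock(lock)    arch_spin_lock(lock)
    #  define spin_unlock(lock)  arch_spin_unlock(lock)
    #else
    #  define spin_lock(lock)    ((void)(lock))  /* compiles to nothing */
    #  define spin_unlock(lock)  ((void)(lock))
    #endif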
And that's a single example. There are many macros for special CPU features: depending on the platform, a macro expands to a no-op, an asm statement, or a function call. There is no way around using macros.
I think one of the points the author needed to factor in is the portability of the OS. Without that, most metrics are skewed too much.
P.S. Actually, Linux's affinity for macros often (at least according to kernel developers) stems from poor optimization of inlined functions in GCC. Many macros could be converted to functions, but that would hurt overall performance, in many places significantly.
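For example (a toy comparison, not kernel code): the macro is guaranteed to be expanded in place, while inline is only a hint that older GCC versions sometimes ignored. The price is that the macro evaluates its arguments twice.

    /* Always expanded in place by the preprocessor, but note that
       max_macro(i++, j) would evaluate i++ twice. */
    #define max_macro(a, b)  ((a) > (b) ? (a) : (b))

    /* Safer, but actual inlining is left to the compiler's judgment. */
    static inline int max_inline(int a, int b)
    {
        return a > b ? a : b;
    }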
Re: (Score:2)
Thanks for pointing this out.
pointless handwaving (Score:3, Insightful)
Slashbots are teh 13 year olds blah blah blah (Score:5, Funny)
Just so everyone understands, the tactic used here is known as "Poisoning the well." [wikipedia.org] The idea is to discredit an argument's source before the argument is presented. Here, our AC friend is trying to ward off criticism of Microsoft by insinuating that anyone who offers it is a 13-year-old "Slashbot."
The fallacy is in the fact that even someone who is 13 and often goes along with the Slashdot zeitgeist may still have legitimate criticisms of Microsoft, such as the fact that Microsoft sucks giant hairy donkey balls.
Re: (Score:2)
Re: (Score:2)
What do they taste like?
You'd have to ask Micro$oft.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re:Closed Source? (Score:5, Interesting)
The WRK is under the Microsoft Windows Research Kernel Source Code License [microsoft.com]. I'm not sure that this license conforms with anyone's definition of open source, but it's reasonably free for research.
But the PP addresses a crucial point: if something really is closed source, there is no reviewable way to compare and present the code. So if the WRK were total crap, they could always say: yes, that's only the WRK, not the real kernel.
Only statements about open source code are directly verifiable/falsifiable. That is one of the reasons why the FOSS approach is superior from a scientific as well as a technical point of view.
Re: (Score:3, Interesting)
Re: (Score:2)
However, I was wondering how legal his database is; it might skate awfully close to the edge of the licenses he had to sign to get access to some of that code.
Re:question (Score:4, Interesting)