Myths About Open Source Development
jpkunst writes "A thought-provoking article by chromatic on oreillynet, listing eight "myths" that Open Source developers tell themselves. For example: Myth: Publicly releasing open source code will attract flurries of patches and new contributors. Reality: You'll be lucky to hear from people merely using your code, much less those interested in modifying it."
Headline for the article is a troll (Score:5, Interesting)
----------------
Mythical Man Month Methodology
http://fourm.info/
Re:Headline for the article is a troll (Score:2, Informative)
The myth is addressing the assumption that people who use said software will contribute to its development with patches and improvements to the code.
Re:Headline for the article is a troll (Score:5, Insightful)
Unless the author of the article has done some measurements to see what proportion of users send back improvements - and there's nothing in the article to say that he has measured anything or that he maintains any free software himself - then there's no reason to believe him rather than the 'myth'.
Re:Headline for the article is a troll (Score:5, Interesting)
It's my experience that the percentage of people who send feedback or patches is much lower than commonly expected. See, for example, Nicholas Clark explaining the volunteer pool for Perl 5 core development [mail-archive.com]:
Of course, there are hundreds of people in the CREDITS file, but a handful of people do the bulk of the work. Maybe it's an edge case, but it isn't 10% of Perl users contributing back to the core; it's very much below 1%.
That's not bad. It just is. My point is that expecting a smaller, younger, and less-well-used project to attract more regular and frequent developers is usually unrealistic.
Re:Headline for the article is a troll (Score:3, Insightful)
Re:Headline for the article is a troll (Score:3, Insightful)
Even if you hate Perl, you should be amazed that people were able to produce so many useful libs in Perl
Re:Headline for the article is a troll (Score:3, Interesting)
I would disagree. In my experience, early versions of open source projects will attract more patches as a result of their immaturity - they are less refined and any bugs that exist are more obvious and usually easier to fix. As time goes on the project becomes more defined feature-wise and only the hard to find/fix bugs are left.
Re:Headline for the article is a troll (Score:5, Insightful)
I've been maintaining cscvs, a tool for breaking a CVS repository's history into changesets and (among other things) importing its contents into the GNU Arch [gnu.org] revision control system. It's picked up a fair number of users (more as the documentation and such get better), and a number of developers have contributed patches. If I weren't quite so busy with my job right now, I'd have been able to help *another* developer with a bugfix he's asked for a hand in putting together (to fix the mangling of repository locations for CVS repositories whose CVSROOT path differs from the path reported in rlog output, beyond the single such case I'm currently fixing).
The other project of mine that could be considered actively maintained within the past year is "Ticket Applet 2", a GNOME applet for showing and updating the status of one's Kerberos ticket. It's received a quite major patch from one outside developer (providing compatibility with his alternate Kerberos implementation), and feedback from a number of users at my workplace -- but there was certainly no flood of support washing in the moment I put it up on freshmeat, and had I been expecting such I would have been mistaken.
I think the actual claim in the article is a lot more defensible than the little Slashdot blurb -- to the point that the blurb really does both the readers and the author something of a disservice. (Indeed, the only one I completely disagree with is the claim that it's never best to throw out working code for a complete rewrite should it become unmaintainable.)
Re:Headline for the article is a troll (Score:5, Insightful)
This is true; however, most commercial development groups already know that these myths are just that -- if not the coders, then at least their managers.
The issue he is covering is the fact that many people in the FS/OSS movements believe that these myths are true. This article is not a condemnation of the FS/OSS community, but a reality check for them.
Re:Headline for the article is a troll (Score:3, Insightful)
Defensively crying "troll" in response to criticism isn't going to help matters any.
Re:Headline for the article is a troll (Score:4, Insightful)
Open source works because even though 90% of users of a project won't contribute, the 10% that do (not just code, but bug reports, comments, newbie help, documentation, etc.) make a huge difference.
Half of the stuff is assumptions I deal with *every day* from management at my paid work, so to say that OSS makes these assumptions exclusively is a pure troll.
Some of it is plain loony - saying that writing code once and sharing it is a commercial advantage is ludicrous - the *point* of OSS is that we write stuff once then share it. Commercial development does exactly the opposite, by protecting everything with patents and forcing everyone to re-invent the wheel when they write anything.
Re:Headline for the article is a troll (Score:3, Insightful)
Ummm...a large percentage of commercial code DOESN'T get patented. They just don't share the code. It's closed source. You can't see what it does. That's all. You can't (of c
Re:Headline for the article is a troll (Score:5, Insightful)
No, what the first myth was alluding to is this: when you release your OSS project into the wild, don't expect an army of l33t coders to materialize and assist you in developing it.
I've found this myself; I wrote a code for performing spectral synthesis of stars undergoing quakes, and released it under the GPL. There are quite a few asteroseismology groups around the world using the code now; but not a single person has contacted me and offered to help develop or debug the code.
As chromatic pointed out in his article, the majority of OSS projects have very few developers, even in cases where the project has a large user base.
Re:Headline for the article is a troll (Score:5, Funny)
An optimist would say that this is because your code, through some bizarre statistical anomaly, is perfect and doesn't need any further development or debugging.
A pessimist would say that you are the only one in the world who cares about spectral synthesis of stars undergoing quakes.
Re:Headline for the article is a troll (Score:5, Insightful)
Reality: You'll be lucky to hear from people merely using your code, much less those interested in modifying it.
So. Just because something is open or closed source does not mean that it is a good program, nor does it imply that anybody wants to use it.
Myth: Stopping new development for weeks or months to fix bugs is the best way to produce stable, polished software.
Reality: Stopping new development for awhile to find and fix unknown bugs is fine. That's only a part of writing good software.
I don't see too much disparity here between the "myth" and "reality".
Myth: New developers interested in the project will best learn the project by fixing bugs and reading the source code.
Reality: Reading code is difficult. Fixing bugs is difficult and probably something you don't want to do anyway. While giving someone unglamorous work is a good way to test his dedication, it relies on unstructured learning by osmosis.
This "reality" again does not dispell the "myth". Try having new developers interested in a project and reading source code in a closed source project. Yeah, its difficult to read code, but infinitely more difficult to read it if you dont have access to it. BTW, the metaphor or whatever "osmosis" is trying to make a point is pretty silly. Osmosis is the transfer of water through a semipermeable membrane.
Myth: Installation and configuration aren't as important as making the source available.
Reality: If it takes too much work just to get the software working, many people will silently quit.
Yeah, they're not that important; that's why we did silly stuff like create autoconf to configure and install software. That is why we carry around install.sh from X11 to install software in a predictable and sane way. That is why we have plain readable text files to configure our software. The reality holds true for closed source as well as open source.
Myth: Bad or unappealing code or projects should be thrown away completely.
Reality: Solving the same simple problems again and again wastes time that could be applied to solving new, larger problems.
This is again true for open and closed source projects. Go look at one of the Windows (closed source) freeware/shareware repositories and you will find at least 5-10 programs that all do the same thing more or less. If these were open source projects, I would imagine that there would be a good amount of code reuse going on here.
Myth: It's better to provide a framework for lots of people to solve lots of problems than to solve only one problem well.
Reality: It's really hard to write a good framework unless you're already using it to solve at least one real problem.
Does anyone think this is either a valid myth or something too terribly interesting to talk about? I will say, however, that UNIX (I'm generalizing that open source is more of a UNIX-like thing here) in general is a framework, and our stuff plays well with one another. We have programs whose STDOUT and STDERR output is formatted for external processing and parsing; we have exit statuses in our programs so they can be &&ed and ||ed or tested for success or failure. We have signals, pipes, and sockets for IPC. Look at the number of open source installs and the wide variety of things that they do and tell me that we are not solving a number of real problems well.
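To make the composition point concrete, here's a minimal sketch in C (a hypothetical filter, not from any real project) of the convention: results go to STDOUT, diagnostics go to STDERR, and the exit status is what && and || look at.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical filter: copy stdin to stdout and fail loudly if there was no
   input at all.  The exit status is the whole "API": callers can write
   ./filter < in.txt && next-step  and the chain stops on failure. */
int main(void)
{
    char line[4096];
    long lines_copied = 0;

    while (fgets(line, sizeof line, stdin) != NULL) {
        fputs(line, stdout);                      /* results go to STDOUT (pipeable) */
        lines_copied++;
    }

    if (lines_copied == 0) {
        fprintf(stderr, "filter: no input\n");    /* diagnostics go to STDERR */
        return EXIT_FAILURE;                      /* nonzero: the && chain stops here */
    }
    return EXIT_SUCCESS;                          /* zero: the && chain continues */
}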
Myth: Even though your previous code was buggy, undocumented, hard to maintain, or slow, your next attempt will be perfect.
Reality: If you weren't disciplined then, why would you be disciplined now?
Axiom of life: if a program sucks, no one will use it. This is true for open source and closed source stuff alike.
Myth: Warnings are just warnings. They're not errors and no one really cares about them.
Reality: Warnings
Re:Headline for the article is a troll (Score:3, Informative)
And even if there are a lot of people who use it - don't expect them to be willing or ABLE to provide you feedback or software development assistance. Bei
Re:Headline for the article is a troll (Score:3, Insightful)
Plain readable text files don't validate parameters or combinations of parameters for you. That's part of the problem; they're just text. Put a GUI around it, a
Re:Headline for the article is a troll (Score:3, Interesting)
What's the problem? Binary config files don't validate parameters either.
That's part of the problem; they're just text. Put a GUI around it, and all of a sudden you can prevent users from saying that - say - they want to log all output to a file, but they specify a file which is invalid.
Why a GUI? why not CLI, for example?
With a GUI in there, you can tell the user that they've made a mistake while they're editing t
Re:Headline for the article is a troll (Score:3, Interesting)
Yes, but with binary config files, you have a program which writes those binaries, and which does the validation.
The OP was claiming that plain text files are more than good enough for configuration. His mindset is "well, you can read, can't you?".
Why a GUI? why not CLI, for example?
Whichever you prefer. Radio button choices are easier to make with a GUI, and most end-users will be using a GUI such as KDE or Gnome, so I'd suggest
Installation and configuration (Score:2, Interesting)
Re:Installation and configuration (Score:3, Insightful)
This is a Red Hat problem, not an open source problem. I've had similar problems with some silly Windows programs that required certain versions of Visual Basic DLLs or some other prerequisite DLL or whatever.
Btw, doing 'apt-get install imagemagick' on Debian works quite well. Doing 'rpm -i imagemagick' on Red Hat is more than likely only going to give you a list of reasons why it ain't gonna do it.
Ahem. It's called up2date or yum. (Score:3, Informative)
If you want apt-get like behavior, you should be using up2date. And then there's yum which has apt-get like syntax. Both of these meta tools use
myth 9: (Score:5, Funny)
Re:myth 9: (Score:5, Insightful)
Sorry folks, a programmer with no degree but lots of Open Source experience will still have a tougher time getting a job than a C.S. student with no experience.
It's wrong, but it's still true.
Re:myth 9: (Score:3, Insightful)
Re:Umm...... (Score:5, Insightful)
According to that link, Alan has a BSc in CS. Linus Torvalds has a Bachelor's degree in CS, and an honorary Ph.D. from the same school in Finland. I'm too lazy to dig up links for that. It's in several of the books about his life.
Kirby
Re:myth 9: (Score:5, Interesting)
In the last couple of years I have dated a teacher, nurse, legal assistant, and a graphic designer, and the only one who didn't really enjoy talking tech was the graphic designer, and I think that's because she, too, worked with computers all day.
Re:myth 9: (Score:5, Funny)
I tell my wife all about my day in the tech world, and she tells me all about her day in the marketing world. Neither of us knows fuck-all about what the other is saying, but it makes for good conversation.
Then we go shag like bunnies...
Re:yea! just like myth 10: (Score:3, Funny)
I thought it might be paralyzing social fear. But it's the karma whoring that leaves my dance card so empty, and my
Myth # 9 (Score:5, Insightful)
This may be true for a minority of widely used projects, but for most applications, I've never bought this argument. Bug swatting, and especially code inspection, is and always will be a tedious process, not well-suited for a volunteer-only development community. The only advantage I see for open source in this area is that bugs can be fixed as they are encountered -- but this only works where the end user has the required skills to do the fixing in the first place.
Re:Myth # 9 (Score:5, Insightful)
The only thing that open source brings to the table is that people might look at it, and might point out problems. But if you are relying on both of those to happen you are making two big assumptions.
Re:Myth # 9 (Score:5, Insightful)
I am using a CAD system that has a bug in its STL export. I can duplicate the bug. I have sent it to the company I bought the software from and I still do not have a fix. It looks like a pretty simple bug, but since I do not have access to the source, I am out of luck.
PS I have not seen any good 3D CAD systems that are OSS.
Re:Myth # 9 (Score:4, Insightful)
You can duplicate the bug. You do not have the source.
They have the sources. Their setup can easily be so that they cannot duplicate the bug.
There is also the strong possibility that fixing that bug just moves the bug coverage around: closing off one bug lets a bunch of other bugs loose on the unsuspecting victims.
There is also the messy problem of tracking and propagating the fix. I'm an old fart, so bear with me on the manual drafting analogy. If a drawing is missing a line, you can't just go into the filing drawers, pull the drawing, add the line, put it back and be finished with it.
This is why methinks Open Source will ultimately win. Not (just) on the low-end, low-budget side, but more importantly on the high-end, high budget side.
If the fix fixes one bug that you care about and exposes ten bugs you do not care about, it is a good fix. For you. It is of course to your advantage that that fix, minus any assorted buglets that you do not care about, makes it into the general stream. In the meantime, you have something that is almost as good.
The net effect seems to be that Open Source gets almost another nine, almost for free. It's not a magic bullet, but it's a very cheap and effective way to approximate reliability that would otherwise be prohibitively expensive.
Re:Myth # 9 (Score:5, Interesting)
Myth: Thousands of users are looking at the code.
Not Myth: Thousands of users could be looking at the code.
Not Myth: It's the one out of thousands who, because (s)he can look when (s)he needs to, actually does -- and that's what matters. No silver bullet, but it improves the odds drastically.
Personally, mostly I wouldn't bother looking. But IF for some reason, what and how I'm doing something exposes an interesting bug, I will be looking to see "how come", code included.
You do forensic analysis on the airplanes that crash, to see "how come". You don't scrutinize the ones that are flying with the same severity. Aircraft safety would be much worse if aircraft designers could not obtain any information about crashed airplanes. (Part of the closed-source scenario. The developers do not have access to information about crashed applications. No I am not going to ship my servers, users, configurations and proprietary data to some vendor so that they can maybe get to something in a few months.)
Fear not, corporate developers (Score:5, Insightful)
~~~
Not all open-source projects are alike, however. A small number of open-source projects have become well known, but the vast majority never get off the ground, according to Scacchi.
~~~
Open source is obviously faster/better/cheaper when thousands of people donate their time to a single project. The only open source project I've been involved in was a collaboration among several corporations, all of which wanted to leverage each other's resources, but none of which could really contribute their own.
There's nothing like money to motivate people to work on a project for which people aren't willing to donate their time.
Personally, I'm not convinced speed is related to developer quantity. There's too big a variation in productivity between experienced and amateur developers.
I'm also not convinced open source is right for all types of software. How many open-source developers do you know who conduct large-scale usability tests? How many open-source developers go around interviewing end users? When the developer and the product consumer are the same person, open source makes much more sense to me.
Re:Fear not, corporate developers (Score:3, Interesting)
None. Why? One potential reason is because it's not needed. Ever consider that all the 'usability tests' that MS conducts are a bunch of shit? Look at the two 'major' - supposed - outcomes of such research: MS Bob and Windows XP's graphical interface. All that this illustrates is that MS found people are dumb, and that MS doesn't think most folks are capable
Re:Fear not, corporate developers (Score:5, Insightful)
Oh please. Usability is THE REASON (well, ok, marketing too, to a lesser degree) that Windows runs 90%+ of the world's PC's. Usability is THE REASON why Linux isn't widely adopted as a desktop platform. So you just keep telling yourself that, and you'll keep Linux and other OSS projects to a tiny, tiny userbase.
People want more features, so they write them themselves - and quite a few people will use them. Sure, most people don't (they just use the 'vanilla' configuration), but it's necessary to have that flexibility in the framework; otherwise there will be no innovation. The benefit to a system like linux is that flexibility is there due to the openness and availability of the source code: nothing needs to be reverse engineered.
That's great and all, but flexibility is greatly overrated. I want my computers to run my businesses for me. That's it. "Flexibility" as a "feature" is something that's thrown around when a product is simply too difficult to use. Fuck flexibility. I want something that works. Hell, I want something with LESS flexibility. I don't need software that's going to do everything under the sun. Software should do its job and get the hell out of the way. If people wanted "flexibility" above all else, you'd find stereos that are sold without cases, with wires that you have to connect yourself every time you want to use them.
Re:Fear not, corporate developers (Score:5, Insightful)
Usability is THE REASON (well, ok, marketing too, to a lesser degree) that Windows runs 90%+ of the world's PC's.
Bullshit. If usability was the key issue, MacOS would beat Windows, and the entire IBM-compatible PC line would have died out in the '80s when it was still young, because competitors like the Amiga, Atari ST, and the like were a LOT easier, prettier, and more powerful. Open hardware is the reason Windows won. The IBM PC was (despite the best efforts of IBM) an open spec that everyone knew how to exploit, and all the advantages that gives to the consumer came out of that. Microsoft was just lucky enough to be the one providing the OS for it.
Re:Fear not, corporate developers (Score:4, Insightful)
I would definitely say MacOS is no better from a usability standpoint than Windows, and my personal experience is that it's less usable (for me).
But would that have been the case back when Windows was just a large bulky application that ran on DOS? Remember it's back *THEN* that Microsoft beat out Apple. Today they're just riding the momentum from that, because the software industry has a huge inertia due to 'network effect'. Back when both platforms were on equal footing and had a chance to compete fairly, Windows beat Mac *even though* it didn't have a good interface back then. Hence my call of "bullshit" to the claim that the UI is the reason Microsoft is winning. It has to be something else because they were the *worst* UI of the field back when there were viable competitors in the '80s. Mac, Amiga, Atari ST - all of these were contemporaries with Windows 3.0, and somehow ended up losing to it. Therefore the user interface CANNOT be the reason for their success.
Re:Fear not, corporate developers (Score:4, Informative)
I would imagine that orders of magnitude more people have tried the latest version of the Linux kernel as compared to the Solaris, WINNT, and Darwin kernels. Maybe that is not a usability test. For me, I downloaded a few of the latest Linux kernels for my desktop, and I found some good stuff, like performance increases. I also found some stuff that was broken to hell, like sound and IDE when combined with ACPI. You know what, these issues were already being discussed on the mailing list when I found them, and they appear to be working now that I am running 2.6.0-test11. Btw, I cannot get Windows to play a DVD on the same laptop now that I have tried to patch it because of the RPC worms.
How many open-source developers go around interviewing end users?
I do. So that's one. How many closed source developers do this?
When the developer and product consumer is the same, open-source makes much more sense to me.
Hmm, sounds like the UNIX world to me. Built by developers and geeks for developers and geeks. It's working pretty well. All of the big boys are doing it now: IBM, HP, Dell, Sun, etc.
Re:Fear not, corporate developers (Score:5, Insightful)
This is definitely true. If you look around, you'll notice that most of the best Open Source projects are those where people are getting paid to contribute in some way. That's not to say that those same people would not have contributed otherwise, but money allows you to do things like drop your day-job and go full-time doing what you really love. The Open Source community needs to take a good hard look at how more experienced developers can be brought 'on-board' full time. OSS is beyond a hobby at this point. It's quite time to put that into clear perspective.
Open Source, at its core, is about collaboration to meet needs efficiently. Part of that collaboration needs to involve paying developers so they can work full-time. Corporations who pool resources and collaborate on OSS projects to meet mutual needs are a perfect example. The same idea can work for individual users and smaller projects, however.
Take, as an example, a typical desktop application like a personal finance manager. We have GNUcash, which is a pretty good start, but it's missing a lot of the useful features found in the far more popular Quicken and Money. I personally have little interest in helping to develop GNUcash, though I wish it were a better fit for my needs. I'm not familiar with its codebase, and I already spend most of my free time working on my own OSS project (which I eventually plan to provide professional consulting services around). However, I am willing to pay somebody $40 to develop a couple of features I need in GNUcash. $40 is about how much I'd have to spend on Quicken or Money, which already meet my needs. But alas, $40 is not fair compensation for the developer. That's where collaboration comes in. There are millions of people who use personal finance software. If even 100 people contributed $40, that's $4000 in compensation to add maybe one or two features -- easily doable in a month's time by an experienced developer. Realistically, there are far more than 100 GNUcash users able to contribute and far more than one or two features that need to be added. Once users start contributing financially to Open Source projects, allowing their developers to work full-time, we will see the true OSS revolution take place. The key is how to organize this process.
Myth 9? (Score:5, Insightful)
Truth: Although it's the most popular, it's not the only license.
Sadly, I think this is what most people think of when they think of open source.
Fortress of Insanity [homeunix.org]
Re:Why Sad? (Score:3, Insightful)
This should have been at the top of the myth list: If I don't use the GPL someone will come along and STEAL MY CODE!
Engage your brain for a second. No one can steal or "close" your code. Unless they delete every copy in the world and erase your memory
Re:Why Sad? (Score:3, Insightful)
Re:Why Sad? (Score:3, Interesting)
and from the Darwin FAQ [apple.com] we find the following
Q. Why did Apple decide to share all of its modifications with the BSD community?
A. Although the BSD licenses don't require companies to post their sources, divergent code bases are very hard to maintain. We believe that the open source model is the most effective form of development for certain types of software. By pooling our expertise with the open source development community, we expect to improve the quality, performance,
MITH#1 open source is comminust (Score:2, Insightful)
I cant tel you how many times I've herd this. That's crap. It's more like copyrights are an overbearing government regulation that locks out the little guy than a true free market property right. When you see them for what they are, the facts of why Linux is going to take over the marketplace become obvious.
get off my case (Score:2)
Since you can't spell "communist", it's not surprising that you don't know what it means, either.
Fine, so I rushed it and accidently hit the submit button rather than the preview button. Sue me. Sheesh, I guess it's easier to rant about spelling than the facts.
wrong in at least one place (Score:5, Interesting)
Reality: You'll be lucky to hear from people merely using your code, much less those interested in modifying it.
In my experience, this is not the case. I wrote a little rip-encode-and-tag script called choad and listed it on Freshmeat for the hell of it. This was two years ago, and I've received over 20 patches -- for a crappy little perl script!
I wrote it to solve my problem, and I continue to be pleasantly surprised when I get emails with feature enhancements, bug fixes, or just plain thanks and encouragement from people who had the same problem as me.
Re:wrong in at least one place (Score:5, Insightful)
Re:wrong in at least one place (Score:5, Insightful)
I also wrote a bunch of hacks that I just gave away, but I never expected patches, or for people to actually use it.
Open Source is like socialism, you just help out where you can, and share what you got. If people don't take it, then it's their loss :) At least it was useful for myself, and it might be useful for others.
To assume that one writes a few hundred lines of code and then gets instant fame is of course ridiculous :)
Re:wrong in at least one place (Score:2)
The important thing is to make sure you fix the bugs as they get reported, or people give up reporting.
BTW - if you're one of those people who installed... what are you using it for?
Re:wrong in at least one place (Score:2)
Um, 20 patches is not a flurry, regardless of whether it was just a little script you listed on freshmeat. The problem is that your own experience doesn't scale. The best real-life example of that is XFree86, which has hundreds of thousands of users yet has a regular developer base of fewer than 20 and fewer than 100 patch contributors.
That said, congrats on successfully sharing an open source project. Regardless of its size, it appears that it was useful (and hopefully
Re:wrong in at least one place (Score:3, Insightful)
The "reality" he says is "If you weren't disciplined then, why would you be disciplined now?"
Umm, how about because I learn from my mistakes?
Jeebus, but isn't that one of the things humans do? Learn?
It's got nothing to do with being "undisciplined" (well, usually nothing) it's about learning. The more you do, the more you learn, and the better you become.
Are these really myths? (Score:5, Insightful)
I mean, does anyone really think that how they package their product won't affect how many people start using it? Are there really a lot of people out there who assume that an instant dedicated following of skilled developers will spring from nowhere the moment they publish their source?
I really doubt it, somehow. Charitably, I'd file the advice in this article under the "obvious but sometimes in need of restating" category, in that sometimes people lose the forest for the trees. Still, no real revelations here.
Re:Are these really myths? (Score:2)
How many times have you seen a project that says "Here's our ultra-cool project. Just grab the files from CVS and build". No mention of the dependencies (the mailing list will tell you that you need libfoo-1.2; we depend on a specific bug in 1.2, so 1.2.1, which fixes the bug, will break our app).
Or "Here's our ultra-cool project. It's only a small app right now, but when we get people working on it, it'll do your dishes, write your thesis, and mow the lawn, with 3-D graphics and 6.1 surround sou
Amen! (Score:5, Insightful)
Bollocks! (Score:3, Insightful)
BECAUSE I LEARN FROM MY MISTAKES.
Imagine an art critic saying to a painter: "Your first work is sloppy, so therefore everything you do will be sloppy, and there's no way you can improve."
Generally, the more you do, the better you get.
Translation... (Score:4, Insightful)
On warnings (Score:3, Interesting)
Their solution is, always fix warnings.
My solution is, GCC needs some way to suppress warnings!
Yes, GCC can already suppress *classes* of warnings. But I want to be able to suppress warnings on a per-line basis. What if in function x, there is a variable that I have defined but do not use for some specific reason-- but I still want to be warned if I do the same by accident in function y?
In Codewarrior, we had something called #pragma unused which worked like this. But that was just for that one case. Something generalized would be cool, something like "#pragma gcc.sw typecast" that would suppress typecast warnings for the next block, for example...
Re:On warnings (Score:5, Informative)
You can use GCC's attribute system:
int foo __attribute__ ((unused));
GCC supports all kinds of cool attributes, both for functions and variables. For example, the ((deprecated)) attribute marks a variable as deprecated, and will produce a warning if any code uses that variable.
However, these methods are not portable. On nearly any compiler I can imagine, the cleanest and simplest way to suppress an unused variable warning is to assign the variable to itself:
int x; /* shut up compiler warning */
x = x;
Run 'info gcc' to get the full documentation. Go to the "C Extensions" section. GCC is littered with HUNDREDS of very cool extensions. Just make sure it's worth giving up portability...
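For instance, here's a minimal sketch of the two attributes mentioned above in one place (the names are made up; compile with -Wall to see the effect):

/* attr_demo.c -- gcc -Wall -c attr_demo.c */

int old_limit __attribute__ ((deprecated));   /* any later use of old_limit warns */

int bump(int n)
{
    int scratch __attribute__ ((unused));     /* no "unused variable" warning for this one */

    return old_limit + n;                     /* warning: 'old_limit' is deprecated */
}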
wow (Score:5, Insightful)
Oh my God, this sounds exactly like my last job. 10,000 lines of Tcl, with not a shred of documentation in sight. Running a financial system that processed millions of dollars a day. And I know to this day, my old boss is still trying to figure out why she keeps losing employees left and right, and why it takes so long for new people to come up to speed.
Re:wow (Score:2)
Re:wow (Score:3, Insightful)
Re:wow (Score:3, Interesting)
Another myth (Score:5, Insightful)
As with all the projects, (Score:2)
I've had good experiences with smoothwall, emule mods, azureus, and believe it or not Windows. (i am a MS beta tester
For the LAZY ones (Myths List) (Score:4, Informative)
Myth: Stopping new development for weeks or months to fix bugs is the best way to produce stable, polished software.
Myth: New developers interested in the project will best learn the project by fixing bugs and reading the source code.
Myth: Installation and configuration aren't as important as making the source available.
Myth: Bad or unappealing code or projects should be thrown away completely.
Myth: It's better to provide a framework for lots of people to solve lots of problems than to solve only one problem well.
Myth: Even though your previous code was buggy, undocumented, hard to maintain, or slow, your next attempt will be perfect.
Myth: Warnings are just warnings. They're not errors and no one really cares about them.
Myth: Users don't mind upgrading to the latest version from CVS for a bugfix or a long-awaited feature.
For explanations of each, RTFA.
Here's a myth I see a lot (Score:5, Insightful)
Open Source Software is all about need (Score:5, Insightful)
That's nice. (Score:2)
Publishing your Code Will Attract Many Skilled.... (Score:5, Funny)
A few words from a desperate open source coder...
Since no one seems to listen to me otherwise, perhaps a poem will get your
attention:
This driver's getting fat and beefy,
But my cat is still named Fifi.
Hmm, I think I'm allowed to call that a poem, even though it's only two
lines. Hey, I'm in Computer Science, not English. Give me a break.
The point is: I REALLY REALLY REALLY REALLY REALLY want to hear from you if
you test this and get it working. Or if you don't. Or anything.
ARCnet 0.32 ALPHA first made it into the Linux kernel 1.1.80 - this was
nice, but after that even FEWER people started writing to me because they
didn't even have to install the patch.
Come on, be a sport! Send me a success report!
(hey, that was even better than my original poem... this is getting bad!)
WARNING:
--------
If you don't e-mail me about your success/failure soon, I may be forced to
start SINGING. And we don't want that, do we?
(You know, it might be argued that I'm pushing this point a little too much.
If you think so, why not flame me in a quick little e-mail? Please also
include the type of card(s) you're using, software, size of network, and
whether it's working or not.)
My e-mail address is: apenwarr@worldvisions.ca
Comments (Score:5, Insightful)
What most developers don't think is "Hey, I didn't contribute anything. Nobody I know has contributed anything. Why will my project be any different?"
Myth 3: Reading code
I've tried to read large bodies of code before. It's damn hard, even if it is documented. And when it isn't documented, your beginning developers don't have a chance.
Myth 4: Packaging
Um...duh? Of course it needs to be properly packaged. And dependency lists? If someone can't get it to compile, they definitely won't use it.
Myth 5: Start from scratch
Don't start from scratch if the code isn't clean. Make new code clean, and go back to clean up existing code. Make sure you have those regression tests ready.
Myth 7: Perfection
Developers are humans. Humans are fallible. I'll make a perfect program - when Bullwinkle pulls a rabbit out of his hat.
Myth 8: Ignore warnings
If the warnings were ignorable, they wouldn't be there. My profs would take marks off if you got warnings in compilation, unless your documentation explained exactly why you let the warning stand (and it had better be a good reason).
Myth 9: Tracking CVS
Users don't track CVS. Developers track CVS. Users want quick-and-easy, working code.
Either I miscounted, or there are more than 8 entries on the site (they aren't numbered).
Re:Comments (Score:3, Insightful)
They have a chance if they start by fixing a bug. It gives focus to the effort of reading the code, and imposes a structure on how you must do it. It's also a great motivator.
Instead of reading random files, and trying to make sense of things this way, you start with a symptom (Widget caption isn't updating to reflect
Re:Comments (Score:4, Insightful)
So? It's still easier than reading tons of source and out of date documentation (documentation is always out of date).
When I had to work with the Mozilla source code, I found that the most effective way to do it was to go right in and implement a feature. Some of the interfaces I had to use were documented, and some weren't. Where no documentation was available, I had to read the surrounding code, a few layers of calls, typically, to understand what was going on. I didn't really understand how things worked until I tried a few things, and saw how they didn't work.
Mozilla is a big project, it comes with its own middleware, and at least when I worked with it, it was poorly documented. Probing it was the only effective way I found to understand how parts of it worked.
Bugs always do something "visible", or they wouldn't be bugs. By "visible" I mean visible to the end user - it can be a protocol stack that sends the wrong message, an MPEG encoder that flips a bit in a picture header, or a real-time scheduler that's late to schedule a process - these bugs are all visible to the person who's bitten by them.
Of course. That's why you don't typically get cvs write permissions right away, and if you screw up, you typically get an explanation of exactly how you screwed up, but it's done in a context with which you're already familiar (you already worked with the code in question), so your chances of understanding the explanation are greater than if you just read the code and didn't try to work with it.
Re:Comments (Score:3, Insightful)
Here's the real problem with this myth: desire to contribute, and ability to contribute, is nothing in the face of lack of permission to contribute.
Not surprisingly, a lot of people who contribute patches and development effort to OSS projects work in the field - they're developers themselves. Since they're mos
Re:Comments (Score:3, Funny)
You lucky shit! Our code had to compile on Windows!
Good points on ease of installation (Score:5, Interesting)
- "Packaging Doesn't Matter"
- "Programs Suck; Frameworks Rule!"
- "Warnings Are OK"
- "End Users Love Tracking CVS"
I appreciate the difficulties involved for open-source developers in making their programs easy to download and play. At the end of the day, it's their choice whether they make it accessible to the masses. Many of them just want to give something to the world that they would have otherwise kept for themselves.
But it is clear from the number of ambitious projects that many developers aspire to hit prime time. In those cases, I hope they will take the advice in Chromatic's article, and think very carefully about the experience of an end user who just wants to have a look.
For one thing, provide some screenshots so they don't even have to download the thing to see it. Next, read your installation instructions and consider whether they might not be better represented as an actual installation script. And finally, have an automated test facility to make sure the installation procedure works correctly.
An example of a problematic open-source package is Subversion, the "sequel" to CVS. Because of the decision to bootstrap version control, you have to go through some painful procedure (last time I looked) just to see if it's worth bothering with yet. I have better things to do than jump through hoops to try out a bit of fresh meat. I'm sure it will be great when it hits 1.0, but I'll save my energy until then.
Remember: the risk of a crap product is high when it comes to picking one of the thousands of packages on SF. Therefore, the pain threshold for most people is very low: if it doesn't work after a few minutes, most people will give up and try one of the dozen alternatives.
BUT (Score:2)
biggest problem I have with list (Score:4, Insightful)
Myth: New developers interested in the project will best learn the project by fixing bugs and reading the source code. Reality: Reading code is difficult. Fixing bugs is difficult and probably something you don't want to do anyway. While giving someone unglamorous work is a good way to test his dedication, it relies on unstructured learning by osmosis.
I work for a very niche-market, profitable software company, and that's exactly how the developers get their feet wet: by fixing minor bugs.
Seems like the only way to "learn a project" is to fix bugs and therefore read the code.
Re:biggest problem I have with list (Score:4, Insightful)
The first being a high level overview / design document that provides a big picture of how the pieces correlate and interact with each other.
The second being bug fixes and other tasks to get familiar with the low level details of the implementation.
The two together make for a great way to familiarize yourself with a project, but code alone with no other documentation is tedious and much less effective.
Major nitpick with "Warnings are OK" (Score:2)
That's not a warning - that's more akin to an error. If the oil light comes on, and you don't stop immediately, you will stop in a very expensive way seconds later.
Naw, you have hours to days before engine seizure. (Score:3, Funny)
If the oil light comes on, and you don't stop immediately, you will stop in a very expensive way seconds later.
False.
The oil light in my 1990 Trooper used to come on regularly because of low oil pressure. After a while, I quit topping it off, always thinking "I'll take care of it tomorrow." The situation went on for weeks before the engine finally seized.
Of course, the above is strong evidence that I am an idiot.
I detect some bitterness and pessimism (Score:4, Insightful)
True Value of open source (Score:5, Interesting)
A) I can build upon other people's code. It's effectively stealing their ideas, BUT since I'm GPLing my code as well, there is no net loss, and they are free to re-steal my ideas back (if they are so inclined). I do often refer original authors to my new code.
B) I recognize that people MIGHT secretly build upon my code, so I get a warm fuzzy.
C) I can fix problems with open source drivers (the postgres JDBC driver, GNU fileutils, etc. are some of my examples). Moreover, my debugger can jump straight to the offending line of code.
D) When I am about to release code publicly, I feel self-conscious, and thus I put a TREMENDOUS amount of effort into cleaning up the code: making sure various platforms work, making sure there is no embarrassing spaghetti code, etc. Thus the mere possibility of people reading my code causes me to exert effort that I wouldn't otherwise. The end positive is a lower propensity for bugs, AND more modular/reusable code (especially with anything in Perl).
The end result, therefore, is that open source facilitates greater code reuse and less reinventing of the wheel -- and, more importantly, code extensibility.
Now this begs a question about the distinction between modules and outright applications. Open source is great for producing millions of reusable modules, but we often get chastised about the availability of abundant QUALITY applications. Well, in my view, the merging of these two is twofold:
A) Open source applications tend to be more "pluggable".
B) Commercial sites will often pay developers to use open source modules and customize them to the particular needs of the corporation. In doing so, serious feedback is provided to the various open source projects (because it is in their mutual interest to refine the modules). I, as part of such a corp, have contributed (in various small ways) to several open source projects on the corp's dime, and with full authorization. This is, of course, a completely unreliable source of income for a project, but it is definitely a facilitator.
Good Software Management takes effort... (Score:4, Insightful)
We (popular IT community) are re-learning the lessons of IBM in the 60s which Fred Brooks distilled in his famous The Mythical Man-Month.
I think the bigger misunderstanding is that new developers/IT types/CS academics think that everything is new. Most computer security issues were first discussed in the 1960s or 1970s. Much of Distributed Computing is now being "re-discovered" as Grid Computing.
A few more I would add (Score:5, Insightful)
1. Using autoconf/automake will make my code portable.
TRUTH: You need to know which system calls are portable, which ones aren't, and the nuances of using each on different platforms. The auto* tools will only make detecting and utilizing the correct versions easy. It's up to you to identify and code for them in the first place. (Ditto for compiler flags, shared libraries, linker options, etc.)
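A hedged sketch of what that means in practice, using the conventional names (and assuming something like AC_CHECK_FUNCS([strlcpy]) in configure.ac): autoconf can tell you whether strlcpy exists, but the conditional code and the fallback are still your job.

#include <string.h>
#include <stdio.h>

/* config.h is generated by configure; it defines HAVE_STRLCPY only if the
   platform actually has strlcpy. */
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif

#ifndef HAVE_STRLCPY
/* Fallback for platforms without strlcpy (e.g. glibc): autoconf only told
   us it was missing; supplying a replacement is up to us. */
static size_t strlcpy(char *dst, const char *src, size_t size)
{
    size_t len = strlen(src);
    if (size > 0) {
        size_t copy = (len >= size) ? size - 1 : len;
        memcpy(dst, src, copy);
        dst[copy] = '\0';
    }
    return len;
}
#endif

int main(void)
{
    char buf[8];
    strlcpy(buf, "portability", sizeof buf);
    printf("%s\n", buf);   /* prints "portabi" regardless of platform */
    return 0;
}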
2. Network programming is easy.
TRUTH: I've seen a lot of projects that implement their own network communication using TCP sockets and sprintf text messages. A number of others transmit little endian integers around. And others still use a blocking style request->response form of communication.
Good network programming is really hard, and unless you take the effort to design and implement something robust from the start, this kind of ad-hoc, inflexible networking will become embedded into the application and require significantly more rework later down the road.
And PLEASE reuse something that might fit before even attempting to write your own layer. The gnutella protocol is a great example of this problem.
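To illustrate the byte-order point (a minimal sketch of a hypothetical length-prefix framing, not taken from any particular project): writing a raw int to a socket bakes the sender's endianness into the wire format, while htonl()/ntohl() keeps the format fixed no matter which host is on either end.

#include <stdint.h>
#include <unistd.h>      /* read, write */
#include <arpa/inet.h>   /* htonl, ntohl */

/* Sender: convert to network (big-endian) byte order before writing. */
int send_length(int fd, uint32_t length)
{
    uint32_t wire = htonl(length);              /* fixed wire format */
    if (write(fd, &wire, sizeof wire) != (ssize_t)sizeof wire)
        return -1;
    return 0;
}

/* Receiver: convert back to host byte order after reading.
   (A real implementation would also loop on short reads.) */
int recv_length(int fd, uint32_t *length)
{
    uint32_t wire;
    if (read(fd, &wire, sizeof wire) != (ssize_t)sizeof wire)
        return -1;
    *length = ntohl(wire);                      /* correct on any host */
    return 0;
}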
3. Threading is as simple as using pthreads and mutexes.
TRUTH: Good threading code is difficult to develop and difficult to debug. It is always preferable to use an event-based model where possible, and to rely on threads only when you need scalability on SMP, workarounds for blocking system calls (gethostbyname_r), or background tasks that you don't want delaying interaction with a user or a network app (there are many other reasons, but these give you the general idea of where threading is appropriate).
Synchronizing access to shared resources between threads is also very tricky. The granularity of locking, and the structure of your data structures themselves, will have a significant impact on performance. Too much granularity and you end up with extremely complex locking hierarchies that are difficult to debug and more prone to deadlock. Too little granularity and you get lots of contention for those shared resources.
Finding the sweet spot is tricky, and often requires lots of experience or tuning to get right. The lack of tools to provide visibility to lock contention and latency also make this difficult.
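For the mutex point, here's a minimal sketch (a made-up counter example, nothing from any real project) of the basic pattern, with the critical section kept as small as possible; drop the lock and the printed total becomes unpredictable.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                                   /* shared state */
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);    /* critical section kept tiny */
        counter++;
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}

int main(void)                                /* build with: gcc -Wall -pthread */
{
    pthread_t threads[4];

    for (int i = 0; i < 4; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);

    printf("counter = %ld\n", counter);       /* 400000 with the lock in place */
    return 0;
}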
I'm sure there are others, but these are the big ones that come to mind.
Open source contribs can be much easier (Score:5, Insightful)
then please make it easier to contribute.
Show us your roadmap for development,
where you want us to contribute time,
and how we can get started helping you.
Make it easy to understand your software,
maybe by creating help files, diagrams,
real examples of how to use your software,
even comparisons to related software.
Source code comments are good;
technical overviews are even better.
Above all, get FEEDBACK from developers
on your source code and your documentation.
Is it clear? easy? How could it be easier?
The more you improve your documentation,
and your process for contributing code,
the more we can help you. Thanks!
Cheers, Joel
Re:Open source contribs can be much easier (Score:3, Insightful)
Hell, not even most commercial software I've worked with has any of these.
It's easy to contribute. You want project X to do Y, you post 'I wish project X would do Y', and a developer either replies 'I'm all over it... next release' or 'Send us the patches & we'll look at them'. If the documentation sucks, post 'I'd like to write some better documentation.. give me a couple of weeks'. If the installer sucks, post 'Here's an innosetup script... enjoy!'.
Most s
Feature freezes help stability? (Score:4, Insightful)
There are certain types of necessary changes that inherently destabilize a codebase no matter how careful you've been. It's inevitable. Oftentimes, things like this are checked in to amortize the cost of producing, fixing, and improving said code. There are the unforeseen interactions that your new subsystem has, that none of the regression or unit tests have picked up. I know - "write more/better tests" is a better solution. But omniscience is an impossible goal.
To continue the author's "home" analogy, releasing software is like preparing a meal. The pots and spoons simply must get dirty when you're cooking. Many try to "clean as you go," but at the end, you're still left with your dirty casserole dish. You can either choose to clean things up before your guests get there (feature freeze), or you can leave the dirty dishes lying on the counter for all to see.
I might be inclined to say that the shorter the feature freeze, the better. But I don't have any evidence to back this up - nor does Chromatic cite any evidence (except anecdotal) to support or detract from this claim. Maybe people are by nature better at fixing a slew of bugs at once. Maybe not.
Freezes, milestones (alpha, beta) and the like are inevitable parts of producing quality software fit for public consumption, short of "papal infallibility." We're only human.
Dom
Accessibility... (Score:4, Insightful)
I think most people, tech savvy
I think people need to find their niche, as to what they can and can't do in order to contribute. Many people think that because they are not hard-core coders they can't do anything to help. I've only contributed to a couple of things since I've been using Open Source stuff (the past 4 years). But when I do fix a bug or create something a project might find useful, I usually send any files or useful info over to the project maintainers. It is the least I can do when I owe my Redmond-free world to so many dedicated geeks!
I wonder just how many regular Open Source users feel that if they could, they would help, but maybe don't know how.
I would say project maintainers should encourage people to help out in other ways. There are loads of things people can do: artwork, documentation, website maintenance; heck, even give free support to people if they are nice enough.
I've been helping a few newbies through their first forays into linux, as indeed friends helped me when I got started. If you plant the right seeds in those newbie minds, they most certainly will grow a giving and generous attitude.
There is one more way people can support Open Source. Let's introduce a "Send Your Favorite Project a Beer Day" -- send 'em some beer money!
Nick !
Myth: You can't sell open-source software (Score:5, Informative)
Throwing away code? (Score:4, Insightful)
The important part is to have a good understanding of the problem scope, previous attempts (if any) at solving the problem, and what their advantages and drawbacks are.
You have to remember that code doesn't exist for code's sake alone. We write code to solve problems. Code is a window into how someone solved a problem. And not all solutions are created equal.
What is important is to understand the "whys" and "hows" of these previous attempts, and then chart the best course you see toward success. It may well be that the best solution is to scrap another's design. It may be the best solution to build off of another's success. However, it's probably a bad decision to build off of another's failures.
Dom
Counterpoint to the Framework "Myth" (Score:4, Insightful)
Myth: It's better to provide a framework for lots of people to solve lots of problems than to solve only one problem well.
Reality: It's really hard to write a good framework unless you're already using it to solve at least one real problem.
Really-Real-World Reality: Frameworks that are developed in conjunction with one specific project are likely to produce lousy results when used in a different project.
I've seen a number of "generalized" frameworks that came out of one large project, only to wreak havoc when they were forced upon the developers of another project. When people are writing support code for a project, a lot of project-specific design decisions get mistaken for generic architecture because the developers are only looking at it from an insider's perspective.
Re:Counterpoint to the Framework "Myth" (Score:5, Interesting)
You might be surprised, but I agree. It usually takes me finding three instances of similar code before I can generalize it correctly.
This article was talking about the open source world, though. There seems to be a penchant for writing frameworks without any projects that actually use them. That's the myth I was trying to address. Extracting a framework from only one project isn't spectacular, but it's much, much better than extracting a framework from zero working projects.
Code reuse... (Score:4, Insightful)
Solve your real problem first. Generalize after you have working code. Repeat. This kind of reuse is opportunistic...
This is sheer idiocy. If anyone disputes this, I've got some code I'd like to show you...
(Trying not to flame) This guy doesn't know what he's talking about. The proverbial "reinvention of the wheel" is not really reinvention. The problem is that programmers do just what he suggests - rather than think through the problem, and how they can create reusable code, they proceed to cobble together some garbage which solves only the specific problem at hand. Which leads to other programmers having to "reinvent the wheel" because the first programmer didn't make his code reusable!
You can't have it both ways. Either you reinvent the wheel every time, or you write reusable code. It's a discipline, folks - sometimes you have to put forth the extra effort up front to make gains in the long run.
The first three years as a programmer, I must have written at least half a dozen linked list implementations. It wasn't until I had worked on some large projects that I learned that writing reusable code is well worth the extra effort. I was the guy who "just coded the solution". It took me a long time to learn that the more time I spent thinking about the problem, the less time I spent on coding and debugging.
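By way of illustration (a made-up sketch, not the poster's actual code): the "just code the solution" version hard-wires the payload type every time, while a few extra minutes of thought gives a list you never have to write again.

#include <stdlib.h>

/* A reusable singly linked list: each node carries an opaque pointer, so the
   same code serves strings, structs, or anything else. */
struct node {
    void        *data;
    struct node *next;
};

/* Push an element onto the front; returns the new head, or NULL on
   allocation failure (the old list is left untouched). */
struct node *list_push(struct node *head, void *data)
{
    struct node *n = malloc(sizeof *n);
    if (n == NULL)
        return NULL;
    n->data = data;
    n->next = head;
    return n;
}

/* Free the nodes; the caller still owns the payloads. */
void list_free(struct node *head)
{
    while (head != NULL) {
        struct node *next = head->next;
        free(head);
        head = next;
    }
}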
Re:Amen! (Score:3, Interesting)
And I really wish I had availed myself of open source.
A few months ago, I wrote a utility which would copy files from one directory to another if, and only if, the files in the destination directory were older than the source files. It turns out that XCOPY wasn't working the way I wanted it to, so I wrote my own utility. Then, I stumbled across rsync while Googling one night...
But I'm still rolling my own, because it would take more time for me to make sense of the documentation. Most OS authors giv
packaging (Score:3, Funny)
Myth: Installation and configuration aren't as important as making the source available.
Reality: If it takes too much work just to get the software working, many people will silently quit.
No, they will quit and then bitch about it on slashdot. :)> Oh the perils!
Community (Score:4, Insightful)
Let people get involved, encourage them, provide a forum.... hopefully provide the tools (sourceforge) but also provide a unique community experience. Create a brand (read a book on marketing) and you will reap the rewards for years... think about Aibo for instance...
Re:her work (Score:3, Insightful)
Even though she works on projects most people don't even know exist (research-related academic stuff), it's still technically "open source", and thus there's at least ONE female open source developer that I know of.
Re:why? (Score:3, Insightful)