
Big Ball Of Mud Development Model

Lightborn writes: "The Big Ball of Mud Development Model examines exactly why so many projects (software and otherwise) end up looking like a bowl of spaghetti. A good list of things not to do when developing a project."
  • Hmm, from what I have seen, there is some good, some bad, and some ugly... just like in the commercial world. The only difference is that we get the src code, so we can see how bad it is.

    I am not saying we shouldn't encourage higher standards in OSS, just that commercial code is not necessarily any better.

  • This is a perfect example of why some code is crappy. One thing that always bothers me about Linux code in general is the lack of proper documentation. Sure, someone can say "read the f**king code," but in reality it is usually much easier, even for experienced programmers, to work with code that is well documented. If nothing else, it makes searching the code much easier.

    In all of the commercial settings I have worked in, we always document our code. Each function is required to have a comment describing its purpose.

    As a highly experienced developer of device drivers, embedded code, and other low-level code, I find documentation a big plus. Let's face it: computer code was designed to be parsed and read by computers, not human beings. One person's coding style may be hard to understand even for an experienced programmer.

    The Linux kernel code is improving in this respect.
  • Actually it's not an OSS problem. Bad software is generally caused by:

    1) Bad initial design,
    2) Bad/Inexperienced programmers,
    3) Bad management.

    OSS suffers from the first two, but peer review tends to weed out bad programmers pretty fast (at least in larger projects, less so in smaller ones).

    Commercial software suffers from all three. The pressure from management to meet some imaginary 'deadline' (often invented after too many beers during an 'important meeting') means software goes out barely tested, if at all. In 10 years as a programmer I've never seen a program get more than an hour's testing before it got sent out.

    Tony
  • All these causes of "bad" code are valid, but in my limited experience, by far the largest cause of "bad" code, at least in closed-source software, is tight deadlines.

    Programs never have static requirements. They are always changing. Features are always being added or changed. Everyone understands this (management and engineering alike). However, the original design for the program, no matter how perfect it may be, becomes inadequate. Yes, if the architect/designer is good, the program is proof against some feature additions. But not all feature additions fit seamlessly into any good design. The program has to be redesigned to add the feature "correctly". Or it can be hacked in. Redesigning takes considerable amounts of time.

    Now, what happens if you add constant, severe time pressure into the equation?

    The rest of the equation is left for the reader to solve.
  • Well, maybe I was a bit too cynical. I have no doubt that the programmers will do everything possible to make sure their software doesn't kill any astronauts, programmers being decent human beings in general. However, I was actually referring to the management level, where there is not this level of everyday social interaction with the astronauts. I'm sure nobody wants to see another tragic accident; it's just that there is an ethical aspect and an economical aspect to this, and they both play a part.
  • exactly... people who want something that works, should use something that's LABELLED as working.

    Hence, if you want a functional linux system, use 2.2.x, not 2.3.x

    I'm sure things tend to break in FreeBSD-CURRENT as much as they do in Linux 2.(odd)
  • Computer science education should focus on teaching good design skills so we turn out programmers and not code monkeys.

    This is exactly right. Computer science students need to know the underlying theory, and learning the underlying theory certainly does make mastering a new language practically a matter of learning the syntax and finding the quirks.

    One way to start is by teaching the lambda calculus, progressively working up to functional programming, and finally to object-oriented programming.

    This is completely wrong. Potentially good students who are not completely dedicated will lose interest for lack of immediate gratification. Those who are dedicated will take the time to learn a language themselves with little or no guidance, creating and compounding the same problem that this solution tries to avoid.

    I'm all for a stronger and earlier emphasis on theory, but at least one programming language should be taught first as a concrete base. When the theory classes come around, concrete ideas will already be in students' minds, ready to click with the abstractions taught in the classroom. I'm sure it could work the other way around, but I don't know how many students would stay around to find out.

  • IP is already used internally. There is no reason to integrate it into MSDEV; it has its own editor, compiler, and debugger. Its editor is fundamentally different from MSDEV's text-based environment. Yes, 2-3 years is a fair estimate, despite production having already begun.
    Yes, it really blows -- no competitors in sight, MS will have a monopoly on the new development paradigm, and everyone will bitch again.
  • You are, of course, correct. However, OSS doesn't suffer as much as the closed-source community from time constraints. Sure, OSS maintainers are harassed by users for new versions and bug fixes, but in general the maintainer's livelihood doesn't depend on releasing a new version. This gives OSS, in its current form, a much bigger chance to produce better-maintained code than most closed-source software, as long as the opportunity to do so is utilized.

    Mind you, maintainers will probably have to put up with a lot of crap from users while they reorganize and reimplement ;)

  • by Anonymous Coward on Saturday April 29, 2000 @09:39PM (#1101799)

    Open source is massively contributing to better source, if only because more students now get the opportunity to read real-life code.

    Even stronger, learning to read code has become more important than churning out vast amounts by yourself.

    Inadvertently, open source addresses a serious flaw in computer science education: the fact that students should, in the first place, learn to read, use, alter, and draw inspiration from existing bodies of code, instead of churning out unrealistically small Mickey Mouse examples.

    Well-written code reads like poetry.

  • Can we be honest? The vast majority of it is complete crap, developed by amateurs with absolutely no clue how to develop to professional standards.

    I've only been working in the industry for a couple of years, and have interned at two companies and worked fulltime for a third. The code there was written by 'professionals' (in the sense that writing this code was people's professions) and has been varying degrees of ugly. Not always hideous, rarely fantastic.

    I think that the open-source process encourages clean code more than closed-source development, for two reasons: 1) you don't want to show the world your ugly, dumb code, and 2) the world has the chance to clean up your ugly, dumb code, if it wants.
  • by Anonymous Coward

    is a strong maintainer. A maintainer that rejects bad patches with extreme prejudice.

    Ever read Linus' posts to linux-kernel? He is exactly the sort of maintainer that calls out programmers who repeatedly submit incorrect or kludgey patches, and plants a boot up their backside right in front of everyone.

    Read Linus, and you'll see he has a clear vision as to how the linux-kernel is supposed to work -- not just in a design capacity, but with an eye towards maintainability.

    As Linus once told one of the GGI loonies: "the kernel isn't stable because it's a kernel, it's stable because I don't listen to people like you"
  • Oh, for sure. I'm not debating that... there are a lot of good points in there that I believe in. I think you misconstrued my entire post as meaning something more than it does -- it was really just an off-the-cuff reply to the following statements, in particular:

    The pressure from management to meet some imaginary 'deadline' (often invented after too many beers during an 'important meeting') means software goes out barely tested, if at all. In 10 years as a programmer I've never seen a program get more than an hour's testing before it got sent out.

    --
    It's a fine line between trolling and karma-whoring... and I think I just crossed it.
    - Sean
  • I'm not really sure what extreme programming entails, aside from the pair-programming.

    However, I have been using pair programming for about 2.5 years now, but only for hobby coding, and I have a few things to say about it.
    1. I find it can be very fun
    2. It's not without its dangers, though. While traveling together, on no sleep, halfway through a case of Bawls, we wrote some code. It works well, but sprinkled through our class definitions is the line "friend Skeletor;"
    3. It's pretty effective for making decent code happen. (expect the first release of jao in a few months)
      1. I do think that how effective this technique is probably depends on the specific pairing, though my experience with that is quite limited.

  • I couldn't let this one slide. I have used the shuttle software for years as an example of a failed software project. It was declared a "success" because they can occasionally get it into orbit. BUT

    It costs $1000/line.

    Remember the very first launch? Main engine shutdown due to a software snafu.

    It takes 8 months of training on the shuttle software to avoid killing yourself (this is a direct quote from a former shuttle commander). The originally-designed interlocks you would expect in software were thrown out because they wouldn't fit.

    If the predicted launch winds vary by more than ±5 knots or ±20 degrees, they have to scrub the launch - why? The shuttle software can only take a limited number of wind parameters, and it takes 12 (or 24, I forget) hours to rebuild the configuration if it changes.

    It requires the largest standing army of maintenance programmers of any 400K-line program ever written. The only reason it works at all is that they test, and retest, and retest the living daylights out of it, NOT because they designed it well.

  • The Big Ball of Mud website at www.laputan.org doesn't seem to be responding. I get the first page, but trying to d/l any of the document formats is futile. Their web server seems to be a Big Ball of Bugs. /. effect, or simply a weekend fallover with no one around to reboot?
  • The vast majority *is* crap. But the stuff that is important, libc, the Linux kernel, GCC etc. isn't.

    From the quite limited time I've spent looking at kernel code, it seems pretty good (modular, clean, etc). I'm glad as I may end up writing device drivers sometime this summer. :)

    I've never looked at glibc so I can't comment (I've heard libc5 was pretty bad though, which is one of the reasons for the switch to glibc). GCC, OTOH, is pretty horrible. I think the problem is that it was designed to handle K&R C, then hacked on to support ANSI C, then C++, ObjC, Fortran, and now who-knows-what-else (IIRC there are Pascal and Ada versions too, just not in the main tree yet). But for whatever reason, GCC is not at all pretty. It's odd that nobody else (including the *BSD people) has done work on a free C/C++/ObjC compiler: the only other C compiler I could find on freshmeat is lcc, which is only for non-commercial use, and only does ANSI C.
  • by magic ( 19621 ) on Saturday April 29, 2000 @02:22PM (#1101808) Homepage
    Yes. The problem with OSS is that much of it is evolved code, not designed code. This leads to robust and secure systems, but not ones where it is easy to add features or hunt down new bugs. Consider the example of an AI neural net (or the real thing, if you like squishy things). You've trained the thing to give mostly right answers by propagating feedback through the net in complicated ways, but have sacrificed any possibility of understanding why it gives the right answers. It is like a roof made from patches with no actual roof left. It keeps the rain out, but has lost all structure and is harder and harder to deal with. (Too many analogies in that paragraph).

    I've finally adopted a very game-programmer oriented philosophy towards development. Code should be written so that it is the specification, with appropriate inline comments documenting it and really clear variable names. Programmers should be extremely vigilant, and continuously roam their own code making sure that it actually reflects the current state of assumptions about the system. Whenever a change is made to the system, anything remotely affected should be proactively rewritten to reflect the change. This is pretty much how Abrash describes himself and Carmack working on Doom and Quake, and it is really successful. You keep performance up, stay in touch with your code, and never accumulate cruft. Bugs are immediately ferreted out and the programmer must never fear diving into code to tackle a big cleanup job, and can never allow pieces of code to exist that she (or he) doesn't understand.

    Of course, you need massive automated tests to make sure your rewrites don't screw anything up. Designs must be extremely abstraction oriented, with a close eye to strong interfaces and bootstrapping, otherwise you will end up with so much code that it is impossible to manage the continual cleaning. And you need really dedicated programmers.

    When I look at the Doom and Quake source, and the code that my own dev. team has produced, I see that the results are worthwhile. Each routine is beautifully crafted and works flawlessly. The codebase is a fraction of the size you would expect because so much effort has been put into doing everything the right way and eliminating broken or excessive code. And no bugs...

    magic

  • the Linux kernel

    Are you sure?

    I'm not trolling, I'm deeply interested in OS & OSS stuff, and I've been going through the kernel a lot lately. I'll admit that all of the kernel code I've seen seems pretty damn solid, but as I understand it the network stuff is total spaghetti.

    Don't get me wrong - Linux networking support is brilliant, but the code wasn't designed with multiprocessors in mind (this is how NT beat Linux on those high-publicity, MS-funded tests a while back). Apparently the reason no one has fixed the TCP/IP stack to scale better on multiprocessors is that the code is such a mess.

    Linux may be good, but that doesn't mean the code is well written from a maintenance perspective.

    disclaimer: I haven't looked at the net stuff yet myself, and don't know if this is true or not - but you can see the point I'm making :-)

  • I've taken a quick look at the latest version of glibc. It looks pretty well thought out (everything cleanly separated by architecture, with default fallbacks should a feature not exist), but I only spent 10 minutes on it so I really can't comment much.

    I've spent about 4 hours studying the bison parser in GCC. (I'm working on a very simple compiler as part of another project, Corporate Raiders [sourceforge.net].) While I've never made a compiler, or really done any advanced programming before, I could still follow the general gist of how the parser worked, enough that I was able to use some of its approach in my own compiler.

    I've never seen GCC crash in all the time I've used it. If it's a mess, then it sure is a well-working mess!

  • First, note that most kernels, Linux especially, use lots of gotos for performance reasons.

    Secondly, the Linux networking code has been completely rewritten 2 or 3 times now. If it's spaghetti, it's very quickly eaten spaghetti. :)

  • Microsoft bashing aside, how do we as the technical experts deal with someone without clue making a decision that heavily influences the effectiveness of a project? My biggest challenge has been finding the most appropriate and professional way to say "That would be stupid". Sometimes there's no option to speak out, indicating that it's time to move on. :)
  • I'm really glad to see this. In my experience, the great flaw in the OSS model is the quality of the code. Can we be honest? The vast majority of it is complete crap, developed by amateurs with absolutely no clue how to develop to professional standards.

    I think this is an unfair generalisation. The vast majority of code I work with is clean, well-designed, and adheres to one of the style and coding standards out there. That includes the kernel sources, Glib, Gtk+, the Gnome libs, python, Perl, and loads of others.
    If you mean that the vast majority of small applications are complete crap, you could be right. But don't knock the single programmer who needs to scratch an itch. If enough people are interested in his or her project, they'll help - including design and style improvements and suggestions.

    The OSS community needs to establish some quality standards. Linux code is relatively new, but this is going to bite everyone in the butt as the code gets modified more and more, and software rot starts to rear its ugly head.

    Seems to be doing just fine so far - the OSS community is pretty aggressive about coding standards already.

    • The Gnome Developer's site [gnome.org] has some strongly worded docs on style, consistency, robustness and correctness for prospective developers.
    • /usr/src/linux/Documentation/kernel-coding-style.txt has Linus's words on the subject of style.
    • The GNU coding standards [gnu.org] have also been around for a while.

    Mark my words: Unless coding standards get real important soon, OSS is going to collapse under its own weight. "As long as it works" is not good enough.

    Whereas of course closed-source software is bound to survive since all development houses rigidly enforce coding standards?
    Bollocks. Been there, done that, shipped it because it compiled. In the bazaar model and peer review style of the OSS world, you can't get away with crappy code or bad design for very long. If your project's well designed, maintainable, useful and easy to read as well as being robust, it will survive. If not, then you can't get away with shipping only binaries to some poor customer.

  • those who Want To Be Artists [...] they usually don't deliver very good work, either

    The world contains both ARTISTS and CRAFTSMEN. An artist will explore limits, see what breaks, and come up with new metaphors for work.

    A craftsman will produce something that can be predicted to work, to a predictable schedule. It won't get written up in the magazines, but it will work.

    What society needs is to find a way to finance the artists while not letting them near anything that we intend to ship. In the historical past, this was handled via corporate think tanks like Bell Labs or Xerox PARC, or wherever IBM stashed their official "fellows".

    Once the bean counters slashed these "artistic" (and apparently unprofitable) quarantines, the partly-baked ideas now have to be developed within real projects.

    My dichotomy artist/craftsman can also be expressed as scientist/engineer, or probably a dozen other expressions in different fields.

  • The address given is really slow...

    Try:
    http://users.soltec.net/~foote

    -Donald
  • I'm always willing to learn...

    An OO language requires that I think about aggregating related attributes (objects), a restricted set of operations that can be applied to those (methods), and the relationships between objects of similar and dissimilar types. Encapsulation of the actual data and use of a restricted set of operations to manipulate them helps get better results since it's harder to "cheat" in the sense of reaching across and tweaking some value in an unrelated structure, so you have a better chance that your objects' collective state satisfies all the necessary invariants after an operation. Good programmers have always done these things even without OO languages, although it tends to involve a lot of personal discipline.

    Rigorous enforcement of these principles by the language/compiler makes other things possible. Code reuse in the small by inheritance or delegation (depending on whose model you like), although very little of what I work on seems to benefit from inheritance. Code reuse on a larger scale by components. And it's certainly easier to deal with things like

    this.action(parms)

    than it is to handle a whole bunch of things like

    action_class(this, a)
    action_class2(this, a, b)
    action_class3(this, a, b, c)

    and so on. Late binding allows more nice stuff than early binding, at the cost of some efficiency, etc.

    I count all of this stuff as "thinking about data" and believe that an OO language requires that you spend more time on it. You appear to think that an OO language gives you something different. Care to expand on that a bit?

  • well... it's the challenge of the thing. i hope you took the job. :) size of the box is fairly easy with a couple of cameras (write it in C - all those tight loops help) and markings on the floor. dense packing algorithms are out there to fit boxes into other boxes, and labels can be read with a barcode scanner mounted nearby - you'll get an ascii char stream.
  • by cheezehead ( 167366 ) on Sunday April 30, 2000 @12:50AM (#1101818)
    One example of some very well designed software is the Shuttle OS that powers NASA's Space Shuttle.

    Yep. However, I read some years ago that the Space Shuttle code costs $1000 per source line to develop (for the whole shebang, analysis, design, implementation, testing, maintenance, documentation, etc.) That's one thousand dollars per line (if I got paid that much, I'd be long retired :-). This only applies to manned missions though, software for unmanned missions costs about $100 per SLOC.

    [cynical] I fear this is not because they value human life so much; it's more that loss of human life leads to huge costs in terms of publicity, scrutiny, congressional investigations, freezing of funding, halting of programs, etc. The Challenger tragedy cost far more than the $2 billion that the spacecraft cost; the halting of the launches and the redesign of the shuttle were far more expensive. [/cynical]

    Anyway, as I should not have to point out, testing and bug finding are hampered by the law of diminishing returns. It is extremely costly to get the last 1% of the bugs out of the software. In the case of NASA and the Space Shuttle, there is an economic incentive to hunt down the last of the bugs. In the case of commercial software, stuff riddled with bugs is released because the cost of delaying the release outweighs the cost of leaving bugs in (dare I say that it often makes economic sense to leave bugs in? Just release the bug fixes as an upgrade...).

    Anyhow, I think that open source has an advantage over commercial software in the sense that the developer(s) are motivated by something other than a paycheck. It is often a sense of pride that makes them strive for clean, bug-free code. Ever notice that the quality of software seems proportional to the inverse of the cost? I'm only half joking here....

  • ... about 30 minutes to 1 hour ...
  • yep. im going off to a trade show next week..but should be back. trade shows are a pain - specially when stuff breaks at the last minute. spent the last week hacking on the last minute bug fixes - ugh.
  • I'm really glad to see this. In my experience, the great flaw in the OSS model is the quality of the code. Can we be honest? The vast majority of it is complete crap, developed by amateurs with absolutely no clue how to develop to professional standards.

    I disagree. There is crap open-source stuff out there, but overall the coding standard is really not too bad, and some is excellent. The important stuff tends towards the excellent (at least from my experience).

    By contrast, go and have a play with arbitrary pieces of Windows shareware . . .

  • I'm really glad to see this. In my experience, the great flaw in the OSS model is the quality of the code. Can we be honest? The vast majority of it is complete crap, developed by amateurs with absolutely no clue how to develop to professional standards.

    The OSS community needs to establish some quality standards. Linux code is relatively new, but this is going to bite everyone in the butt as the code gets modified more and more, and software rot starts to rear its ugly head.

    Unfortunately, the vast majority of OSS developers are not very old (

    Mark my words: Unless coding standards get real important soon, OSS is going to collapse under its own weight. "As long as it works" is not good enough.


    --

  • In a design meeting a few days before coding begins, a dedicated team of programmers burns the midnight oil to hammer out the design spec for the next new new thing.

    12-, 13- and 14-hour days are logged to make damn sure the spec is rock solid and ready to take over the world. Then it happens. The kind of comment that sounds like fingernails down the chalkboard. The new VP (and the CEO's son) stands up in the back of the room and says:

    " Hey! let's partner with Microsoft"
    ___

  • I screwed up and used a bad character. I resubmitted another version. Please make this one go away.


    --

  • I always thought it was just that in any project (programming or otherwise) there are always at least a few people with poor [programming] habits. This hurts the whole project.
  • We have it on paper, we have the "correct" ideas of how a software project should be (un)organized.

    With the duct-taped style we solve problems when we find them and put //FIXME's where we don't want/have time/know how to fix something.

    Aaaahhh, sure feels nice to know that we were right all along. :)

    .sig
  • There's a pattern I've seen at some companies. A programmer comes in, spends 2 hours looking at the source, and says, "This is crap, I need to rewrite all this stuff." Then he spends a long time basically rewriting the entire thing, breaking tons of stuff along the way. At the end of a long process, the program is sort of where it was at the start; that programmer moves on and someone new comes in. He says, "This is crap, I need to rewrite it all," and the whole thing starts all over again. I would say that most programmers, coming into a lot of 'someone else's code,' whatever the state of that code is, will tend to say it's crap no matter how it's put together, just as a matter of reflex. And partially, I think this is because they understand the stuff they wrote themselves, and when they read other code, they don't get the same warm, fuzzy feeling they get reading their own code.
  • This is somewhat similar to the software engineering methodology known as "Extreme Programming".

    One of the main tenets of Extreme Programming is constant refactoring (i.e., if you see something that could be done better another way, you fix it straight away).

    The other main points of Extreme Programming are: always do the simplest thing that will work, and have proper, automatic test suites to constantly test your work.

    Have a look at The Extreme Programming Web Site [extremeprogramming.org] to learn more.

  • by faichai ( 166763 ) on Saturday April 29, 2000 @02:47PM (#1101829) Homepage
    Freaky! I was just reading one of Brian Foote's papers: Metadata and Active Object Models [laputan.org]

    Pilot Light? Check!

    One thing that gets me about the OSS community is the over-reliance on C.

    Petrol(UK->US Translation - Gasoline)? Check!

    I mean, look at Gnome and GTK+: they're based on some ugly C struct kludge to enable pseudo-object-orientation!

    Flame-throwers are go!!

    And then there is KDE 2.0 based on even uglier preprocessor commands.

    I mean, WTF is going on? The method of production in OSS is innovative, but the resulting programs end up being MS ripoffs. We need some true innovation regarding what we develop and the tools we use to do so. Because let's face it, most app-level programming would really benefit from C++, or even something like Eiffel and the concepts of design by contract. Using pre/post conditions and invariants, as in B notation, one can almost guarantee the correctness of one's program. (Eiffel and B were both in part developed by Bertrand Meyer; he also played a hand in Z notation.)

    There is also an interesting project called EDMA [gnu.org] that is trying to create an environment in which objects can be inherited from after they are built, a bit like CORBA, but IMO better.

    void SelfPromotion {

    I am in the early stages (i.e. thinking a lot and getting myself confused) of developing a Dynamic Object Environment to support reflection and better models of code reuse through selective "pilfering" of code and structure from other objects (classes don't exist, only instances, although instances may share code). No links, or anything much to speak of, as yet.

    }

    Anyway, I digress: we should start thinking about the tools we are using, and ensure that they are suited to the job. For most things, the performance benefits of C are not really crucial for anything outside the kernel.

    Cheers,

    faichai

  • Whoa! That's amazing -- I code in an almost identical fashion. I'd never realized it before. I've been asked "how" I code in the past, and never really been able to come up with a satisfactory answer. But you hit the nail right on the head -- that's exactly the way I code.

    It's kind of interesting, too, that I also come from an English (ie: human language) writing background.

    --
    It's a fine line between trolling and karma-whoring... and I think I just crossed it.
    - Sean
  • This article reminds me of my hometown, Knoxville, Tennessee. I guess lazy programmers have lazy city-planner counterparts! But really, it seems that for the last 20-30 years there hasn't been much planning going on, just development. It pisses me off sometimes.

    (For those of you hailing from Knoxville, check out this cool pulp detective story from MetroPulse: Best of Knoxville Awards (alternate universe version) [metropulse.com]. Good perspectives related to this in the final chapter. :)

  • I've never seen GCC crash in all the time I've used it. If it's a mess then it sure is well working mess!

    I actually did cause an internal consistency error on gcc 2.95 once. I should check and see if it's still in 2.95.2... the problem is that the bug relies on a 14,000-line C++ library, so I don't think they'd be too interested in a bug report.

    I suppose I'm a tad biased against GCC just because of the fairly slow code it generates [and annoying spurious warnings], but for stability it probably is about the best thing around.
  • Basically making it known that if things are done the 'wrong' way as suggested, I take no responsibility for the design decision.

  • Hm, interesting.

    The project I'm currently working on, which is almost at completion (a commercial project), has just gone through 4 weeks of having the absolute s**t pounded out of it (ie: testing).

    And it's not that big a program.

    What that says about anything, I dunno. But it is a counter-example, anyway.

    --
    It's a fine line between trolling and karma-whoring... and I think I just crossed it.
    - Sean
  • I mean look at Gnome and GTK+, it based on some ugly C struct kludge to enable pseudo-object-orientation!

    Isn't that how they implemented C++ in the first place?

    Okay, that was my cheap shot at New Jersey for the day. :) Seriously, for one reason or another, C is the language of choice for ugly kluges (note the lack of a "d", BTW). I believe the situation is bad enough that choosing C/++ as the basis for a production system becomes a rather damaging decision, especially when alternatives are available which help make sure that the program meets quality standards (e.g. Eiffel).

    I don't really have much else to say... just glad to have been able to bash the Bell Labs gang once more :)
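    For readers who haven't seen the kludge in question: OO-in-C schemes like GTK+'s boil down to a struct whose first member points at a "class" struct full of function pointers, with dispatch done by hand. A toy model of the mechanism (sketched in Python for brevity; all names are invented, and this is not GTK+'s actual API):

```python
# A toy model of C-style pseudo-OO: a "class" is a table of function
# pointers (a vtable); an "instance" carries a pointer to its class.
# All names are invented -- this shows the technique, not GTK+'s API.

def widget_draw(self):
    return "drawing a plain widget"

# Base "class struct": just a dispatch table.
WidgetClass = {"draw": widget_draw}

def button_draw(self):
    return "drawing a button labelled %r" % self["label"]

# "Inheritance": copy the parent's vtable, then override slots by hand.
ButtonClass = dict(WidgetClass, draw=button_draw)

def new_instance(klass, **fields):
    inst = dict(fields)
    inst["klass"] = klass  # the instance's pointer to its class
    return inst

def send(inst, method, *args):
    # Dynamic dispatch: look the method up in the instance's vtable.
    return inst["klass"][method](inst, *args)

w = new_instance(WidgetClass)
b = new_instance(ButtonClass, label="OK")
print(send(w, "draw"))  # drawing a plain widget
print(send(b, "draw"))  # drawing a button labelled 'OK'
```

    Everything a real OO language's compiler does for you -- building the vtable, copying it for inheritance, dispatching calls -- is manual bookkeeping here, which is exactly why it reads as a kludge.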

  • by Anonymous Coward on Saturday April 29, 2000 @03:06PM (#1101836)
    For a long time I couldn't figure out why others had such a hard time fixing bugs and changing their programs, while I could do it without any problems (no I'm not trying to be arrogant or pretentious).

    Then a coworker made this remark: "Ray how can you write such well-organized code in one pass?" At first I couldn't understand what she was talking about, after all doesn't everybody constantly review their code? Doesn't everybody constantly rearrange functions and classes, rename variables, redefine protocols? Apparently not.

    Correct me if I'm wrong, but it looks to me like most programmers write something once then spend the rest of the time trying to get that working. They never go back and rewrite the code, they just keep adding fixes to it. How can this ever work smoothly?

    I've also seen and heard a lot about processes to make a program, or anything else, come out right the first time. I don't get that either! To me, the only objective when writing programs is to make it easy to change. Period. If it's easy change, it's easy to fix bugs, it's easy to enhance, and it's easy to rearrange and redesign with hindsight.

    If you want to have a good time programming, do yourself a favour: learn the tools to make global changes to your code quickly, then spend a _lot_ of effort rearranging your code and renaming things as your program evolves.
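    One concrete example of such a tool, since the poster doesn't name any: a word-boundary-aware rename, the kind of global change that makes constant rearranging cheap. A minimal sketch in Python (the function name and sample code are invented):

```python
# Sketch of a global-rename tool that respects word boundaries,
# so renaming "count" doesn't clobber "counter". Names are invented.
import re

def rename_identifier(source, old, new):
    # \b anchors the match to whole identifiers only.
    return re.sub(r"\b%s\b" % re.escape(old), new, source)

code = "count = 0\nfor x in items:\n    count += 1\ncounter.log(count)\n"
# Every standalone "count" becomes "n_items"; "counter" is left alone.
print(rename_identifier(code, "count", "n_items"))
```

    Run the same function over every file in the tree and a rename that would take an afternoon by hand becomes a one-liner.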
  • Yes you can, and you should. I do it every day. His buzzword-bingo suggestion is costing the company too much money, and it's your responsibility to help him see that. You're right in one respect: he won't be the one to write the detailed spec, but his staff is. Even if they are higher up.

    After a while, you'll earn the respect of the people you work for, and you won't be a junior developer for long.
    ___

  • Hey, I thought the point of C++ was so that the programmers could surf the internet during the long compile times. Except you can debate the finer points of object oriented structure with your fellow programmers while your 2 hour compile is going.

    Or, what I suspect is that C++ or not, something is going to end up being obfuscated, however you put it together. So, if you have a 'message oriented' setup, that is, components queue messages to one another and those queues are processed later, your high level components might be well-organized, but then the actual flow of control can be confusing.

    I think this is what makes Windows programming confusing. On paper it seems simple enough: message comes in, process the message, do the code. But the order of the messages, and what system calls create what messages, the weird race conditions and weird recursion problems make it tricky to completely debug all the peculiar conditions.
  • I think when BBoM says:
    "[Foote&Yoder 1998a] went so far as to observe that inscrutable code, in fact, have a survival advantage over good code, by virtue of being difficult to comprehend and change."

    This must be the reason for all the crap code in the world. But it also assumes _great_ code does not exist. Great code probably has even better survival advantage, by virtue of not needing any change. I once thought that all code needed to change, if the requirements for that portion of code change. But that is *not* true, great code does not need to change, even after your requirements change. Great code may be phased out of use after significant changes in requirements, but it does not need to change.

    Now then the Dilbert-style managers enter the picture, and all hope is lost. They take the great code, look at how long it has taken to develop the thing, and decide it must be crap since the last project used a lot of time to develop and enhance it. Then they discard it and REWRITE it in the inscrutable way.

    Then someone says software is not doomed to failure. How many organisations do you know that had both the great management and the great architects?

    Now, the *obvious* solution is to get rid of managers. Then you have already solved half the problem, and only need to find good architects. :-). Now consider open source. The success of open source is directly related to whether you find good architects to write the code. The difficulties in marketing open source are due to its greatest virtue, not having managers and strict time schedules to keep. However, the managers are needed to make the software work in commercial setting. Now, how to solve the software crisis then? Well, you can't give all the answers in one slashdot posting, now can you?
  • From a programmer's point of view, the moment you can write a new routine or program perfectly in the first try, then something is wrong.
    You are writing it perfectly because you can probably do it in your sleep. It's boring code, versions of which you've written a thousand times before. You learn by making mistakes, and if you are making mistakes in your code (and fixing them), then you are learning something.
    Perfectly crafted programs are signs that you are not learning anything new, just executing a known pattern and dying of boredom in the process.
  • Computer science education should focus on teaching good design skills so we turn out programmers and not code monkeys.

    What's wrong with Code Monkeys? ;-]


    I'm all for a stronger and earlier emphasis on theory, but at least one programming language should be taught first as a concrete base.

    I totally agree. We've got to learn to walk before we find out how to get to the store...

    I'll have you know that output-oriented programming was one of my required courses. (Cranked out a whole lot of pseudo-code in the process.) It by nature emphasized loop and module organization, and a whole lot of systematic steps with applied Keep It Simple, Stupid.

    The trumpets blare, the banner is unfurled, the cry goes out: CORRECT BY DESIGN!

    Any student with enough intelligence to actually learn coding and get the job, is then capable of learning the principles and strategies to do this stuff right, and not necessarily BEFORE they learn to code... It just needs to be distilled into usable ideas and strategies, and usable texts - something less obtuse than the usual fare. Dare I ask - written with the clarity of a Dummies'© book. (gasp!)

    Now to really reveal my lack of experience: I read as much of the article as I could get to load (darn that Slashdot effect) and got as far as the Reconstruction section.

    What I didn't get from the article was many examples that were specific enough to glean useful strategies that I can take into my own projects.

    The Mudball was pretty obvious, your standard hack taken to the inevitable conclusion. (It seems to me the solution is to toss the concept of global variables and string your modules hierarchically. Comment?)

    I got the Reconstruction, Quarantine, and Growth ideas pretty well, but the Shearing idea completely escaped me. I mean, I understand it as a concept (organizing data and code according to rate of change), and whole-heartedly agree with it, but I can't picture any example of it well enough to be able to apply it. Suggestions?



    TangoChaz

    --------------------
  • The problem is that computer science became popular among those people who have no real call in their lives and who regard their work as simply a way of getting their salary.

    Bravo! I bitch and moan all the time about all these wannabe geeks! The problem as well is that most of them are so good at bullshitting their way into good positions that from a CV perspective it is getting harder and harder to distinguish between the "could code before i could walk" hard core and the "Microsoft S/W is soooo good, I wish I could suck BG's c**k" losers, who do treat it as just another job, and have no inherent talent.

    Anyway, enough ranting!

    faichai

  • I'm really glad to see this. In my experience, the great flaw in the OSS model is the quality of the code. Can we be honest? The vast majority of it is complete crap, developed by amateurs with absolutely no clue how to develop to professional standards.
    Unfortunately, Sturgeon's law applies to source code like anything else and I think that more than 90% is crap. Have you ever looked at the source code for large commercial projects? There are many which are just as bad, and a similarly small percentage of the code is well written.

    OSS code looks worse only because you can see it. Commercial software houses have one big advantage, namely that they can pay people to slog through the ugly bits.

    Unfortunately, non-open source also allows people to get away with crap they might be embarrassed to have included with their name in a publicly-available product. There's also a problem in that, except for the efforts of a bunch of black-hats and white-hats with disassemblers, most of us would never see just how bad some of the code in shipping products is.

    In summary, I think it's extremely wrong to assume that OpenSource developers are more likely to be young, inexperienced and/or bad coders. I've found it to be more of a neutral as you have both good and bad programmers at work in either case, in roughly the same percentages and the ability of anyone to find bugs in OSS is countered to some degree by the QA departments for-profit non-open source companies fund. OpenSource has at least a small advantage in that the user can fix critical bugs, or would if they were capable of it and most are not.
    __

  • ...code until it feels too messy, go back, rework, continue to code anew, get stuck, etc
    I have no idea whether others code in a similar fashion or not.
    Don't worry, you are not alone :-) and BTW this process has a name - people call it refactoring
  • by duplex ( 142954 ) on Saturday April 29, 2000 @03:13PM (#1101845)
    Is there anyone out there who's into extreme programming?
    I'm trying to convince my manager that this is the right approach for us but with little success.

    I work for a small software house with just six developers on two main projects. What I see happening, though, is the emergence of a structure that closely follows the XP guidelines. Our key developer left last month, leaving us with well over half a million lines of code to maintain and extend.
    Naturally the first outcome was an outbreak of panic (particularly among the management), but us programmers didn't have time for lamenting. What happened, though, is that once we lost our coding mastermind and had to rely on ourselves even for the most critical parts of the system, people started sharing knowledge much more freely and the environment became extremely productive. I found myself often pairing with other developers to help them understand some parts of the code, and vice versa.
    It seems that the old state of things (one developer churning out masses of code and the rest almost idle), while it felt comfortable, left many people fearing that they couldn't perform on their own (one of my colleagues wanted to resign straight away as soon as she learned that the guru was leaving).

    The moral is that what pair programming prescribes turned out to be our life jacket and ultimately boosted the productivity of people who were previously intimidated by the presence of the 'main man'.

    Personally I find coding in pairs a brilliant idea. I find myself producing much higher quality code when I have someone looking at what I'm typing. The bugs are fewer and more people know what's going on in the system. It works much better than peer reviews and documentation projects that we had foisted upon us by the management. It's not the official company policy but we do it anyway. For us XP works (or at least the parts we adopted).

  • Who is the bigger fool? The fool who demands the impossible or the fool who comes in on weekends to do it?

    (now if you'll pardon me, I'm just on my way in to work... literally...)
  • When I get a clueless request, I ask the person making the request to address my concerns in detailed documentation. Upon entering into a search for answers, the person making the clueless request will soon realize his error, and seek an alternative.

    This puts the research burden on the person making the request. It also (hopefully) allows him to learn a few things during his quest to document his brain-dead demands, thus lowering the likelihood that we, as a company, have to waste time on a similar problem in the future.
    ___

  • >> spend a _lot_ of effort rearranging your code and renaming things as your program evolves.

    Try this with a program maintained by more than one person, and you will instantly get screams of anguish from the other developers.

    Or more likely the person who checks in source changes to the master source will kindly ask you to shut up and go away.

    (yet another reason why programming in teams often isn't "fun")

  • Dun dun dun. Crash course in Managing Software(and hardware) Development. There are millions of links out there. So here's an introduction so people know what to look for. Remember, there is no magical model that automatically works!
    ---
    This is why you learn different software-development management models: there is no one model that suits everyone. There are generally held principles that anyone can arrive at, but they aren't solutions that you can work out by common sense.

    Here are some general statements:

    "Software development is multi-dimensional."

    OK, duh.

    "Developers pay attention to what they are measured on"

    OK, that makes sense. People _respect_ what management _inspects_.

    "Some performance dimensions in software may be in conflict"

    Makes sense, although a little more complex. (e.g., min memory, min SLOC vs. min effort and max user satisfaction vs max maintainability...)

    Objectives in managing software development:

    * define the process by which projects are conceived, approved, and delivered
    * define the guidelines and standards that are used by architects, developers and managers who will develop software
    * define the mechanisms used to deliver the software to the marketplace
    * general models to develop specific models in particular niches such as "shrink wrapped" or "web based" or "b2b" or "b2c" or "OEM" etc
    * define who is involved (e.g., product management, project management, development, technical writers, human factors/ui, localization etc) and their roles and their tasks.
    * Specifications documents should follow these definitions and management models such as that for cost estimation (e.g., COCOMO, other models).
    * once tasks are defined, you can help employees do what they are supposed to and evaluate them for future changes to development model

    Interesting links:
    an article
    The CMU software engineering institute [cmu.edu]
    more [nec.com]
    Defense system management college introduction to project management [dsm.mil]
    wooha [umich.edu] lots of links.
    needed skepticism regarding empirical analysis with models [davidfrico.com]!!!
    "Commercial software models" [ispa-cost.org]

    Example of cost estimation in use (findings from them at least):
    http://www.ll.mit.edu/llrassp/jca/mcmbw.html [mit.edu]

    _Development models_ include (where *-* denotes a double-sided arrow):

    The incremental model;

    AKA. The market model. Often dictated by management and generally follows QA builds.

    (P.1)()()()()(1.0)()()()...(2.0)..

    The evolutionary model;

    AKA. The pseudo academic model

    (Product Idea)*-*(Prototype)->(Clean Code)->(test and rinse)->(evolve)->(repeat)

    The spiral model:

    This model makes you ask the question as to the value of functionality and what process one would take in implementation.

    (Kernel)->(Kernel+key or riskiest functionality)->(kernel+key+less troublesome components)->(K.+key+LTC+Less troublesome functionality)

    Waterfall Model:

    Intent:

    (Product Idea)->(Analysis)->(Design)->(Implementation)->(testing)->(Product life)

    Reality:

    (PI)*-*(Analysis)->(design)*-*(implementation)*-*(testing)->(product life) [arrows back to design and analysis]

    Rapid Prototype model:

    (product idea)->(prototype & analysis & design)->(implementation)->(testing)->(product life)

    Common misuse:

    (Product Idea)->(Prototype)->(More Code)->(Test)->(release)

    etc, and hybrids like the "extreme programming" model, which seems to be a more detailed rapid prototype model

    _Requirements methodologies_:

    * generally: Requirements are what. Specifications are how (although they mix).

    Incorrect requirements = no product, or bogus development plan

    The method from which we develop requirements is:

    discovery
    refinement
    modeling
    specifications

    requirements elicitation (http://www.sei.cmu.edu/pub/documents/92.reports/pdf/tr12.92.pdf [cmu.edu]) -- more detail (http://www.incose.org/rwg/97panel/97panel.html [incose.org]) - etc - (http://www.kingston.ac.uk/~ma_s435/personal/work/CO1032B/tools_5/ [kingston.ac.uk])

    How to defend against requirements creep:

    * use formal methods !
    * use customer requirements formats such as manuals or other docs !
    * your answer must not always be yes !
    * proposed changes must be evaluated and rational !
    * there is always nearly a version 2.0 !
    * the customer almost always values quality over a short delay !
    * remain flexible enough to react to the work-place !

    "without a manual, we don't have a product".
  • I sometimes wonder if people talk about languages to the point where everything is so abstract, there is no resemblance to the actual language.
    I once had an assignment where I was to write a piece of Java code to take up the smallest amount of space, and keep it in one class file. Hard? "Working against the language?" Not at all. It's true, the code would be unmaintainable if it wasn't so small. But you can declare global variables and change them here & there, let the structure of Java give hints to the person reading your code. I had all the structure of Java if I wanted it, and I could also say No to it.
    Forget which language is the grandfather of which, and just see how you can do what you want with each language. If you start out with the mindset there are limitations in each language, you'll find them.
  • also:

    The Architect in particular [cmu.edu]

    (missed a " in my last post)
  • A fairly famous man once wrote a book:

    "The Art of Computer Programming"
    -- D. Knuth

    Whatever happened to this approach?!

    -- Maz

  • How are you so sure of that?

    Do you have any evidence to present?

  • I think you are confused about the first OO language. Its name is Simula, and it was invented at the Norwegian Computing Centre in the 60's. The principal inventors were Kristen Nygaard and Ole-Johan Dahl. Modula is yet another Wirth-family language.
  • When you develop software by first designing it on paper with Functional Design and Technical Design reports, making database diagrams, N-tier schemas showing where to put which functionality and code, and UML schemas for your classes, you only have to type in what's already been thought out. The only flaws you'll experience are bad algorithm coding (but that's not spaghetti code) or detail bugs.

    When you develop software using the evolutionary model, that is: adding code/functionality on the fly on an ongoing basis with short-term designs and not based on original concepts and designs, you eventually (most likely) end up with a pile of code that has to be rewritten NOW because a new feature asks for it. Because most of the time in these projects people do NOT choose to rewrite it, it's added anyway, resulting in spaghetti.

    In short: evolutionary-model code is code with no stated theoretical basis; there is no original manifest that explains WHY all the code is set up like this. MOST OSS projects are developed using the evolutionary model. What helps is an ONGOING theoretical design document to serve as a theoretical BASIS for the structure of the code. If there is NO design document or concept document stating WHY code is structured the way it is structured, it's bad code. Period.

    Another thing that adds up to bad coding is a bad naming scheme, or worse: no naming scheme at all! Nobody is forced to use Hungarian notation, but please CHOOSE one! Develop your own if no scheme suits you, but a scheme that HAS TO BE used by all developers in a project is a MUST to keep the code clean and updatable, even if you use design documents.

    More and more OSS projects are getting tighter software teams with people who KNOW how to develop software, thus using designs and theory before starting to jam in the code, and that is a good thing. We aren't there yet, however. For starters I'd suggest taking a look at the InfoZip source code: ANSI and non-ANSI C together in one project... *UUHHHHH*
    --
  • Correct me if I'm wrong, but haven't nearly all the cornerstone pieces of OSS (eg. bind, sendmail, gcc) been around for longer than 10 years?

    Admittedly these are complex and arguably messy, but they have withstood the test of time and a million users.

  • by Dast ( 10275 )
    Granted it has been over 5 years since I have lived in or even visited Knoxvegas, but you hit the nail on the head with that.

    But honestly, it pales in comparison to the idiocy that is my current college campus.
  • The Linux kernel uses DocBook to generate automatic documentation. Much better than writing documentation manually.
  • The classic quote from the Mythical Man Month about this is:

    Write one to throw away. You will anyhow.

    I've been doing some work under Windows with threads, and I tried to start by designing a generalized class-based thread model before dropping in my first thread. It didn't work - I had no idea what I needed or how to bring it about.

    So I dove in and wrote something more Cish. Five threads later, I know what should go into a class - it's the code that I keep cutting and ******* pasting. I now know how to encapsulate the functionality I need, smoothly, because I know exactly what it needs to do and a functional way to go about doing it.
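    The same move in miniature (a Python sketch rather than the poster's Windows/C++ setting; the Worker name and API are invented): the launch-and-remember-and-join boilerplate that keeps getting pasted becomes the class.

```python
# The code that kept getting cut and pasted -- start a thread, hold on
# to it, collect its result later -- folded into one small class.
# A hypothetical sketch, not the poster's actual thread model.
import threading

class Worker:
    def __init__(self, fn, *args):
        self.result = None
        self._thread = threading.Thread(target=self._run, args=(fn, args))
        self._thread.start()

    def _run(self, fn, args):
        self.result = fn(*args)

    def join(self):
        self._thread.join()
        return self.result

# Five ad-hoc thread launches become five one-liners.
workers = [Worker(pow, 2, i) for i in range(5)]
print([w.join() for w in workers])  # [1, 2, 4, 8, 16]
```

    The class only became obvious after the fifth paste, which is the Mythical Man-Month point: the throwaway version teaches you what to encapsulate.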

  • I work the same way, actually. If I know that my code's going to be maintained by someone else later, I make sure it's readable (that way they don't bug me to explain things) -- if it's just for me and I likely won't need to maintain it (a quick perl script, for example), then I tend to just be messy about it.

    When I write code others will have to read (which I've done a few times, badly), everything needs to be much more clear. And with OSS, there may not be a more experienced developer around to badger me into doing it right.

    I'm currently working on a game which, when it's playable, I intend to release under the GPL (I know, 'release early, release often', but still, I don't like releasing an unfinished product) -- I find the idea that literally the entire community of Linux users could be looking at my source code is an enormous incentive to make it clean.
    I guess the same force that motivates me to clean up the living room when people come over also motivates me to clean up my code when people are going to look at it. It's enough that they'll see how naive a coder I am; I don't want them to think I'm a bad coder too.
    Which reminds me, I have to go do the dishes....

  • by Kaufmann ( 16976 ) <rnedal&olimpo,com,br> on Saturday April 29, 2000 @05:36PM (#1101864) Homepage
    I agree with you in part, but pigeonholing all of OO as obscurantist and overcomplicated is as erroneous as overhyping it as if it were the Second Coming. There is nothing wrong with the motivation behind OO itself; the problem lies mainly in implementation. Specifically, what we now consider to be "OO programming languages" and "OO design practices" goes far beyond the original concept of both, and much of it is indeed obscurantist nonsense, which induces a huge amount of needless overhead, both of the conceptual kind on the designer and of the practical kind on the implementer. This is especially true in the case of small to medium software projects, even more so because often designer and implementer are one and the same.

    Case study 1: C++. An extensive critique of C++ as an OO language for production systems, from the point of view of an Eiffel-cheerer, can be found here [uts.edu.au]; in my opinion, it suffices to say that, given its status as just one step up from a C add-on, and given that, when building on such a shoddy conceptual infrastructure as C's, it's hard to conceive how one could do any better, C++ should be considered to be outside the scope of this discussion.

    Case study 2: Java. Now, Java is built from head to toe for maximum OO. This is incredibly intrusive to anyone who wants to do some real work using it, as opposed to just drawing nice schemes and writing UML models. Java is built to enforce those styles and concepts of programming which the designer felt to be correct. It's languages like this which give OO a bad name, and they should be shunned.

    Case study 3: Perl. Perl was built to be a scripting language - in Ousterhout's original conception, a "glue" language. Thus, practicality being the most important goal in it, it's easy to understand why Perl's OO is as it is. Specifically, it doesn't exist per se; no special syntax or semantics is enforced for OO programming, in fact all of it is built upon simpler, pre-existing constructs - specifically, taking advantage of an isomorphism between modules and classes, objects and references (via abuse of the bless() and ref() functions), methods and namespace-local subs. This makes a transition to OO practices easier as a project grows. It also allows one to implement the concept of an object as he sees fit - usually the slot approach is used, using hashrefs, but there are other approaches for specialised cases - including objects as indices into class-wide property arrays, an approach described in "Advanced Programming with Perl" and which is useful for when you need many objects and creating a hash table for each would be a waste of space. The discussion of OO in Perl could be extended further, but it suffices to say that, in true Perl form, it refrains from imposing a paradigm on the programmer, trusting instead that he knows better.

    Case study 4: Smalltalk. Smalltalk is widely considered to be the godfather of modern OO (yes, Modula had something called "OO" before Smalltalk, but a quick glance at both languages will make it clear right away that most of what we call OO today was fathered by Smalltalk); this, combined with the widespread availability of "OO software design tools" for the environment, could lead to some people blaming it unfairly for their current issues with the paradigm. In reality, when using Squeak, a computing environment integrated with a derivative of Smalltalk, I've found the use of OO in programming the system to be perfectly natural, in contrast to the uncomfortable feeling that you get from using, e.g., Java. Part of this comes from language design itself, which makes the concept ubiquitous in a very straightforward and graspable way, but most of it comes from the environment, which is fully built on objects. In the Morphic system, you can "see" and "touch" - inspect, manipulate, delete - all objects alike. The user- and programmer-levels are intertwined, and so instead of programs, methods are the elementary user-level executable unit; this removes one unnecessary level of encapsulation, leaving all objects free to talk to each other, without being first streamlined into the procedural mold enforced by the "program" concept. All of this, plus the elegance of the Smalltalk language, makes for a system which is very easy to program, and which leaves relatively little to be desired. Thus, I consider Squeak to be a paradigm of well-used OO.

    Hell, I think I've said more than I set out to... I hope at least some of it is of any use.

  • One thing I believe I've learned in 25 years of coding is that your data structures are really important. I've written my share of spaghetti code, and it always results from badly organized data. Too much global data, failure to use reasonable structures, using the wrong structures, etc. Fortunately, much of the code I write these days is used for exploratory purposes and I get a chance to rewrite some of the bad parts. And the only times that the code ever really improves is when I restructure the data.

    One of the good things about OO languages (and I'm not particularly fond of OO) is that they make you think about your data more. OO is not a silver bullet, though, since it's certainly possible to use one to organize your data badly. No language is a substitute for an experienced developer with some talent for organizing data in the right way for the particular project.

    Of course, this is not a new concept. Fred Brooks said it nicely in The Mythical Man-Month, a book which should be required reading for everyone who does software development, and more so for people who manage development efforts.
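    A small illustration of the kind of data restructuring meant above (a Python sketch with invented field names): replacing parallel collections, which must be kept in sync by hand, with a single record type.

```python
# Badly organized data: parallel lists kept in sync by hand. Every
# sort, insert, or delete must touch both, or they silently diverge.
from dataclasses import dataclass

names = ["alice", "bob"]
scores = [90, 80]

# The restructuring that fixes it: one record type per entity.
# Field names here are invented for illustration.
@dataclass
class Player:
    name: str
    score: int

players = [Player(n, s) for n, s in zip(names, scores)]
players.sort(key=lambda p: p.score)  # one sort; nothing falls out of sync
print([p.name for p in players])     # ['bob', 'alice']
```

    The code around the data usually simplifies on its own once the data carries its own structure, which is the point being made about spaghetti.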

  • Welcome to the real world of software development!
  • I'm reminded of the Dilbert where the PHB says something similar and after praising his insight they completely ignore him and carry on doing it the same way as before.. just a thought.
  • I'm really glad to see this. In my experience, the great flaw in the OSS model is the quality of the code. Can we be honest? The vast majority of it is complete crap, developed by amateurs with absolutely no clue how to develop to professional standards.

    The OSS community needs to establish some quality standards. Linux code is relatively new, but this is going to bite everyone in the butt as the code gets modified more and more, and software rot starts to rear its ugly head.

    Unfortunately, the vast majority of OSS developers are not very old (less than 25), and don't have the perspective to appreciate trying to maintain 10 year old code that has been modified 20 zillion times.

    Mark my words: Unless coding standards get real important soon, OSS is going to collapse under its own weight. "As long as it works" is not good enough.


    --

  • by retep ( 108840 ) on Saturday April 29, 2000 @12:44PM (#1101883)

    Why do I get the feeling this problem isn't just found in OpenSource projects? Zillions of programs, both free and commercial, are badly designed from the start. Many more could be well designed if only they didn't have to worry about backward compatibility. (probably one of the biggest problems for Windows right now...) The Big Ball of Mud architecture isn't uncommon by any means. And it's not a problem that only OpenSource faces.

  • by retep ( 108840 ) on Saturday April 29, 2000 @12:53PM (#1101884)

    The vast majority *is* crap. But the stuff that is important, libc, the Linux kernel, GCC etc. isn't. If you look at Windows there is lots of third party software that is complete junk. The case is the same with Linux. But the "big stuff" is all big *because* it's good, well designed software. OpenSource can produce crap, anyone armed with a compiler (standard on most UNIX's) can produce utter junk. But will that junk be used? Will anyone even know it exists? Of course not.

    Both OpenSource and Closed Source development can produce junk software. And both can produce great software.

  • "It's due to the 'evolution' model"

    I think you're talking about an unrestrained evolutionary model with no selection, design, analysis or cleanup.

    Just didn't want anyone to mistake the formalized evolutionary model in the literature for the "evolutionary" model he's talking about in OSS software design. I think what he is really referring to is the pre-waterfall approach where everyone would just "code and fix". In short, no matter what "model" you use, convoluted code can result from unskilled labor, ignorance of methods, or outright refusal to adhere to them. For example: the model elucidated in The Mythical Man-Month is basically the rapid prototype model. Its process flow is: (Product Idea) -> (Prototype & Analysis & Design) -> (Implementation) -> (Testing) -> (Product Life). A code-and-fix spin on this would be: (Product Idea) -> (Prototype) -> (More Code) -> (Test) -> (Release).

    "When you develop software using the evolutionary model, that is: add code/functionality on the fly on an ongoing basis with short-term designs and not based on original concepts and designs, you eventually (most likely) end up with a pile of code that has to be rewritten NOW because a new feature asks for it. Because most of the time in these projects people do NOT choose to rewrite it, it's added anyway, resulting in spaghetti."

    When you don't understand the model and don't adhere to it, the result will inevitably be suboptimal code. The evolutionary model I am aware of (you may just be using "evolutionary" as a buzzword analogous to unrestrained OSS, I don't know) includes requirements, specifications, design and analysis. There is no "let's chuck in whatever code we can think of without thinking of the grand design". There's actually process analysis (you know, the high-level business or customer process, how it can be streamlined, how we can enhance or keep work-flow efficiency), requirements elicitation (where you talk to or observe customers/clients, perceived market requirements and timelines), et al. The actual design and implementation of the product is an iteration: high-level design, analysis, prototyping, code cleanup, testing, release; then evaluation of new ideas against the requirements, specifications informed by analysis of said factors, and around the loop again.

    "In short: evolutionary-model code is code where no theoretical basis is stated; there is no original manifest that illustrates WHY all the code is set up like this. MOST OSS projects are developed using the evolutionary model. What helps is an ONGOING theoretical design document to function as a theoretical BASIS for the structure of the code. If there is NO design document or concept document stating WHY code is structured the way it is structured, it's bad code. Period."

    No, that's called ad hoc "code and fix".

    "What helps is an ONGOING theoretical design document to function as a theoretical BASIS for the structure of the code. If there is NO design document or concept document stating WHY code is structured the way it is structured, it's bad code. Period."

    This is true, I think. The documentation has to change with the program, either through comments or through informed technical writers or developers documenting. Developer documents tend to get out of date, though. That's why I stress staying true to customer and market requirements above all; otherwise developers tend to end up doing their own thing or modifying specifications to suit their idea of what a good program should look like, and not what the client/customer/market would want. This doesn't tend to work in the OSS model though, because most are coding for themselves, so they just do whatever, like throw in weird function X, because they think it's cool :). Larger projects, though, should definitely get both user and developer documentation up and going so that new developers don't have to worry about the learning curve. Example of a big convoluted project: X. Examples of pretty successful projects: Linux, because Linux and others have maintainer control; FreeBSD, where there is a committee atmosphere in which new code is peer reviewed *before* it is committed. I would note that both of those projects have pretty decent documentation, unlike some projects (although most are dinky doo-hickey programs, so who cares).

    "Another thing that adds up to bad coding is a bad naming scheme, or worse: no naming scheme at all! Nobody is forced to use Hungarian notation, but please CHOOSE one! Develop your own if no scheme suits you, but a scheme that HAS TO BE used by all developers in a project is a MUST to keep the code clean and updatable, even if you use design documents."

    I don't know about this for many OSS projects -- but I'd hope that the code is at least readable if not intuitive to understand.

    "More and more OSS projects get tighter software teams with people who KNOW how to develop software, thus using designs and theory before starting to jam in the code, and that is a good thing."

    I'd definitely agree here. It shouldn't be too hard to turn the "here's a patch to fix or add new functionality X" atmosphere into a more committee-based atmosphere where there are actually regular talks about design direction. I'd think, though, that a project has to reach a certain order of magnitude in size before one would start introducing such things. Although it might not be the best idea to have big design meetings about tiny inconsequential pieces of software, it would do individual developers good to be ever mindful of stupid functionality that really doesn't need to be in a program.

    This is all, just my opinion, though.
  • by aphrael ( 20058 ) on Saturday April 29, 2000 @12:56PM (#1101886) Homepage
    This is *not* just an OSS problem; it's a problem with *all* software. Even stuff which is well-designed from the beginning, and reasonably well-written, degenerates over time; and with high engineer turnover, even stuff written four or five years ago becomes painful to maintain.

    For the OSS community *in isolation* to seek to solve this problem would be unfortunate; the industry as a whole needs to find a way to address it.
  • by retep ( 108840 ) on Saturday April 29, 2000 @12:57PM (#1101887)

    One example of some very well designed software is the Shuttle OS that powers NASA's Space Shuttle. In 420k lines of code, each revision has had only one bug that wasn't caught by testing. If we *really* want to make some good software, it wouldn't be a bad idea to take these lessons to heart. OpenSource software is already good; let's make it better. Full article here [fastcompany.com].

  • One of the emerging trends in academic "out there" approaches to software engineering has been moving patterns from architecture (software and housing) to behavioral practice, of which this is a negative example. For positive examples, there is an interesting website I ran across the other day about a concept called "Extreme Programming" [extremeprogramming.org]. This concept wouldn't scale to the sort of distributed development done for OSS, but portions of it might. In any case, documenting robust positive patterns for OSS development sounds like an interesting project.
  • by crisco ( 4669 ) on Saturday April 29, 2000 @01:00PM (#1101891) Homepage
    Here [google.com] is the obligatory mirror courtesy of Google.

    I thought the comparisons to Design Patterns were hilarious but way too true.

  • Your impression of open source developers is distinctly wrong. You need to separate the people who love Linux because they grew up with it from the people who actually do development and can make commits to the kernel. You are making a rather broad generalization when you say
    "Unfortunately, the vast majority of OSS developers are not very old (less than 25), and don't have the perspective to appreciate trying to maintain 10 year old code that has been modified 20 zillion times."
    Source? Your personal opinion? If you take a look at the people who actually make significant open source projects work, they in no way match your description. When you say "the vast majority", do you mean the vast majority of OSS projects that get started and then dropped? That's the nature of OSS. Crap generally doesn't make it into the kernel or a mature project.
  • by xmedar ( 55856 ) on Saturday April 29, 2000 @04:43PM (#1101898)
    AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis
    ISBN 0-471-19713-0

    Excellent book; I would highly recommend getting your bosses a copy, or getting one yourself if you are running a project. It's concise, incisive and useful; personally I'd rate it up there with The Mythical Man Month, and it helps when you need to point out that the company is making a classic mistake.
  • DocBook (and all other forms of inline documentation) suck ass. However, they are about 10 times better than no documentation, and 1000 times better than inaccurate documentation. Experience shows that out-of-line documentation leads to one of the two above problems, and therefore auto-generated documentation is becoming the standard.

    So, in practice I agree with you, but there is a widely held belief that inline docs are inherently superior to out-of-line docs. This is just plain wrong, as it leads to suboptimal documentation and suboptimal code (I have often cursed wading through gobs of inline documentation comments when all I really wanted to see was the code). This is one huge advantage (of many) of C/C++ header files over Java: at least the documentation only obscures the declaration, not the actual code.
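    To make the inline/out-of-line distinction concrete, here is a minimal Python sketch (the function itself is invented). The docstring is the "inline" style being debated: a doc tool or help() can extract it automatically, which keeps it from drifting out of date, but it also sits between the reader and the body of the code:

```python
def clamp(value, lo, hi):
    """Limit value to the inclusive range [lo, hi].

    This docstring is the inline documentation: doc generators and
    help() pull it out automatically, so it can't silently drift
    away from the code the way a separate document can.
    """
    if value < lo:
        return lo
    if value > hi:
        return hi
    return value


# The docstring travels with the function object at runtime, which is
# exactly what makes auto-generated documentation possible.
print(clamp(42, 0, 10))           # prints 10
print(clamp.__doc__ is not None)  # prints True
```

    Whether that docstring counts as useful context or as clutter in the way of the code is precisely the disagreement in this subthread.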

  • Large Scale C++ Software Design [barnesandnoble.com], by John Lakos. This book has done more to improve my coding and software design skills than any other book I have read. If you program in C++, you MUST read this book. Until you have, you don't know the language. The concepts described in the book apply to other languages as well (as long as you are using OOP).

    ------
  • Not long ago, there was a slashdot post about the Linux Kernel 2.4 to-do list.

    Someone complained that the list was proof that linux kernel development, and open source development overall, is bad.

    The argument was that any decent system would keep everything working all the time. My reply was that innovation doesn't come easily, and that you can't improve a system while keeping all of its parts working the entire time.

    It's no wonder that closed software gets so bad and bloated; they're probably all doing the very things listed in this Ball of Mud article. "Daily builds" can sound like such a good idea, but they do lead to problems.
  • Why does spaghetti end up all tangled up? Or, a more practical problem, why do power cords, cables, phone lines, etc. all end up in a gigantic ball underneath my desk, no matter how many times I sort them out? I don't think it is just Murphy's Law.

    My own theory on this is that the cable, cord, etc. tends to be more weighted down in the center, and therefore drags there. As it drags, it forces other cables and cords out of the way. As soon as it forces them out of the way and drops down to a lower level, the tension on the other cords and cables tends to make them wrap around the heavier ones... so in about a month or so, they all end up tied around each other.

  • Ball of mud? Bad ideas? Low quality?

    No fair. I've been working on patenting these techniques.


    ---
    Dammit, my mom is not a Karma whore!

  • Don't count on getting a patent. Too many cases of prior art. :)

  • by geekpress ( 171549 ) on Saturday April 29, 2000 @01:28PM (#1101912) Homepage
    I was writing philosophy long before I was coding (Perl, mostly), so it doesn't come as much of a surprise to me that I code using the same method that I use to write.

    When I write, I go along a line of argumentation until something starts feeling wrong, like I've strayed too far. That's when I go back, read all that I have written, fix it up, and then continue writing until I have to stop again.

    When I code, I do exactly the same thing: code until it feels too messy, go back, rework, continue to code anew, get stuck, etc.

    The result has been fairly decent code that isn't too bad to alter over time. However, sometimes I get tempted to overhaul code when it really isn't necessary, because some minor issues are bothering me. (This happened with GeekPress [www.geekpress] when I was just a few days of programming away from launch, but thankfully my husband helped me get over my fussiness!)

    Since I've always completely coded my own projects (even when working within a company), I have no idea whether others code in a similar fashion or not. (I'm sure that my situation is greatly simplified by the fact that I don't have co-programmers. That seems like a nightmare to me!)

    -- Diana Hsieh

  • by devphil ( 51341 )
    Some of the more respectable C++ journals have recently run some good articles and interviews about Extreme Programming. I highly recommend looking into it; even if you adopt none of its practices, the concepts raise good points.
  • I see Mr. Gates is hanging around /. these days...
  • >It is hard to find really good Software Architects.

    I figure that SW Architects are a bit like the other kind of Architect (the ones who design buildings), so it might be worth mentioning one kind: those who Want To Be Artists, but decided they like a steady paycheck and don't want to starve to death in a garret, so they get a degree in architecture.

    These architects are the most disorganized pains in the ass, always working to the last moment, when they decide "we have to stop & leave the mistakes in", thus throwing off the timetable for deliverables & making the rest of the group pissed off. And they usually don't deliver very good work, either.

    I figure there are numerous members of this school of architecture working in Redmond right now.

    Geoff
  • YES! They don't seem to comprehend that if you spend some time redesigning and restructuring *now*, you'll save time over the next 7 years of the software's life.

    :wq!

  • This looks like it's by design, actually. Note that there's no moderations attached to it. This account is obviously posting at a default of -2.

    You can set your threshold to read -2 postings. Use the controls at the top of the comments listing to set your threshold to -1 and check the Save box, then click "Change". Then manually edit the Threshold field in the URL that gets returned so that it says "threshold=-2", then hit Enter. Voila!

    --

  • by jetpack ( 22743 ) on Saturday April 29, 2000 @09:18PM (#1101927) Homepage
    While I agree with everything you said, a reply to your post's title is probably worth a comment.

    It took you a long time to figure out. You were probably doing it instinctively before figuring out what everyone else's problem with maintenance was. Fair enough. I, on the other hand, learned proper maintenance through a gradual progression, and during that journey learned to recognize the shortcomings (and strong points) of others when it came to maintenance and design.

    However, I think the significance of this paper is that it actually offers some solutions rather than just explaining the problems, thereby, hopefully, promoting and explaining useful maintenance and design techniques.

    This article is one of the few times that I've seen this. For example, I've read "AntiPatterns", which is cited in the article, and quite frankly I was rather less than impressed. It points out problems, but its only suggestion, usually, is "rewrite". Not much help to the newcomer to programming. On the other hand, anyone who has been coding for any length of time will find the symptoms that book points out, and its (obvious) solutions, a waste of reading time.

    The posted article, however, I thought made some useful points through analogy that might actually help some newcomers to programming understand the issues. I was particularly impressed by the section on "shearing". The project I'm currently working on handles this issue particularly well, and I've used the concept since (unfortunately, I didn't design the system, and therefore can't take credit for its chameleon-like design). The article relates the issue of shearing quite well.

    So, in summary, [0] you are right, [1] this article is important because it actually shows *why* you are right.

    Now, if I can just get management to read this and agree. (yeah, right)

    Cheers.

  • by Raleel ( 30913 ) on Saturday April 29, 2000 @02:01PM (#1101931)
    Just the other day I was talking with the lead developer of one of the projects I've been on over the last 9 months, explaining to him how the last 7 years of code written on the project was completely underdesigned and thrown together with duct tape and baling wire. I cannot even begin to describe it. And I am supposed to be the buildmaster for this, and we are looking to have offsite developers go to town on this stuff. He looked at me, listened to me bitch, and then said "Well, we don't want to invest a lot of time into this, or money." What?!? He wants me to restructure the last seven years of work in a slapdash manner, not even fixing the problem, or making it worse!

    It was exactly this kind of thinking that got me to the point where I had to say this has got to stop. I may just be ranting, but what the hell else am I going to do? I am supposed to be buildmaster and tech support. Well, because it is such a bear to install, we have like 3 users. Guess what? I don't have anything to do but fix it! So what am I going to do? I am going to come in on weekends, work when he is not looking, and do it right, because I am _offended_ by the crappiness of the hierarchy and code design. Now, I am not a total lone wolf here; I am discussing the structure with them. But people need to realize that a crappy design leads to a crappy project.
  • by roman_mir ( 125474 ) on Saturday April 29, 2000 @02:03PM (#1101932) Homepage Journal
    It is hard to find really good Software Architects. Lately the tendency has been to produce code at the speed of rabbit procreation: 10 releases a week. The problem has gotten worse over time due to popular rapid development tools such as Visual Basic for Windows and object-oriented approaches such as Java for the JVM. The bad news is that this deters good new programmers from emerging to replace the old ones. The old programming school did not have to face issues like handling millions of users, huge databases, or creating user interfaces accessible to a novice user; they were mostly concerned with speed. The ability to hack together some brilliant assembly code was the primary concern, and I admit it is cool.

    Today most so-called Microsoft Certified "Engineers" have no clue what "assembly" stands for, yet they still don't know how to handle millions of users, terabytes of data, or create decent user interfaces either. The problem is that computer science became popular among people who have no real calling in their lives and who regard their work as simply a way of getting their salary. The large salaries in IT departments do not help much either; they create an unhealthy attitude towards the profession.

    Working on a large project that is supposed to scale to millions of customers and handle multiple user interfaces for various wireless devices (PDAs, cell phones, etc.), over time I had to design various components of the system. In the beginning there was only an idea, which later became basically a large collection of various components. I had never before had to design and build such a complex piece of software, and I am just happy that my formal education allows me to make sound judgements about network traffic averages and variances, the speed of code in terms of iterations (big-O, big-Omega, big-Theta; they are useful after all), handling various data structures, and even creating my own new tree designs.

    Nevertheless, all the way through I've felt the need for an experienced software architect. My company did not have one and we still don't and I think it is very difficult to find one with really good experience and skills.

    Once I saw a real software architect at work (he was in his forties) giving a presentation of his design, and it was just WOW. I mean, even after working professionally for three years and handling hundreds of different programming and design problems, I don't think I could have produced such a thoughtful design, one that goes into the details and covers every possible issue with all the computations and considerations. It was beautiful.
    I wish we all could learn from the best.
