
Has Software Development Improved? 848

earnest_deyoung asks: "Twenty-five years ago Frederick Brooks laid out a vision of the future of software engineering in "No Silver Bullet." At the time he thought improvements in the process of software creation were most likely to come from object-oriented programming, off-the-shelf components, rapid prototyping, and cultivation of truly great designers. I've found postings on /. where people tout all sorts of design tools, from languages like Java, Perl, Python, Ruby, and Smalltalk to design aids and processes like UML and eXtreme Programming. I'm in a Computer Science degree program, and I keep wondering what "improvements" over the last quarter century have actually brought progress to the key issue: more quickly and more inexpensively developing software that's more reliable?"
This discussion has been archived. No new comments can be posted.

  • by SpaceLifeForm ( 228190 ) on Tuesday November 26, 2002 @10:36AM (#4758454)
    There is no silver bullet, never will be.
    Logic requires careful thought, and careful thought requires time.
  • CVS / RCS (Score:5, Insightful)

    by voudras ( 105736 ) <voudras AT swiftslayer DOT org> on Tuesday November 26, 2002 @10:40AM (#4758488)
    Programs that assist programmers in the development process by handling who changes what when, etc - are - IMHO - a huge improvement.

    I seriously doubt that a program like Linux could flourish without programs like CVS.

    Furthermore, many of the programs that do this sort of thing can be used for any programming language... you could even use them for simple documents.
  • a few thoughts.... (Score:3, Insightful)

    by Yoda2 ( 522522 ) on Tuesday November 26, 2002 @10:41AM (#4758497)
    .NET, what else is there? (just kidding)

    But seriously, a lot of people develop beneath the "enterprise level", and some of the buzzword concepts just don't scale that well for smaller projects.

    In my opinion, the two things that have really made a difference are databases (as opposed to manually creating file formats) and the object-oriented paradigm.

    My best advice is to use everything with a grain of salt because there is always something "better" on the horizon.

  • by Havoc'ing ( 618273 ) on Tuesday November 26, 2002 @10:42AM (#4758500)
    If I had a dime for every system I've seen where there was no planning, no logic, just brute-force coding, I'd be richer than those in the .com era. This brings me back to the Slashdot post a few weeks ago about the 80-hour-a-week SW engineer. True SW architects are hard to find, and so is their logic. A management that understands the expense and time involved is an even rarer entity. Hardware architecture keeps building on similar themes, no real innovation going on (I use that term loosely). SW requires a human factor.
  • by ColdBoot ( 89397 ) on Tuesday November 26, 2002 @10:42AM (#4758505)
    more quickly and more inexpensively developing software that's more reliable

    Based on the last 20 years either working in or supporting government efforts, I'd say yes. However (there is always a however), it depends on the sophistication of the developing organization. The cowboy shops still suck. Those who have embraced more formal processes have become more reliable. It is a 2-way street though and requires the customer to be more sophisticated as well. It doesn't do a damn thing to have a development shop at CMM-5 if the client doesn't understand the need for process and doesn't understand the software development challenges.
  • by vikstar ( 615372 ) on Tuesday November 26, 2002 @10:43AM (#4758508) Journal
    The invention of pencil and paper.
  • None really.... (Score:2, Insightful)

    by inepom01 ( 525367 ) on Tuesday November 26, 2002 @10:44AM (#4758512)
    If you look at how programming languages and software evolve, the errors just get higher level. For example, Java keeps you from having memory leaks and buffer overflows. But a bad coder is a bad coder and will write buggy software where the problems are on a different layer. They will just arrange your off-the-shelf components incorrectly. Wherever there is still really low-level C/assembly programming going on, off-the-shelf just isn't applicable, so you still have the really low-level pointer arithmetic problems. As time goes on nothing really changes... it just evolves. So our bugs are just evolving.
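A toy Python sketch of that kind of "higher-level" bug: each piece is correct on its own, the arrangement is wrong (the functions and numbers here are invented for illustration):

```python
# Two correct off-the-shelf pieces, arranged incorrectly.
orders = [120, 30, 250, 80, 310]

def big_orders(xs):          # filter: keep orders over 100
    return [x for x in xs if x > 100]

def first_two(xs):           # limit: take the first two
    return xs[:2]

# Nothing crashes, nothing leaks -- the report is just wrong,
# because limit was applied before filter instead of after.
report_buggy = big_orders(first_two(orders))    # -> [120]
report_right = first_two(big_orders(orders))    # -> [120, 250]
```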
  • by jstell ( 106681 ) on Tuesday November 26, 2002 @10:44AM (#4758516)
    The ability to create decoupled software components, combine them into a coherent, functional application, and deploy them into a standards-based container (i.e., an "application server") is a HUGE step in programmer productivity.
  • by jilles ( 20976 ) on Tuesday November 26, 2002 @10:44AM (#4758517) Homepage
    Silver bullets still do not exist. New technologies and methodologies are often hyped as such and naturally fail to live up to the hype. However, that does not mean they are useless.

    These technologies and methodologies have allowed us to keep pace with Moore's law. The amount of software we develop today would simply be impossible using the state of the art in 1970. We routinely pour out millions of lines of code in mammoth projects that take hundreds or thousands of man-years to complete. The only reason we are able to do so is because of improvements in development methodologies and technology.

    The (huge) research field that studies these technologies and approaches is called software engineering.
  • Best improvements (Score:5, Insightful)

    by Anonymous Coward on Tuesday November 26, 2002 @10:44AM (#4758518)
    High quality, reliable C and C++ compilers have emerged as defacto standards on major platforms.

    Now you wouldn't think of developing on UNIX with anything but GCC and the associated build tools.

    In 1990 you were stuck with whatever C compiler your vendor shipped, and there were more than a few dodgy compilers out there. Modern compilers with useful error messages have done more than anything else to make debugging and development faster and easier for me.
  • I can say this.... (Score:5, Insightful)

    by Asprin ( 545477 ) on Tuesday November 26, 2002 @10:44AM (#4758519) Homepage Journal

    The focus has definitely shifted away from algorithms and toward abstraction. This was supposed to make things easier, by letting the software do what it does best and handle the bookkeeping, while we concentrate on building models and governing interactions between them.

    Some of it actually makes sense - the object-oriented paradigm, component models, virtual machines. (VMs, by the way, go back at least 20 years in the literature -- I studied them in college in the late 80's. However, like Pascal, they were originally considered an instructional tool, and nobody at the time thought that anyone would be damn fool enough to actually try and *implement* such a thing!)

    But just like letting grade-school students use calculators for their arithmetic, I'm not certain these things are best for students. Sure, you get usable code out quickly, but without an understanding of the underlying algorithms and logic. I doubt many modern 'c0derz' could properly knock out a simple quick-sort, let alone a fully ACID SQL DBMS.
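The quick-sort mentioned above really is only a few lines; a minimal Python sketch (recursive, not in-place, and nothing like a tuned library sort):

```python
# Minimal quicksort: pick a pivot, partition, recurse.
# Real library sorts use tuned hybrids (introsort, timsort).
def quicksort(xs):
    if len(xs) <= 1:
        return list(xs)
    pivot = xs[len(xs) // 2]
    left = [x for x in xs if x < pivot]
    mid = [x for x in xs if x == pivot]
    right = [x for x in xs if x > pivot]
    return quicksort(left) + mid + quicksort(right)

print(quicksort([3, 6, 1, 6, 2]))  # -> [1, 2, 3, 6, 6]
```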

  • by Quazion ( 237706 ) on Tuesday November 26, 2002 @10:45AM (#4758527) Homepage
    else how would commercial software companies make money? They need to sell updates and support.

    They code crap. My personal experience with not-too-big software companies is that they like to sell crap if they can, and most of the time the other company also sells crap, and afterwards you don't even own the software you buy. The software business sucks atm.

    Until the world wakes up and starts demanding that software work just like material stuff, or else they won't buy it, it won't get better in the near future, as long as we accept crap from even big companies like Microsoft, which is becoming more reliable and stable but still sells an awful design if you ask me. Okay, maybe I'll start a flame here, but most software is like throw-away dishes instead of a Rolls-Royce. And I think companies like Microsoft showed the smaller software houses that it's okay to sell crap as long as it sells, and it bugs me because software has a bad name, but I know good software exists, bla bla bla, you got me pissed again Slashdot... I hate humans ;-)

    by mir ( 106753 ) on Tuesday November 26, 2002 @10:47AM (#4758541) Homepage

    I have to say (and I know I am preaching to the choir here) that for me the biggest improvement is really Open-Source and how it allows me to re-use existing libraries.

    I do most of my work in Perl, and the fact that I can call any of the thousands of modules on CPAN makes all the difference in the world. When I worked in C in the late '80s we could either buy libraries, and have to go through the vendor to maintain them, or build our own expensive and buggy toolkit. Nowadays I can build really powerful code in a few hours, using the combined efforts of dozens of authors and thousands of contributors in the form of a few Perl modules.

    For me that's the main improvement: instead of being part of a team of 5 people writing mostly memory management code, I can now simply focus on the task and do it myself. Quicker, without weekly meetings, and having fun in the process.
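The poster's point is about CPAN, but the same reuse argument can be sketched in Python's standard library (the word list here is invented for illustration):

```python
from collections import Counter

# With a library: word frequencies in one line.
words = "the quick brown fox jumps over the lazy dog the end".split()
freq = Counter(words)

# Without it, you write (and debug) the bookkeeping yourself:
freq_by_hand = {}
for w in words:
    freq_by_hand[w] = freq_by_hand.get(w, 0) + 1

assert freq == freq_by_hand
print(freq.most_common(1))  # -> [('the', 3)]
```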

  • by idontgno ( 624372 ) on Tuesday November 26, 2002 @10:47AM (#4758544) Journal
    Twenty years as a software practitioner tells me that the answer, in a word, is:

    No.

    The project management keeps moving the target. The customer still says "I don't know what I really want, but keep coding and I'll let you know when you get it." Analysts can't analyze, designers can't design, coders can't code, testers don't test (enough), ad nauseam.

    Methodologies and philosophies of software development can only make up for so much. Sometimes they are counterproductive. They "lower the bar," leading management to believe that the methodology can fill in for cheaping out on hiring good developers. But we /.ers know the truth, don't we? Quality software comes only from quality developers--people, not methods or schools of thought or books by really smart people. Since quality developers are rare (like Leonardo da Vinci rare), quality software is correspondingly rare.

  • by tps12 ( 105590 ) on Tuesday November 26, 2002 @10:49AM (#4758560) Homepage Journal
    No need to haul out references to books or count buzzwords...just look at the software world and the question answers itself.

    Since the early days of computing in the late 70's, we've seen systems grow larger and more complex. Whereas entire OSes used to be written by a single person (granted, a gifted person like the Woz or Bill Gates), these days we have companies like Sun and Microsoft with literally hundreds of developers and testers for a word processor, let alone the thousands of folks around the world who contribute to Linux, Apache, or KDE.

    Given this incredible change in how software is developed, we'd expect to see systems collapse into instability and mayhem, but save for a few exceptions (Windows 9x, anyone?) this has largely not been the case. Windows XP, Mac OS X, and Red Hat Linux 8.0 have proven, if anything, more stable and reliable than their predecessors. For an even more dramatic example, look at NASA's software or popular video games. It's clear that not only has software development expanded in scope exponentially, but it has become objectively better in the process. Development has never been better, and I see no reason why this trend shouldn't continue.
  • by nicodaemos ( 454358 ) on Tuesday November 26, 2002 @10:49AM (#4758561) Homepage Journal
    Let's say you're trying to improve the work of a painter. Give him a better brush, his strokes look nicer. Give him better paint, his colors look brighter. Give him a CAD system with a robotic painting arm, his paintings look crisper. However, nothing you've done can change the fact that he's still doing velvet Elvises. Not that there is anything wrong with that, but he isn't churning out a Mona Lisa anytime soon, or ever. You haven't been able to affect the input to the process - his brain. You know the adage: garbage in, garbage out. Well, that is the fundamental rule in software.

    The big improvement in software productivity will come when we are able to successfully clone human beings. Corporations will create a stable of uber-programmers who they will then clone to be able to mass produce software. In the future, the open source war will be about peoples' genes, the software will be just a downstream product.
  • by Badgerman ( 19207 ) on Tuesday November 26, 2002 @10:49AM (#4758563)
    I say this based on seven years as an IT professional and twenty years as a computer enthusiast.

    There are definite improvements to programming. Tools, concepts, etc. have evolved. There are no true silver bullets, but we've got a good selection of choices out there.

    The problem, however, is threefold:
    1. First, the right tool (be it a language, software, or concept) for the right job. There are many "silver bullets" out there, but you have to find the right one for the job you're doing.
    2. We're stuck in a sea of IT dreck that's concealing actual good ideas. New products, endless upgrades, marketing schemes, propaganda, FUD, evangelism, poor releases, confusing releases, and much more. What good tools and improvements in programming are out there, I feel, are concealed by the less-than good and utterly terrible tools, concepts, and techniques that exist.
    3. Even if you have the right tool, you may be in a situation where you can't use it due to company standards, biases by others on your team, etc.

    One of the real challenges for IT professionals today is to find the good tools and ideas out there, the ones that really have improved programming, and then actually get them into use. A good IT person is a good researcher and good at justifying their ideas to people.

  • No progress .... (Score:3, Insightful)

    by binaryDigit ( 557647 ) on Tuesday November 26, 2002 @10:49AM (#4758564)
    what "improvements" over the last quarter century have actually brought progress to the key issue: more quickly and more inexpensively developing software that's more reliable?"

    I've only been programming for 19 years (not 25), but I can say that I've seen absolutely no progress in software development given your constraints of defining "progress" as being able to achieve the three goals (speed, cost, robustness). HOWEVER, I don't really blame the tools, rather the nature of the end result. The software we have to write has become significantly more complex, and at a rate that has surpassed our ability to effectively create tools to deal with its complexity. "Back in the day," when a "good" business-level word processor was 100K lines of code written by a small group of dedicated and bright people, you could get very good results. Now, something like Word is significantly larger than 1M lines of code, worked on by everyone from programmers with 15+ years of experience to those who just got out of school to those you wouldn't trust to add up your grocery bill, let alone write a line of code.

    It's like we still have hammers and nails, but the "market" wants us to build steel skyscrapers. So we come up with even better hammers and whiz-bang processes to "better" utilize those hammers and nail together steel girders, but the fact is that those tools are woefully inadequate to truly make a dent in improving that which we build.
  • Net Access. (Score:5, Insightful)

    by slycer9 ( 264565 ) on Tuesday November 26, 2002 @10:49AM (#4758567) Journal
    Think about it. 25 years ago, it was extremely limited. How many people did you know, in 1977 with a net account? I remember coding on a C64 in my cousin's basement for days on end just because we had scrounged enough money to get into town and buy some new books/magazines that helped us overcome some bug. Now, if you're being bitten by the same bug, whaddya do? Hit the net! Some of the responses above, like the sharing of source through GPL wouldn't be as viable an option without the access we have today. The biggest aid to programmers today. Net Access.
  • by Anonymous Coward on Tuesday November 26, 2002 @10:52AM (#4758584)
    Why not go for a PhD and wait for better times in the safe, nurturing academic environment?

    Yeah, as a PhD student you get paid practically nothing (at least when compared to corporations), you have to work silly hours and possibly teach as well.

    Yet, if you pick your topic carefully, you'll be working those hours on interesting and intellectually challenging bleeding edge stuff and not on some YAMC (yet another moron client) project that was handed over to you. In a nice group you can also choose whether you want to come in to work early in the morning or at around lunch time. Furthermore, teaching experience can never hurt a nerd. I used to hate teaching and I still don't like it. However, positive teaching experiences and the very act of confronting a group of smart, young people for four years improved my self-confidence, presentation and public speaking skills tremendously.

  • by CrudPuppy ( 33870 ) on Tuesday November 26, 2002 @10:52AM (#4758588) Homepage
    but in my world, Java is the single largest memory hog and memory-leaking piece of crap I have ever seen.

    you're kidding yourself if you think Java keeps you from having memory leaks, and I have enterprise code to prove it
  • by unfortunateson ( 527551 ) on Tuesday November 26, 2002 @10:52AM (#4758590) Journal
    I can't say that Java is a significant programming advantage over C -- it's the Java libraries that beat the snot out of calling the raw Windows/Mac/X APIs.

    That's not the only one: The depth of CPAN, the Python library, even MFC/ATL are worlds ahead of hand-coding the guts of an app.

    When I started programming, the Mac SDK had to be handled raw, and GDI was brand spankin' new. APIs such as DEC's FMS were considered heaven-sent (talk about dead ends!). Shortly after, class libraries were available, but expensive for an amateur. With Java and Perl, it's part of the (free) package.

    I'm sure there's garbage code in those libraries -- there's always going to be bugs. But they're going to be relatively smooth to use, and they're already written, letting me focus on the business logic, instead of the presentation.
  • by crovira ( 10242 ) on Tuesday November 26, 2002 @10:52AM (#4758591) Homepage
    If you're coding an app and you are spending time on the GUI, you are just creating a headache for maintenance programmers later on.

    Most documentation is horrid if it even exists (learn a human language first and use it to actually write meaningful comments, specifications, test scripts, internal and user documentation.)

    Most of this industry doesn't know dick about SLAs or optimization for time (first) or space (last.)

    Most of this industry doesn't know dick about configuration management, capacity planning or correctness.

    The difference between duffers (most of this industry) and the pros is that the pros don't "paint little masterpieces" in a guild-like cottage industry. They generate "art by the yard" in industrial settings for mass dispersal, distribution and "invisible" use.

    Good luck and remember, computing is nothing but a problem in N-dimensional topology. If anybody tells you different, they are part of the problem, not part of the solution.

    ALL objects have states and state transitions, code for that first and the rest will follow. Start from a thorough, correct and complete meta-model and you won't get into trouble later.

    As for languages, CASE tools, GUIs, IDEs and the rest. Learn to do it the long and hard way first so you'll:
    a) know what's being optimized and abstracted out,
    b) appreciate it,
    c) know what happens when it fails.
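The "states and state transitions first" advice above can be sketched in a few lines of Python (the door states and events here are invented for illustration):

```python
# Model the legal transitions explicitly, then hang everything
# else (logging, GUI, persistence) off this one table.
TRANSITIONS = {
    ("closed", "open"): "open",
    ("open", "close"): "closed",
    ("closed", "lock"): "locked",
    ("locked", "unlock"): "closed",
}

def step(state, event):
    """Return the next state, or fail loudly on an illegal event."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event!r} in state {state!r}")

state = "closed"
for event in ["open", "close", "lock", "unlock"]:
    state = step(state, event)
print(state)  # -> closed
```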
  • by e8johan ( 605347 ) on Tuesday November 26, 2002 @10:53AM (#4758599) Homepage Journal

    I must say (regrettably) that open source does not automatically mean more reliable and better code. The major problem is the lack of leadership: for example, a feature plan that is actually followed, and proper design of the GUI. As features are added from multiple sources, both the GUI and the code can easily get bloated.

    As it is hard to control which features are added, and hard to get good support, I doubt that open source software will take over most large consumer markets anytime soon.

    I'm not saying it isn't possible (look at Linux), but I doubt it will happen quickly.

  • by gosand ( 234100 ) on Tuesday November 26, 2002 @10:54AM (#4758610)
    I have a degree in Computer Science, but most places I work give me the title of Software Engineer. I took some of the same classes that my friends took (mechanical, electrical). But something I mull around every so often is - does software really require engineering? It is a little more wooly than something like building a bridge, or a roadway, or electrical circuitry. With the advent of a new language, or method of developing, the whole ballgame can change.

    Now I am not saying that you still don't need to have a good understanding of the language, or use good design, implementation, testing, etc. But I have worked at SEI CMM level 3, level 2, and am currently going through the process of evaluation where I work now, for level 2. But being at that level doesn't guarantee that your software will be good. It seems almost like we are attempting to fit the creation of software into the old engineering mold, because it kind of fits.

    So to answer the question - I don't think that there have been any great improvements toward the goals you stated. Software relies on hardware, which seems to be constantly changing. A bridge is a bridge, but the very idea of what software can do has been changing constantly over the last 25 years.

    If you want reliability, look at what NASA has produced. Those are some engineers. Ahh, but you said 'quickly', didn't you? :-) If you want something rock solid, you can't have it tomorrow if you want it to do anything remotely complex. I think one of the big things we as an industry have to realize is that our product (software) can be simple and fast to market, or more complex and take more time. And all of the variations in between. I haven't been convinced that what we do is engineering, but I haven't been convinced that it isn't. After all, it is a pretty young profession, compared to others. Imagine trying to explain to someone from 100 years ago what a software engineer does.

    All that being said, I think that the obvious Slashdot answer is that the GPL and free software have been a huge force, but only in recent years. I think the two biggest forces in software development were the PC and the internet.

  • by oldwarrior ( 463580 ) on Tuesday November 26, 2002 @10:57AM (#4758622)
    As a professional developer, I've seen complexity for even simple solutions skyrocket, with a concomitant loss of performance, given the current obsession with over-inheritance and Java-style interpreted/P-code software overall. Add to this GPL/OS that slashes meaningful business value from well-engineered software components, and I think we have retrograded from vanilla Unix with tough, straightforward C coding.
  • Standard libraries (Score:5, Insightful)

    by smagoun ( 546733 ) on Tuesday November 26, 2002 @10:57AM (#4758631) Homepage
    I'd say one of the biggest advances of the past decade or two has been the proliferation of standard libraries, like the STL or Java's immense class library.

    IMHO, one of the keys to writing good software is to not write very much software. Class libraries like the one in Java are almost invariably more feature-complete, more reliable, and better tested than roll-your-own libraries. Using existing libraries means you don't have to worry about off-by-one errors in your linked list implementation, or figuring out a good hash key for your string. While I think that all coders should know how to solve these problems, they shouldn't have to as part of their daily life. It's busywork, and giant class libraries do a wonderful job of making life easier. They let you concentrate on the important stuff - the algorithms, and marketing's feature of the week.

    Yes, libraries have been around for more than a decade. The STL and Java both showed up in the mid 90's, though, and I'd argue they've made a huge difference since then. Yes, they have bugs. Yes, there have been other advances. Yes, there are special cases where standard libraries aren't appropriate. There's no silver bullet, but standard libraries are a huge advance.
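As an illustration (Python here, but the point is the same for the STL or Java's collections), the library version of a data structure is a line or two where the hand-rolled version is a whole class (the job names are invented):

```python
from collections import deque

# A double-ended queue from the standard library: no pointer
# bookkeeping, no off-by-one errors in your own linked list,
# and the string hashing in dict is already solved for you.
q = deque()
q.append("job1")       # enqueue at the tail
q.append("job2")
first = q.popleft()    # dequeue from the head, O(1)

status = {"job1": "done"}   # dict: no hand-rolled hash function
print(first)  # -> job1
```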

  • by feronti ( 413011 ) <gsymons@gscon s u l t i n g . biz> on Tuesday November 26, 2002 @10:58AM (#4758635)
    That's what code libraries are for: just rewrite the function in the library, recompile the library, and relink the other programs... or even better, put it in a dynamic library and just recompile the library...
  • by TeknoDragon ( 17295 ) on Tuesday November 26, 2002 @10:59AM (#4758642) Journal
    According to Brooks, Good and Fast never happen together. If you think they can be accomplished at the expense of cost (by perhaps adding more programmers), then you haven't read "The Mythical Man-Month" (the book which spawned the "No Silver Bullet" chapter). On the other hand, it may be possible to find or train programmers good enough to accomplish "good" and hopefully "fast".

    The answer is finish that degree and hope your institution teaches you enough about these principles: effective design, KISS, machine states, and proper error handling.

    After a few years in the field I've found that these, paired with knowledge of a language and its libraries, are as close as you are going to get (although I'm still working on perfecting the second one).
  • don't do it (Score:2, Insightful)

    by DuctTape ( 101304 ) on Tuesday November 26, 2002 @11:03AM (#4758677)
    Get out. Programming sucks. You'll work your tail off for nothing. As soon as the schedule gets a little tight, disciplined software development goes out the window and it's CODE, CODE, CODE! You'll do months of overtime and weekends, and they'll change the requirements on you, and then you'll have to work harder, oh yeah and smarter, to make up for the time you spent coding to bad or outdated requirements.

    Find some other career. Like be a male stripper or something else that has regular hours.

  • by Vaulter ( 15500 ) on Tuesday November 26, 2002 @11:03AM (#4758683)
    All the problems with software today aren't a result of poor engineering, but rather of poor software development management.

    I can't count the number of times last-minute feature requests are required to be in a build. As software developers, we just deal with it. But quality suffers because of it. And the engineers get the bad rap.

    Do you think Intel management requires last-minute features in the Pentium core, and tries to push them out the door with no testing? Do you think people building cars for Toyota decide to swap engines at the last minute because GM just put a turbocharger on its standard Prizm?

    It's ludicrous the stuff that gets requested of software engineers.

    My brother is a contract electrical engineer. He was complaining one time of a 'last minute feature request'. His project still had a year of development time left! I laughed so hard I almost puked.

    Granted, given all that, the current model for software works. When software is required to be bug-free, it can be. Lots of software is bug-free. You just don't hear about it, because it works. Look at NASA, the AIN telephone network, or Windows. ;) But most business-oriented software is bug-ridden, and that's just fine. It's the accepted risk for getting the software out the door as cheaply and quickly as possible.

  • by gr84b8 ( 235328 ) on Tuesday November 26, 2002 @11:04AM (#4758692)
    Erlang does have some great applications - but I don't think one can say OOP doesn't scale well (forgive me if I am misreading your post). For example, C++ has proven to be extremely scalable. Not only can it be scalable performance-wise (read: speed and size), but it has proven to be scalable project-wise (read: large collaborative projects).
  • by smagoun ( 546733 ) on Tuesday November 26, 2002 @11:05AM (#4758699) Homepage
    you're kidding yourself if you think Java keeps you from having memory leaks, and I have enterprise code to prove it

    Um. That was his whole point - bad engineers will always be bad engineers, no matter what the language. Java prevents an entire class of memory leaks by garbage-collecting unreferenced objects, but there's nothing in the language that stops you from doing something stupid like writing an unbounded cache.
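The unbounded-cache trap is easy to show in any garbage-collected language; a Python sketch (function names invented for illustration):

```python
from functools import lru_cache

# Leaky pattern: every result stays referenced forever, so the
# garbage collector can never reclaim it.
unbounded_cache = {}
def slow_square_leaky(n):
    if n not in unbounded_cache:
        unbounded_cache[n] = n * n   # grows without bound
    return unbounded_cache[n]

# Bounded alternative: evict least-recently-used entries.
@lru_cache(maxsize=128)
def slow_square(n):
    return n * n

for i in range(1000):
    slow_square_leaky(i)
    slow_square(i)

print(len(unbounded_cache))               # -> 1000 (and counting)
print(slow_square.cache_info().currsize)  # -> 128
```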

  • by Qzukk ( 229616 ) on Tuesday November 26, 2002 @11:06AM (#4758713) Journal
    While people seem to be saying there haven't been any advances, and won't be any advances, and such, they're forgetting that the first key step in solving any problem is identifying the problem.

    People are complaining that using stronger employees to lead less-skilled employees doesn't work. People are complaining that the person testing the "bad coder's" code is usually the "bad coder" himself. These complaints begin to define the actual problem of why software development isn't improving. These complaints, taken together and addressed, are the first step towards improving software design methodologies.
  • Abstraction Levels (Score:2, Insightful)

    by brw215 ( 601732 ) on Tuesday November 26, 2002 @11:08AM (#4758729) Homepage
    I think the varying degree of control over the physical machine a developer can now choose when writing a program is the single most important factor in increased productivity. In assembler the programmer must worry about everything. C was the first truly abstract programming language, where the programmer could call a routine like printf and not need to worry about the details of printing a string to the screen. Because the language was more abstract, the programmer could do far more complicated things (have you ever tried to write a red-black tree in assembler?).

    Over the last several years languages have gotten more and more abstract; languages like Java isolate the developer from pretty much everything except the logic they are trying to capture. Developers can now choose the level of abstraction they wish to work at based on the problem domain (low-level library vs. script for renaming files). Higher-level (more abstract) programs are usually much easier to write, but it is worth noting that there is a price to pay: generally these programs are not as powerful as their low-level cousins. Some languages, like VB, try to abstract development even further, so that it is accessible to everyone. Abstraction has brought the ability to program to a much wider audience and has greatly reduced the time it takes to write basic applications, and for that it is the most important change in programming.
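The "script for renaming files" end of that abstraction spectrum really is a few lines; a Python sketch (the extensions and helper name are invented for illustration):

```python
import os
import tempfile

# High-level: rename every .txt file in a directory. No file
# handles, no buffers, no syscall bookkeeping -- the runtime
# abstracts all of that away.
def rename_all(directory, old_ext, new_ext):
    for name in os.listdir(directory):
        if name.endswith(old_ext):
            base = name[: -len(old_ext)]
            os.rename(os.path.join(directory, name),
                      os.path.join(directory, base + new_ext))

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "a.txt"), "w").close()
    open(os.path.join(d, "b.txt"), "w").close()
    rename_all(d, ".txt", ".bak")
    print(sorted(os.listdir(d)))  # -> ['a.bak', 'b.bak']
```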
  • by richieb ( 3277 ) <richieb@g m a i l .com> on Tuesday November 26, 2002 @11:10AM (#4758746) Homepage Journal
    For a construction project all of these elements are mapped out well in advance, which is why the construction industry can work on lower margins.

    Just to nitpick on this particular myth. In construction there is the idea of "fast-path" construction, where building starts before the design is done.

    The idea that all requirements and design are done before construction starts is a myth.

    People hack buildings too. There were some famous cases of building "bugs" (e.g. the Citicorp skyscraper wasn't stiff enough) and famous failures where design or implementation errors caused a building to collapse.

    Read "To Engineer is Human" and "Design Paradigms" by Henry Petroski for a start.

  • by sql*kitten ( 1359 ) on Tuesday November 26, 2002 @11:12AM (#4758761)
    I think any programmer who sees the benefits of CVS would understand where I'm going with this concept. We all have functions we use again and again, and realizing that there is a potential flaw in a given function is always followed by exasperation, because one realizes that the function needs to be changed in X number of programs.

    You don't need a new version control tool, you need a refactoring tool.
  • by ites ( 600337 ) on Tuesday November 26, 2002 @11:13AM (#4758771) Journal
    Small-scale development has always been efficient. The challenge facing the industry has been to find ways of doing large-scale development (the type Fred Brooks was talking about) cheaply and effectively.
    And in this domain, there has been a revolution, namely the Internet, and the arrival of cheap connectivity between developers anywhere in the world.
    Prior to this, the only way for developers to collaborate was to be hired by the same company, work on the same network. Inefficient as hell.
    Today any group anywhere in the world can create a project and work on this as efficiently as a small group in the past.
    The irony is that the revolution does not care a shit about the technology used, and works as well for COBOL programmers as for companies cracking the human genome. It's about solving a purely human problem: communications.
  • by Atom Tan ( 147797 ) on Tuesday November 26, 2002 @11:15AM (#4758795) Homepage
    First of all, glad to see a lot of positive posts on this topic... I frequently see laments on this very same site about the dismal state of software. I am in agreement with the viewpoint that software developers continue to become more and more productive (through frameworks, code reuse, improved languages and tools, etc.); however, that productivity hasn't resulted in improved software quality, because we are simply being asked to do more complex tasks with the same schedules and resources.

    One thing that has drastically improved my productivity is the Web itself (time on Slashdot notwithstanding), as a way to locate resources for programming. Almost any algorithm, component, or subsystem that is not specific to the problem domain can probably be found on the Web, whether as a library, a set of source code, or simply a precise definition of an algorithm. I agree with one post that a lot of young developers would have difficulty writing a correct bubble sort, but anyone who attempts to design and implement a bubble sort on the job is wasting resources, since an implementation already exists somewhere on the Web in any language you would want. In addition to software projects, informative articles, and API documentation, newsgroup discussions are invaluable for pinpointing problems. So the Web is really my most important programming tool.

    In conjunction with the Web, the vast supply of open source and free software out there has drastically improved my productivity. The Apache Jakarta project and CPAN are my favorites, but there are many interesting projects on SourceForge and FreshMeat as well. Often, even if you are determined (or required by corporate standards) to roll your own, free software can give you a good idea about how others have approached the problem and the abstractions and metaphors they've used. In my experience, design reuse is often far more helpful and practical than actual reuse of code.
  • by Junks Jerzey ( 54586 ) on Tuesday November 26, 2002 @11:15AM (#4758796)
    Software development, at least many types of software development, has changed, in that programmers are much more dependent on large APIs and libraries than they used to be. In theory this is good, because it saves work. In reality, it has turned many types of programming into scavenger hunts for information. Now you have to hang huge posters on your wall showing object hierarchies (you didn't remember that a Button is a subclass of BaseButton, which is a subclass of ButtonControl, which is a subclass of WindowControl?). Now you need online forums so you can get a consensus about how a certain API call is supposed to behave in certain circumstances. Quite often I feel I could write a new function faster than I could locate information about how an existing function is supposed to work.
  • Re:gpl dude (Score:3, Insightful)

    by TeknoDragon ( 17295 ) on Tuesday November 26, 2002 @11:18AM (#4758812) Journal
    how about "Buy vs Build"

    Brooks seems to be telling us that instead of making our own tools we should be using tools made by others.

    Open and commercially free software (software that you are free to make money off of) is the ultimate realization of that philosophy, where anyone can see all the code and use all the code to make a product better.
  • by jimmyCarter ( 56088 ) on Tuesday November 26, 2002 @11:18AM (#4758818) Journal
    I can easily see your point, but at the same time, without knowing much about OneWorld, I have to wonder if it doesn't do a little bit more than you're giving it credit for.

    While I no doubt respect ManMan (especially with a kick-ass moniker like that), I could easily see ManMan looking on enviously at OneWorld's application and system interop features. Try RPC on ManMan..
  • by Anne Thwacks ( 531696 ) on Tuesday November 26, 2002 @11:19AM (#4758825)
    I disagree entirely about "object technology". I agree that "object oriented programming" as a concept can be helpful, but all the evidence is that huge projects that produce efficient, reliable, code use C, and NOT C++. For example FreeBSD.

    I have over 30 years' experience with embedded systems, and I would say that without exception the C ones were leaner, meaner and fitter than the C++ ones.

    As for reusability, documentation is the key to reusability. It is conceivable that the syntax of some languages may help with this, but it seems there is more benefit in theory than in practice. All of us use the C standard library all the time (i.e. anyone using a browser is executing it) - now THAT is reusable. The Fortran library is widely reused, and no one ever accused Fortran of being object oriented. (I have heard Fortran users called "disoriented" :-)

  • by Anonymous Coward on Tuesday November 26, 2002 @11:23AM (#4758862)
    Software development will never be made reliable and inexpensive until we get rid of the programmers.

    Software development is a cottage industry and "art" where artisans produce one of a kind products. No two programmers will produce a program in exactly the same way.

    What we need is an easy way to translate analysis directly into code in a reliable way without human intervention.

    This will give us the consistency and repeatability that translates into reliability and lower cost.
  • by namespan ( 225296 ) <namespan@elitemail.org> on Tuesday November 26, 2002 @11:23AM (#4758863) Journal
    From the article:

    The most a high-level language can do is to furnish all the constructs that the programmer imagines in the abstract program.

    A-men. But if it does that well, then it makes the job a lot easier. That's why going from C to Java or Perl was sheer relief. Actual strings? Real associative arrays? Whoohoo! And less memory management grief. Not to mention the component libraries available for things I hadn't even thought of yet. CPAN...

    To be sure, the level of our thinking about data structures, data types, and operations is steadily rising, but at an ever decreasing rate. And language development approaches closer and closer to the sophistication of users.

    True... but user sophistication is increasing too. It seems highly apparent to me that with more experience and more shoulders to stand on, language and component developers are able to conceive of more and more useful abstractions. And because of the Internet, those abstractions are more easily available for sharing, commentary, and change.

    To sum up, I am much, much happier with the readily available toolsets I have access to now than the ones I had 15 years ago, or even eight years ago. They make developing much easier and much more fun.
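    The "actual strings, real associative arrays" relief mentioned above is easy to demonstrate with a toy word-frequency counter - sketched here in Python purely for illustration (the function name is invented, not from the post):

```python
# A word-frequency count using a built-in associative array.
# The equivalent C needs a hand-rolled hash table, manual memory
# management, and strtok-style string parsing.
def word_counts(text):
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# word_counts("the cat and the hat")
# → {"the": 2, "cat": 1, "and": 1, "hat": 1}
```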

  • by guybarr ( 447727 ) on Tuesday November 26, 2002 @11:24AM (#4758867)

    remember, computing is nothing but a problem in N-Dimensional topology. If anybody tells you different, they are part of the problem, not part of the solution

    No. That's the problem solving part. The more important (and hard) part is defining the problem.
  • by Zooks! ( 56613 ) on Tuesday November 26, 2002 @11:27AM (#4758896)
    You couldn't be more correct on the time part.

    I often see authors in the ACM and IEEE journals tearing their hair out worrying whether software engineering is finally going to come to a magical "maturity" and whether professional software engineers are really putting theory into practice, etc., etc. It is my feeling that, in general, professional software engineers apply good engineering practice when they can and try to at least be conscientious when they cannot. So what is it that is stunting the progress toward software engineering's maturity?

    Engineers putting theory into practice are only part of the equation. Engineers must work under management, and it is my belief that the management of software engineering is in a less mature state than software engineering itself, at least in practice. As such, I believe managers of software projects often severely underestimate how long software projects should take, which leads to time budgets that are unrealistic given the requirements for a project.

    There is often a comparison drawn between house building and software. The question "Would you live in a house built with the same reliability as software?" is often bandied about. I think such a question misses the point. The more accurate question is: "Would you live in a house where the architect and structural engineer had two days to make the plans and the builders had two weeks to build it?" Of course you wouldn't. You would expect the architect, engineer, and builders to have enough time to build the house safely. That doesn't mean giving them infinite time, but it does mean giving them enough time to do the job properly.

    The sad thing is that many of the managerial problems that inhibit software engineering from progressing have been known for decades. Unfortunately, this information seems to have failed to penetrate into management circles or the information has fallen on deaf ears.

    We know what the problems are, but some of the solutions are not in our control as engineers.
    Changes in software management must take place in order for discoveries in software engineering to bear fruit.

  • by bytesmythe ( 58644 ) on Tuesday November 26, 2002 @11:30AM (#4758926)
    I think software is becoming cheaper and more reliable, but not much more efficient.

    I notice the original post mentions several things that could influence the development time of a software project. I will address a few of these below:

    1) Object Oriented Programming
    This is one of the bigger Silver Bullets to be unleashed upon the programming world. I don't think it entirely lived up to the hype. Most OOP is just for local project design, and heaven help you if you have to reuse code somewhere else. It isn't just a case of bad design. Problems like software design are inherently ambiguous. The design process is not algorithmic; rather, it's heuristic. You use "templates" and "patterns" to represent your ideas. Trying to shoehorn real-world complexities into these cookie-cutter styles is difficult at best. Trying to further take those styles and integrate them with each other in a very large scale product is a hair-tearing nightmare. I think Tablizer would agree with me on this... :)

    2) Reusable components
    The most visible place reusable components come into play is GUI programming. It's very, VERY simple to use a visual-based system (like Visual Basic, C++ Builder, Delphi, etc.) to create a GUI simply by dragging the desired components onto the blank form window. If anything has been sped up significantly in the past several years, it has been the GUI development.
    Components are, of course, used in a variety of other places, particularly in run-time libraries of various programming languages. However, learning to use these components effectively takes more time and dedication than one might suspect as the syntax tends to be rather cryptic looking. ;)

    3) Java
    Don't get me started. I am currently employed as a Java developer. I don't really like it a lot. The file scoping rules bug me. (Similarly, I don't like Python because of the way it enforces indentation.) Also, the Java IDE sucks. Whoever thought the entire GUI needed to actually be written in Java needs to be taken out and beaten with a stick. A large stick.

    4) The Internet (and OSS)
    One thing I noticed that you hadn't mentioned is the Internet. I have never been exposed to so many programming concepts and new languages. There is an astounding variety of tools, and thanks to Open Source and researchers at various universities, you can try your hand at as many of them as you have disk space for. The 'Net can be a wonderful place, after all. ;)

    My advice to any new programmer would be to get online and start reading. Download and try out new languages, especially ones in different paradigms, like functional programming. The tools you need (such as compilers, editors, databases, GUI component libraries, etc.) are ALL there, free for the taking. The only real "silver bullet" is to make yourself the best programmer you can be.

  • by GreyPoopon ( 411036 ) <gpoopon@gmail.com> on Tuesday November 26, 2002 @11:30AM (#4758928)
    Logic requires careful thought, and careful thought requires time.

    I totally agree with this. If I've learned nothing else in the last 15 years, it's that the most significant amount of time in any project should be spent before any development begins. Establishing functional and technical specifications, and choosing the appropriate platform (both hardware and software), are more important than the development itself. The only way to improve efficiency here is to make sure that the "users" who are requesting the development have a firm understanding of how to build their requirements while keeping in mind the limitations of the various potential platforms. Although the numbers may vary depending on who you talk to, I generally feel that 60% of any project should be spent on capturing requirements and making the basic design, 10% on implementation, 20% on testing, and a final 10% on documentation. Sadly, the final 30% spent on testing and documentation, while probably having the most potential to save the company money in the long run, is usually left unfinished in order to keep the project within time and budget constraints.

    Having said all of that, I do believe that the advent of rapid prototyping tools has actually helped for the requirements phase. If a picture is worth a thousand words, a demonstration is probably worth 10,000. Providing a "mock-up" of potential functionality truly helps users to make decisions on what they *really* want, instead of what they think they want.

  • by smagoun ( 546733 ) on Tuesday November 26, 2002 @11:30AM (#4758929) Homepage
    Good, Fast, and Cheap can happen, even in software. If the aircraft industry can pull it off, so can the software industry. Read up on the Lockheed Skunk Works. They did incredible stuff with very few engineers in a very short amount of time. The key is people. You need a top-notch staff and more often than not a world-class leader. Such a team is hard to come by, but when they do get together they can pull off some amazing stuff.
  • by MadAhab ( 40080 ) <slasher AT ahab DOT com> on Tuesday November 26, 2002 @11:32AM (#4758939) Homepage Journal
    Blah, blah blah. Thanks for wasting my time with a buzzword that applies to almost no one. I took a look at what that "CMM-5" is about, and it simply describes the processes by which you improve your software development in obscenely general terms. BFD, most people who've done any sort of software development and have a modicum of social skills and maturity arrive at the same processes intuitively.

    You want to know what we've really discovered about software development in 25 years? The same thing we know about booze and marijuana: spend your money on high quality, b/c the increase in quality outpaces the increase in expense.

    CMM-5 and crap like that are amusing diversions for middle management types, but keep in mind that middle management is the art of creating an organizational structure that is more important than the people who populate it. It is a recipe for reliable mediocrity, like bread from a factory.

    Eventually, it will be realized that top technical personnel are like good lawyers (and not a thing like wonderbread): essential to an organization over a certain size, not readily "manageable" in the sense that the typical weasel would like, and not readily identifiable on the basis of objective criteria.

    Or just keep hoping for a magic bullet that will allow a caveman to captain a starship, your choice.

  • by Anonymous Coward on Tuesday November 26, 2002 @11:32AM (#4758948)
    Open source is only as good as the people who administer the project and the quality of community involvement that they receive. If either of these two elements is sub-par, quality will be no better, and likely worse, than a closed-source project.

    Another thing worth considering is that in 5 years or so, the first generation of open-source developers will have kids and responsibilities, meaning the amount of time they can put into non-paying projects will drastically decrease. I wonder how this continual loss of experience will affect the community. A project needs someone who's already made all the old mistakes, so that they only have to worry about new ones.
  • by mestoph ( 620399 ) on Tuesday November 26, 2002 @11:32AM (#4758949)
    To be honest, to throw my 2 cents in, we are seeing more and more bad programming. 20 years ago, you might have expected a program to have bugs. But now we see some of the biggest software houses and programmers releasing substandard programs, just to get them released on time, with many bugs still in them. And for the most part these bugs are bearable only because our massive increases in hardware mean the odd memory leak goes unnoticed for a long time. To be honest, how much of what we have designed over the years would run on a 286 with 4 meg of RAM? Fair enough, you can't ask it to handle some of the graphics side, but the bare bones should at least run, even if you would be waiting hours for it to say Yippie. All in all, we can get paid silly money for stuff we write, and the pressure of deadlines makes for some awful things. But on the plus side, side projects do end up being the best things around, as you don't want to release a personal project unless it's perfect.
  • Not enough! (Score:5, Insightful)

    by Glock27 ( 446276 ) on Tuesday November 26, 2002 @11:36AM (#4758983)
    I've been programming on a full-time basis for over 20 years. I suspect that's a bit longer than the average Slashdotter. ;-)

    I've often thought over the last few years that we've made too little progress in making programmers more productive. I largely blame that on Microsoft, simply because it drives more software development with its tools than any other entity. One language I've categorically made a decision to avoid is Visual Basic. I have always felt it was basically (sorry) a waste of brain cells. It has certainly done nothing to advance the state of the art.

    In my opinion, one of the best things to come along in a long time is Java. The gentle reader may recall earlier posts along those lines. I enjoy C, and have spent the majority of my career doing C and C++. However, I have also spent _way_ too much time tracking down memory-related bugs. Often, they were in third party code. That is no way to run a railroad.

    Java addresses almost all of the glaring deficiencies of C++, both in language design and in runtime safety. In my opinion, the best programming tools will be those that enable single programmers to tackle larger and larger projects.

    Compared with C++, Java enables me to tackle much more ambitious projects with confidence. A team approach can never attain the efficiency of a single programmer approach. The "sweet spot" of software engineering efficiency is the largest project one person can tackle. Extreme programming is a useful hybrid that attempts to turn two programmers into one programmer. ;-) (Also teams can be nearly as efficient as single programmers if the system is properly decomposed into subsystems separated by simple interfaces. This rarely happens smoothly, in my experience. It takes a top notch group of people.)

    One last note on Java - performance is now almost completely on par with C++. On my most recent round of benchmarks, Java (JDK 1.4.1_01) on both Linux and Windows outperformed C++ (gcc 3 and VC 6) on most tests. Dynamic compilation and aggressive inlining are that effective. The VM also soundly spanked the gcj ahead-of-time compiler in gcc 3. It thoroughly rocks to have truly cross-platform code that runs faster than native! Think how many religious wars would be avoided if 99%+ of software were available on all OS platforms... and think how much it would help Linux! :-)

    If you want to see what's out there for Java, download either the NetBeans IDE or the Eclipse IDE. Both are free and each has its strong points. NetBeans is a Swing app and includes a Swing GUI designer. Eclipse uses a new open source "native widget wrapper" library from IBM called SWT, which has its interesting points. You'll also need a Java VM (there are also others available from IBM etc.).

    One last thought - wouldn't it be cool if web browsers had support for something like Java? I mean, you could deploy apps just by putting them on a web page! It wouldn't matter what the target platform was! What a great idea! (This paragraph was sarcasm in case you were wondering.)

  • by kin_korn_karn ( 466864 ) on Tuesday November 26, 2002 @11:36AM (#4758986) Homepage
    I maintain that there is nothing in software that can be called an engineering discipline. It's just not that important. People die if a bridge isn't designed right. People have to do things by hand if the software doesn't work. Which of these is more real?

    Coding is not an engineering discipline. Coding is typing. Coding can be done from a Rose .mdl that you ftp to some cut-rate sweat shop in Bangalore.

    I'm going to get flamebait for this but I don't care. I'm a programmer and I know my line of work is a crock of shit, but I'll take all the money my employer will give me.
  • by Devios ( 603168 ) on Tuesday November 26, 2002 @11:53AM (#4759142)
    CS programs will best prepare you for today's (US) job market by teaching abstract systems design, business processes, algorithm (and process) optimization and analysis, applications of probability and statistics, math, and how to manage your overseas code-shop... For the record, I am a CIS student and we still have to learn QuickSort, MergeSort, Bucket, Heap, etc., Dynamic Programming, Binary Trees, ... and do all of those fun things like place bounds on running time, etc. Having looked all over the web for notes from similar classes, I'd say that most CIS programs still cover this stuff. In the same way I look for notes, many students, however, are looking for solutions and, as you suggested, could not implement the code off the tops of their heads. Why should they? I think it is more important that students know WHY to choose a particular algorithm (speed, correctness, memory usage, input predictability, etc.). Only a true moron will not be able to find someone else's publicly available code once they've chosen the proper algorithm. Leave coming up with new, mostly useless, algorithms to professors and hobbyists. Students should learn concepts in school instead of learning syntax. A former CS student should be able to apply those concepts to teach himself or herself the syntax of a given language in a few months.
  • by gnalre ( 323830 ) on Tuesday November 26, 2002 @11:55AM (#4759157)
    You are probably right; I am sure there are a large number of successful C++ projects. On the other hand, there are probably a number of successful large VB/C/Perl projects out there too, which probably indicates the language is not the overriding factor in development.

    The main problem I see with OO systems is that they have been put forward as a panacea for all software woes, when in fact they also bring some of their own problems to the party (see MFC for details). The OO paradigm has been oversold over the years, to the detriment of other ways of working.
  • by Anonymous Coward on Tuesday November 26, 2002 @11:58AM (#4759172)
    Right but the problem is that for some strange reason, 20 years later, a coder still does have to know how to knock out a quick sort (or some such). We won't see true progress until we get to the point where your "average" coder absolutely will never need to know this, unlike the hodgepodge of high level/low level stuff we have now.
    I'd have to disagree. I think that basic algorithms such as quicksort and mergesort are essential in the training of any decent programmer. They are high on the list of examples that you use to discuss the efficiency of algorithms. The assumption here is that one ``needs'' to write quicksort, for example. I haven't written a quicksort in the last 8 or so years, except as an example to myself of how to write a quicksort in a new programming language. It is an instructive example of a not quite trivial program to write to see how the language deals with sorting lists. I think that every language that I use (including C, which is generally the lowest level that I use regularly) has basic sorting in the standard libraries. So, the ``average'' programmer does not have to write a quicksort. But, again, I would be very uncomfortable hiring someone who couldn't.

    Of course, the last time that I was interviewing I asked what I felt was a rather basic question, something along the lines of ``Could you describe an insertion sort and a quicksort, compare and contrast the two, and describe under what conditions you might use either?'' Simple question; I was expecting answers along the lines of ``Well, an insertion sort is O(n^2) and a quicksort is O(n log n)''. I was hoping to have some candidates point out that an insertion sort is linear on data that is already sorted and that quicksort has rare pathological cases where it can perform in O(n^2) time. I was really hoping that at least one candidate would tell me that many quicksort implementations use an insertion sort as the last step when the list is almost sorted, because it is faster to do so.

    What I got was from the best candidate ``Um, a quick sort is faster?'' Which was probably an unfortunate consequence of its name. I should've picked merge sort or heap sort...
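    For what it's worth, the hybrid the interviewer describes can be sketched in a few lines of Python (a rough illustration only - the cutoff of 16 and the random pivot choice are arbitrary, not taken from any particular library): quicksort does the heavy lifting, then hands small, nearly-sorted partitions to insertion sort.

```python
import random

def insertion_sort(a, lo, hi):
    # O(n^2) in general, but linear on nearly-sorted input --
    # which is exactly why hybrid quicksorts hand off to it.
    for i in range(lo + 1, hi + 1):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_quicksort(a, lo=0, hi=None, cutoff=16):
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        if hi - lo < cutoff:        # small partition: insertion sort wins
            insertion_sort(a, lo, hi)
            return
        # Random pivot sidesteps the pathological O(n^2) case on
        # already-sorted input.
        pivot = a[random.randint(lo, hi)]
        i, j = lo, hi               # Hoare-style partition
        while i <= j:
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        # Recurse on the smaller half, loop on the larger
        # (keeps the recursion depth O(log n)).
        if j - lo < hi - i:
            hybrid_quicksort(a, lo, j, cutoff)
            lo = i
        else:
            hybrid_quicksort(a, i, hi, cutoff)
            hi = j
```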

  • by Zathrus ( 232140 ) on Tuesday November 26, 2002 @12:00PM (#4759185) Homepage
    Not knowing the industry or the apps, I'll just resort to ad hoc speculation. But hey, this is /. after all :)

    What are the training costs for the two? Is OneWorld significantly easier to learn and use? Does it interoperate with more 3rd party programs? Is it more friendly with respect to data input and output?

    Odds are good that ManMan is actually more efficient for a trained operator - but the cost of getting that trained operator is relatively high. On the other hand, you can plop down half a dozen monkeys in front of OneWorld and get results.

    True or not? If not, then I definitely have to wonder what advantages OneWorld actually presents over ManMan, other than support and maintainability (which, without a doubt, are huge requirements in the business world).
  • by TheAncientHacker ( 222131 ) on Tuesday November 26, 2002 @12:00PM (#4759188)
    People die if a bridge isn't designed right. People have to do things by hand if the software doesn't work. Which of these is more real?

    On the other hand, if a sidewalk curb isn't designed right, it wastes some concrete. People die if the software in their pacemaker doesn't work.

    It's the project, not the discipline.
  • by arivanov ( 12034 ) on Tuesday November 26, 2002 @12:02PM (#4759202) Homepage
    A top-notch staff and a world-class leader, I'm guessing, are significantly more expensive than your average software development team. Therefore, it ain't exactly cheap.

    It is actually cheap compared to the usual practices, especially in big companies (hiring 100 cretinoids to mindlessly click and drag). The problem is that such teams are not a commodity readily available on the market. You cannot just go out and buy one. And they are hard to manage, so the average PHB prefers the monkeys.

  • by Twylite ( 234238 ) on Tuesday November 26, 2002 @12:10PM (#4759261) Homepage

    Management is part of the discipline of Software Engineering. You will find extensive discussions of process and management in any book on the subject. There is a significant amount of theory, literature and expertise on the subjects of time and complexity estimation, resource requirements, managing change, measurement, quality testing and reliability.

    What is not covered is the management of business pressure, which demands that you use fewer people and cheaper tools, and deliver sooner. Because everyone knows that it's easier to build software than to build a bridge (or building). Or, more specifically, because fewer people die from faulty software, and it's easier to slap on a disclaimer because, given the track record, people don't expect software not to collapse.

    The answer is that Software Engineering needs to become a recognised professional qualification, and accredited engineers must adhere to the ethics and codes of conduct which govern other engineering professions. When your boss tells you to do it on the cheap, you have an obligation to say "No", and a body of professionals to back you up.

    Aside: if you want to refer to yourself as a Software Engineer, please have some familiarity with the differences between that role and a developer. The SWEBOK is a good place to start.

  • by Rocketboy ( 32971 ) on Tuesday November 26, 2002 @12:16PM (#4759324)
    I wrote my first computer program in 1974 or 75 and have been a professional programmer (meaning that I got paid to write code) since '79 or so. School was mainframes and an early Wang desktop system (Basic and punched cards, oh yeah, baby!) I later moved into minis, mainframes, and I've been working with desktop systems since CP/M and the S-100 bus, so I guess I've seen a little of the history, anyway.

    In my experience, the actual process of coding has greatly improved over time but the process of developing software hasn't improved as much. As other posters have pointed out, object-oriented tools, technologies and techniques (among other factors) have greatly facilitated the generation of code but the management of the process; deciding what gets coded, when, by whom, etc. is little better now in actual practice than it was in the late 70's or early 80's. In fact, in my opinion the situation is in some respects worse.

    Management of software development today makes the same mistakes and operates under many of the same misguided assumptions as it did back when I spent my day in front of a dumb terminal. Adding outsiders unfamiliar with the details of a project makes the project later, not earlier, etc.: all the platitudes we know and love are still with us, still the butt of Dilbert jokes.

    Technology may change; people aren't quite so amenable to upgrades, I think.
  • by Badgerman ( 19207 ) on Tuesday November 26, 2002 @12:27PM (#4759389)
    A top-notch staff and a world-class leader, I'm guessing, is significantly more expensive than your average software development team. Therefore, it ain't exactly cheap.

    It is actually cheap compared to the usual practices, especially in big companies (hiring 100 cretinoids to mindlessly click and drag). The problem is that such teams are not a commodity readily available on the market. You cannot just go out and buy one. And they are hard to manage, so the average PHB prefers the monkeys.

    This hits on a very critical point - talented people who get the job done are the solution, but they aren't always easy to find and they aren't always what PHB's expect.

    Finding talented people requires one be able to recognize the talent, be willing to pay for it, and be willing to use it properly. That's a tall order for many people. You can do it, but you have to really understand what you're doing.

    Secondly, very talented people require proper management - that's sometimes at odds with common managerial philosophy. If you have someone who is good at what they do, micromanagement, not giving them proper resources, etc. can minimize the impact of the talent. Not understanding personality quirks of certain talented populations can be disastrous.

    So people go with what they know and hire 100 codemonkeys instead of 50 talented people, even though the 50 talented people might save 25% of the budget.

    I'm fortunate. Where I'm consulting now is a place where my manager is an IT guy, knows how people work, and lets us do our jobs as long as we file progress reports. When he sees a talent/skill, he maximizes it. He talks to people as people.

    I got lucky.

  • Re:Absolutely! (Score:2, Insightful)

    by RogerWilco ( 99615 ) on Tuesday November 26, 2002 @12:29PM (#4759402) Homepage Journal
    Uhm, like Borland Delphi /
    C++ Builder / Kylix / JBuilder have been doing for the past 6-7 years ??
    I have both used VS 6.0 and the new VS.Net, and I just find it very familiar to what Borland has been doing for a long time.
    If our clients don't force us to do a project in VS (some do, because "it's corporate policy"), we use Borland, mostly C++, but JBuilder for cross-platform work. Now that Kylix supports C++, I think we are going to use that for Win32/Linux apps.

  • by Anonymous Coward on Tuesday November 26, 2002 @12:32PM (#4759424)
    testers don't test (enough)

    As has been said, testers test as much as they're allowed to. Typically there's a product manager saying something like "I know we promised you six weeks, but there's been eight weeks of slippage in development, so we actually launched the product last month and all our directors swore blind to our investors then that it worked. You going to tell them it doesn't?"

    So the tester reports a bug. Next thing they know, the product manager is breathing down their neck: "Dammit, the investors think the thing's finished! You can't tell people about these bugs now! Give your reports to me privately and ferpityssake, come up with some tests it'll pass."

    Eventually the tester produces a report saying "It's buggier than a compost heap, it's a miracle it hasn't taken anyone's arm off yet". The product manager turns this into a set of RFEs for release 1.1, the developers take one look and say "We can fix maybe a third of those in reasonable time, but the rest might as well wait for the next product" - and the company steadfastly pretends that there was nothing wrong with the original release.

    And thus another promising technology bites the dust.
  • by PissingInTheWind ( 573929 ) on Tuesday November 26, 2002 @12:45PM (#4759559)
    Score 4: funny?

    I'd say: Score 5: tragic.

    But damn, I agree. Perl zealots need not apply: I've been there, I learned, I understood, I realized nothing good is coming from there. Function calls as the way to generate HTML documents? wtf, the function names are even more verbose than typing the tags directly. That's why people will always embed HTML in print statements in Perl. My eyes still hurt just remembering how f***ing painful Perl is sometimes.
  • by TheCubic ( 151533 ) on Tuesday November 26, 2002 @12:45PM (#4759566) Homepage
    I work for a Fortune 500 company (intern) working with a software & hardware development model. The model we use constantly goes through changes and is very successful in terms of delivering software and hardware on time and defect-free. I have no clue about budget, but I'd guess that it is not over budget either. Working here has given me a favorable view of software engineering. Every process is laid out and explained, and there are many tools at our disposal, including a requirements tool, a UML tool, code generation tools, code testing tools, etc.

    Now for the rant. I also go to a large University 20 miles down the road, and I am taking their software engineering course (<- yay for relevance). Among CS majors there is a bad reputation for the SE class, that it is never organized well and leaves students with a bad taste for SE. In my class, we are currently in the 'Design' phase for our semester-long project, which we will finish tomorrow. The Implementation phase is a week long. That's right, a week. We spent months on the requirements and OOA phases, but they are still unclear (because the document that calls for them is unclear), and now we have a week to iron everything out. This class leaves me with an unfavorable view of software engineering.

    I think that more colleges should teach software engineering not as just one class but as an approach along with code classes. It makes sense when teaching people how to code that one should also teach them the likely environment in which they will code, otherwise students will get the impression that they can opt-out of having to deal with software engineering.

    Has software engineering become better? Probably. Would you be able to tell when taking the class that I am? Hell no.
  • by ChannelX ( 89676 ) on Tuesday November 26, 2002 @12:59PM (#4759695) Homepage
    You should duck from the Java purist flames because you and the poster you're replying to are wrong. Java hasn't been interpreted for years. All Java virtual machines use some sort of JIT mechanism to compile the code before it's run.
  • by corvi42 ( 235814 ) on Tuesday November 26, 2002 @01:01PM (#4759712) Homepage Journal
    So far, everything I've seen, read, and heard in the literature and water-cooler talk about advances in code design suggests that the real improvements, the ones that were supposed to increase our productivity by reducing development time, came mostly from the advances and applications of modularity in programming. The whole object-oriented approach was and still is heralded in some quarters as the solution to all programming woes, and hence the development of completely object-based languages like Java.

    The idea, of course, being that good modular design and good use of classes can increase reusability ad infinitum, so that eventually there will be no task and no problem that cannot be assembled Lego-like from blocks of premade reusable code. The marvelous technology in Douglas Coupland's Microserfs (called Goop, wasn't it?) was really the epitome of this concept: a totally reusable code system so generalized, so modularized, that anyone with no more programming experience than a child could assemble working programs for just about any purpose in a drag-and-drop virtual Lego-building reality.

    Any student of the history of science and science forecasting should begin to smirk at this point. Is it any surprise that these visions have not materialized? The hard truth, IMHO, is that logic is inherently not conducive to such high degrees of abstraction and modularity. For given tasks which are fixed and well defined, with completely known and understood parameters and requirements, then yes, abstraction and modularization can be a great boon in optimizing design and improving the development cycle. We can see the great advantages this has yielded in areas where the requirements are fixed and the applications are mostly derivative of each other. GUI design is a great example, and there are a multitude of GUI-building toolkits & development environments that take great advantage of this.

    However, the whole thing breaks down when you move from the known to the unknown: when you try to extend this principle from fixed tasks to hypothetical ones in which the basic requirements were never envisioned. I would argue that there is a basic lack of understanding among some proponents of these techniques that, at a fundamental level, logic cannot be generalized across all possible tasks and all possible configurations. This is similar to the revelation of Gödel's theorem in mathematics back in the 1930's: that a sufficiently powerful axiomatic system can be consistent but never complete. In reality, adapting a set of code to a purpose other than that for which it was designed, or with parameters other than those originally envisioned, is usually more trouble than it is worth, and often you would be better served by building new objects from scratch. You will always need intelligent, educated people to design these things; there is no such thing as logical Lego.

    Unfortunately it seems to me that many have not gotten this message yet. Sun and Microsoft are still actively pushing that dream of infinite modularity and drag-and-drop program design. In my experience with both Java and .Net, I have found that I always run into blocks where the established object model is too constraining and has too many built-in assumptions about what you're going to use the classes for, and so I have ended up coding new versions from scratch. Of course it may simply be the nature of the applications I'm working on, and your mileage may vary. Ultimately I think that for derivative applications this kind of abstraction and generalization is definitely an improvement, but when you come to applications that move beyond those fixed tasks it actually becomes an impediment, not an advantage.

  • by hackus ( 159037 ) on Tuesday November 26, 2002 @01:20PM (#4759854) Homepage
    Yes, I think things have improved.

    Such things as structured design and OOP have made code reuse better.

    What hasn't improved:

    1) Programmers STILL refuse to use tools that could help them in productivity. (i.e. source debuggers, instead of writing printfs around everything and printing out your variables)

    Tools tools TOOLS people. Use a source debugger and save yourself a great deal of time.

    If you can't, switch to a language or infrastructure where you can.

    Sadly, many programmers still do not use source debuggers, citing them as a waste of time. But they will sit there and hack over and over again, trying to understand the code they write with printfs!

    Tsk tsk.

    2) Cost Time Estimation. Wow, talk about almost zero improvement there. Almost zero, but not quite zero. After all, most people are now adopting an open source strategy so that even if your estimates are off, the cost penalties are reduced. Furthermore, most people are beginning to realize that you have to complete a full requirements document, and do some fact finding before you attempt to quote work.

    3) Finally, the hardware we use to write software is vastly more powerful, and as a result we can run much nicer environments on our machines when we write code, such as API references, etc. I have far fewer references these days to things like Java than I used to keep on my desk, primarily because with the rise of IDEs the development environment can answer a lot of the questions I might have about the language I am using to write the software.


    I would also like to point out that things have gotten a little worse. If you believe, like I do, that 80% of the work in writing software is debugging it and maintaining it over its lifetime, then you, like me, have problems with our IDEs.

    Primarily when our IDEs produce automated code for drag-and-drop environments. They produce horrible code, at the expense of saving time now, and end up costing a great deal of time later.
    (Anyone use the latest .Net Beta 3 to generate controls, will understand what I mean.)

    I primarily write only Java code, but even my SunONE environment produces some pretty cruddy stuff if I am writing a desktop app.

    I think automatic code generation is a step backwards in many ways, and ends up costing more money to fix or maintain it.

    I still think a code "repository" built by humans, and nicely documented, like a cvs tree for example, is the better way. Time spent on the CVS code repository for building customized pieces is time much better spent IMHO.

  • by Lumpish Scholar ( 17107 ) on Tuesday November 26, 2002 @01:24PM (#4759870) Homepage Journal
    Look at the software written twenty five years ago, and look at the software written recently.

    Bill Joy wrote vi pretty much by himself. Bram Moolenaar wrote Vim pretty much by himself; it's a huge superset of Joy's editor.

    The original portable C compiler (PCC) was about 5,000 lines of C. No one even blinks if a graduate student writes a far more sophisticated language processor, e.g., a C++ compiler, a complete Java development environment (including a compiler and debugger).

    The original SimCity was awesome. No one thinks twice of re-implementing it for a PDA or a Java-enabled web browser.

    What's changed? Programmers don't have to worry so much about CPU, disk, or memory limitations. The tools (compilers, libraries, source control) are much improved. Some of the new languages are far more productive. There are also new practices, and the results of lessons learned, on how to do development; some programmers take advantage of these (not enough!)

    But our abilities haven't kept up with our aspirations. Compare SimCity to the massively multi-player Sims Online. How do vi or PCC stack up against Eclipse? Look at the text formatters of twenty-five years ago, and then look at OpenOffice (or Microsoft's stuff); at Unix v6 vs. Mac OS X.

    Software hasn't kept up with Moore's Law. We're running a Red Queen's race, going as fast as we can just to stay in one place; and we're losing.
  • by SysKoll ( 48967 ) on Tuesday November 26, 2002 @01:41PM (#4760053)

    Software is at best a cottage industry of craftsmen with widely different abilities. At worst, it is a coterie of alchemists who promote their own secret snake-oil recipes and only succeed by sheer luck.

    And I am a developer that calls himself a software engineer, so this is not a flame.

    Why craftsmen? Because we develop with tricks and recipes, often called "processes". But neither of these are scientific processes. They cannot predict the outcome of a software project within a definite set of constraints. They cannot be disproved. And they cannot explain failures. So they aren't science, they are rules of thumb. The guy who makes a living by applying rules of thumb and learned tricks is a craftsman.

    Why alchemists? Because scientists publish their methods and their results. To the contrary, the software industry hides its customer references and does not publish its source code (with the notable exception of Open Source, which remains the exception in large-scale software projects). This is how alchemists and "savants" worked in the Renaissance. They hid their trade secrets, they had confidential relationships with rich patrons, and they feared full disclosure.

    On top of that, each subbranch of computing has its own lingo and redefines as many words as possible. Mathematicians who specialize in field theory or topology may not understand number theory, but at least they use distinct, well-defined jargons. In computing, terms like "record", "field", "server", and "link" are so overloaded that they are just noise.

    About 60% of all software projects are cancelled or written off as failures. I don't think civil engineering or, say, aeronautics has such an abysmal track record.

    I hope that some day, we'll practice and teach Computer Science as, well, a science, not as a craft.

    -- SysKoll
  • Re:I don't (Score:3, Insightful)

    by binaryDigit ( 557647 ) on Tuesday November 26, 2002 @01:44PM (#4760075)
    Actually you bring up an excellent point that I'm afraid will get lost, but anyway.

    What you described is an "ideal" situation, but then the ugly reality of programming rears its ugly head. What if you want to sort by something other than the first column? Your data abstraction should support that (a property, perhaps). What if you want to sort an object by a member (do you supply a comparison routine that now makes the abstraction not quite so abstract)? What if you need to sort Unicode? What if you're sorting 500MB of data? All these extra requirements have a tendency to break many abstractions and force the programmer to A) have a lot more knowledge about the tool and B) find that any particular tool is less likely to fit the bill.
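    In Python, for instance, the same tension shows up in the standard sort interface: the abstraction survives these extra requirements only by letting the caller inject knowledge through a key function. A minimal sketch (the data and class names here are illustrative, not from any particular library):

    ```python
    # The "it sorts" abstraction holds only until you specify *what* to sort by;
    # then the caller must supply a key, i.e. know more about the tool.
    from operator import attrgetter, itemgetter

    rows = [("widget", 3, "B"), ("gadget", 1, "A"), ("sprocket", 2, "C")]

    by_second_column = sorted(rows, key=itemgetter(1))  # sort on column 2
    by_third_column = sorted(rows, key=itemgetter(2))   # sort on column 3

    class Part:  # sorting an object by a member needs the same escape hatch
        def __init__(self, name, cost):
            self.name, self.cost = name, cost

    parts = [Part("widget", 3), Part("gadget", 1)]
    by_member = sorted(parts, key=attrgetter("cost"))
    ```

    Each extra requirement (another column, a member, a collation rule) is another key function the caller has to write, which is exactly the leak in the abstraction the comment describes.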
  • different minds (Score:3, Insightful)

    by Tablizer ( 95088 ) on Tuesday November 26, 2002 @01:45PM (#4760086) Journal
    After years of debating and haggling with OOP fans (I think OO is way overhyped), I have concluded that an important factor is the personal psychology of the individual. IOW, "mindfit".

    People simply perceive things differently and are bothered and helped by different conventions, notations, traditions, etc. Each individual has to find their *own* silver bullet, or at least a brass one. Some of the surveys by Ed Yourdon seem to back this. OOP scored higher when "OO fanatics" worked on projects, but as OO went more mainstream, its score faded into the background noise (average).

    Perhaps IT shops can focus more on screening for individuals who think alike rather than simple buzzword matching. It perhaps is time for an inkblot-like test for developers.

    I used to complain about how Perl is a "write-only language", but I later realized that it is just write-only to *me*. If other Perl fans can figure out Perl and are productive with it (both in writing and maintenance), then I see little reason to complain. A given shop just has to be willing to accept the fact that they are married to Perl and Perl-loving developers. But if they can get things done, then why should I or anybody else fuss? One man's spaghetti is another's gourmet favorite.

    (It is still important to explore other viewpoints to expand your horizons, but if something does not seem to "click" for you after a little while, then don't feel bad. I am tired of "you just don't get it, neener neener" from various fans of different paradigms or languages.)
  • The hell it was (Score:2, Insightful)

    by Anonymous Coward on Tuesday November 26, 2002 @01:48PM (#4760118)
    The Skunk Works delivered lots of products. The SR-71, U-2, F117, P-80, blah blah blah. The U-2 is still in service today (50 years later! What have you built that's been around for 50 years?), and many people argue that the SR-71 should be. The SR sure did leak a ton of fuel, but it's not like they didn't look for a solution. They certainly didn't "blow off all thermo issues." The plane grows a full 12" longer in flight. That 12" has to come from somewhere, and it's not like they can use pine tar to plug the gaps the way people did when building ships (oh wait...wooden ships leak like sieves too, until they're in their natural environment, at which point they expand to be nice + tight, just like the SR). Read up on the SR someday; it's a technological tour de force. And it was done by a bunch of guys with slide rules. Do you even know what a slide rule is?

    Please come back when you know what you're talking about. Until then, HAND.

  • by Locutus ( 9039 ) on Tuesday November 26, 2002 @01:56PM (#4760188)
    Just as the hardware industry has grown by abstracting commonly used circuits into chips, what occurs is the creation of building blocks on standard designs. In the software sector, that would equate to a component architecture which keeps building on what's been done in the past. The big problem here is that if ANYBODY starts trying to hide anything in Microsoft Windows, Microsoft eliminates them from the market. C++ frameworks were very popular in the early 90's, but they all but disappeared as Microsoft provided their C-- (object-like) way of doing things at a financial loss. Borland was a leader in the C++ dev tool market, but their frameworks hid the MS Windows APIs so well you could build applications that recompiled on many different operating systems. MS gutted Borland of its top-level design engineers and paid Borland a tiny fee to settle out of court. CORBA was another framework for building applications across a network of different operating systems and languages. We got 3+ years of intense MS-DCOM press coverage, and CORBA eventually faded. I worked on a project which was to use CORBA for a large military hospital system, but a year into it we were told to stop and start using MS tools and languages, with no explanation. Java did/does the same thing (OS/API abstraction), and it too was fought by Microsoft with incredible gusto.

    As long as Microsoft holds the major share of the desktop computing platform, they will not allow anybody else to decide what's to become a standard software component or API. And changing this every 2 years or so keeps the $$ flowing from your pockets into theirs.

    Sure you can do some of this within your own organization but as an industry, the largest software company in the world opposes such thinking. And with $30 billion in cash, they have the power to change the minds and directions of whole countries.

  • by mav[LAG] ( 31387 ) on Tuesday November 26, 2002 @01:57PM (#4760203)
    Immense productivity gains can be had by using Perl, Python, Tcl and similar scripting languages as high-level wrappers around existing code in C and C++. The fact that your code can be clean, readable, object-oriented if you wish and, most importantly, written and tested quickly, all while reusing existing code, is IMO one of the greatest advances in software development we've ever seen.

    It's given rise to all sorts of cool stuff:
    • languages within applications for extending those applications beyond what the original authors intended
    • real time tweaking of running apps - think Quake consoles here
    • rapid use of large development libraries written in C and C++ (like pygame [] wraps SDL and its subsidiary libs) without the need to recompile
    • new leases on life for older code in C and C++ which might be hard to rewrite but simple to wrap and extend
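    As a minimal sketch of that wrapping idea, Python's standard-library ctypes module can call into an existing compiled C library directly, with no recompile and no glue code. This example assumes a Unix-like system where the C math library can be located:

    ```python
    # Wrap existing C code from a scripting language: load the system math
    # library and call its C cos() function directly via ctypes.
    import ctypes
    import ctypes.util

    libm = ctypes.CDLL(ctypes.util.find_library("m"))

    # Declare the C signature (double cos(double);) so arguments and the
    # return value are converted correctly instead of defaulting to int.
    libm.cos.restype = ctypes.c_double
    libm.cos.argtypes = [ctypes.c_double]

    print(libm.cos(0.0))  # the C implementation, called from Python
    ```

    The same pattern (declare the foreign signature, then call it like a native function) is what tools like SWIG and wrappers like pygame automate at larger scale.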

    It's been revolutionary for old fogies like me. And just a minor quibble for the submitter - 1987 was 15 years ago - not 25.
  • No silver bullet. (Score:3, Insightful)

    by miffo.swe ( 547642 ) <> on Tuesday November 26, 2002 @01:58PM (#4760207) Homepage Journal
    I think that programming is like any other craft: if you automate it, you lose some and win some. That said, a poor programmer can make low-level code stink. The only thing I can imagine that would make programs easier would be reuse and refinement of code snippets or APIs. Unfortunately they all seem to be outdated long before they reach refinement, and a new process begins.

    It should in theory be possible to make a programming language that works like Lego bricks, but no one has succeeded in doing it so far. All of the higher-level languages have great shortcomings, and what you gain in development speed you usually lose in runtime speed and bloat.

    In x years ahead we might (emphasis might) have a stable enough environment to start making programming truly easy. As of today there are no shortcuts; you have to be skilled and suited for the job.
  • by rdean400 ( 322321 ) on Tuesday November 26, 2002 @02:06PM (#4760270)
    Unfortunately, the view of Computer Science that I get is that while the new methodologies are all well and good, what they really do is let humans manage ever-larger software projects without necessarily losing quality.

    Quality won't improve as a result of methodology as long as humans are writing the code.
  • by Anonymous Coward on Tuesday November 26, 2002 @02:06PM (#4760271)
    As a Lockheed Martin employee, I can assure you that the high number of CMM-5 sites tells you nothing positive about our code.

    All it means is that some VP got hot for SEI-CMM and had 20 middle managers fill in paperwork for 2 years until the certification was approved. Our programming practices were poor before, and they've remained poor. (Several of our projects have formed the basis for Dilbert strips)
  • by hax4bux ( 209237 ) on Tuesday November 26, 2002 @02:12PM (#4760333)
    I don't agree. Tools today are much better and cheaper. 20 some odd years ago, I was writing ALGOL on a Burroughs box. It was an awful way to earn a living.

    The editor would take too long to explain and probably send me back to therapy. But each page of source code was sent to tape, the tape would advance and then you could get the next page. You could not go backwards so you had to plan your editing session by marking up a listing and then going by page number.

    The compiler was full of holes. For example, "if (a+5 == 7)" did not evaluate the same as if "(7 == a+5)" (I dunno if that is the correct syntax, but my point is that constants on either side of a compare were not treated the same). This forced us to review the generated code (manually) to ensure the compiler was behaving properly. This was before optimizing compilers and with practice it wasn't too hard to follow the binary.

    In this type of environment, you had to be "dedicated" but any little thing was such a huge struggle. We wasted a lot of time because you
    could not trust *anything*. Not the compiler, not the libraries (such as they were), not the hardware.

    I think software development is much easier and predictable today. Well, it's much harder to blame the tools anyway.
  • by Badgerman ( 19207 ) on Tuesday November 26, 2002 @02:15PM (#4760362)
    We seem to be running in circles and every loop around seems to require a toll at the Redmond Toll Booth.

    Which is definitely part of the problem.

    Software Companies do NOT neccessarily exist to produce good tools and products. They exist to make money and please the shareholders.

    This does NOT mean they are out to produce the best, they're out to sell. Hopefully it is the best, but . . . well, no more needs to be said.

    That clouds the initial question on the possibility of Silver Bullets. Even if they're out there, we've got to wade through tons of irrelevant stuff to find our particular Silver Bullet.

    And a lot of our vendors are NOT helping. That's one reason I like open source. People may participate for different reasons, many far from noble, but the product and usefulness are a major focus.
  • Answer: YES (Score:4, Insightful)

    by p3d0 ( 42270 ) on Tuesday November 26, 2002 @02:18PM (#4760393)
    Software development has most certainly improved, though that is not to say it's in good shape right now. We still have a long way to go.

    However, take a look at Parnas' classic paper on information hiding. [] In it, he makes the following statement (emphasis mine):

    The KWIC index system accepts an ordered set of lines, each line is an ordered set of words, and each word is an ordered set of characters. Any line may be "circularly shifted" by repeatedly removing the first word and appending it at the end of the line. The KWIC index system outputs a listing of all circular shifts of all lines in alphabetical order.

    This is a small system. Except under extreme circumstances (huge data base, no supporting software), such a system could be produced by a good programmer within a week or two.

    Is there anyone in the crowd that doesn't think he could write a shell script/perl script/etc. to accomplish this in 30 minutes or less? I'd be willing to bet I could do it in Python in under 15 minutes.
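    For what it's worth, a rough Python sketch of the KWIC system exactly as Parnas specifies it (the function and variable names are mine, not from the paper) does fit in a dozen lines:

    ```python
    # KWIC index per Parnas' spec: for each input line, generate every
    # circular shift of its words, then output all shifts alphabetically.
    def kwic(lines):
        shifts = []
        for line in lines:
            words = line.split()
            for i in range(len(words)):
                # Circular shift: repeatedly move the first word to the end.
                shifts.append(" ".join(words[i:] + words[:i]))
        return sorted(shifts, key=str.lower)

    for shift in kwic(["No Silver Bullet", "The Mythical Man-Month"]):
        print(shift)
    ```

    A "week or two" of 1972 effort reduced to a few minutes of scripting is about as concrete as the productivity argument gets, though, as the comment notes, that gain came from languages and libraries, not from methodology.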

    This paper was written 30 years ago. It only pre-dated Brooks by 5 years. In that time, if my estimate is accurate, the development time for simple programs may have fallen by a factor of a hundred or more.

    Software doesn't appear to have improved in that time simply because we keep trying to do more complex things.

  • HDLs (Score:3, Insightful)

    by trumpetplayer ( 520581 ) on Tuesday November 26, 2002 @02:34PM (#4760543)
    May I point out that programming languages can be used to develop things other than software (although related) nowadays.

    Probably the thing I find most interesting is hardware, which can be described using Verilog or VHDL, both Hardware Description Languages. That, together with technologies like FPGAs, enables a programmer to design his own microprocessor if he wishes to do so; I find that revolutionary.

  • by tangi ( 592996 ) on Tuesday November 26, 2002 @02:45PM (#4760662)
    I may know of a silver bullet.

    Nothing new under the sun: I'm unlikely to be the first one to face any problem. Among those who already faced it, some managed to solve it. Many generous solvers live on the internet and share their work, and I can usually find it in no time using Google. Isn't the systematic plundering of others' solutions to the issue I'm facing the biggest improvement of the last quarter century? I definitely think so.

    Logic requires careful thought, and careful thought requires time.

    ... But others may already have thought carefully.

  • by Anonymous Coward on Tuesday November 26, 2002 @03:29PM (#4761115)

    Furthermore, very often the best people will completely founder under a PHB, so the talent is never recognized and goes to waste. They'll be identified as troublemakers, or worse as lazy, rather than the creative problem-solvers they are. What we really need to teach our brilliant CS students is how to stick up for themselves and know their own worth (and how to bring it out).

  • by Aron S-T ( 3012 ) on Tuesday November 26, 2002 @03:40PM (#4761219) Homepage
    That one sentence, which was Brooks' key insight, sums up why progress will always be limited. You can read commentary on that point in a paper I wrote here [].

    Others have already pointed out the obvious corollary: good management practices are most important for successful, reliable software development.

    I don't think the Agile people need to be trashed as much as they are here. Sure, they are gurus. But they are emphasizing human-centric development instead of "software engineering", which is a Good Thing (TM). Just don't get too religious about XP and you'll be fine.

    Other than that, the greatest thing for programmers since sliced bread is Python [].

    And yes, I also agree open source development has pushed this industry light years ahead. But it works because - it's human-centric programming!
  • by __past__ ( 542467 ) on Tuesday November 26, 2002 @04:34PM (#4761807)
    you would not program an OS in Lisp
    Um, it's perfectly OK to write an OS in Lisp. Remember Genera, or the other LispM OSes?

    Those operating systems were written in Lisp to about the same extent that, for example, Linux is written in C; that is, about everything except some assembler parts was Lisp. And it was all under the control of the user, at runtime. You could debug and browse the OS sources, modify them on the fly (no rebooting just because you changed something about the filesystem implementation or other trivialities), etc. Pretty impressive environments, even from today's point of view.

    You can even still buy Genera for Alphas, but it's not exactly cheap, and finding a Symbolics representative may prove harder than someone selling WinXP copies...

    But anyway, I'm not saying anything new in here, it's just that people won't get it, or that they'll forget.
    Sad, but true... It is really amazing to see what Lisp or Smalltalk users had years ago, and how only parts of this show up in recent "innovative" languages and tools. Sometimes I think that the history of computing should be made an obligatory part of CS education.

    But hey, should I care if my competitors still use half-assed languages and keep hiring 'Java-style' programmers?
    Quote Richard Gabriel [], "our competitors will be spending all their time trying to figure out that it's really possible we're doing what we're doing, because they will be thinking in terms of customization at compile time or link time, not at runtime." (Although I don't agree with every aspect of his rant.)
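    The "customization at runtime" Gabriel describes is easy to sketch. The example below uses Python rather than Lisp (and a made-up FileSystem class) just to illustrate the idea: live objects pick up a redefinition immediately, with no recompile, relink, or restart.

```python
# Hypothetical sketch of runtime redefinition, in the spirit of the
# Lisp-machine workflow described above (Python used for illustration).

class FileSystem:
    def read(self, path):
        return f"contents of {path}"

fs = FileSystem()
before = fs.read("/etc/motd")  # original behavior

# Redefine the method on the live class -- no restart needed, and the
# existing instance picks up the new implementation immediately.
def cached_read(self, path):
    if not hasattr(self, "_cache"):
        self._cache = {}
    if path not in self._cache:
        self._cache[path] = f"contents of {path}"
    return self._cache[path]

FileSystem.read = cached_read
after = fs.read("/etc/motd")  # same instance, new (caching) implementation
```

    A Lisp image takes this much further (the whole OS was patchable this way), but the mechanism is the same: behavior is bound at runtime, not at compile or link time.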
  • So, people go with what they know, even if 100 codemonkeys are hired as opposed to 50 talented people, even though the 50 talented people may save you 25% of your budget.


    Actually, often 3 great developers can do the work of 30 codemonkeys. It really is that much of a distinction.

    The problem with the advances in computing tools is that people have become dependent on the tools, and have no understanding of the underlying technology. Then, when they get stuck, they can spend weeks or months trying to figure it out, while it only took the good programmer about 30 minutes on Google, because he knew what to look for.
  • I think the tech market is very good right now. However, you have to look at it from the right perspective.

    I'm in a tech company that has been growing over the last 3 years. The problem is knowing how to find business.

    Every business wants to automate what they do. Every business wants to operate better and faster. The problem is, most tech companies know very little about how to actually do this, especially how to treat and care for customers.

    Get to know the people and companies around town. Find out what their problems *actually are*. Find a way to fix them, possibly using technology. Present your solution and your price.

    There are always people with problems who are willing to pay to have them solved. What happened in the tech industry was this:

    a) it grew so fast, "developers" were in such demand that completely unqualified incompetents were being hired. In addition, HR had no way of distinguishing between competent and incompetent people. This is probably the biggest problem.

    b) Even fewer people cared about actually serving consumer needs. Upgrading to Windows 2000 became more important than making my processes more efficient.

    c) Management finally wised up and stopped doing useless things with technology. However, there is still a demand for useful technology, you just have to be able to justify it to a much smarter crowd.

    d) Management refuses to go along with technology people who can't communicate.

    Tech people are used to being able to just make sales based on the fact that they knew more about technology than anyone else. Now they have to be able to actually help solve problems. I see this as a good thing.

    By the way, solving problems is something that can't be outsourced to third-world countries. It requires personal communication. It requires being able to see technology from the point-of-view of the business owner, and being able to speak intelligently and understandably about them and their problems, and only speak about technology when it's absolutely relevant.
  • by johnnyb ( 4816 ) <> on Tuesday November 26, 2002 @06:31PM (#4762884) Homepage
    Perl and Python are my two favorite languages. Them are fightin words....

    Perl is amazing, because you can get so much done in so little time. People say it's hard to read, but it really only takes one or two readings of the Camel book to be able to understand all but the toughest Perl.

    It has support for all stages of development, and is really useful as your project grows from procedural to object-oriented, and even if it needs some functional aspects. A scripting language as easy to use as Perl, with lexical closures, is simply amazing to build with.
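    For anyone who hasn't used lexical closures: here's the idea in Python syntax for brevity (Perl's `sub { ... }` over `my` variables behaves the same way). Each call to the factory captures its own private state, no objects required.

```python
# A closure: the inner function captures the enclosing lexical
# variable `count`, and each counter gets its own independent copy.
def make_counter(start=0):
    count = start

    def counter():
        nonlocal count
        count += 1
        return count

    return counter

c1 = make_counter()
c2 = make_counter(100)
c1()  # first counter advances...
c2()  # ...without touching the second one's state
```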

    In addition, it has its own documentation system that goes beyond what Javadoc does, and allows you to really document the whats and whys of your interfaces.
  • by Viol8 ( 599362 ) on Wednesday November 27, 2002 @08:15AM (#4766190) Homepage
    "designed for non CS graduates to work with. even though the quality of code is atrocious, they let the people who understood the domain (finance, dentistry, whatever) code what they wanted. " And that my friend is the whole problem with these sorts of 4Gl languages. You wouldn't not let a soccer mum or an enviromentalist design and build a car , you'd get an engineer to do it after he'd listened to their requirements. WHy are programs different? If the program turns out wrong then its (in my experience) usually the users fault for not specifying precisely what they want, the programmer just follows their instructions. Letter novices code up systems that may have financial consequences either directly or indirectly on the company if they go wrong is foolhardy in the extreme. Better the spec is wrong and the application is delayed for a month than the code is wrong and loses the company a million. Keep MBAs out of coding , they don't have a clue and in fuck up in 99% of cases.
  • by Aapje ( 237149 ) on Friday November 29, 2002 @07:02AM (#4779165) Journal
    I'm sorry for my late response, a virus penetrated my firewall and set the automatic defense system in motion: increased priority to the 'White cell' service at the expense of the Intelligence app.

    Named parameters is to *avoid* find-and-replace.

    Yes, I know. But the effect will probably be substantially smaller than using refactoring(s). My point was more that they don't exactly overlap: refactoring can do much more than named parameters, and also a bit less (named parameters can be passed in arbitrary order, may increase the readability of function calls, and allow functions with the same name but different parameters).
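    To make the named-parameters point concrete, here's a tiny Python sketch (the `connect` function is made up for illustration): the call site documents intent, and argument order stops mattering.

```python
# Hypothetical function with several optional parameters.
def connect(host, port=5432, timeout=30, retries=3):
    return {"host": host, "port": port, "timeout": timeout, "retries": retries}

# Named arguments can be given in any order, and the call site is
# self-documenting -- no need to remember the positional order.
a = connect("db.example.com", retries=5, timeout=10)
b = connect("db.example.com", timeout=10, retries=5)
assert a == b
```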

    I actually consider this "refactoring" push a *failure* of OOP, or at least a bad idea. Rather than trying to avoid the Code Change Gods, it is giving into them big-time. I don't get it.

    I guess you don't like Extreme Programming then ;) The problem with avoiding change in your app is that you get code rot. The real world changes while the code stays the same. You add local hacks to support features X, Y and Z, messing up your code. Changes become harder and harder to make. You start to get massive code duplication and (unnecessary) interdependencies. Your resistance to low-level change catches up with you because you are now forced to make all kinds of wide-spread, buggy hacks just to support the features that your customers really need. Finally you throw out the mess and start again.

    XP tries to avoid this as long as possible by making it much, much easier to keep your code up to date. Many of the XP rules support this:
    - no ownership of code
    - no clever tricks, KISS
    - rigorous unit tests
    - only create the code that you need now
    - integrate often
    - refactor regularly and in small steps

    Having unit tests for just about everything is a very important rule in this regard. It gives you confidence that you can change the implementation without changing the functionality. This allows you the maximum amount of freedom. You keep the code lean and mean. The design evolves along with the addition of new features. You won't be prevented from making necessary changes just because it's 'too hard to do' (which happens in real life - a lot).

    I understand why it is hard to grasp; this goes against the engineering principles that you were taught: gather all requirements first, make a detailed design, etc. Of course, we all know that the world doesn't conform to the waterfall model and that software is especially prone to changing requirements. There is only so much you can do to reduce the effects of this threat. Instead of fighting it, XP embraces this law of nature (which makes it very effective for some projects). Refactoring is an important part of this strategy, and I don't see why you consider it pushing a failure. Is procedural programming immune to code rot and changing requirements? Is any programming paradigm?
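    The unit-test safety net described above can be sketched in a few lines. `word_count` here is a made-up example function, and Python's unittest stands in for whatever framework the project uses; the point is that as long as these tests pass, the implementation can be refactored freely.

```python
import unittest

# A deliberately naive implementation. Because the tests below pin down
# the observable behavior, it can later be rewritten, optimized, or
# simplified without fear of silently changing what it does.
def word_count(text):
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_simple(self):
        self.assertEqual(word_count("a b c"), 3)

    def test_empty(self):
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  a   b "), 2)

if __name__ == "__main__":
    unittest.main()
```

    Run after every small refactoring step ("integrate often"), failures point straight at the change that broke the behavior, which is what makes continuous change cheap instead of scary.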
